Union building never gets anywhere without lots and lots of one-on-one conversations in the workplace. Whether it’s persuading workers to join, persuading members to take roles in collective actions, resolving conflicts, or negotiating between members and employers, a lot of these conversations can be difficult.
That places high expectations on the union reps who need to have them. And whilst this comes naturally to a lot of reps, many of them also need support from their unions.
The training offer from unions in this area is often very strong. However, after a training session it can be a while before reps get to put their learning into practice, so it can still be daunting for a new rep when they need to initiate the conversations they have been trained for.
A role for AI?
The TUC and PCS wanted to see whether Artificial Intelligence (AI) could help reps to practice and develop their skills in important organising conversations, as a supplement and follow-up to training sessions.
In the project, we sought to prototype an AI intervention to support reps and test this prototype to establish whether it could be a useful area for deeper work.
Risks and mitigation
AI is justifiably controversial in many industries, especially public service. Too often the focus of AI in public service has been on cost cutting and automation, rather than service enhancement.
In this project, PCS were determined to look at whether AI could augment rather than replace traditional union training. Training is a shared experience for reps, which has many other benefits in building a stronger and better-connected movement.
AI chatbots can involve high reputational risk for organisations that deploy them. Some use natural language processing to match queries to pre-written information resources, and these are relatively low-risk. Others use generative AI to write custom responses based on a specified collection of background data, such as policies or research documents. These are riskier, as the user may not realise that the chatbot is presenting a hallucination rather than information grounded in that data. Users may also attempt to trick the bot into saying things that look bad as official responses from the organisation.
We decided that although this chatbot technically falls into the higher risk type, the risk was actually far lower because of the context. The interaction is billed as a simulated chat, not authoritative advice. The bot is also representing a fictional coworker, rather than a personification of the union itself. So even if it were to come up with an odd or incorrect response in conversation, this would be much less jarring to a user. This bot is also designed for internal use within the union, with union reps being the main users rather than people who need authoritative help or advice.
Data protection is also a major risk. Many third-party AI models will use data from chats to train the model. We don’t want to run the risk of any personal data, or information the union regards as sensitive, inadvertently being typed into the chat in a way that could be incorporated back into the model. So we only used services that are GDPR-compliant and can guarantee chats will not be used for training.
Research
It is important to test hypotheses, and the project did this initially through discussions with PCS education and organising officials, making use of their expertise.
We sat in as observers on PCS training for a group of reps on recruitment conversations. After hearing a presentation on the principles of successful recruitment conversations, members of the group role-played the conversations for themselves, giving them more practical insight into how they might use the learning. This was especially helpful in shaping how we developed feedback for users.
We also made use of training materials and organising resources from the union, to train the model so that the outcomes would be closer to the approach PCS have developed.
Development approach
This project was developed for PCS and the TUC Digital Lab by grassroots digital group Campaign Lab and development consultancy Poteris.
The tool is built on a standard modern web stack: NextJS/Typescript with Supabase. The LLM integration uses OpenAI platform APIs directly, under an agreement that chat data cannot be used for training.
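As a rough sketch of what such an integration can look like (the model name, persona text and function names here are illustrative assumptions, not taken from the project’s codebase), a server-side handler would assemble a system prompt for the persona and forward the running chat to the OpenAI chat completions endpoint:

```typescript
// Illustrative sketch only: this follows the public OpenAI chat completions
// request shape, but the model name and persona handling are assumptions.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Assemble the request body for one turn of the simulated conversation:
// the persona prompt goes in as the system message, followed by the
// running chat history.
export function buildChatRequest(
  personaPrompt: string,
  history: ChatMessage[],
): { model: string; messages: ChatMessage[] } {
  return {
    model: "gpt-4o-mini", // assumed model name; kept configurable in practice
    messages: [{ role: "system", content: personaPrompt }, ...history],
  };
}
```

On the server, this body would be POSTed to the chat completions endpoint with the API key held in an environment variable, so the key never reaches the user’s browser.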
The development was iterative, allowing us to rapidly prototype a basic version of the tool and then incorporate feedback from user testing of the prototype as the project progressed.
A library of training data was drawn together from PCS resources as well as broader demographic segmentations. We then used these resources and a process of prompt engineering to create custom persona prompts and feedback prompts for the model. We iterated these prompts until they gave outputs that were realistic and passable.
The structure of the tool allows these prompts to be amended easily to test the effectiveness of amendments.
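One simple way to keep prompts easy to amend (a minimal sketch under our own assumptions, not the project’s actual implementation) is to store each persona or feedback prompt as a template with named placeholders, filled in at runtime:

```typescript
// Hypothetical persona template with {placeholder} slots; the real PCS
// prompt text is not included here.
const personaTemplate =
  "You are {name}, a {segment} who has not yet joined the union. " +
  "Stay in character and reply in one or two short paragraphs.";

// Replace every {key} in the template with the matching value,
// leaving unknown placeholders untouched.
export function fillTemplate(
  template: string,
  values: Record<string, string>,
): string {
  return template.replace(/\{(\w+)\}/g, (match, key) =>
    key in values ? values[key] : match,
  );
}
```

Editing the template text (or loading it from a database table) then changes the persona without touching application code, which makes it straightforward to compare the effectiveness of different prompt wordings.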
The prototype
Rather than building a comprehensive service before exposing it to users, Poteris built enough of a service to allow us to test and see if we were on the right track. We focused on only one type of conversation – recruiting a new member – as each additional scenario would need more research and background data.
The prototype was developed with a simple and clear user interface, using design conventions that reps would likely recognise from other apps in their daily lives. It is also fully functional on mobile devices.
Using the OpenAI API allowed the bot to access advanced Large Language Model functionality on a pay-as-you-go basis. For a prototype this approach is ideal, as usage costs stay low while the service is only being tested by small numbers of users.
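To illustrate why pay-as-you-go suits a prototype, a back-of-envelope cost model can be sketched as below. The per-token prices are placeholder assumptions for illustration, not current OpenAI pricing:

```typescript
// Back-of-envelope cost model for one simulated conversation.
// The default prices are assumed placeholders, not real pricing.
export function conversationCostUSD(
  inputTokens: number,
  outputTokens: number,
  pricePerMInput = 0.5, // assumed $ per 1M input tokens
  pricePerMOutput = 1.5, // assumed $ per 1M output tokens
): number {
  return (
    (inputTokens / 1_000_000) * pricePerMInput +
    (outputTokens / 1_000_000) * pricePerMOutput
  );
}
```

At prototype scale – a handful of testers exchanging a few thousand tokens each – the total sits at fractions of a pound, with no fixed hosting commitment for the model itself.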
Testing and findings
We tested the prototype tool with nine PCS reps in one-on-one interviews. Most were online, but we managed to hold one interview in person as well, which helped us observe more closely how a user interacted with the tool.
We wanted to get a range of perspectives in tests, so we had to offer a variety of testing slots – including evenings and weekends as well as weekdays – to make sure different groups of reps were able to take part. We offered a small incentive for reps’ time, in the form of an online shopping voucher. This can often help encourage a wider range of people to take part in testing, rather than leaving it only to those motivated enough to volunteer their time, who may have different and distorting perspectives on the product being tested.
We agreed a script for the interviews in advance, so that the tests would run under similar circumstances even though different people would be conducting them. We included questions to help us understand the reps’ situation and needs around recruitment conversations, then questions covering the common things we wanted insights on – the realism of the scenario and persona, the user interface, the feedback mechanism and so on.
We recorded the interviews and distilled insights from them. That included observing how the reps used the tool, and what barriers might be causing them problems. We also noted things they found off-putting in the design or content (so we should look at changing), or things they particularly liked (so we could build on them).
Most of the reps used PCs to access the tool, though we did get feedback from different devices (tablet, phone, Linux), which was very helpful in identifying how the technology available to different users allowed them to get more or less out of interacting with it.
Overall, the results from the tests were extremely positive. All the reps said they could see value in the tool and thought it could be a helpful addition to training courses. Most said something like this would be useful for themselves, even as more experienced reps.
“This is a really good tool. I wish I’d had it as a new rep. In fact, even though I’m an established Rep, I would hope that it would be made available to all of us because sometimes we need the practice.”
“For someone like myself who has social hurdles to get over, that can be quite daunting. I can say “right, I’ve done the course” but who do I go to for that little bit more support?”
Reps were very positive about the relevance and realism of the personas and conversations (given the artificial premise of writing a conversation in longer text segments). They found the user interface mostly straightforward and had very few problems understanding what they were supposed to do, or how to do it.
“It did seem like it was natural things that people would say.”
We had a lot of helpful comments about how the tool could develop in future, such as particular scenarios that could be helpful, or ways to save and build on feedback scores over time.
“I thought the actual simulation was really good, and the outcome part was really, really useful.”
Following the testing, the insights were used to refine elements of the user interface and the content of the tool, addressing points of difficulty or confusion uncovered in the tests.
Going forward
We believe the testing shows there would be a good degree of utility in this product for union reps if developed further, and also a willingness amongst many reps to engage.
It would be particularly useful for less experienced reps who have just been through training, and in building confidence for reps who find social interactions more challenging.
The prototype could be adapted to a variety of other types of conversation scenarios. For example, reps suggested conversations around retention, testing out talking points for new campaigns, or simulating negotiations and representation meetings, particularly on specialist topics such as health and safety and reasonable adjustments.
Another key improvement could be to extend the knowledge base the tool is working from, in particular when it is giving feedback to the user in relation to key technical points the union would want them to consider for this sort of conversation.
It could also be extended with an interface for organisers to set scenarios for reps to practice, have links automatically sent to the reps, and easily see the level of engagement. To do this it could integrate with other tools used by the union, including to require reps and organisers to log in to access it.
Rewards like badges for achieving high scores or completing conversations with a challenging generated persona could help with engagement, as could starting with “easier” conversation counterparts and progressively adding challenge or varying the demographic segments used for personas.
It could also be extended to include a voice chat version, whether for the conversation itself or for a discussion of the feedback depending on the user’s preference. That might improve the user experience particularly on a mobile, where typing is more difficult.
Open Source
We are also open sourcing the work done so far. Poteris and Campaign Lab are committed to making learning available to other progressive organisations that would like to develop their own organising tools.
The code for the bot is available in an online repository for anyone to download and work with. This doesn’t include the third-party AI model (which needs an OpenAI API key) or the training data from PCS. But it does include the user interface and the framework for the chat and assessment.
Alternatively, any unions interested in picking up the work for themselves could contact the TUC Digital Lab or Poteris to discuss making use of it directly.
Try it out yourself
The prototype can be seen online at: https://repcoach.org.uk – Please have a go and let us know your thoughts on whether this kind of approach could have other use cases in unions.