AI-generated stickers representing the views that AI is transformative, overhyped or hazardous. Worn by attendees to the event as conversation-starters. Photo: TUC

What could AI do for unions? And where should we start?

On 7th March, the TUC Digital Lab held our first workshop on generative AI for unions at the NUJ’s offices in King’s Cross. Over 40 colleagues attended from across TUC-affiliated unions, showing the depth of interest in a topic that’s causing so much hype, but also concern, for both union members and unions as organisations.

It’s this latter area – whether AI might help or hinder unions as organisations – that this Digital Lab session focused on. We had a long day: hearing from expert speakers, grappling with the complex questions and trade-offs that AI poses for us, and finishing with an opportunity to experiment directly with some of the most popular generative AI tools to see how they might solve specific problems.

There is a wide range of possible responses to AI, encompassing the political, philosophical, economic, organisational, strategic and tactical. It obviously wasn’t possible to cover them all in such a short time, but the conversation was wide-ranging, highlighting areas where Digital Lab participants felt unions needed to learn more and begin a programme of collaborative work.

Why talk about AI?

AI hype is around every corner. 

Commentators tell us this is big, transformative stuff where AI ushers in a world of abundance, solving problems across health, climate change and clean energy, agriculture, work and more. On the flipside, the fear is that it leads us in a negative direction, where AI entrenches negative trends in society, overtakes our ability to manage it, or ultimately decides it no longer needs us.

We are currently in an “AI summer” of massive excitement and investment. But over the last several decades, every such summer has been followed by an “AI winter”, where apparent advances run into dead ends and disappointment.

For the current wave, if you start with optimism, generative AI (our focus for this session) can do incredible things: writing prose, music and poetry; generating ideas; working across languages; creating images, drawings, photos and diagrams; analysing data; and more. It can do the boring things to save you time. It can extend your abilities, helping you attempt things you can’t do already.

ChatGPT, which kicked off this explosion 18 months ago, seemed to offer much of this out of the box, and as a result it is (apparently) the fastest consumer software in history to reach 100 million users.

But you can also be pessimistic, because generative AI doesn’t “know” anything; it merely predicts what is likely to come next. It also lacks wider context, gets facts wrong, replicates human biases, is fiddly to use, struggles to produce things that are truly distinctive, poses privacy risks, carries high financial and environmental costs, exploits the labour of its sources and trainers, is insufficiently transparent and under-regulated and, ultimately, dehumanises and devalues us.

More practically, on the pessimistic side of the ledger: though the big AI companies continue to release new versions of their software with more capabilities (usually the ability to ‘read’ more text as context), there’s some evidence that usage of their services has already started to decline, as people try them out, struggle to find routine uses and wonder what the fuss was about.

It would be wrong, though, to say that these tools will end up useless, even if things turn out quite different from the techno-optimists’ predictions. There’s simply too much at stake for the companies involved for them to deliver no meaningful change to the way we do our work. A lot will change over the coming years, and this is the start of a journey for us all.

What views did people bring with them?

As people arrived, we asked them to consider four questions, based on what they knew already (it’s been very hard to escape the hype and coverage), to open up the conversation:

  1. Do you feel AI is transformative, harmful or over-hyped?
  2. What excites you about it? And what worries you?
  3. Is your union ready for AI? Where might you start?
  4. What don’t you know that you’d like to know?

Given the diversity of unions and roles in the room, we heard a wide range of responses. On the first question, answers ranged from “both” to “somewhere in between” to “that’s not the right question at all”. For the second, participants were excited about the idea of being able to extend their stretched resources, but the worries were manifold, around issues of privacy, consistency, transparency, trust and job replacement. 

No union thought it was “ready for AI”, and none had a specific AI plan, strategy, set of guiding principles or working group in place. At the same time, all unions believed that some staff and reps were probably already experimenting with it in the absence of this support. As a result, the appetite for knowledge was strong, with participants keen to learn as much as possible.

So we started by hearing from some folks who have been there and done that (at least as much as that’s possible for such new tech).

Hearing from the experts

We heard from three experts, each offering a different perspective on how organisations can approach AI.

First was Mimmi Toreheim from the comms team at union federation LO Sweden, who have been developing their understanding of the potential for using AI in their day-to-day work, as a live innovation project.

Working with RISE (Research Institutes of Sweden), they started with a series of workshops for team members, understanding what generative AI was and how common tools such as ChatGPT, Adobe Firefly and Microsoft Copilot worked. In the workshops, team members started to identify potential use cases in their own work where an AI tool might save them time or simplify a task.

Team members then experimented with these tasks, reporting back to the rest of the team at regular drop-in sessions with RISE, to swap insights or ask questions.

Speechwriters used the AI for research, or to fine-tune difficult parts of their speech drafts, running a section through the tool to find other potential ways to communicate the same concepts. 

Social media content editors used it to summarise long reports, discovering potential points of interest for different audiences that they could use in social posts. It also helped them to generate draft social content from longer texts, such as making questions and answers for Instagram quizzes.
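To make that concrete, here’s a minimal sketch in Python of the kind of prompt involved, using the OpenAI API rather than the chat interfaces the team actually worked in. The model name and file name are placeholders, not details of LO’s setup:

```python
# Minimal sketch: turning a long report into draft quiz content for social media.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY env var.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

report_text = Path("report.txt").read_text(encoding="utf-8")  # hypothetical input file

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You write engaging social media content for a trade union."},
        {"role": "user",
         "content": "Summarise the key findings of this report, then draft five "
                    "question-and-answer pairs suitable for an Instagram quiz:\n\n"
                    + report_text},
    ],
)
print(response.choices[0].message.content)
```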

Website colleagues used the tools to get a better understanding of how they could optimise particular pages for search engines. Generating AI summaries helped show how algorithms might view the content on their pages, and whether they should reprioritise it.

Designers used AI to adapt images, such as extending image backgrounds where designs needed space for copy to be added. They also sourced a dedicated Digital Asset Management (DAM) platform for their licensed photography, allowing them to search for assets by describing what they wanted in a picture in natural language, rather than needing to tag images extensively in order to find them later.

Overall the project has been a success, identifying a number of areas where staff time could be made more productive, allowing communicators to focus on more complex parts of their work. They’ve also learned some important lessons during the project.

The approach LO took, of voluntary and well-supported experimentation, helped considerably when it came to working with the different groups of colleagues who did or did not want to use AI. Setting safe usage guidelines, and getting people to share their learning with colleagues, helped those who were keen to engage. And they were often the best people to bring other colleagues along with them, showing that the time they spent on the project was paying off for them. 

LO’s experimentation also helped them understand the critical balance between using AI for efficiency and ensuring the integrity and authenticity of their work. For example, they had to consider the security of working on sensitive or unreleased content and data – making sure they were using only those tools that they knew would respect their organisational data. 

They developed guidelines to safeguard against reputational risk from the misuse of AI, such as clarifying the limited situations in which AI-generated imagery could be allowed. They devised red lines such as never using AI to depict people. 

And their journey has shown them the importance of familiarising communications staff with AI, not just for immediate organisational benefits but also for enhancing staff’s own employability and adaptability in a rapidly evolving job market.

Second was Craig Parker from Microsoft’s Social Impact team, who took us through the ways Copilot (Microsoft’s AI assistant) is being woven into its suite of software. It’s available at several different levels:

  • Copilot in Windows: A basic AI assistant chatbot, which appears in its own window for Windows 11 users, or via https://www.bing.com/chat.
  • Copilot for Microsoft 365: Additional AI functions integrated into many Microsoft Office suite apps.
  • Copilot Studio: A tool to build and license your union’s own custom AI chatbots.

Craig ran live demos of Copilot for Microsoft 365: drafting documents in Word, responding to emails in Outlook, scheduling meetings and transcribing them with Teams, analysing data in Excel spreadsheets and more.

The group had questions for him, particularly about the way corporate data might be used to train the model (his answer: it isn’t, so long as the user is logged into an enterprise Microsoft 365 account that the union provides).

He also discussed licensing costs. Unions don’t currently qualify for non-profit pricing (a topic we agreed to take up), so we have to pay full price for our Microsoft 365 accounts and any deeper Copilot functionality.

Having an enterprise account entitles users to a version of Copilot chat that secures corporate data and doesn’t save chats or train the AI on your content. However, unions should be aware that any reps using it without a union-paid Microsoft 365 account won’t have this level of data protection.

Copilot for Microsoft 365 is an additional subscription for every Microsoft 365 user, costing another £300 a year on top of the Office licence. Its functionality is growing all the time, with Office apps gaining new Copilot enhancements on a regular basis. Because this is a copy of Copilot within your own Microsoft tenant, you can train it on internal document sets to better reflect your context, without allowing Microsoft to train the AI on them more widely.

Finally, we heard from Hannah O’Rourke from Campaign Lab, who took us through some tools for campaigning that she’s been developing with hackathon participants. 

These included some really interesting ideas, where people had built layers and interfaces on top of AI models to provide additional or tailored functions.

For example, one wizard allowed users to generate customised campaign letters for different groups of voters, based on profiling characteristics and preferences.
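As an illustration of how such a wizard can work, here’s a minimal sketch in Python: profile fields are slotted into a prompt template and sent to a model. The field values, function name and model are all hypothetical; Campaign Lab’s actual tool isn’t public.

```python
# Minimal sketch of a campaign letter "wizard": voter-profile fields are slotted
# into a prompt template, and the model drafts a letter tailored to that audience.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def draft_letter(issue: str, audience: str, tone: str) -> str:
    """Generate a draft campaign letter for one audience segment."""
    prompt = (
        f"Draft a one-page campaign letter about {issue} for {audience}. "
        f"Use a {tone} tone and end with a clear call to action."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Hypothetical example values for one voter segment:
print(draft_letter("local bus service cuts", "retired voters in rural areas",
                   "respectful, practical"))
```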

Another was a chatbot to train grassroots campaigners for doorstep conversations by mimicking a realistic back-and-forth dialogue, then rating the quality of the interaction and providing feedback.
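A rough sketch of that pattern, using the same OpenAI API assumptions as above: the model is instructed to role-play a voter, and after a short exchange is asked to step out of character and score the campaigner. This is a simplified reconstruction, not the hackathon tool itself.

```python
# Minimal sketch of a doorstep-conversation trainer: the model plays a sceptical
# voter, then switches role to rate the campaigner's side of the exchange.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

messages = [{"role": "system",
             "content": "Role-play a sceptical voter answering the door to a "
                        "campaigner. Stay in character and raise realistic objections."}]

for _ in range(3):  # a short practice exchange
    messages.append({"role": "user", "content": input("You (campaigner): ")})
    reply = client.chat.completions.create(model=MODEL, messages=messages)
    voter_line = reply.choices[0].message.content
    print("Voter:", voter_line)
    messages.append({"role": "assistant", "content": voter_line})

# Ask the model to drop the persona and give structured feedback.
messages.append({"role": "user",
                 "content": "Stop role-playing. Rate the campaigner's side of this "
                            "conversation out of 10 and give two concrete tips."})
feedback = client.chat.completions.create(model=MODEL, messages=messages)
print("\nFeedback:", feedback.choices[0].message.content)
```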

How might unions actually use AI?

Based on our work in the morning, participants spent the afternoon working on principles and questions to help unions think about how to use AI in a responsible, pragmatic way.

The group proposed a long list of questions (we’ve grouped them here, and put the ones voted most important at the top of each section):

Principles and structures

  • What policy do we need?
  • How can AI, unions and democracy co-exist?
  • How can we answer our members’ concerns about the use of AI at their work?
  • How can we work to avoid bias in our input and output from AI?
  • How can we align our AI work with a wider set of union principles?

Usage

  • How can we use data with AI legally and responsibly?
  • How can we stay factual?
  • How can we avoid bias?
  • How do we ensure a human is still ultimately accountable?

Technology

  • How do we avoid over-reliance on big tech and/or a single set of tools?
  • How do we choose the right tools?

Organisational

  • How can we manage our members’ concerns about our use of AI?
  • How can we collaborate as unions?
  • How can we ensure we’re training ourselves properly?

Each of these is a challenging question that unions will want to be reassured about, develop policies and plans for, and experiment with over the coming months and years before the adoption of AI tools could ever become ‘business as usual’. As for all organisations grappling with the potential scale of change ahead, there’s a lot to consider.

Trying out AI tools

Based on everything we’d discussed during the day, we asked participants to suggest concrete ideas they’d like to explore further, picking groups to discuss and test out. These included:

  • A chatbot to answer questions from a defined set of HR policies.
  • Creating a plan for an organising campaign at a large online retailer.
  • Analysing the responses to survey questions.
  • Creating plans to organise hard-to-reach and migrant workers.
  • A meeting minute taker and formatter.
  • A retention predictor.

We weren’t able to build them all, but every group had a go at something. One of the main limitations people found was that the free version of ChatGPT currently doesn’t allow much interaction outside of the model (e.g. with the web, or with uploaded files). The groups with access to ChatGPT Plus (£20/month) were able to do a bit more, armed with a more powerful model and the ability to create standalone GPTs (single-purpose AI ‘bots’, tailored by pre-setting a preferred context, style of answer or specific information to work from).

These custom GPTs turned out to be useful for ‘reading’ policy documents and answering questions about them in an accessible way. That’s not to say this would be ready for real-world usage after 15 minutes of configuration, or perhaps at all. But with proper testing, by a number of users across a lot of questions, you would be better able to gauge its reliability.
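Under the hood, a custom GPT of this kind is essentially a standing prompt wrapped around the model. Here’s a minimal sketch of the same idea via the OpenAI API, assuming a short policy document that fits in the model’s context window (the file name, model and question are all placeholders):

```python
# Minimal sketch of a policy Q&A bot: the policy text is supplied as context and
# the model is instructed to answer only from it, reducing (not eliminating)
# the risk of made-up answers. Longer policy sets would need a retrieval step.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

policy_text = Path("hr_policies.txt").read_text(encoding="utf-8")  # hypothetical file

def ask_policy_bot(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Answer questions using ONLY the policy text below. "
                        "If the answer isn't in the text, say you don't know.\n\n"
                        + policy_text},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask_policy_bot("How much annual leave am I entitled to?"))
```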

Another group tried to get ChatGPT to write a plan for an organising campaign. While many of the ideas it suggested were perfectly valid, some of the supporting evidence it provided for its plan seemed made up (it didn’t pass the sniff test for the experienced organiser running the experiment) and, when questioned, it admitted as much.

One group worked on analysing survey responses. ChatGPT worked well for producing summaries and syntheses of free-text data, but less well for generating charts or performing specific statistical analysis. This highlighted some of the limitations of ChatGPT, which performs worse at data analysis than more specialist tools such as Perplexity. But it also illustrated a dilemma for unions: limiting the number of new tools in use, versus using the right tool for each job.
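For the free-text synthesis that did work well, the approach amounts to batching responses into a single prompt and asking for themes. A minimal sketch, assuming a CSV with a free_text column (both the file and column names are illustrative):

```python
# Minimal sketch of free-text survey synthesis: responses are numbered, batched
# into one prompt, and the model groups them into recurring themes.
# (A real survey might need chunking to fit the model's context window.)
import csv

from openai import OpenAI

client = OpenAI()

with open("survey.csv", newline="", encoding="utf-8") as f:
    answers = [row["free_text"] for row in csv.DictReader(f) if row["free_text"].strip()]

numbered = "\n".join(f"{i + 1}. {a}" for i, a in enumerate(answers))

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user",
               "content": "Group these survey responses into recurring themes. For "
                          "each theme, give a short name, a one-sentence summary and "
                          "the numbers of the responses it covers:\n\n" + numbered}],
)
print(response.choices[0].message.content)
```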

The group working on connecting with migrant workers found potential in its ability to translate into any language, but expert knowledge in the team quickly showed up the superficial level of ChatGPT’s understanding of the area without any specialist training information. More worrying was its propensity to simply make up convincing-sounding suggestions where it didn’t know the answer, which less expert users would find harder to spot.

In sum then, the groups were quickly able to set up experiments, but their accuracy and comprehensiveness were definitely open to question. 

What next?

The last activity of the day asked Digital Lab participants to try to describe what they’d need to move forward with a programme of work around AI for their union.

Suggestions included:

  • The creation of template policies/guidance for AI use in unions.
  • Development of more real-world case studies.
  • Research into different key tools, to better assess some of their consequences.
  • More training, particularly in the practical use of specific tools such as Microsoft Copilot.
  • Glossaries of AI concepts as they might apply to unions, to help leaders better understand implications.
  • Development of a helpful prompt library for union workflows.
  • Ongoing networking on the topic between TUC affiliates.

We’ll be following up on some of these ideas over the coming months. 

If you have further ideas for how unions should think about the adoption of generative AI, are looking for support for a project or pilot, or want to find another union to collaborate with, please get in touch.


Sam Jeffers (of Join Together and The Shop) is a consultant to the TUC Digital Lab, and facilitator for the Digital Lab workshop series.