Red Train Blog

Ramblings to the left

The Red Train Blog is a left-leaning politics blog that mainly focuses on British politics and is written by two socialists. We are Labour Party members, for now, and are concerned with issues such as inequality, nationalisation, housing, the NHS and peace. What you will find here is a discussion of issues that affect the Labour Party, the wider left and politics as a whole.


The pressure on mental health services under late capitalism and how art therapy can help in the fight against the far-right: A conversation with activist and art psychotherapist Cat MacGregor

February 23, 2026 by Alastair J R Ball in Activist interview

On a winter call, activist and art psychotherapist Cat MacGregor talks about building “third spaces” for collective care: from The Post Bar sessions that blur support group and art workshop, to a worker-led art therapy co-op trying to survive in London despite funding droughts and an increasingly rigid mental health system.

Alastair: How did the art therapy open group on processing the rise of the far-right at The Post Bar go?

Cat: “Yeah, it was a really interesting session. It’s different from ways I’ve worked previously, which was really engaging. We were doing one of these groups that blur the lines between an art therapy group and a support space. That’s something I’ve been thinking about in terms of the work I do: how we navigate these different spaces and communities utilising these third spaces, and what comes up through that.

“We had a nice small group. It was a workable size: about 10 of us in the end, and it was quite fluid. It was great to have a mix of members, backgrounds and experiences, in terms of people who weren’t brought up in the UK, people who have immigrated here, and people from the UK.”

What themes surfaced in the room? 

“I’m thinking about how to hold the confidentiality of the space while still speaking to the kind of themes we had. There was quite a bit of sharing of experiences of navigating these conversations around xenophobia and racism, and what comes up in terms of microaggressions, or experiences of being othered, particularly for those who are not from the UK; thinking about how that’s managed or engaged with, the frustrations around that, and the specific way that hits with British culture: this very polite or backhanded culture where meaning is not overtly said, or things are only alluded to.”

You mentioned anti-fascist organising came up too. What did that look like?

“It was interesting. Some of the participants were talking about having those kind of explorative conversations with people who maybe are on the far-right, and they were talking about that experience of engaging with them. We got to think about the different positions people are in within our anti-fascist movement, about who gets those kind of privileges to safely have certain conversations. That was quite an important conversation as well. It was very focused on how we engage and how we bring empathy.”

Can you give an overview of your practice and how you got here? 

“I began training as an art psychotherapist about eight years ago, coming up to nine years ago. I came from an art background, went to art school and did the usual thing of, what on earth do I do with this? And where do I take this? I don’t know anyone in the arts. I don’t know how to navigate this, but I really wanted to be doing something that had social value.

“I was looking at a way of marrying those things, and for me, interestingly, the way it came together was I did some activism in the West Bank about 10 years ago, just before I moved to London, and I met a couple of art therapists there, who I ran some projects with.”

What shaped your political approach to therapy as you trained?

“I got really lucky. I got to do one of my placements at a therapeutic community called Studio Upstairs, which is in Dalston, and it’s been going since the 30s. It was really born out of the anti-psychiatry movement, and thinking about mental ill health through a social lens, rather than the medical model. I found that to be a formative experience, and it really aligned with my politics and thinking.”

You’ve been outspoken about how the system pushes art therapy into rigid models. What did you see in public services?

“After I qualified, I found myself in these jobs in the public sector that were just utterly gruelling and really disempowering. A lot of art therapy work, particularly in CAMHS (the Child and Adolescent Mental Health Services, where most of the jobs are), has been bastardised into becoming these CBT roles, and in some jobs they’re now making art therapists do mandatory CBT training.

“So they’re really trying to turn all of the mental health services into this homogenous block. It just was a way of working that I couldn’t really comply with for very long.”

And in the charity sector, did it feel different?

“So I went to work in the violence against women sector. I got to practice in a way that was much more aligned with myself as a practitioner, and with my politics, but the scarcity of resources meant that there was so much of what I felt was exploitation, a lot of poor management and really very toxic work environments. I think a lot of people I know who work in charities experience the same things.”

You’ve worked with grassroots organisations too. What’s that work, and what are the barriers?

“It’s a community-run charity started in response to the Grenfell Tower fire; a very small charity. The problem we have there is a complete lack of funding and an ongoing, complicated dynamic with the local authority.”

Is that what pushed you toward founding a workers’ co-op?

“I guess all these experiences were what led to thinking about setting up a workers’ cooperative and trying to find a new way of doing things: to attempt to create a new way of working. I was looking for a way of working outside of exploitation, because I couldn’t find any models where that [exploitation] doesn’t become a part of it, where you’re not having to sacrifice a big part of who you are as a practitioner and as a worker.”

Tell me more about TAT, what it is, and what you’ve built so far.

“The co-op that I set up with two other art therapists, we call it TAT, but it’s The Art Therapy co-op. We set that up a bit over four years ago now. We set up our own space where we’re doing a mixture of private practice work and voluntary work. It feels like we’re very much on a journey. Literally, I’ve had a meeting with someone about a space, so I think it might finally be becoming a reality.”

You shifted the co-op’s model toward worker support. What does that look like in practice?

“So we started setting up these sessions called ‘not a workshop’ and that was just to have an afternoon of open art making and being together, and they’re open to anyone who works in the charity sector or public sector, education, care, paid or unpaid. We call it people supporting others in unsupportive environments.”

You also ran an alternative art therapy conference. What were you responding to?

“The British Association of Art Therapists have a conference, but it’s £200 to £250 to go for a day and sit in the Wellcome Collection. It’s just so unattainable. So we decided to set up our own. We did that this year. We had about 50 attendees, which was our maximum; we were sold out. We did an open call-out, with lots of responses, to facilitate different sessions using the unconference model. It was great and so empowering.”

Your work is explicitly political, and you argue that art keeps the “mystery” intact. Can you explain that?

“All things are political, and hold politics within them, whether we engage with it or notice it. It’s always there and we are unable to remove it.

“Thinking about the brain as some sort of computer system that you just need to enter the right codes into is a really unmagical and kind of depressing way to be with the sheer complexity and mystery of what it is to be human.”

How does that link to the anti-far-right work you’re developing?

“My thinking about that group in particular came out of my own personal experience as an activist: the experience that I had on the anti-Unite the Kingdom demonstration. I was really feeling this experience of being at a tipping point where they [the far-right] were in control of London that day.

“It did ignite these responses in me about the experience of fear and needing a space to process the emotional impact of this. What happens to the emotional life of all of us as individuals who are having to carry this burden? There’s so much to despair about, and it can feel so relentless. How do we keep that fight going, and the longevity that’s needed?”

Finally: what’s on the horizon for you and the co-op?

“Next year we will keep going with our unconference and developing that to be a bigger one. Also, hopefully finding a space to become our home. I’m also looking at setting up a workers’ assembly.

“The system doesn’t want us to exist, and it’s pushing us down and trying to morph us into something else. How do we create work outside of that system? Because we know the need is there. We are just going to have to do it ourselves. Community is the only way. Trying to build a secure base, because we need to get to the point of having a secure base so that we can steady ourselves and brace for the storm.”


Better content moderation for better discourse: A conversation with Madhuri Rahman of WeLivedIt.Ai

December 11, 2025 by Alastair J R Ball in Technology, Activist interview

Online discourse has been soured by extremism and hate speech. This is especially the case for marginalised people who experience more abuse online and whose content gets moderated more severely. 

Surely there is a way to improve how we debate online? Maybe AI and large language models could be useful in detecting hate speech? Especially if the communities most at risk from toxic online content were able to use their lived experience to help train the AI models.

That’s what WeLivedIt.Ai aims to do: use marginalised people’s lived experience to improve content moderation. I chatted with Madhuri Rahman, co-founder of WeLivedIt.Ai, about their work, big tech, the current ‘one size fits all’ approach to moderation, and the Online Safety Act 2023.

Alastair: What prompted the creation of WeLivedIt.ai, and what gap in moderation and online safety were you aiming to fill?

Madhuri: “My co-founder and I met in the token engineering space. We were looking at voting mechanisms, and that led us to think about lived experience in technology development in general. Hate speech and online toxicity is one of those crucial areas where having lived experience of the problem gives you a unique perspective in designing a solution, because if you’ve experienced certain marginalisation then you’re more likely to receive backlash and hate in a particular area, and you’re more likely to recognise it when it’s more subtle. That is what led us to then look at solutions that are AI-driven.”

What distinguishes WeLivedIt.ai from the existing moderation tools platforms already use?

“At the moment, one-size-fits-all moderation only really benefits big tech platforms, and then with servers like Discord there’s a lot of human moderation going on.

“The first thing [WeLivedIt.ai offers] is entering your context to tailor the search. The second thing is being able to collaboratively define what toxicity looks like in your space by voting on real data examples, and then seeing what the consensus has been on different types of comments and speech. The third thing is using [this data] to train the model. What we also want to do then is to have the model training be as transparent as possible.

“What we're trying to do, which is different, is to make it all friendly for the individual user, because all content moderation tools out there now are like: ‘Okay, we'll do more context aware moderation,’ but they're still consultancy firms [where] you have to go through an individual who asks your needs, and the data already comes labelled. So, [there are] biases that exist in these big tech models. We're working towards moving away from that once a community puts in the work to labelling the training data that is filtered out from their online space. So, it's not random data. It's their lived experience, which is why we called it WeLivedIt.ai because we're trying to connect lived experience to this development of a big tech solution that could help lots of people.”
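The three steps Madhuri describes (entering a community's context, collaboratively voting on real examples, then training a model on the consensus) can be illustrated with a minimal sketch of the middle step. This is not WeLivedIt.ai's actual implementation: the function names, the 60% agreement threshold and the sample data are all illustrative assumptions about how consensus labelling of training data might work.

```python
from collections import Counter

def consensus_label(votes, threshold=0.6):
    """Return the majority label for a comment if agreement meets
    the threshold, otherwise None (no consensus reached yet)."""
    if not votes:
        return None
    label, count = Counter(votes).most_common(1)[0]
    return label if count / len(votes) >= threshold else None

def build_training_set(community_votes):
    """Keep only comments where the community reached consensus;
    these labelled examples would then feed model training."""
    dataset = []
    for comment, votes in community_votes.items():
        label = consensus_label(votes)
        if label is not None:
            dataset.append((comment, label))
    return dataset

votes = {
    "example comment A": ["toxic", "toxic", "toxic", "ok"],
    "example comment B": ["ok", "toxic"],  # split vote: excluded
}
print(build_training_set(votes))  # → [('example comment A', 'toxic')]
```

The design point this sketch captures is that the labels come from the community's own votes on data filtered out of their space, rather than from a pre-labelled big tech dataset, so the resulting training set reflects their lived experience of what counts as toxic.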

How does the lived experience of marginalised people online inform the tool?

“We’re trying to focus on people that experience hate speech directly to start with. So, you’ve got death threats, violence, demonising and dehumanising language, negative characterisations: rhetoric that just calls people crazy, for example, or aggressive. These are the different intensity levels, and we’re starting with that first one, death threats, the kind of hate speech that’s about online safety.

“This is hate speech identification. We're not stopping anyone from posting anything or saying anything, but the person that has lived experience of the problem has a choice as to whether or not they want to see that.

“Then you’ve got things like ‘make me a sandwich.’ Which of these categories would this fit into? The connotations around that kind of thing these large language models in general won’t pick up, because it’s really nuanced; it’s a trending piece of hate speech. But if you’re aware of Andrew Tate and the misogynist rhetoric that is going around online, then you can submit that [and] the community can vote on whether they want that there or not.”

How are you making sure you are getting a broad representation of lived experience of all different areas of hate speech?

“That’s an ongoing challenge, but our approach differs significantly from existing moderation tools. We’re not trying to create one-size-fits-all solutions, which inevitably struggle to represent everyone. That means we don’t face the same pressure to build a single, universal dataset that works everywhere. The question isn’t ‘have you represented everyone perfectly?’ That’s impossible. The question is: ‘When a collective representing a marginalised community joins your platform, can they teach the system about their toxicity patterns better than they could teach Twitter or Meta?’

“We think yes, because:

  • They control the configuration directly

  • They provide examples from their lived experience

  • They can correct the AI when it's wrong

  • They're not competing with billions of other users for the platform's attention

“We’re also realistic: this requires communities to have the capacity to onboard, configure, and maintain their models. That’s a significant ask, especially for already-marginalised groups. That’s why it’s important for us to ensure that we acknowledge and centre mental wellbeing and digital agency in the work we’re doing. People aren’t just involved to take on the labour of fixing a system that works against them. They’re accessing community and people-power to reclaim online spaces from the toxic minority.

“We're trying to build a system where representation can happen bottom-up rather than top-down. We'd rather launch with transparency about limitations and improve through partnership with communities than claim we've solved a problem we haven't.”

Minority content is often over-moderated. How do you deal with that challenge?

“There’s a lot of research which still shows that marginalised communities get over-moderated, which is crazy because they’re the ones experiencing a lot more of the online toxicity. But because, for example, LGBT content has historically been more likely to be seen as sexual and adult, there’s been a lot of over-purging of that content online.

“We’re starting at the safety level, but when I talk to a lot of people and we talk about hate speech, they will then talk about politicians [and] hate speech in terms of divisive speech in the way people talk about others, like specific identities or communities. It’s similar rhetoric when a new kind of identity becomes the target for the divide-and-rule political strategy that uses the same kind of language. It’s always ‘they’re inherently violent’, ‘they’re unnatural’, ‘they are a danger to children, to women’. All the same kinds of narratives come out, and that is something that I’m exploring. What [we’re doing] right now is looking at filtering out pure hate to keep people safe.”

Are you seeing new forms of hate speech with generative AI, such as deepfakes or bots?

“We haven’t focused on that yet, but I am having a call with the LGBT Foundation because they recently ran a national campaign called ‘This is what a woman looks like’ and it was a trans positive campaign, and it ended up being derailed. They [the derailers] used certain pictures that were nasty and then were posting them to disrupt the campaign entirely. It was definitely bots. 

“I want to talk to them and understand what specific patterns they saw during the campaign, and what they did, how they responded to it, and whether that's something that content moderation can do more to catch and deal with.

“In fact, I think it would be easier if you know what your campaign is and what you’re going to post: having the context and being able to engage in data training for the model earlier would make it easier. It’s just about whether the organisation wants to engage in doing it or not.

“That’s definitely a big, I was going to say, upcoming risk factor, but some of the deepfake videos coming out are already really quite nasty. It’s something that content moderation definitely should be flagging.”

The UK’s Online Safety Act has just come into effect. How do you see regulation shaping your work?

“Tech companies will now have to assess the risk their platforms pose of disseminating the kind of racist misinformation that fuelled last year’s summer riots. The fact that there is something that now says that tech companies have to do that is good, it just doesn't really solve the issue that these content moderation tools don't work.”

Looking ahead, what’s your vision for the future of WeLivedIt.ai?

“At the moment we're working with journalists, but we want to expand to people who are visible online. That is because people who historically have faced a lot of toxicity are really good at identifying it.

“[We’re] also trying to capture more research and data around what the problem of online toxicity looks like, but also how we can improve the way AI is trained to solve that problem as well. We want to do a lot more.

“With the Online Safety Act, the part that’s come into effect at the moment relates to child safety, but by 2027 platforms will have to report on the risk they pose in disseminating misinformation. So, we would like to be able to contribute to that conversation, whilst also helping people that have lived experience of the problem.”

“Also, we will label data to train the model on catching [hate speech] better. That’s something we can then also work with platforms and organisations on, so they can see what the level of misogyny is on their platform by accessing that data. We want to create an ecosystem and try to contribute to improving regulation. It’s a whole new minefield anyway, with tech regulation, but the ideal would be completely shifting the way tech platforms are regulated while keeping people safe online.”

You can find out more about WeLivedIt.ai by visiting their website or following them on LinkedIn to keep up to date with their mission to protect marginalised communities from hate speech online.


