
How your charity can avoid creating ‘AI slop’


7 May 2026

by Nicholas McDonald

Communications and Social Media Manager at Media Trust.

Article

What is AI slop - and how can your charity avoid it?

AI is making it easier than ever to create content and complete less interesting tasks more quickly. 

From drafting blog posts and social captions, to summarising reports and generating ideas, what used to take hours can now take minutes. For busy charity teams with limited time and resources, that is a huge opportunity. 

But there are still notable challenges when using AI for content creation. We’re seeing a rise in what’s being referred to as “AI slop”: content created by generative AI that is low in effort and quality, often produced as clickbait or to generate revenue. And as more of it fills our feeds, standing out (and staying credible) is getting harder. For charities, this matters more than most, as our work depends on trust. 

In this guide, we will explore how to spot AI slop, how to avoid creating it and how to make sure your organisation isn’t accidentally amplifying it.

What is AI slop?

AI slop is low-quality, generic or misleading content generated using AI. It often appears as videos or images created with tools like ChatGPT, Google Gemini or Claude. This kind of material floods feeds and feels like filler: often vague, repetitive and lacking real insight. Yet in some cases, it can be attention-grabbing and oddly enticing to watch. 

You may have heard of Fruit Love Island, an entirely AI-generated TikTok series where anthropomorphised fruit engage in reality TV-level drama. Content like this is often explicit or sensational, and it can lean on harmful stereotypes to keep viewers hooked. 

But AI slop isn’t limited to TikTok. What’s now being referred to as ‘brain rot’ content can appear anywhere content is generated quickly and carelessly, prioritising speed or clicks over accuracy, clarity or thoughtful storytelling. 

Ultimately, poor AI-generated content tends to be produced quickly, left unchecked, and is often misleading or even harmful. 

How to spot AI slop

Keep an eye out for content that: 

  • Sounds generic: Could this apply to any charity? If yes, it’s probably AI filler.
  • Makes claims without proof: Stats, quotes or facts should always be checkable and have linked sources.
  • Feels repetitive: Same phrases and zero nuance.
  • Appears ‘off’: AI-generated images often have subtle glitches, like extra fingers, weird backgrounds and mismatched proportions.
  • Leans on shock or scandal: AI slop often uses sensationalism to grab attention. 

How to avoid creating AI slop

Now you know what AI slop actually is, how can you avoid creating it in your charity’s communications and content? This isn’t necessarily about avoiding AI altogether; it’s about how you use it, when you use it and how often you use it. The key is to treat AI as a tool that supports your content creation, not one that creates it for you. 

Start with a clear idea in mind

Know what you’re trying to say, who your audience is and what you want it to achieve before you open any AI tool. This helps avoid generic, unnecessary outputs.

Use AI as a starting point, not a final writer

Treat AI as a way to support your thinking and drafting, not as a replacement for it. The real quality comes from your writing, editing and judgement.

Be specific with your prompts

The more context you give (such as audience, tone and purpose), the more useful the final result will be. Vague prompts often lead to vague content.

Fact-check everything

Always verify stats, claims and quotes against sources and data. AI can sound confident even when it’s wrong, so nothing should be taken at face value.

Add human insight and lived experience

Ground your content in real perspectives, organisational knowledge or lived experience to make it meaningful and credible.

Cut vague or filler language

If something sounds generic or could apply to any organisation, it probably needs refining or removing.

Keep your charity’s values and voice front and centre

Edit thoroughly so the final content still sounds like you, not like ChatGPT!

Sense-check for accuracy, sensitivity and bias

Review content carefully to make sure it doesn’t unintentionally mislead, exclude or misrepresent people or communities.

Don’t overproduce just because you can

AI makes content faster to produce, but that doesn’t mean more is better. Your charity should always focus on quality, purpose and relevance.

How to avoid AI slop on your feed

It’s not just about what you create; it’s also about what you choose to share and engage with on your community’s social accounts. Sharing AI slop, even unintentionally, can spread misinformation and weaken trust in your own channels.

This is where community moderation is especially important. Make sure you or your team are actively reviewing comments, posts and interactions on your channels and feeds, not just publishing content and moving on. AI-generated or misleading content can easily appear in replies, tags or shared posts, so having a clear approach to moderation helps protect the integrity of your space. Check out Strawberry Social’s AI and Moderation Hub for more!

Ultimately, strong community moderation helps protect trust by ensuring your channels remain a credible, safe community-led space in the face of rising AI slop content.

Your next steps

Using AI for content creation isn’t necessarily the problem; it’s how it’s used and reviewed. By staying thoughtful, intentional and human-led in how you create, share and moderate content, your charity can use AI to enhance its work without losing the trust and authenticity that sits at the heart of everything you do. 

Be sure to keep an eye out for upcoming cohorts of our AI Charity Academy and read our charity guide to AI prompting. 
