What is AI slop - and how can your charity avoid it?
AI is making it easier than ever to create content and get tedious tasks done more quickly.
From drafting blog posts and social captions, to summarising reports and generating ideas, what used to take hours can now take minutes. For busy charity teams with limited time and resources, that is a huge opportunity.
But there are still notable challenges when using AI for content creation. We’re seeing a rise in what’s being called “AI slop”: low-effort, low-quality content created by generative AI, often produced as clickbait or to make money. And as more of it fills our feeds, standing out (and staying credible) is getting harder. For charities, this matters more than for most organisations, because our work depends on trust.
In this guide we will explore how to spot AI slop, avoid creating it and how to make sure your organisation isn’t accidentally amplifying it.
AI slop is low-quality, generic or misleading content generated using AI. It often appears as videos or images created with tools like ChatGPT, Google Gemini or Claude. This kind of digital content floods feeds and feels like filler: vague, repetitive and lacking real insight. Yet in some cases it can be attention-grabbing and oddly enticing to watch.
You may have heard of Fruit Love Island, an entirely AI-generated TikTok series where anthropomorphised fruit engage in reality TV-level drama. Content like this is often explicit or sensational, and it can lean on harmful stereotypes to keep viewers hooked.
But AI slop isn’t limited to TikTok. What’s now being referred to as ‘brain rot’ content can appear anywhere content is generated quickly and carelessly, prioritising speed or clicks over accuracy, clarity or thoughtful storytelling.
Ultimately, poor AI-generated content is typically produced quickly, left unchecked, and often misleading or potentially harmful.
Keep an eye out for content that:
Now you know what AI slop actually is, how can you avoid creating it in your charity’s communications and content? This isn’t necessarily about avoiding AI altogether; it’s about how you use it, when you use it and how often. The key is to treat AI as a tool that supports your content creation, not one that creates it for you.
Know what you’re trying to say, who your audience is and what you want it to achieve before you open any AI tool. This helps avoid generic, unnecessary outputs.
Treat AI as a way to support your thinking and drafting, not as a replacement for it. The real quality comes from your writing, editing and judgement.
The more context you give (such as audience, tone and purpose), the more useful the final result will be. Vague prompts often lead to vague content.
Always verify stats, claims and quotes with sources and data. AI can sound confident even when it’s wrong, so nothing should be taken at face value.
Ground your content in real perspectives, organisational knowledge or lived experience to make it meaningful and credible.
If something sounds generic or could apply to any organisation, it probably needs refining or removing.
Edit thoroughly so the final content still sounds like you, not like ChatGPT!
Review content carefully to make sure it doesn’t unintentionally mislead, exclude or misrepresent people or communities.
AI makes content faster to produce, but that doesn’t mean more is better. Your charity should always focus on quality, purpose and relevance.
It’s not just about what you create, it’s also about what you choose to share and engage with on your community’s social accounts. Sharing AI slop, even unintentionally, can spread misinformation and weaken trust in your own channels.
This is where community moderation is especially important. Make sure you or your team are actively reviewing comments, posts and interactions on your channels and feeds, not just publishing content and moving on. AI-generated or misleading content can easily appear in replies, tags or shared posts, so having a clear approach to moderation helps protect the integrity of your space. Check out Strawberry Social’s AI and Moderation Hub for more!
Ultimately, strong community moderation helps protect trust by ensuring your channels remain a credible, safe community-led space in the face of rising AI slop content.
Using AI for content creation isn’t necessarily the problem; it’s how it’s used and reviewed. By staying thoughtful, intentional and human-led in how you create, share and moderate content, your charity can use AI to enhance its work without losing the trust and authenticity that sits at the heart of everything you do.
Be sure to keep an eye out for upcoming cohorts of our AI Charity Academy and read our charity guide to AI prompting.