Sifting Through Slop
Finding value in a world awash with AI content
It’s not often that a story about technology in 2025 starts with the Catholic Church, but this one does. In the spring of ’23, pictures of Pope Francis greeting people in an enormous white puffer jacket began percolating on Reddit. Though the pictures gave the Pope a more aggressive look than one typically expects from papal style, there wasn’t anything that could be called offensive about them. Perhaps the Pope had found a way to blend Catholic tradition with hip-hop culture. Alas, the truth proved to be banal; the Pope does not own a puffer jacket, much less wear one around Rome. The pictures - realistic, lifelike, and featuring one of the most important figures in the world - had been generated with Midjourney’s AI model. “I thought the pope’s puffer jacket was real and didn’t give it a second thought,” tweeted Chrissy Teigen. “No way am I surviving the future of technology.”1
A quick look around suggests many of us are unlikely to survive the future of technology, at least intellectually. In the absence of anything that could be called legislation, AI-generated memes, images, bots, and videos have proliferated. Sometimes they’re helpful; Professor Ethan Mollick continually generates otter videos to track the progress of AI models. Mostly, they’re gags: updates of old memes, blended with the faces of friends or celebrities rather than the now-standard white text labels. Any last remnants of the line between celebrity and politics have been erased, or, more precisely, buried, as the Trump campaign regularly posts AI memes mocking and maligning its critics. Now even old-school mayoral candidates, such as Andrew Cuomo, are using AI to create videos of themselves to attract eyeballs on social media.
And it’s not just politicians: an estimated 70% of images online are AI-generated.2 By next year, some forecast that 90% of online content will be machine-made.3 This content - sometimes amusing, sometimes offensive, and nearly always mediocre - has come to be known as “slop.” It feels like a fitting name - an ugly word that captures the liquid, unpleasant nature of the torrent of AI content that now engulfs us.
One can’t help but remember a simpler time, when a picture was worth a thousand words and a video was evidence. Today, a picture is worth merely a few tokens, and a video proves nothing. While we would all like to believe that this misinformation is confined to a single social network or political movement, only the most partisan among us could convince themselves that this is true. The fact is, there are well-meaning, gullible people of every stripe, and they are, sadly, being sucked in, entranced, and misled by the torrent of content.
The result has been widespread confusion and misinformation. As the war in Ukraine rages on, mingling the very modern - drones, social media, and live streaming - with the very old - violence, greed, and destruction - it has become increasingly difficult to separate AI-generated images from authentic ones. Similarly, after the October 7th attack on Israel and the subsequent war in Gaza, partisans learned that inconvenient videos could simply be labeled “AI” and dismissed out of hand. And these are topics well-covered by traditional media. One might have thought that health and wellness, of all sectors, would remain the most human. Ironically, because the sector is driven by people who are typically unhappy or in pain, searching for a customized solution they believe traditional medicine has overlooked, it has become a perfect hunting ground for AI bots.
All of this raises important questions for those of us who don’t want to be duped. How are we supposed to learn and think in this new environment? Where do we go for accurate news? Who do we trust? Though there are no easy, timeless answers to these questions, I’ll speak to some of them here, in hopes that the answers may help us both.
A word on me - this essay draws on both my academic research and my day job as a management consultant. As a computational social scientist, I studied the evaluation of novel ideas, examining fields - from film to the life sciences - that require assessing ideas at scale when there is little to go on. As a consultant, I’ve worked with hundreds of clients across the globe and in every industry, using enormous datasets, including patents and social media, to assess different strategies and growth opportunities. My professional life has been built around finding and assessing opportunities in large datasets in which much of the data is simply useless. To do that, I’ve had to weave together findings from psychology, economics, statistics, and sociology, as no single discipline alone can provide all the answers. I won’t pretend that I’ve solved this problem and will never be fooled or miss a story, but I have had the chance to test and refine these principles across a wide range of domains, and I’m confident they’ve enabled me to do my job and research more effectively.
To avoid being one of ‘those people,’ the first step is to think more broadly than just ‘AI.’ The problem we face is that we’re trying to learn about our environment to make informed decisions - as citizens, employees, business owners, family members - and we lack the necessary information. What will the markets do? How will home prices fluctuate? Is my neighborhood safe? We are trying to learn about our world, and specifically about things we cannot uncover on our own. Realistically, I have no way of measuring crime in my neighborhood, or global temperatures; I have to read about these things somewhere. That makes the quality of my decisions dependent on my information feed; the more I improve the feed, the better off I am.
The second half of the problem is that we have too much information. In the old days of the consulting profession, we would be sent to the library to find a few meager facts (just enough, I suspect, to create a bar chart). Today, our role is to parse and compare data sources, because everyone feels overwhelmed by the sheer volume of data. The problem isn’t that we can’t find anything on a topic, but that we find too much. And what we find often contradicts itself, fits our question poorly, or confuses us.
So it’s not just a question of spotting ‘truth’ as it comes down the trough. How we get information shapes how we think. Scrolling through bot comments and AI memes doesn’t just suck up time; it changes the way our brains are wired. As with so many things, the Greeks recognized this first: in the Phaedrus, Plato has Socrates warn against the written word, arguing that it would make us forgetful. Once we had a clear, preservable record, we would lose the ability to hold long stretches of information in memory.
Now everyone laughs at him, but he wasn’t wrong. Consider the Odyssey: an enormous poem that undergraduates at our best universities need several weeks to read, if they can read it at all. But in Plato’s culture, this was bar-room entertainment. Homer, the traditional author of both the Odyssey and the Iliad, was a blind poet who would sing and recite these poems from memory as a backdrop to parties. It may be hard to shake off the image, but he was the Taylor Swift of his age. And not even TSwift, as enormous as her oeuvre has become, has to recall a body of work as large as the entire Trojan War, with all its subplots and political intrigues. Today, most of us are doing well just to remember a handful of phone numbers.
While writing things down may limit how much we can remember, the written word comes with enormous benefits (discussed below). The trouble is that as the steady march of technology makes information easier and faster to get, we simply have to think less, and that changes the structure of our brains over time. In Amusing Ourselves to Death, Neil Postman shows how television changed our attention spans and, consequently, our political debates.4 In The Shallows, Nicholas Carr shows how the constant skimming and scrolling of the internet makes it harder for us to focus.5 And while Steve Jobs famously said that the computer would be a bicycle for the mind, GPS acts more like a wheelchair: our brains lose their spatial reasoning capabilities over time as we are pushed around, rather than standing up and building our own muscles.6
There is not, at least as far as I’m aware, a definitive “slop” study on how exactly AI content will rewire our brains (though there is plenty of work on how outsourcing our thinking to AI makes us dumber). But the point all these thinkers made - from Plato to Carr - still holds: how you consume information, and what you consume, will raise or lower your mental faculties over time. Simply put, you will never be a sharp, intelligent person if you consume large quantities of terrible information. Garbage in, garbage out, as the data scientists say.
With this in mind, the next step is to picture what ‘good’ looks like. Plato’s protests notwithstanding, as others have noted, a highly literate society has many advantages, chief among them that writing creates distance between the reader and the material. We have to process words in order to read, and that gives us control and space to think. Audio, images, and especially video work differently; our brains absorb them automatically, and the filtering step disappears.
When this happens, we lose two things. First, it becomes harder to question what we’re imbibing. If we read a controversial line in an article, it’s easy to stop mid-paragraph and ask ourselves, “Do I really buy this?” It’s harder to stop a podcast or video, or to look away from a meme, in the same way. The brain naturally wants to continue, and so we question the information we’re absorbing less. Second, we savor what we consume less. C. S. Lewis wrote that what makes a book ‘good’ is that it invites you to read it well - to dwell on every word, to pause after chapters and reflect, to return to it again.7 A good book bears thinking about. When we move into a world of passive, AI-generated content, we never hold on to any of it and therefore never truly enjoy it. We enjoy the act of consuming, not the content itself.
Putting this together, ‘good’ looks like digesting information in a way that allows us to enjoy what we consume while questioning it. We’re thinking through what we’re hearing and seeing, not simply imbibing. The hallmark of people who do this well is that at any point, they can tell you what they think on a particular topic, why, and with how much confidence. You want your news feed, whatever the medium, to be set up so that you know what you’re being told and you know how you feel about it. And, most importantly, so you retain the evidence and data that led you to your conclusions.
Which naturally brings us to the point everyone is most concerned about: information quality. How do I know what to trust? It’s an important point, so I’ll answer the question conceptually and then tactically. Conceptually, the field of economics has consistently found that, when overloaded with information, the rational response is to rely on the reputation of the source as a filter. So, rather than trying to read every news article or tweet on a subject, we naturally go to sources we trust, and that’s the smart thing to do.8
That doesn’t give us license to simply follow people we agree with and be done with it, however. “Trust” does not mean “pleases.” We are looking for sources with a strong reputation for quality - factual, comprehensive, insightful pieces - not things that justify our opinions. Additionally, and this is even harder, we have to be ruthless about removing sources that don’t live up to that standard. The key is to read skeptically, as though your sources’ reputation were on the line, because it is, or should be - if they fall in quality, they should fall from your good graces.
Tactically, there are a couple of things you can do to make this easier. First, a “source” is not necessarily an institution but often a particular person. Even august publications like The New York Times now see themselves more as ‘platforms’ than ‘papers of record,’ and they should be treated as such - use the platform to find the specific authors and sections that cover topics you care about, and follow them. You can safely ignore the rest. If they start to deviate from your standards, find new ones. Second, on the subject of standards, be honest with yourself about what ‘quality’ means to you. Much has been said about information bubbles - people following only those of their own political stripe. But that misses the point; following morons of the opposite political stripe won’t make you less partisan - it will make you worse! You’ll come to believe that everyone with views different from yours is crazy.
Instead, look for authors who tell you about a subject accurately - honestly, factually, and transparently. They may do it from your political angle, or they may not. However, if you later find that they missed key facts or drew unwarranted conclusions, drop them. It’s easy to change alerts or rejigger who you follow - information is a business (as every publisher will tell you), so don’t be afraid to cut your losses. As long as we listen to crappy people, platforms will push them onto us.
None of this means you have to give up social media or become an expert at looking at an image and teasing out what’s AI and what’s not (a nearly impossible task at this point). It does mean you have to proactively find good sources and limit the mindless scrolling that slowly gathers impressions and shifts opinions subconsciously. The key to avoiding AI slop isn’t spotting images; it’s cultivating your feed. Unless you’re a very unusual person, you’ll absolutely come across AI memes and text online - and that’s okay. The key is to push those interactions into the realm of entertainment, away from the areas that concern you or that influence your opinions on meaningful subjects.
Neil Postman claimed that TV is at its best when it’s at its worst. When television attempted to be intellectual, he argued, it failed and misled people; when it attempted simply to entertain, it was at its healthiest. Although I think TV has, to an extent, graduated past this very low bar, AI and social media have not. They are at their best when they are entertaining - when dances go viral and songs become popular. They are at their worst when they try to shift your opinion on critical personal and civic issues within 90 seconds or 280 characters. Recognizing this, and enjoying them for what they are, is the secret to avoiding being misled by their slop. It’s not easy, but mastering it will be key to our intellectual survival.
1. Simon Ellery, “Fake Photos of Pope Francis in a Puffer Jacket Go Viral, Highlighting the Power and Peril of AI,” CBS News, March 28, 2023, https://www.cbsnews.com/news/pope-francis-puffer-jacket-fake-photos-deepfake-power-peril-of-ai/.
2. Shalwa, “AI in Social Media: Key Statistics 2025,” ArtSmart Blog, March 12, 2025, https://artsmart.ai/blog/ai-in-social-media-statistics/.
3. Maggie Harrison, “Experts: 90% of Online Content Will Be AI-Generated by 2026,” The Living Library, accessed August 31, 2025, https://thelivinglib.org/experts-90-of-online-content-will-be-ai-generated-by-2026/.
4. Neil Postman, Amusing Ourselves to Death: Public Discourse in the Age of Show Business (New York: Penguin Books, 1985).
5. Nicholas Carr, The Shallows: What the Internet Is Doing to Our Brains (New York: W. W. Norton & Company, 2010).
6. Louisa Dahmani and Véronique D. Bohbot, “Habitual Use of GPS Negatively Impacts Spatial Memory during Self-Guided Navigation,” Scientific Reports 10, no. 1 (April 14, 2020): 6310, https://doi.org/10.1038/s41598-020-62877-0.
7. C. S. Lewis, An Experiment in Criticism (Cambridge: Cambridge University Press, 1961).
8. Cf. George A. Akerlof, “The Market for ‘Lemons’: Quality Uncertainty and the Market Mechanism,” in Uncertainty in Economics, ed. Peter Diamond and Michael Rothschild (New York: Academic Press, 1978), 235–51.