
In Inc. Magazine, our CEO breaks down why navigating the rise of AI-generated content will take radical honesty.
Whether you like it or not, AI-generated content is inevitable. The benefits for content production and efficiency are too great to ignore for cost-conscious brands. So, the focus should be on how to use it more thoughtfully. Especially since, as we’re discovering in real time, AI-generated content also comes with some pretty serious downsides.
For one, if AI becomes the default for everything, feeds will feel more synthetic, mirroring the overly manicured tone and aesthetic that have become AI’s hallmark. When too much content feels machine-generated, the platforms start to lose texture. Social media only works because it feels alive. If it starts to feel automated at scale, people will disengage or shift toward smaller, more private spaces where content feels more human. The health of the platforms depends on human energy. If that erodes, so does the value.
But also, trust will become more fragile. Users already question what’s real. The more AI-generated content floods feeds, the higher people’s skepticism becomes. That doesn’t mean engagement disappears, but it becomes more transactional: less emotional, with weaker loyalty.
Brands should take this seriously—those that protect trust will come out on top every single time. Those that treat AI like a volume machine will burn out their audience. It’s that straightforward.
That’s why brands have a responsibility to disclose when they are using AI-generated content on social, especially when any content could reasonably be mistaken for something real. If you’re creating AI humans, AI testimonials, AI environments that look like lived experiences, that needs context. The more realistic AI gets, the more disclosure matters. So, brands need to lead the way in embracing a little radical transparency—protecting the trust that they’ve worked so hard to build.
Leading by example
Now, not every use of AI needs a huge disclaimer. If AI is being used behind the scenes for editing, brainstorming, or production support, that’s different. But when AI changes the reality of what’s being presented, brands should say so. To most brands, this probably sounds like a risk, but transparency comes with its own benefits.
For example, H&M’s recent use of AI is noteworthy because they didn’t sneak it in, hoping audiences wouldn’t notice. Instead, the company has been very public about its AI journey, working with models and agencies to build “digital twins” of about 30 real models, for use in social and advertising campaigns. As importantly, they did it with consent and labeling so audiences aren’t misled, as well as working out compensation deals while allowing models to retain ownership of their likenesses. And while that doesn’t mean everyone liked it, it was a step toward ethical transparency.
In a different case, Heinz took an even more transparent and brash approach. Its campaign tasked AI with generating images of “ketchup,” and most of the outputs ended up looking like Heinz. Instead of hiding the tech, the brand shared the results and invited people to play along. The point was proving how strong the brand is. It felt like a wink instead of a workaround—one that put the use of the technology front and center, earning headlines and awards in the process.
Of course, not all transparency is well received. Coca-Cola’s 2025 AI-created reimagining of its classic “Holidays Are Coming” campaign became one of the biggest AI debates of the year. But the controversy wasn’t simply about AI’s use as a tool. The backlash was about the emotional gap. For a yearly campaign that people associate with nostalgia and emotion, the AI execution was seen as cold, uncanny, and soulless. It showed a lack of understanding of the audience’s connection to the brand and its emotional touchstones.
So then, how can brands avoid the same mistakes, using AI transparently without eroding the trust people have placed in them as storytellers or cultural voices? They should ask themselves:
“Would our audience feel misled if they knew how this was made?” If the answer is yes, rethink it. If you have to hope no one asks how it was created, that’s already a red flag.
“Are we using AI to enhance creativity or to cut corners?” There’s a huge difference. One builds something better; the other just produces faster. And while faster may save you money, it does not ensure good results.
“Does this align with how we want our brand to be perceived long term?” Efficiency today shouldn’t cost credibility tomorrow. Short-term gains are never worth long-term doubt about who you are as a company.
“Are we protecting human jobs and voices where it matters?” Automation should support teams without erasing them thoughtlessly. If the tech is replacing the perspective that makes you distinct, you’re hollowing out your own brand.
“If this content goes viral for the wrong reason, can we defend the decision publicly?” You should be able to explain it with confidence. If your response would sound defensive, the decision likely wasn’t strong enough to begin with.
Sure, AI can increase output and reduce costs. That’s attractive. But in the long term, if a brand’s voice becomes indistinguishable from everyone else using the same tools, growth will slow. It only gets worse if you’re hiding your use of AI in the hopes of avoiding blowback. The truth is that while audiences don’t demand perfection, they do demand clarity. That’s why transparency on AI matters. And it’s the brands that can strategically use AI content while being honest and thoughtful about it that will win on social while everyone else is catching up.