When Authenticity Becomes Ambiguous

October 20, 2025

If you’ve scrolled Instagram lately, there’s a chance you’ve met Baddie Betty. Her silver hair is perfectly coiffed, and her confidence radiates through every reel. Her captions are sharp and charmingly wise. She reminds followers that age is just a number, that joy is timeless, and that style is a state of mind.

She describes herself as “82 years young, iconic and fabulous.”

When I first saw her account, I thought it was great to see someone in her 80s owning the influencer space with such palpable confidence and charisma. I sent one of her videos to my husband and said as much.

He replied, “Oh yeah, she’s the one they are saying is AI-generated.”

I couldn’t believe it. Because if she wasn’t real, then authenticity itself had become a performance, and I’d just applauded the algorithm.

Meet Baddie Betty: 82 and Quite Possibly Algorithmic

Baddie Betty posts about everything from fashion and self-love to aging gracefully and embracing confidence at every stage of life.

One of her recent videos shows her sipping a martini in oversized sunglasses, saying with perfect comedic timing:

“Every wrinkle tells a story — and mine all have great punchlines.”

The comments are full of admiration:

“You’re my new role model.”

“Goals!”

“Can we be best friends?”

While there is no public confirmation that Baddie Betty is AI-generated, several signs — from visual anomalies in her videos to her connections with AI-focused media platforms — have sparked online debate about her true identity.

Whether human, hybrid or entirely synthetic, one thing is clear: she’s mastered the art of connection.

When the Algorithm Learns to Influence Emotion

Regardless of who or what Baddie Betty turns out to be, her popularity points to a much larger shift: the mainstream arrival of AI-driven personas capable of building audiences, shaping opinions and earning trust at human scale.

We’ve talked for years about deepfakes that trick the eye, but we’ve now entered a new era of AI personalities that achieve something far more complex: they trick the heart.

Across platforms, these virtual figures are gaining millions of followers. They tell jokes, share opinions, even “collaborate” with brands. They don’t sleep, age or make off-brand mistakes. They’re designed to evoke the same feelings of trust, empathy and aspiration as their human counterparts.

The potential for creativity and connection is enormous. At the same time, so is the potential for manipulation.

These synthetic creators can evoke warmth and nostalgia with precision, connecting emotionally and shaping perception. But in the wrong hands, that same power could be turned toward disinformation or social engineering.

What Happens When Authenticity Becomes the Attack Surface

For cybersecurity and marketing leaders alike, this new wave of human-like digital personas is the next frontier of influence — and risk:

  1. Emotional Engineering at Scale: Synthetic influencers can replicate tone, vulnerability and humor so precisely that audiences respond instinctively. The next evolution of social engineering may not come from phishing emails but from the familiar faces we already follow.
  2. Synthetic Identity Loops: What happens when fake personas start interacting with other AI accounts, amplifying each other’s content and creating feedback loops of false consensus?
  3. Brand Risk and Impersonation: If an AI influencer suddenly began promoting harmful content or sharing a malicious link, who would know the difference? And who would be responsible?
  4. Audience Desensitization: As our feeds fill with near-perfect synthetic humans, we may begin to question everything, even legitimate voices. That erosion of trust could become one of cybersecurity’s toughest long-term challenges.

Why This Moment Matters

AI isn’t just changing how we create, it’s changing how we believe. That means every brand, every executive, every team needs to think about:

  • How we disclose AI-generated content
  • How we protect brand identity from synthetic mimicry
  • How we help audiences distinguish authentic influence from algorithmic illusion

Authenticity has become the newest attack surface and the most powerful competitive advantage. In the end, trust isn’t built by technology; it’s built by truth.

The Takeaway

The real story isn’t about who Baddie Betty is. It’s about how easily technology can blur the line between authentic and artificial connection.

Her story underscores how quickly influence can be manufactured, how emotion can be modeled, and how trust can be scaled by design.

That doesn’t make these tools inherently dangerous. In fact, they open extraordinary creative and commercial possibilities. But they also demand a new level of awareness, not only about what we see but about why we believe it.

As AI becomes more capable of mirroring human behavior, our greatest security challenge may no longer be keeping intruders out; it may be learning to see clearly in a world designed to feel real.

A Timely Reminder

October is Cybersecurity Awareness Month, and it’s the perfect moment to pause and ask:

  • How is your organization approaching authenticity and AI risk?
  • Are your marketing and security teams working together to anticipate new threat vectors like synthetic media?

Now is the time to engage with your company’s CISO, IT, marketing and communications leaders on these questions and to ensure your brand is ready for what’s next.

And if you’re looking for a trusted partner to help navigate this evolving landscape, Optiv is here to help.

Heather Rim
CHIEF MARKETING OFFICER | OPTIV
Heather Rim is chief marketing officer for Optiv. She is a board member of the USC Alumni Association and the USC Annenberg Center for PR Board of Advisors.