Something Wicked This Way Comes And It Has Your Face

October 23, 2025

In 1983, Disney released Something Wicked This Way Comes. It wasn't your typical Disney film. Kids walked in expecting something fun, uplifting and magical. They walked out with nightmares about spiders that would last a lifetime. I was one of them.

There's irony in Sora 2 launching the same month as Halloween and also being hailed the same way: fun, uplifting, capable of magical feats. In reality, it's showing itself to be a key to a world that previously only existed in Black Mirror episodes. Within 72 hours of launch, the internet had resurrected Martin Luther King Jr. to spew racist garbage, put Tupac Shakur on Mr. Rogers' show in a warm but curse-word-laden back-and-forth, turned SpongeBob into a meth cook, and deepfaked Bryan Cranston's voice. The company's response? 'Oops, we'll fix that.'

This isn't a product launch gone wrong at all; in reality, it's the latest chapter in our 30-year disaster curve, where tech's "move fast and break things" playbook turns promise into peril and fits flawlessly into the Harambe timeline. The core issue isn't just celebrity and copyright-holder rights. It has the potential to rapidly dismantle social cohesion. Hyperbole? Nope, and I can prove it.

Act I: How the 1996 Communications Decency Act Update Built the Permission Structure for Sin

In 1996, tech and telecom companies pushed Section 230 of the Communications Decency Act with a simple pitch: don’t stifle innovation. Platforms shouldn’t be liable for what users post. It sounded noble; the web was young.

But Section 230 became a permanent shield, allowing companies to monetize harm without consequence. As long as platforms removed content after they were “aware” of it, they were fine. And proving awareness? Nearly impossible. That loophole powered Facebook’s entire business model.

Fast-forward to 2025: Sora 2 launched using the same playbook. Enable violations at scale, hit #1 in the App Store, wait for outrage, patch it later. If OpenAI could implement safeguards in three days, why not before launch? Because chaos fuels growth and three decades of precedent proved this model is viable.

The equation they used:

  • Train AI on copyrighted material = $0 cost (who's going to know until it's in wide use?)
  • Generate hype-driven growth = Massive valuation boost
  • Get sued = Pay pennies on the dollar (years down the line)
  • Keep the training data = Permanent competitive advantage
  • Repeat

Act II: We've Seen This Movie Before (It Ended Badly)

Facebook built an empire on Section 230’s framework, insisting it was “just a platform connecting people.” In practice, it optimized for one thing: engagement, which increased ads served, even if engagement meant rage.

Then came Cambridge Analytica in 2018. Facebook apologized, tweaked a few knobs, and carried on. The business model didn’t change because the law didn’t change.

The damage compounded annually. Foreign state actors amplified bot-farm posts designed to pit Americans against one another in service of their own countries' interests. What started as "fake news" in 2016 metastasized into QAnon, election denial, and a population that can't agree on basic provable facts, up to and including whether the earth, alone among the planets, is flat. Each platform promised to "connect the world" while the user-generated content it distributed quietly hollowed out trust in institutions, experts, and reality itself. There's now a "dead internet" theory, with studies suggesting that more than 50% of user-generated content in 2024 came from bots, not people.

The infrastructure didn't enable harm only because it was designed to monetize it; it enabled harm because guardrails are expensive to implement and maintain, and because they (often reasonably) slow progress. Now it seems we're handing the same keys to AI.

Act III: A Nightmare on AI Street

What makes Sora 2 exponentially more dangerous than Facebook’s early days is simple: deepfakes don’t need to build an audience - they inherit one.

Zuckerberg’s generation trained billions to share before verifying, to trust viral content, to let algorithms curate reality. They built the highway. Altman’s generation gets to drive 200 mph on something that can optimize chaos.

The difference isn't just technical, it’s existential:

  • Social media era: Lies were text and memes and poorly edited photos and videos. Still deniable, still traceable to sources.
  • AI era: Lies can be video of you committing crimes you didn't commit, saying things you never said or believe, in places you've never been, and it all looks 100% realistic.

Your reputation is now a prompt away from destruction.

Bryan Cranston (much as Scarlett Johansson did when ChatGPT Voice launched with a voice just like hers) discovered his likeness cloned without consent despite OpenAI's claimed opt-in policy. But Cranston and Johansson have agents, lawyers, SAG-AFTRA. What about Bob Sutton, the coffee shop owner in Ohio?

Imagine: Bob's competitor uploads Bob's photo with the prompt: "Coffee shop owner spitting in customer's coffee and serving it with a smile." Sora generates a convincing 15-second video by using a picture of Bob in his coffee shop as a source. It's anonymously posted to local Facebook groups, Google reviews, TikTok. Within hours, it's shared thousands of times.

Bob can prove it's fake in court. But by then, the damage is done. His business is destroyed. The video keeps circulating on review sites and social media for weeks, even months, before platforms pull it. If they ever do. Now imagine a photo of you at a work event talking to someone of the opposite sex. That could be used to end your marriage, or your job.

The guardrails? OpenAI's own documentation admits metadata and watermarking are "not a silver bullet" and can be "easily removed." Tissue paper locks sold as security.
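
To make that concrete, here's a minimal sketch, assuming Python 3 and ffprobe (which ships with ffmpeg) on your PATH; the file names are hypothetical. It just asks what provenance metadata a clip still carries. A screen recording, a re-encode, or a platform's own transcoding pipeline will typically wipe those tags, which is what "easily removed" means in practice.

```python
# A minimal sketch (assumed tooling: Python 3, ffprobe on PATH, hypothetical files).
# ffprobe prints whatever container-level metadata tags a file still carries;
# after a screen recording or routine re-encode, they're usually gone.
import json
import subprocess

def container_metadata(path: str) -> dict:
    """Return the container-level metadata tags ffprobe can see in a file."""
    result = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json", "-show_format", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout).get("format", {}).get("tags", {})

print(container_metadata("original_download.mp4"))  # provenance tags, maybe
print(container_metadata("reuploaded_copy.mp4"))     # typically: {}
```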

AI-detection software has its own limitations: detection is a constant cat-and-mouse game, and it can't reliably flag AI in every video as new models emerge.

Act IV: The Money and the Bunkers Behind It

Following the money often leads to the truth. So let’s take a look at where the titans are investing other than their businesses. Remember when Zuckerberg, Ellison, Thiel, and Altman started prepping for doomsday and building elaborate luxury bunkers and compounds? Zuckerberg in Hawaii. Ellison has 98% ownership of the Hawaiian island Lana’i. Thiel in New Zealand. They’re not just diversifying portfolios, they’re hedging civilization.

This generation of AI innovation isn't just richer than the DotCom innovators; they have more capital, more data, more geopolitical leverage, and complete control over the infrastructure that determines what's real. They're not building social networks; they're building reality itself.

Meanwhile, Washington spins in a paralysis loop:

The detection illusion: Industry claims detection tools solve this. Lab accuracy hits 98%, but real-world effectiveness appears to drop to 50% against evolved deepfakes. More importantly: detection only matters if lies haven't already spread. By the time you detect fake Bob spitting in coffee, it's been shared 1,000,000 times on TikTok with a dancing model going "eww," screenshotted without watermarks, and embedded in review sites where the correction never reaches the audience.

The damage isn't the mere existence of these fake videos; it's the social dynamics of belief formation. No detection tool fixes that.

Act V: We Can Still Choose Differently

We don’t need another 30-year disaster curve. The fixes are visible and achievable.

1. Mandatory Opt-In at Source

  • No likeness, voice, or copyrighted character generated without explicit consent. That means not a EULA no one reads before clicking agree, but a clear statement that you agree (or don't) to letting them use it, with specific limits on whether that use is personal or general.
  • Platforms verify before processing, not after complaints are raised (see the sketch after this list)
  • Violation fines that have teeth: 2% of global revenue, or they're meaningless, just a cost of doing business, which is what's happening now.
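
As promised above, here's a minimal sketch of what "verify before processing" could look like. Everything in it (ConsentRegistry, generate_video, the scope values) is hypothetical, not any platform's real API; the point is simply that the consent check runs before generation starts, not after a complaint.

```python
# Hypothetical consent gate: generation refuses to run unless every referenced
# person has an explicit, unrevoked opt-in on file for the requested scope.
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    person_id: str
    scope: str            # e.g. "personal" or "general"
    revoked: bool = False

class ConsentRegistry:
    def __init__(self) -> None:
        self._records: dict[str, ConsentRecord] = {}

    def grant(self, person_id: str, scope: str) -> None:
        self._records[person_id] = ConsentRecord(person_id, scope)

    def allows(self, person_id: str, scope: str) -> bool:
        record = self._records.get(person_id)
        return record is not None and not record.revoked and record.scope == scope

def generate_video(prompt: str, referenced_people: list[str],
                   scope: str, registry: ConsentRegistry) -> str:
    # The check happens before any generation work, not after a complaint.
    blocked = [p for p in referenced_people if not registry.allows(p, scope)]
    if blocked:
        raise PermissionError(f"No opt-in consent on file for: {blocked}")
    return f"<generated video for: {prompt!r}>"

registry = ConsentRegistry()
registry.grant("bob-sutton", "personal")
generate_video("Bob waves hello", ["bob-sutton"], "personal", registry)  # allowed
# generate_video("Bob spits in coffee", ["bob-sutton"], "general", registry)  # raises
```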

2. Liability Inversion

  • Section 230 protection removed for AI-generated content. If the platform creates it, it also owns it.
  • Platforms legally liable for all outputs as if they created them
  • Changes incentives overnight to make preventing harm more profitable than apologizing

3. Personal Data Property Rights

  • Individuals legally own biometric signatures (face, voice, gait)
  • Unauthorized use treated as identity theft with criminal penalties

4. The "Hammurabi Clause"

Builders whose shoddy houses killed inhabitants in ancient Babylon faced execution. The principle was that those who profit from systems bear responsibility for their failures. If your AI destroys reputations or livelihoods, executives face proportional liability.

Move fast and break things? Fine, but buy what you break.

What You Can Actually Do (Because Fatalism Is Boring)

The tech industry's "no regulations" mantra isn't a philosophy—it's a business strategy allowing them to ask "can we do it?" without ever asking "should we do it?"

Support the NO FAKES Act

Establishes federal protection against unauthorized AI replicas. Creates property rights in your own identity.

Why tech opposes it: Claims it will "stifle innovation." Translation: it prevents using your face and voice as free raw materials.

OpenAI claims to support NO FAKES, yet launched Sora 2 with opt-out systems violating its principles. That feels like PR cover, not policy.

What you do: Contact representatives at congress.gov. Frame it: "I should own my face the way I own my home." Counter the "innovation" argument: protection ensures progress benefits everyone, not just platforms.

Assume All Viral Content Is Fake Until Proven Otherwise

Obviously fake (I hope):

If it looks like Pixar but is presented as reality, it isn't real. Biology isn't negotiable.

Malicious fakes (the dangerous ones):

  • Politicians/celebrities in compromising situations designed to go viral
  • "Surveillance footage" of famous people committing crimes (mirror sticks under skirts, stealing, assault)

Here's the tell: Powerful people don't do scandalous things in easily recordable ways.

Actual predators use encrypted communications, private jets and islands. When they use airport bathrooms or throw baby-oil-drenched parties with members of the public, they get caught.

Real scandals look boring. They're buried in shell corporations and sealed settlements. Viral scandals are designed for maximum shares, making them inherently suspect.

But here's what should terrify you: The Bob Sutton coffee shop scenario doesn't require powerful predators. It requires one pissed-off competitor with a $20 ChatGPT subscription.

New default: If content is designed to go viral, assume it's designed for manipulation, not truth.

Verification checklist:

  1. Pause before sharing (10-second rule for emotional content)
  2. Check the source (official account or screenshot chain?)
  3. Reverse image search
  4. Ask: "Who benefits if I believe this?"
  5. Wait 24-48 hours (real stories get verified, fakes get debunked)

Other actions:

  • Use digital signatures for important communications (a minimal example follows this list)
  • Teach kids: "Seeing isn't believing, but verification is believing"
  • Join unions pushing for federal likeness protection
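
On that first bullet: a digital signature is just math proving a message came from your key and wasn't altered in transit. Here's a minimal sketch in Python using the open-source cryptography package (an assumed dependency: pip install cryptography); the message and workflow are illustrative, not a full identity system.

```python
# Sign a statement with a private key; anyone holding the public key can verify
# it wasn't forged or altered. Share the public key through a channel people
# already trust (your website, your bio, in person).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # keep this secret
public_key = private_key.public_key()        # share this widely

message = b"This statement really came from me, on this date."
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)    # raises if forged or modified
    print("Signature valid: the message is authentic.")
except InvalidSignature:
    print("Signature invalid: treat the message as suspect.")
```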

The Bottom Line Up Front

Sora 2 isn't a product launch gone wrong. It has potential for amazing applications—filmmakers visualizing scenes before expensive shoots, educators creating historical reenactments, artists exploring impossible scenarios.

But without regulations, it's a preview of the future OpenAI is building: one where your identity is their intellectual property, your reputation is their training data, and your recourse is their customer service department.

If tech companies won’t regulate themselves proactively and governments keep failing to act, the default outcome is a world where Bob Sutton loses his business because deepfakes are free and truth requires lawyers.

The only meaningful question: Are you willing to fight for a reality where you own yourself?

Every day we don't choose, we're choosing by default. Their bunkers are being built. What are you building?

What's your take: Should AI companies face criminal liability for deepfakes that destroy lives, or is that innovation-killing overreach?

PixelPathDigital helps organizations use AI effectively across marketing and digital product development without falling into the confidence trap. We enhance human judgment with AI capabilities while maintaining the strategic thinking and accountability that drive real and sustainable results.

Enjoying the Pandora's Bot Newsletter series? Don't forget to subscribe! And if you think your friends or colleagues would enjoy it, please share thepandorasbot.com!