Let’s start with something we can all agree on: technology is amazing. It drives convenience, innovation, and human excellence.
Seriously. Think about what we get, often for free:
- You no longer need to fold an accordion-style paper map in a gas station parking lot while strangers judge your spatial incompetence. Waze just tells you which pothole to dodge and, for you speed racers, when to slow down so you don’t get a ticket.
- Can't recall that obscure, emotionally charged video essay you watched five years ago at 2:37 a.m.? Google will not only pull it up from your history but also serve you five eerily relevant suggestions from creators you've never heard of, because it knows what you like.
- Google Docs saves your document every half-nanosecond, so work is rarely lost. Gmail auto-fills your sentences, saving time, and sends “junk” into the junk folder without you ever having to see it. YouTube seems to know you're sad before you do and shares videos that help change that frown to a smirk.
These are Artificial Intelligence conveniences that genuinely improve life, until you realize you’re not the customer. You’re the product. Or more precisely: your behavior, preferences, emotions, and metadata are the product, and until recently they were used mainly to deliver the ads most likely to drive conversion. But now, with the rise of AI in what I call its Age of Alchemy, that product is advanced enough to become you, and even to let bad actors replace you.
From Suggestion to Simulation: How AI Crossed the Uncanny Valley Line
Historically, data was used to target ads. That’s annoying, but manageable. Nobody was ever radicalized by a well-placed promotion.
But now we’re in AI territory. These systems don’t just suggest anymore; they’re designed to simulate you and convince you to think and do things you might not have considered before. They don’t just “know” you like dark comedy and almond milk; they can predict how you’ll respond to political ads, emotional triggers, job offers, or conversations with your ex, and the responses they serve can shift your beliefs without you knowing you’re being manipulated. We saw this as early as 2016 on Facebook with Cambridge Analytica.
You’ve effectively trained a shadow version of yourself. And depending on what platforms you’ve used, where you’ve clicked, and what you’ve fed into tools like ChatGPT, DeepSeek, or Google’s ecosystem, you may have trained multiple versions.
Not clones. Not copies. But models: sophisticated digital facsimiles that can predict what “you” would say next and work to strengthen a belief, or change it, without you realizing you’ve been targeted. For instance, imagine a DeepSeek-style model that generates hundreds of micro-influencer profiles on LinkedIn, Facebook, and Reddit, with realistic bios, behavior patterns, and opinions trained on segmented datasets of real people.
These fake accounts then:
- Engage in subtle ideological persuasion
- Share “personal stories” that are statistically optimized to shift sentiment
- Respond to real users with validation or doubt—nudging beliefs without triggering suspicion
Think: a friendly commenter named "DawgPaw" sharing a relatable story on LinkedIn about why surveillance tech “actually makes him feel safer.”
The Four C’s: Convenience, Commercialization, Control, and (accidental) Cloning
For those who know me and my 5Cs of digital marketing, I like my C words. I now have four for the direction AI is heading. Let's stop pretending this is about personalization and automation. That feature that pulls up your old video? That’s less about you reminiscing and more about keeping you inside a platform for two more hours to show you 20 more ads.
The ad that feels like it read your mind? It kind of did. Or more precisely, a thousand-point behavioral profile did. The AI chatbot that seems to “get you”? It learned from people like you. Or maybe directly from you, if you’ve been feeding it detailed prompts about your work, relationships, and dreams. What began as convenience has morphed into commercialization at scale, and now into control at depth.
That “you” it knows? It’s becoming more accurate, more detailed, and more useful to companies that want your money, or, in certain cases, your obedience. In some ways, Big Tech now plays the role once reserved for governments: dictating what’s visible, what’s monetized, and who gets heard. They don’t need to pass laws; they just change the algorithm. In fact, they appear to be playing a role in shaping those laws as well.
Enter the Chinese Model: The Social Credit Experiment
Let’s explore a real-world, large-scale digital governance case study: China’s Social Credit System.
China’s national initiative, really a collection of regional pilot systems, is designed to monitor and evaluate the “trustworthiness” of citizens and companies. While Western media often portray it as a single dystopian, all-powerful algorithm, the real mechanisms are just as disturbing.
Capabilities and consequences:
- Facial-recognition payments: Payments in stores, buses, and subways are seamless for users on the positive list. We even have CEOs like Ford’s Jim Farley praising these technologies as a means of frictionless commerce, while completely ignoring how the same technology makes life extremely inconvenient, or impossible, for those on the blacklist.
- Travel bans: Missed court payments or minor civil infractions? No plane or train tickets. Millions blocked in pilot phases.
- Public shaming: Blacklists publish your name online, on billboards, or in newspapers. Remember Minority Report and Tom Cruise seeing personalized ads targeted at him? China’s Social Credit System makes this less about an ad for a Lexus and more about shaming a person with a low score, or coercing behavior with the threat of punishment for non-compliance.
- Restricted services: Low-score families lose access to private schools, have their high-speed internet throttled, and get fewer credit options and discounts.
- Social pressure: Even pet owners get dinged for letting their dogs roam or not cleaning up after them. Jaywalking counts too.
The goal? Shape citizens’ behavior with rewards and punishments, all of it automated, algorithmic, and enforced. Even without a singular national score, blacklists operate across regions, producing travel and housing bans, job barriers, visa refusals, and more.
Can It Happen Here? The Palantir Question
“Surely we’d never go that far,” you might think. Enter Palantir. In May, The New York Times reported that the Trump administration considered using Palantir to create a centralized database on U.S. citizens. If that sentence doesn’t send a chill down your spine, go read it again. Slowly. Palantir has denied this…and might say no to building a social score, but they’re building the pipes that could power it. They just won a $795M contract for a centralized data platform that could be used for exactly this, supporting work in which Trump officials have “already sought access to hundreds of data points on citizens and others through government databases.”
And while we’re on chilling narratives: remember when Attorney General Bondi claimed she had the Epstein “client list” sitting on her desk? Fast-forward to this past Monday, when the DOJ said no such list ever existed. Either she lied, or the federal government is gaslighting the public. Neither is comforting. The truth may be irrelevant anyway, because the power lies in who controls the narrative, not in the data itself.
The Game Has Changed: What AI Can Actually Do With Your Data Now
So what’s different now? In a word: unification.
Previously, your digital life was fragmented. Facebook knew your friends. Google knew your interests. Amazon knew your spending. Apple knew what you asked Siri (and promised to keep it private).
Now, AI systems are starting to stitch it all together. ChatGPT can remember your previous chats (soon, maybe all of them). DeepSeek and other LLMs are trained to interpret behavioral nuance at scale. And they’re designed to praise you at every interaction. The flattery isn’t just nice; it’s part of the model’s design to build trust and lower your guard. Once-siloed data streams are being unified into a “total you,” not just for advertising, but for modeling, prediction, simulation, and influence.
This isn’t sci-fi. It’s API-level integration.
And here’s the kicker: most people are voluntarily feeding these systems more than ever before, through their prompts, file uploads, chat histories, and “trusted” assistants.
What You Can Actually Do About It (Besides Panic)
It’s tempting to shrug and say, “Too late.” But it’s not - not entirely. You can’t unplug completely, but you can outsmart the defaults. Here’s how to fight back with brains, not bunker paranoia.
My recommendation is to take some of that power back now, before it’s too late. Think of what follows as a modern-day digital hygiene checklist, or maybe just a mild form of rebellion:
- If you can swing the ChatGPT Team plan ($25 per user), use it: According to OpenAI, your data will not be used for training. If you're using the free or Pro versions, you have privacy options that can help, but they won't guarantee your data isn't used in training.
- Don’t use DeepSeek, or severely limit it: I said this when it first came to light in January 2025, and plenty of security and privacy experts have since weighed in with the same concerns.
- Anonymize your inputs: Rename your files. Use initials instead of full names. Strip metadata before uploading anything to AI models. Don’t narrate your life story in prompts; frame it as a “case study” or write in the third person. DO NOT frame it as a confession. (A minimal sketch of what this can look like follows this list.)
- Feed AI fake data sometimes: Search for things you don’t care about. Mix up your patterns. Introduce chaos. Deception worked during WWII, and it can work against profiling algorithms too.
- Use separate profiles per context: Don’t mix work, personal, and creative identities. Different browser profiles. Different email aliases. Keep your AI queries compartmentalized.
- Favor on-device AI: Apple’s new privacy-focused models keep most requests on the device instead of sending your queries to the cloud. This limits leakage, logging, and latent surveillance.
- Delete everything you can: Use Google’s auto-delete tools. Purge your chat histories. Set expiration windows for stored data. Think of it like spring cleaning for your digital soul. Or, if you can swing it, don’t let anything be collected in the first place: turn history saving off everywhere.
- Encrypt what matters: Use encrypted folders for anything sensitive, even if you’re not a spy. It’s not about hiding something; it’s about preserving choice over access. (See the second sketch after this list.)
- Say less in prompts: If you’re building something in AI, don’t train the model on your actual PII. Strip it first. Don’t feed it proprietary files in native form. Mask, obfuscate, translate.
- Rethink how much “training” you’re doing for free: Every interaction is helping someone else’s model get smarter. Would you volunteer to teach a stranger how to impersonate you?
- Don’t accept normalization: Just because it’s “easier” doesn’t mean it’s better. Resist the idea that privacy is outdated. It’s not. It’s foundational.
- Talk about it: The more we normalize awareness, the harder it becomes to normalize exploitation. Share what you’re doing. Ask others what they’re trying.
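To make the “anonymize your inputs” advice concrete, here’s a minimal sketch of the idea in Python. The patterns, the scrub helper, and the example text are my own illustrations, not a complete PII scrubber; purpose-built tools catch far more, but even a few lines like this keep names, emails, and phone numbers out of your prompts.

```python
import re

# Illustrative patterns only -- they catch common email, phone, and SSN-like formats.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b")
SSN   = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scrub(text: str, names: list[str]) -> str:
    """Replace obvious PII with placeholders and reduce known names to initials."""
    text = EMAIL.sub("[email]", text)
    text = PHONE.sub("[phone]", text)
    text = SSN.sub("[ssn]", text)
    for name in names:
        # e.g. "Jane Doe" -> "J.D."
        initials = ".".join(part[0].upper() for part in name.split()) + "."
        text = re.sub(re.escape(name), initials, text, flags=re.IGNORECASE)
    return text

if __name__ == "__main__":
    prompt = "Jane Doe (jane.doe@example.com, 555-867-5309) is reviewing the merger."
    print(scrub(prompt, names=["Jane Doe"]))
    # -> J.D. ([email], [phone]) is reviewing the merger.
```

Run anything sensitive through a filter like this before it reaches a chat window; the model still gets the substance of your question, just not the identifying details.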
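As for encrypting what matters: your operating system’s built-in options (FileVault on macOS, BitLocker on Windows) cover disks and folders, and for individual files a few lines of Python with the widely used cryptography library will do. This is a sketch, not a key-management strategy; the file name is a placeholder, and keeping the key somewhere safe (and separate from the file) is the part that actually matters.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Generate a key once and store it somewhere safe (password manager, hardware key).
key = Fernet.generate_key()
cipher = Fernet(key)

# "salary_history.xlsx" is just a placeholder for any sensitive file.
with open("salary_history.xlsx", "rb") as f:
    ciphertext = cipher.encrypt(f.read())

with open("salary_history.xlsx.enc", "wb") as f:
    f.write(ciphertext)

# Later, with the same key:
# original = Fernet(key).decrypt(open("salary_history.xlsx.enc", "rb").read())
```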
The Takeaway: You Are Not Your Convenience
Convenience sounds like a gift until you realize you’re paying with something irreplaceable: your digital self, a version of you that may know you better than you know yourself.
The stakes have changed. It’s no longer just about who sees your data. It’s about who uses it to become you, what they do with that power, and how that can impact you in the real world. If you’re in charge of customer data, user experiences, or public narratives, this isn’t theoretical. It’s operational. Now’s the time to lead the charge toward more ethical, human-centric systems.