From Deepfakes to Job Takes: The Unintended Consequences of AI on Employment
The pace at which AI evolves is breathtaking, unleashing new innovations and benchmarks with a frequency that feels almost weekly. Reflecting on the past year alone highlights how quickly generative AI has advanced. What began as meme-worthy content, such as the humorous imagery of Will Smith eating pasta, has escalated to a level of sophistication where a deepfaked CFO scam deceived employees who knew the real CFO, costing one company a staggering $25 million. This progression not only showcases the capabilities of current AI technologies but also serves as a stark reminder of the risks and ethical considerations that come with such rapid development.
Introducing The Pandora’s Bot: Monitoring AI’s Impact
AI is on the brink of being either our greatest innovation or our most catastrophic oversight, completely overhauling industries and rewriting the rules of the employment game. That's why I've created The Pandora’s Bot newsletter - a vigilant watchdog on the AI beat. The unabashed romance the world is having with AI? It's making me nervous. It’s like we’re in the throes of a new relationship, rose-colored glasses and all, where even the most annoying habits seem endearing. Ever been so lovesick you overlooked glaring red flags? That’s us with AI right now, so caught up in the allure that we're missing the forest for the trees. But here's the kicker: the cracks are starting to show. And while everyone's busy singing praises, the choir of critics is disconcertingly small. Well, I’m not about to join the silent majority. It’s high time we had a more nuanced conversation about our infatuation with AI - warts and all. Let's dive in.
Uncharted Territories: The Leap of Claude 3 and Opus
This week’s developments, notably Anthropic's unveiling of Claude 3 and its stellar model, Opus, mark a significant leap into uncharted territory, with capabilities that overshadow predecessors including GPT-4 and Gemini Ultra. This evolution is not merely about technological advancement; it carries profound implications for the job market and societal norms. It’s only a matter of time before a model arrives that puts every white-collar role, in every department, from coordinator to SVP, at risk. That was unimaginable last year. Make no mistake: the tools to make it happen are being built today.
LinkedIn’s Pandora's Box: The Collaborative Articles Project
Take, for instance, LinkedIn's Collaborative Articles Project, which has now been active for a year. With a pool of nearly a billion members who cumulatively hold over 10 billion years of combined experience, the project generated about one million answers in its first six months; at that rate, it has likely surpassed two million by now. These answers come from every role in every industry, from coordinator to SVP. There’s a lot of expertise in this pool, including cofounders of companies like Netflix, industry icons like Andrew Ng, and millions of people who keep their respective company machines well oiled and running. LinkedIn's initiative to aggregate professional insights through this project is a double-edged sword: while it fosters a rich repository of knowledge, it can also fuel the development of an extremely robust and powerful AI toolset capable of performing virtually any white-collar, knowledge-based service role. A better name for this project might be “The AI Myopia Project: Missing the Forest for the Trees.” This scenario exemplifies the unintended consequences of our digital contributions, as we edge closer to a future where AI's role in the workplace is both indispensable and unsettlingly pervasive.
In time, maybe even today, the answers collected from the Collaborative Articles project will be sufficient to offer a SaaS subscription for a Marketing Manager who can perform both strategic and tactical services. Did I say Marketing Manager? I meant Marketing Director, VP, SVP, EVP, and CMO too. And don’t think it’s unlikely. Tyler Perry just announced he’s putting an $800M studio expansion on hold because he believes OpenAI’s Sora can do the work of the creatives he would have needed to hire.
The Corporate Race Towards AI Efficiency
Yet this situation reveals a stark reality: the efficiency and cost-effectiveness of AI are rapidly becoming too compelling for businesses to ignore, even as they try to navigate the delicate balance between technological progress and its human impact. And the phenomenon is not isolated. Across the globe, companies are leveraging AI to streamline operations and enhance productivity. From MSN using AI for content creation to Google integrating AI across various departments, the trend is unmistakable. SAP, Salesforce, and Duolingo are just a few names on a growing list of companies adopting AI, each with its own narrative of efficiency gains and the inevitable questions about the future role of human workers. Just two days ago (March 5th, 2024), Entrepreneur magazine reported that JP Morgan's free AI cash flow software has cut human work by almost 90%. Care to guess how one of the world's largest financial institutions will respond to that? Or how its clients will, once they start being billed for it?
Beyond the tangible issue of job displacement, the rise of AI in the workplace sows deeper, more insidious seeds of unease among the workforce. Workers are not only facing the real threat of losing their jobs to AI-powered systems but also grappling with heightened anxiety over job security. This constant undercurrent of worry can erode mental health, dampen workplace morale, and stifle productivity. This ex-Meta employee seems to agree:
“The overall layoff strategies to improve immediate shareholder return in the short term seem penny-wise but pound-foolish.”
Ethical Challenges and the Darker Facets of AI
This shift toward AI is also fraught with challenges and ethical dilemmas. A Hong Kong firm falling victim to a $25 million scam orchestrated through deepfake technology, and the spread of deepfake pornography depicting Taylor Swift, underscore the darker facets of AI's rise. Such incidents illuminate the urgent need for robust frameworks and regulations to ensure AI's ethical use and to safeguard against its potential for deception and fraud. And while those are examples of human bad actors, the technology itself isn’t safe either, as highlighted by the AI worms recently created by researchers at Cornell Tech, the Israel Institute of Technology (Technion), and Intuit. It will only get worse if frameworks and regulations aren’t implemented soon.
Blueprint for Action: Addressing AI’s Unintended Consequences
To navigate the ethical and employment challenges posed by AI, we must advocate for and implement specific regulatory frameworks and ethical guidelines. For instance, adopting AI governance models similar to the European Union’s AI Act, which proposes a risk-based approach to AI regulation, could serve as a blueprint.
Globalization has been a hot topic for nearly two decades, but nothing will create a greater need for global regulation than AI, with both the good it promises to deliver and the increasingly nightmarish scenarios that are becoming less science fiction and more real by the day. Industry leaders and policymakers should work together to establish global standards for AI ethics, ensuring that AI development aligns with principles of fairness, accountability, and transparency, and uplifts society rather than merely enabling and securing business returns. Business may be an engine for social well-being, but without consumers and a population that feels safe, business dies. Collaborative efforts, such as the Partnership on AI, which brings together leading AI companies and stakeholders to establish best practices, exemplify how diverse entities can unite to steer AI toward societal benefit. More than ever, those already impacted, and those expected to be, need to be integrated into the process and the thinking.
In addressing the ethical dilemmas of AI, a deeper exploration of the nuances of bias, decision-making, and autonomy is more crucial than ever. Ethical frameworks, like those proposed in the IEEE's Ethically Aligned Design, offer comprehensive guidelines for respecting human rights and ensuring AI's beneficence. Such frameworks encourage developers to prioritize ethical considerations from the outset, embedding values of equity and justice into the very fabric of AI systems. By rigorously applying these principles, we can mitigate risks such as discriminatory biases and unintended consequences, fostering AI that enhances rather than undermines societal well-being. It’s very important to get this right; when it’s not, the value of the AI is irreversibly harmed. Note what happened to Google’s Gemini and its inability to generate accurate historical representations after overcorrecting for diversity in its image generation.
The Future of Generative AI In the Workplace
In embracing the advancements of generative AI in the workplace, we stand at a crossroads. The path forward demands a nuanced understanding of AI's impact on the workforce, a commitment to ethical standards, and a concerted effort to navigate the societal transformations it heralds. As shared earlier, jobs are already being removed. Generative AI is in its infancy, and outside of those building it, the full scope of what it can and will do isn’t being thought about consistently and routinely. That’s a problem. Here’s a great example of an inaccurate assessment: “AI won’t replace you; those who use AI will.” This isn’t true. AI will replace some, and eventually many, of your roles. Maybe not today, but it’s coming; if not today, then within a couple of years. Two years ago, Klarna laid off 700 people in quite a spectacle of a process. Today, nearly two years later, Klarna announced that the AI assistant it implemented with the help of OpenAI is doing the equivalent work of 700 full-time agents, handling 2.3 million conversations, about two-thirds of all customer service chats, and driving a 25% drop in repeat inquiries, which Klarna estimates will amount to a $40 million profit improvement in 2024. Want to bet on something at Klarna that will soon total 700 and is a seven-letter word beginning with the letter L? It’s not laptops.
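For a rough sense of scale, here is a back-of-envelope sketch using only the Klarna figures cited above; the per-agent and per-conversation ratios it derives are my own illustrative assumptions, not numbers Klarna has published.

```python
# Back-of-envelope arithmetic using the Klarna figures cited above.
# The derived ratios are illustrative assumptions, not published numbers.

conversations = 2_300_000            # conversations handled by the AI assistant
agent_equivalents = 700              # full-time agents' worth of work, per Klarna
share_of_chats = 2 / 3               # roughly two-thirds of all customer service chats
profit_improvement_usd = 40_000_000  # estimated 2024 profit improvement

conversations_per_agent = conversations / agent_equivalents
total_chats = conversations / share_of_chats
profit_per_conversation = profit_improvement_usd / conversations

print(f"~{conversations_per_agent:,.0f} conversations per agent-equivalent per year")
print(f"~{total_chats:,.0f} total customer service chats implied")
print(f"~${profit_per_conversation:,.2f} estimated profit improvement per AI-handled conversation")
```

On those rough numbers, each AI-handled conversation carries somewhere around $17 of estimated profit improvement, which is exactly the kind of arithmetic that makes the seven-letter bet above so uncomfortable.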
Humanity and AI: Ethical Judgment and Emotional Intelligence
And here’s the thing: when enough people are replaced, there won’t be customers left to buy. We’ve got a snowball at the top of the mountain right now, just beginning its roll down the slope. We have time, but it’s running out. Our leaders, innovators, and the global community will be paramount in steering generative AI toward a future that harmonizes technological prowess with the imperatives of human dignity, job security, and ethical governance. The main concern I see is that we have leaders asking those most likely to benefit from AI to dictate its course. It’s akin to asking the fox to guard the henhouse. They should have a seat at the table, but it shouldn’t be the only seat. The journey is fraught with uncertainty, but it also offers a canvas for reimagining the contours of work, creativity, and societal well-being in the age of AI.
Acknowledging Technological Limitations and Workforce Adaptation
While AI's capabilities are impressive, acknowledging its limitations reveals the indispensable role of human creativity and empathy. Complex problem-solving, emotional intelligence, and ethical judgment remain uniquely human traits that AI cannot yet replicate, and may never be able to. It’s unethical to kill someone. We know this. But AI? For AI, it’s a programmed choice, and one it will have to make in the future. It’s one of the challenges with self-driving automobiles. I’ve listened to conversations about whether it would be more ethical to kill an old man or a toddler playing in the street if an autonomous vehicle lost braking power and had to hit one. Which should it choose? These conversations are being had right now. Christine would hit both. The Blues Brothers would first ask if either was a Nazi. Questions like these need to be fleshed out and answered by the right people.
To complement AI's rise, we must prioritize how our roles can work better with AI, through reskilling and upskilling initiatives that empower workers to thrive alongside it. Educational reforms and lifelong learning programs can equip the workforce with the skills needed in an AI-integrated future, ensuring that technological advancement augments rather than replaces human potential.
Democratizing AI Development: The Role of Stakeholder Engagement
Ensuring that AI's development and deployment are democratically governed requires engaging a broad spectrum of stakeholders. Public forums, ethical AI committees, and participatory policymaking processes can incorporate diverse perspectives, from workers affected by AI to ethicists and civil society representatives. This inclusive approach ensures that AI policies reflect a wide range of interests and values, promoting fairness and preventing the concentration of power among a tech elite. Right now, it’s primarily the elite holding these conversations. Furthermore, public education campaigns can raise awareness about AI's ethical implications, fostering a society that is both informed and engaged in shaping the future of AI.
Unleashing AI's Potential for Human Progress
Artificial intelligence is a powerful catalyst that can revolutionize industries from healthcare, manufacturing, and environmental conservation to retail, finance, and entertainment. However, it is not a panacea: the uniquely human traits of EQ and empathy remain invaluable and, as with some humans, may never be achievable in AI. When developed responsibly, AI can amplify humanity's abilities, propelling us toward a future where its technology and our compassion unite harmoniously, akin to the enlightened society portrayed in Star Trek.
Yet we stand at a pivotal crossroads. Without a steadfast commitment to ethical AI principles and robust, enforced governance, we risk careening down a troubling path reminiscent of Ready Player One - a dystopia where digital realms become an escape from harsh realities left unaddressed. The choices we make today will shape whether AI uplifts society as a transformative force for good or merely provides a fleeting distraction from our core challenges.
For marketers like myself, this duality necessitates an approach that accurately conveys AI's incredible potential while respecting lingering concerns around consumer privacy, transparency, and human-centric design. Technologists must remain grounded in real-world applications that create value ethically and equitably. Together, we can pioneer AI solutions that enhance rather than replace human strengths and judgment.
Our collective actions will write the story of AI's societal impact. By fusing technical ingenuity with an unwavering moral compass, we can chart an inspiring course towards a future where innovative AI and enlightened human values coalesce to uplift humanity as a whole. That sounds promising, right?