The AI Hype is Designed to Exploit Your Insecurity
Contents
- Introduction
- A bit of background
- You're not good enough
- Corporate insecurity
- Disrupting humanity
- Who is this even supposed to benefit?
- Counterpoint
- We don't need AI
- Conclusion
Introduction
I recently sat down to pick up a blog post that I've been drafting on and off for almost a year now. I won't spoil it, but it's an opinion piece - something I haven't done in a while - meaning that my interpretation of certain facts can make or break the whole thing. That's all well and good, but I have recently found myself questioning and doubting the positions I often take. As a result, I sat and stared at the page for a while and found that I really couldn't put together a sequence of sentences.
You may be thinking, "well, writer's block is normal". Sure, but this wasn't just writer's block. It was different. It was different because the voice in the back of my head that would normally tell me to just write and worry about refining my ideas later was saying something else entirely:
"Hey, you know you could just use AI to draft the next section, right?"
"Why would I bother reading something that the author didn't even bother to write?" - someone on the internet.
If I'm being honest, I hated myself for even thinking that. Nothing puts me off more than a low-effort blog post that was clearly cobbled together in ChatGPT. I definitely wouldn't subject my readers to that, either. It did get me thinking, though, about how the marketing around AI tools has been specifically framed to make me (us) feel insufficient. It's all about how we can "improve our outputs", "work faster", "be more productive", "write better", etc. We're told that we no longer need to learn any difficult skill or endure the discomfort of self-improvement. Why? Because we're not good enough to do so. Because we're better off just letting the machine do the hard work, since we can't, or shouldn't, be trusted to do it ourselves.
My goal, with this post, is to show you that this latest wave of AI hype has, at its core, been driven by induced insecurity and fear-mongering about what may happen to us if we don't embrace it.
A bit of background
I'm not going to take you all the way back to the initial release of ChatGPT in November 2022. I've already done that before, and so have many others. Instead, I want to skip ahead to a pivotal event a few months after this - Sam Altman's appearance in front of Congress.
Clamouring for regulation
If I had made a huge technological breakthrough that could impact all of humanity and make me the richest person in the world, I reckon the last place I'd want to find myself is in front of US Congress begging them to regulate that technology.
And, yet, that's exactly where Sam Altman was on the 16th of May, 2023 - a mere 6 months after ChatGPT's release had shattered all expectations and sparked a new AI race. More shocking was what he said that day (with my own emphasis on certain keywords):
- "We think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models"
- "we’re going to face an election next year and these models are getting better."
- "There will be an impact on jobs."
There's probably more nuance to this, admittedly, but how do you keep building a technology and, at the same time, preach about how deadly it could turn out to be? Maybe you only do that if framing the narrative around the technology is far more important than the technology itself.
To give an analogy: imagine if, at the height of World War II, J. Robert Oppenheimer had appeared in front of Congress as the leader of the Manhattan Project, begging them to slam the brakes on the development of the atom bomb. Surely, news would trickle down to America's wartime enemies that the US was building a bomb that even its creator was wary of. One might start to wonder if it was all smoke and mirrors. If, maybe, there was more to be gained from people believing that the technology was powerful, because that power was either not immediately demonstrable, or could not survive long-term scrutiny.
I believe that a lot of the value attributed to AI right now, especially in terms of the market capitalisations of the companies involved, is based more on promises than actual present capabilities. That, in turn, drives adoption because no one wants to be the person left behind as everyone else supposedly reaps the benefits of AI. After all, we have seen what happens when people (and companies) fail to react quickly enough to the winds of disruption - they get out-innovated and left in the dust.
You're not good enough
In the two and a half years since the launch of ChatGPT, AI companies have had their fair share of controversies, and many of them are tied to how their products have been advertised. One notable example was during the 2024 Paris Olympics, when Google put out an ad showing a father using Gemini to write a fan letter on his daughter's behalf to "her hero", Simone Biles.
I found that ad to be pretty cringeworthy. A fan letter should be real, it should be personal. If you've seen, for example, cards that children write to their parents, you know that whatever they lack in eloquence, clarity, turn of phrase, and, sometimes, just plain good grammar, they more than make up for in pure intention. It doesn't matter that the writing is derivative and lacks depth. It's not meant to be a literary work for the ages - it's meant to be self-expression from one human to another.
As a writer and programmer, I know first-hand what imposter syndrome feels like. Creatives are among those most bogged down by that feeling, which invariably leads them to question their own abilities. Am I good enough? That is the fundamental question.
If you hear a voice within you say "you cannot paint," then by all means paint and that voice will be silenced. - Vincent Van Gogh.
For thousands of years, the answer to that question was to forge ahead and improve. There was no substitute for those years or even decades spent honing your craft (think of Malcolm Gladwell's 10,000 hours). If you wanted to attain mastery and earn the recognition of your peers, you'd have to work for it.
Today, however, the answer has changed. We have unleashed a technology that can exploit our deep-rooted insecurities.
Maybe I don't know exactly how to express myself in a formal setting. Maybe I'm working on a new codebase and am absolutely terrified of making a change that will break production.
This would be the time to give myself room to learn. But, now, there's no time for that. Why take the risk of spending years learning something that you might never truly get good at when you could just take a shortcut and have something else do it for you?
The irony of it all is that it actually makes you worse at the thing you thought you weren't good enough to do. Maybe you were a half-decent writer, or coder, or digital artist - a rough, uncut diamond in need of some polishing. Maybe you've got an analytical mind for product design and development, or business strategy. All of that is lost once you allow yourself to cave and start offloading mentally challenging work to AI. You do it once, then twice, then it becomes a habit. The next thing you know, you're staring at a blank page and finding that you've regressed to the early days of your career, when a blank page was the stuff of nightmares. All because you've allowed a bunch of tech companies to exploit your insecurities, and convince you that not only are you not good enough today, but that there's just absolutely no time to improve.
Corporate insecurity
"The shark that does not swim, drowns" - a Russian proverb
If it makes you feel any better, the corporations themselves are not immune to this insecurity. The stories of Nokia and Blackberry are a spectre that haunts them all. They are among the most memorable examples of companies - two massive consumer electronics giants - collapsing because they missed the turn when a new technology popped up on the scene.
As a result, companies are absolutely terrified of being out-innovated. Especially in the world of tech, where a small head start often compounds, being late to embrace a new technology by even a couple of months can be the difference between survival and death (or so they say - history shows that most successful companies were not, in fact, the first movers, but for the sake of argument, we'll entertain this position).
The cynic in me thinks that not everyone pushing AI technology believes in it as fully as they claim to. It's like virtue-signalling - I don't necessarily have to believe that AI will take everyone's jobs and that my company's survival depends on our willingness to embrace AI in everything. But it can be massively beneficial to be seen to be doing so, especially when investors are desperate to see 'forward-thinking' companies that:
- stay up to date with the newest technological advances.
- are willing to embrace ruthless efficiency to maximise shareholder value.
In such an environment, it's not enough to be agnostic about the technology. One has to be loud and proud about their belief in it. You must be seen as a bastion for the elevation of AI-related technologies. As the leader of a company or corporation today, you really can't afford to take a nuanced position. If the technology turns out to be as revolutionary as its proponents claim it will be, you'll be the equivalent of Ronald Wayne selling his 10% stake in Apple for $800 back in 1976. You really don't want to be the guy who was wrong, because the tech industry is often unforgiving. So, you'd rather go with the crowd, assuming that, at least, if we're all wrong, we all drown, and if we're right, we all win.
Disrupting humanity
AI companies will have you believe that humanity is about to be disrupted, but let's take a second and look at our track record via a non-exhaustive list.
These weak, little brains that are apparently not even good enough to write a thank-you note have managed, over the millennia, to:
- write millions of works of great literature, fiction, and philosophy that have pushed our thinking forward and elevated our understanding of morality, ethics, linguistics, and the physical universe.
- understand germ theory, and come up with a functional framework for curing diseases instead of just attributing them to "witchcraft" and the "will/wrath of the gods".
- master the atom and quantum mechanics (albeit, with some negative outcomes).
- put humans on the moon, set up a residence for humans in space known as the ISS, and build craft like Voyager that have flown far beyond the edge of the solar system.
- build massive software projects like the Linux kernel and entire operating systems, some of them with very little corporate organisational structure.
Everything AI can do today, it does by standing on the shoulders of giants. At the risk of sounding like I'm writing some kind of "humanist manifesto", it's only right to point out that absolutely nothing AI can do today is beyond the realm of possibility for a human. We are the giants on whose shoulders all of AI's achievements have been built. Why else would AI companies be ignoring robots.txt rules to scrape as much human-generated content as possible from the web? And why else would using AI bots (almost) always automatically opt you in to having subsequent models trained on your inputs and feedback?
The big risk is that, in our FOMO and insecurity, we allow companies with obvious financial incentives to convince us that we are not capable of doing things that we have been doing for millennia. I'm all for replacing humans in dangerous domains if it means saving lives, but not for the wanton replacement of human labour with machines just to save a couple bucks and work a little faster. Most importantly, we must not allow these companies to try and diminish the value of mastery in a field. It's pretty damn awesome to be competent.
Who is this even supposed to benefit?
At this point, I find myself wondering who exactly this is even supposed to benefit. It may sound like I'm committing the cardinal crime of creating a false dichotomy, but I feel like there are two potential outcomes, at least on a personal level:
- AI becomes as powerful as they say it will: we develop AGI, most skilled workers are replaced by bots. Then what? What's in it for me? Because, I am certain that once massive corporations are profiting from not having to pay humans, they are not just going to turn around and start giving handouts, through Universal Basic Income (UBI) or just food, to those who can't afford to feed themselves anymore. At least, not unless things become so absolutely dystopian that their very survival and that of society depends on it.
- AI doesn't become as powerful as they say it will: in which case, I would have wasted a bunch of time where I could have been learning and improving, but instead allowed myself to be deskilled, and now I can't actually do the job, but neither can AI.
In both cases, I reckon, the biggest winners will be those who refuse to become overly reliant on AI to the point of becoming mentally atrophied. Think of it similarly to Pascal's Wager: if AI turns out to be everything its proponents promise, keeping your skills sharp will have cost you very little; if it doesn't, those skills are exactly what will save you.
Clearly, you can never go wrong with acquiring skills. There will always be room for domains where AI is undesirable, or just people who still want something built/done by a human. Coffee machines didn't put baristas out of work, and calculators didn't eliminate the need to learn basic arithmetic.
Counterpoint
And, just so that I don't come off as one-sided and blinded by raging hatred for AI, I will say that I understand the allure of leveraging a technology that draws on thousands of years of human progress. AI models are trained on humanity's best (and worst) and, to some extent, we can learn from that. Why should we get bogged down with learning things when we can leverage a technology that synthesises humanity's best ideas into a consumable form?
The problem, of course, is that AI-generated content is mediocre. Have you ever read something that was blatantly plagiarised? There's a lot of bending over backwards to rephrase sentences that simply don't need it: the plagiarist makes short sentences wordier, or reaches for a thesaurus to find alternative words that no one would ever use in polite society, all in a bid to hide their crime. That's how AI-written content comes off (at least, to me). The same goes for code generation: in many cases, it is absolutely clear that the goal is simply to avoid copying things verbatim, as that would reveal the source material.
We don't need AI
Maybe the thing that has motivated me to write this has been my exposure, over the last year or two, to programming tools designed to reduce the most common mistakes we make as software developers. From the built-in memory safety in Rust, to the actor model in Erlang for safer concurrent programming, I have seen how we have built tools to help ourselves avoid the common mistakes that plague modern software. The same applies to things like spell-check in word processors. What the people who built these tools understood, I believe, is that the solution isn't as simple as discarding all forms of human creation and replacing them with AI just because we are prone to making mistakes, getting things wrong, or, in the worst cases, falsifying things. After all, AI is prone to making the same mistakes, often at a larger scale, and often without the same level of scrutiny that we apply to humans.
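To make that concrete, here's a toy sketch of my own (an illustrative example, not taken from any real codebase) of the kind of mistake Rust's ownership rules catch at compile time, long before it can become a production incident:

```rust
// A function that takes ownership of its argument.
fn consume(words: Vec<String>) {
    println!("got {} words", words.len());
}

fn main() {
    let words = vec![String::from("hello"), String::from("world")];
    consume(words); // ownership of `words` moves into `consume` here

    // In C, the equivalent slip-up could be a use-after-free lurking in
    // production. In Rust, uncommenting the line below simply refuses to
    // compile: "borrow of moved value: `words`".
    // println!("{}", words.len());
}
```

The tool stops me from shipping a whole class of bugs, but the thinking - and the typing - is still mine. That, to me, is what a tool that complements a human looks like.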
Without getting into the obvious intricacies of copyright law and fair use - i.e. whose data these AI models are being trained on, and whether those people are being sufficiently compensated - I do want to point out that I recognise AI can be like the above-mentioned tools: something we use to complement our skills and improve our abilities. As a programmer, I know when tools like a better editor, extensions, and language servers (LSPs) help me to do my job better. But the operative word there is complement - as in, you're not completely replacing human labour, because these tools still require human input.
With AI, however, things are very different. I could ask any of the popular chatbots right now to generate the code for a fairly complex application and they'd do it. I'd get 1000+ lines of code that, instead of carefully reviewing each line for security vulnerabilities and points of failure, I'd just copy into my editor and try to run or compile. Indeed, with agentic code editors, I don't even need to copy and paste. They can do everything from creating directories and files, to installing external packages, and running the code.
This can all be done without human input, which means that, given the sheer quantity of content generated, it's much harder for a human to carefully review everything. I'd even go as far as to argue that anyone willing to take this route is far less likely to carefully review each line to make sure things are in order. One reason for that could be laziness or impatience, but it could also be that they have already succumbed to the insecurity-targeting advertising and are convinced that the tool can already generate better code than they can write. In which case, they wouldn't even see themselves as being qualified enough to review the generated code.
Yet, history shows us that humanity has always moved forward not by trying to replace ourselves because of our deficiencies and limitations (many have tried), but by building things that give us greater leverage to overcome them.
Conclusion
I became a programmer because I believed that I had the ability to build cool and useful things. I started writing because I believed that I had something valuable to say. I believe the same goes for most people out there in creative occupations. I don't want to sound naively idealistic, but I think there's value in the work that people produce. It might not be great; it may not touch lives, change minds, quench someone's spiritual thirst, teach them something new, or make its creator a billionaire, but it has value as the self-expression of a specific body-mind.
Maybe the thing that scares me the most is the dead internet theory. That is the idea that, at some point, bot activity will surpass human activity on the internet, and most of what you'll see, most interactions, will be generated by non-human entities. People will argue for hours on end about whether or not this is a bad thing, but I think it will be a terrible tragedy if we can't hear from other humans or interact with them because the internet has developed a bot infestation. Especially if it happens because we allowed ourselves to be convinced that we have nothing of value to say. That would totally suck.
My goal, here, was simply to inspire you to hesitate before reaching for AI tools. Not because it's ethically or morally wrong, bad for the environment (it is), or anything like that, but because, given enough time, you can do better.
If you've made it this far, I hope that this has been a good read. If you have any thoughts, comments, or just want to say hi, Mastodon is where you'll find me. Most importantly, if you loved it, don't hesitate to share it with someone else who might enjoy it, too.