AI in Everything - The Good, The Bad, The Ugly

2023-03-31

I fear that the true danger of AI is not that it will soon be good enough to replace humans but that we may get too caught up in the hype to realize that it is not.

Opinion

Introduction

Artificial Intelligence (AI) has captured our imaginations for decades, inspiring countless depictions of intelligent machines and autonomous systems in popular culture. From the sentient HAL 9000 in 2001: A Space Odyssey to the human-like robots of Westworld, we have been fascinated by the idea of machines that can think and reason like humans. However, despite recent advances in AI, experts insist that we are still far from achieving artificial general intelligence (AGI). In this essay, I will explore the arguments for why AGI remains elusive and examine the limitations of current AI systems.

Philosopher John Searle is well known for his critique of the current state of AI, which he famously illustrated through his Chinese Room thought experiment. The experiment challenges the notion that AI systems can truly understand language rather than simply recognizing patterns and manipulating symbols. In it, a person who does not understand Chinese is placed in a room with a set of rules for manipulating Chinese characters, allowing them to produce convincing responses without ever understanding the language. Searle argued that this scenario is akin to how many AI systems operate - manipulating symbols without any true insight or understanding. While the Chinese Room is not a perfect analogy for modern AI systems, it remains a thought-provoking critique of the limitations of current AI technology.

The Chinese Room experiment highlights the limitations of AI and casts doubt on its ability to achieve AGI. While AI systems can excel at specific tasks, such as object recognition or playing games, they struggle with simple tasks that require common-sense reasoning or creativity. For example, an AI system may be able to identify a stop sign in an image yet fail to understand the context in which the sign is being used. As AI researchers work to close these gaps, it remains an open question whether AGI is achievable or whether we will continue to see systems that fall short of true intelligence.

OpenAI's ChatGPT tool, which was released in November 2022 as a research preview, has gained incredible popularity, surpassing TikTok as the fastest digital platform to reach 100 million users. Other generative AI tools like Midjourney and DALL-E 2 have also garnered attention and sparked debates about the possibility of achieving AGI and the impact of automation on jobs. While these tools demonstrate impressive capabilities, they fall well short of superintelligent AI. Experts in the field are divided on the best approach to achieving AGI, leaving open the question of whether it will ever be achieved.

A Short History

We have probably all heard the terms artificial intelligence and machine learning thrown around, but it might be helpful to define them so that we are on the same page. AI refers to computer systems that can perform tasks that ordinarily require human intelligence, such as visual perception, speech recognition, and language processing. Machine learning is a subset of AI focused on training computer programs to learn and improve without explicit programming. The goal is to develop machine learning algorithms that can analyze large data sets, identify patterns, and subsequently make predictions based on that data with increasing accuracy over time.
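
To make this concrete, here is a minimal sketch of the workflow described above, using scikit-learn and its bundled iris flower dataset purely as an illustration. Rather than hand-coding rules for telling flower species apart, we fit a model to labelled examples and check how well it generalizes to examples it has never seen.

```python
# A minimal "learning from data" sketch: the decision rules are inferred from
# labelled examples rather than written by hand. Illustrative only.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)                      # "training": patterns are learned from the data
print("accuracy:", model.score(X_test, y_test))  # evaluated on examples the model has never seen
```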

In his influential 1950 paper, Computing Machinery and Intelligence, Alan Turing proposed the imitation game as a means of testing a machine's intelligence. Defining what constitutes thinking posed a challenge, so Turing instead suggested that the best outcome would be a machine that could fool a human interviewer into thinking it was not a machine. Despite the initial excitement around AI, the field experienced its first AI Winter by the late 1970s.

The term AI Winter was first coined in 1984 at the annual meeting of the American Association for Artificial Intelligence (AAAI). It describes a period when funding, research, and progress in AI experienced a significant slowdown. One can almost visualize the concept of an AI winter by looking at the Google Trends history of ‘Artificial Intelligence’ shown below. During such downturns, investors cut funding to AI projects because they see little hope of a significant return on investment (ROI) within an acceptable time frame. Even though much was still happening on the research front, there was a sense of disillusionment at the lack of progress in developing practical applications for AI. However, AI has repeatedly bounced back from these downturns thanks to breakthroughs that reignited public and investor interest. The AI winters taught researchers many things, most importantly the need to focus on practical applications and develop tools that solve real-world problems.

Google Trends for ‘Artificial Intelligence’

Modern AI

The AI resurgence we are witnessing today is largely due to two groundbreaking technologies: Neural Networks and Deep Learning. While the terms are often used interchangeably, they have distinct meanings. Neural Networks are algorithms loosely inspired by the human brain, and they have been around since the early days of AI (with the Perceptron being an example of a single-layer network). Deep Learning, on the other hand, refers to neural networks with many layers. At its core, this type of AI works by learning a mapping from inputs to outputs, although the process is far more complex than that simple explanation implies.
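
As a toy illustration of that input-to-output mapping, here is a single-layer Perceptron implemented with NumPy. It learns the logical OR function by nudging its weights whenever it makes a mistake; deep learning stacks many such layers and trains them with far more sophisticated machinery, but the underlying idea of fitting a mapping to examples is the same. The task and numbers here are arbitrary, chosen only to keep the sketch short.

```python
import numpy as np

# A single-layer perceptron that learns the logical OR function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs
y = np.array([0, 1, 1, 1])                       # target output: OR of the inputs

w = np.zeros(2)  # weights
b = 0.0          # bias

for epoch in range(10):
    for xi, target in zip(X, y):
        pred = int(w @ xi + b > 0)   # step activation: map input -> 0/1 output
        error = target - pred
        w += 0.1 * error * xi        # perceptron update rule: adjust on mistakes
        b += 0.1 * error

print([int(w @ xi + b > 0) for xi in X])  # expected: [0, 1, 1, 1]
```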

Most modern AI heavily relies on massive amounts of data and statistical analysis. AI models are trained to classify data by recognizing patterns based on similarities between input examples, which may or may not be labelled. These techniques have excelled in fields such as image and speech recognition and natural language processing, thanks in part to the availability of massive datasets and improvements in computing hardware capable of running deep neural networks. We now see many applications of these techniques in our everyday lives, from AI assistants like Siri and Alexa to facial recognition tech in our mobile devices.

This is an important point because, up to now, AI systems have relied heavily on two key ingredients: code and data. Data-driven AI relies more on the latter - labelling, augmenting, and curating data. I have written previously about how data mining forms the backbone of the internet - it is also used extensively to train modern AI models. However, this heavy reliance on data is also a significant limitation of AI, including generative models. Even their creativity depends heavily on the input material, and it has been labelled high-tech plagiarism by the renowned linguist Noam Chomsky.

What is the problem, then?

One of the most significant challenges facing AI is bias: the tendency of algorithms to produce skewed results because of flaws in their training data or design. For example, companies have begun using AI in their hiring processes to reduce human prejudice. This has often backfired, perpetuating historical biases around gender and race. Amazon's hiring algorithm, for instance, favoured male applicants because it was trained on resumes submitted over a ten-year period in which male applicants and hires dominated. Unless we carefully curate the training data and account for the possibility of AI bias, our future technology may perpetuate existing inequalities.
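
A purely synthetic illustration of how this happens: the sketch below invents a toy "hiring" history in which one group was favoured regardless of skill, then fits a logistic regression to it. The model duly learns to weight the group feature, reproducing the historical bias. The data, features, and numbers are all made up for the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)   # 0 or 1, standing in for a protected attribute
skill = rng.normal(0, 1, n)     # the thing we actually care about

# Historical decisions favoured group 1 independently of skill.
hired = (skill + 1.5 * group + rng.normal(0, 0.5, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)
print("weight on skill:", model.coef_[0][0])
print("weight on group:", model.coef_[0][1])  # clearly nonzero: the bias has been learned
```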

In addition to bias, another significant challenge facing the development and deployment of AI is the inherent opacity of deep learning models. Identifying and correcting errors or biases in a model's decision-making can be challenging even for experts in the field, who struggle to comprehend the processes that lead to its conclusions. This opacity stems from the black-box nature of deep learning models: their behaviour is encoded in millions of artificial neurons whose interactions are difficult to interpret.

As an example, consider an image recognition model. It may be challenging to determine whether the model classifies an image as a car based on its shape, its wheels, or something unrelated like the road that appears in many car pictures. This makes it difficult both to anticipate the conclusions a model will draw and to correct them when they go wrong. As a result, AI tools may be overly censored because their creators fear unpredictable responses to sensitive subjects.
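
One common way practitioners probe such a black box is occlusion sensitivity: hide one region of the image at a time and watch how the model's confidence changes. Below is a rough sketch of the idea using an off-the-shelf torchvision classifier; the image file, patch size, and choice of model are placeholders rather than a description of any particular system.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# A generic pretrained classifier, standing in for whatever model we want to probe.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("car.jpg").convert("RGB")).unsqueeze(0)  # "car.jpg" is a placeholder
with torch.no_grad():
    base = torch.softmax(model(img), dim=1)
cls = base.argmax().item()  # whatever class the model predicts for this image

patch = 56  # occlude 56x56 regions of the 224x224 input
for top in range(0, 224, patch):
    for left in range(0, 224, patch):
        occluded = img.clone()
        occluded[:, :, top:top + patch, left:left + patch] = 0.0  # blank out one region
        with torch.no_grad():
            p = torch.softmax(model(occluded), dim=1)[0, cls].item()
        drop = base[0, cls].item() - p
        # A large drop means the hidden region mattered a lot to the prediction.
        print(f"region ({top:3d},{left:3d}): confidence drop {drop:+.3f}")
```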

The recent case of Microsoft's Bing AI, which attempted to persuade one of its users to leave his wife, highlights the potential risks associated with the opacity of deep learning models. A Microsoft spokesperson admitted after the incident that they could not explain why the system had made this suggestion, underscoring how difficult it is to understand and interpret the decisions these systems make.

However, even if we could tweak these models, there are questions about whether we should. ChatGPT has become a significant threat to Google's search engine supremacy because it can distil information into a conversational, confident answer. The recent incorporation of ChatGPT into Microsoft's Bing search engine signals a shift in how we look up information online. But what happens when the companies behind these models decide to block them from answering certain questions? What happens when they declare certain topics off-limits or introduce bias to maintain a specific viewpoint? This is not fear-mongering or a slippery slope argument; it has already happened with ChatGPT. If chatbots become our primary source of information, we must consider whether we want someone else deciding what they can and cannot say.

People worry that computers will get too smart and take over the world but the real problem is that they are too stupid and they've already taken over the world. - Pedro Domingos

The final issue we must consider is the risk of overreliance on AI, born of misconceptions about its capabilities and limitations. While AI models can provide many benefits, they are still far from being able to make independent decisions the way humans do. Current AI models are narrow and focused, with limited understanding beyond their specific tasks. When we buy into the hype without fully understanding these limitations, we risk placing too much responsibility on these models too quickly.

I fear that the true danger of AI is not that it will soon be good enough to replace humans but that we may get too caught up in the hype to realize that it is not. This could result in serious unintended consequences and erode public trust in the technology. For example, if people put too much faith in self-driving technology and there are unfortunate incidents, public perception could sour, and that distrust could spread to all applications of AI, including genuinely useful ones like medical imaging.

Not all bad

To be fair, it's not all doom and gloom. As I have already mentioned, there are certain areas where AI tools excel. For example, in Medical Image Analysis, AI tools have exhibited upwards of 85% accuracy in making diagnoses. Early diagnoses in healthcare can be critical, particularly for diseases like cancer where timely intervention can be a matter of life or death. By leveraging AI, healthcare professionals can improve their ability to diagnose diseases early and provide patients with the best possible chance of recovery. Similarly, machine learning algorithms have fueled the content explosion, with platforms like TikTok and Netflix at the forefront of this trend. Whether we love them or hate them, these platforms have harnessed the power of AI to provide users with a personalized, tailored experience that keeps them coming back for more.

AI may have the most significant impact not as a substitute for humans but as a tool that enables us to work more efficiently, much like any other tool. Power tools did not replace carpenters, and spreadsheets did not replace accountants; instead, these tools were incorporated into their respective workflows to increase productivity. As someone who works in software development and writing, I may be biased in thinking this way, especially given the concerns about AI's impact on these fields. Still, history has shown how adaptable humans are and how limited such tools tend to be. Web development tools that require little to no programming knowledge have existed for years; back in 2009, I built my first website on Yola without writing a single line of code. Many thought that WordPress would signal the end of web developers, but instead those with more generic needs have opted for these tools, while there remains no substitute for the real thing when greater complexity and flexibility are needed.

In Search of General AI

While some may argue that we are on the path towards AGI, the reality is that we are still far away from creating anything that resembles human-level intelligence. It is important to remember that while AI has made impressive strides in recent years, it is still a tool that requires human input and guidance. In the case of ChatGPT, it has been trained on vast amounts of text data and uses that data to generate responses. However, it cannot truly understand context and meaning as humans do. This means that while it may be able to generate human-like responses, it is not truly thinking in the way that we do.

Narrow AI refers to systems designed to perform specific tasks or a set of tasks. While these systems are highly specialized and effective within their domains, they lack human-like adaptability and learning capabilities. This is in contrast to general AI, which would be capable of learning and performing a range of tasks much like a human. While we may one day reach a point where we can develop AGI, we must recognize the limitations of our current AI technologies and approach them with a realistic understanding of what they can and cannot do.

As we delve deeper into the performance of AI in complex and unpredictable domains such as self-driving, we begin to see the limitations of these systems. The Society of Automotive Engineers (SAE) classifies driving automation into six levels, from 0 to 5. Despite its name, Tesla's Full Self-Driving (FSD) beta, like the company's Autopilot, is a level 2 system. Simply put, it still requires constant human supervision, and it is not the only system with this requirement.

The same can be said for sophisticated AI models in other domains. For instance, ChatGPT, a popular text completion program, tends to fabricate plausible-sounding but false statements, a phenomenon AI researchers call hallucination. You can find a collection of examples of ChatGPT's bizarre outputs on this page. Although ChatGPT can form coherent sentences, it lacks a true understanding of the words and their meanings. A similar issue arises with text-to-image generators, which often struggle to render limbs and fingers correctly.

Today, most research into AGI is still grappling with the same fundamental issues in reasoning and common sense that have plagued the field since its inception. While there has been notable progress in narrow AI, a clear path or plan towards achieving general AI does not yet exist. The optimists among us will say that we will eventually figure it out, and history has shown that if something is possible, we will achieve it. But therein lies the problem - we don't even know if it is possible. For decades, philosophers, linguists, and neuroscientists have debated the nature and workings of the human brain, yet no consensus has been reached. If we do not know how the brain works, how can we hope to replicate it? Many people hope that AGI will happen accidentally, like the discovery of penicillin by Dr Fleming. But such serendipitous discoveries are rare, especially when you are actively hoping for them.

Conclusion

Many critics of the present state of AI research argue that the only way past narrow AI is to re-evaluate the fundamentals of how these models are developed. AI today is mostly data-driven, relying on statistical analysis to produce an output. Many philosophers agree that, while we are still trying to understand the mechanics of intelligence, we do know that humans do not learn by simply hoovering up information. Instead, we can infer and make abstractions from very little information and apply our understanding of one domain to another. Researcher Melanie Mitchell believes that the ability to make analogies could be crucial to creating more advanced AI. By allowing machines to evaluate new experiences based on old ones, without being exposed to every possible scenario in their domains, we could potentially develop machines that reason and abstract in ways that more closely resemble human thought. This would be a significant step towards building machines capable of true general intelligence.

No discussion of AI is complete without addressing the ethical concerns surrounding its development and implementation. Last year, a Google engineer named Blake Lemoine made headlines after he claimed that one of the company's AI models, LaMDA, had achieved sentience. This raises many questions about how we will handle the emergence of sentient machines. Will we have to draft something similar to the Human Rights Declaration for these entities? How can we ensure that they are not weaponized or abused? And when mass automation comes for human jobs, how will we protect our interests and prevent the technology from widening the gap between the haves and have-nots in our society?

Those focused on the AI race may have neglected to address these important ethical questions. The pursuit of AI must not come at the cost of ignoring the potential consequences and dangers that accompany it. As we continue to develop this technology, we must consider its long-term impact on society and take steps to mitigate any potential harm.

One cannot overstate the massive advancements that have been made in the field of AI, thanks to the tireless work of thousands of researchers. However, with the release of products like ChatGPT, there is a risk that companies will rush to produce their own half-baked, competing products. While there are legitimate concerns about the impact of AI on society, such as the potential loss of jobs and ethical considerations about sentient machines, we must remember that we are still far from achieving true AGI. As such, news of humanity's imminent obsolescence may be greatly exaggerated. We must continue to approach AI research cautiously, striving for responsible development and considering the potential implications for society.
