AI Under Fire: Why Are People Increasingly Skeptical About AI Tools and Is It Justified?

While browsing the web last week for techniques for effectively testing LLMs (Large Language Models), I stumbled upon an article that seemed to address exactly what I was looking for. Eager to dive in, I instead found myself drawn to the comments section below it. The discourse was less about the article’s solutions and more about LLMs and AI in general. A recurring sentiment caught my attention: skepticism about the efficiency and accuracy of AI.

This isn’t the first time I’ve come across such skepticism. There was a time when ChatGPT and similar tools were met with widespread fascination, and everyone seemed to be discussing the revolutionary potential of AI. But a few months in, the narrative appears to have shifted dramatically, with many now doubting AI’s capabilities. Why is the industry suddenly filled with AI critics? Has AI’s performance genuinely declined, or is there another reason for the mounting criticism? In this blog post, I aim to address these questions, sharing my personal beliefs and recounting my experiences from the past months and weeks. Could it be that AI hasn’t changed much, but rather our expectations of it have? Perhaps we should rethink how we assess it.

Is This AI Truly Generative?

When we discuss “generative” in the realm of artificial intelligence, what exactly are we talking about? At its core, being “generative” implies the ability to produce or “generate” new content. Think of it this way: you provide the AI with some information, and it crafts a new response, much like ChatGPT producing a text reply. Instead of retrieving a ready-made answer, the AI draws on the patterns it learned from vast amounts of text during training and composes a response. In contrast, there’s the “discriminative” model. This doesn’t create new content but rather differentiates between existing categories. Imagine a machine learning algorithm tasked with spotting cat pictures within a mixed dataset of cat and dog images: that’s a discriminative model in action.
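To make the distinction concrete, here is a minimal, illustrative sketch in Python. It isn’t taken from any particular product; the toy data, the scikit-learn classifier, and the GPT-2 text-generation pipeline are simply stand-ins chosen to contrast the two ideas: a discriminative model can only assign labels to inputs it is shown, while a generative model produces new text.

```python
# Illustrative contrast between a discriminative and a generative model.
from sklearn.linear_model import LogisticRegression
from transformers import pipeline

# Discriminative: learns a boundary between existing classes and can only
# answer "which class does this input belong to?"
X = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]  # toy feature vectors
y = ["cat", "cat", "dog", "dog"]
classifier = LogisticRegression().fit(X, y)
print(classifier.predict([[0.85, 0.15]]))  # -> ['cat']

# Generative: produces new content conditioned on a prompt, drawing on patterns
# learned during training rather than retrieving a stored answer.
generator = pipeline("text-generation", model="gpt2")
print(generator("Testing large language models is", max_new_tokens=20)[0]["generated_text"])
```

The classifier can never output anything beyond its known labels, whereas the generator composes text that need not appear verbatim anywhere in its training data. That difference is what the word “generative” is pointing at.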

Many people argue that our current AI tools aren’t truly generative. They believe these tools aren’t creating new content but just piecing together known information in random ways. Critics often say that AI doesn’t have original thoughts and just shuffles things around. This leads to a fundamental question: What do we really mean when we talk about “originality” and creating something “new”?

There’s a common expectation that if we feed any question into AI tools, they should instantly provide a groundbreaking answer that no one has thought of before. Yet people often forget that AI’s responses are based on its training data. If it hasn’t been exposed to certain information, how can it produce solutions based on it? Many exclaim, “See, that’s exactly my point!” But then I naturally wonder: isn’t it the same with humans? Aren’t we also products of our experiences and learning? Humans don’t know everything inherently; we learn as we grow, gathering knowledge over time. Ask a 5-year-old about economics, and the response will likely be far more simplistic than if posed to an adult (hopefully). This isn’t because the adult is innately more intelligent, but because they’ve had more exposure to information, and their “model” is more refined.

Yet even adults, with all their accumulated knowledge, can’t be deemed truly original in many respects. Consider the challenge of inventing a brand-new color: it’s an impossible task, because our ideas are built on pre-existing concepts. Genuine innovation often emerges from combining known elements in novel ways. Take the discovery of DNA’s double helix structure. By merging existing knowledge about DNA’s chemical composition, Chargaff’s base-pairing rules, and X-ray diffraction imagery, Watson and Crick presented the iconic double helix model. Their breakthrough came not from a vacuum but from skillfully interweaving the wealth of information they had access to.

Errors Aren’t Always Lies

A frequent criticism of modern AI tools is their supposed inability to recognize what they don’t know, leading them to produce answers that might be inaccurate. Some go as far as accusing AI of intentionally lying to provide a solution. But can an entity without consciousness truly “intentionally” deceive? The answer is a straightforward “no.”

It’s vital to remember that these AI systems are crafted to deliver answers based on the data they’ve been trained on. The AI does its best to provide what it estimates to be the right answer given that information. If it gets it wrong, we have two clear courses of action: correct the AI by giving feedback and adjusting its behavior, or revisit and refine the training data. Labeling the AI as deceitful is a misinterpretation. If it repeatedly delivers the same incorrect answer to a specific query, the fault likely lies in the model or the data, not in the AI “choosing” to lie. Is the AI still on its learning curve? Absolutely. Believing everything it says without question would be just as unwise as blindly trusting every human statement. The key distinction is that only humans can truly lie with intent; an AI system would only lie on purpose if it were designed to do so.

What Does “Learning” Really Imply?

A few months back, I was at a conference and tuned into a talk about AI and its future prospects. Once the presentation wrapped up, attendees got the chance to ask questions. The first person to take the mic expressed discomfort with the word “learning”, suggesting machines weren’t truly learning but merely making lucky guesses. Another attendee went even further, humorously proposing the term “artificial stupidity” in place of “intelligence”. Considering the event focused on software testing, the widespread skepticism about AI didn’t catch me off guard.

I realized that we need to re-examine the terminology we employ in this field. Let’s start with the term “learning”. One widely used definition describes learning as the act of gaining new, or modifying existing, knowledge, behaviors, skills, values, or preferences, often through study, experience, or instruction. Even by this straightforward definition, machines could qualify.

The gentleman at the conference argued that true learning requires understanding, thereby suggesting machines can’t truly “learn.” But countless instances challenge this perspective. Take, for example, my son, who began walking a few months ago. Did he fully grasp every intricate detail of the walking process? Of course not! He observed others, mimicked their movements, and utilized the age-old trial-and-error approach. At times, he experienced reinforcement learning, like when my wife and I cheered him on after he took a few steps independently. But he wasn’t consciously analyzing every aspect of the act.
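What my son was doing is, loosely speaking, what reinforcement learning formalizes for machines: try something, receive feedback, and adjust. As a rough illustration, here is a minimal Q-learning sketch in Python. The tiny five-cell environment, the reward values, and the hyperparameters are invented purely for the example; the point is that the agent never “understands” the task, yet its behavior steadily improves because actions that lead to a reward get reinforced.

```python
import random

# Illustrative trial-and-error learning: an agent on a 5-cell line learns,
# purely from reward feedback, to walk toward the goal in the last cell.
N_STATES = 5
ACTIONS = [-1, +1]                     # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

def best_action(state):
    # Highest-valued action learned so far, breaking ties randomly.
    top = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == top])

for episode in range(200):
    state = 0
    while state != N_STATES - 1:       # an episode ends at the rewarding cell
        # Explore occasionally; otherwise repeat what has worked so far.
        action = random.choice(ACTIONS) if random.random() < epsilon else best_action(state)
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0  # the "cheering"
        q[(state, action)] += alpha * (reward
                                       + gamma * max(q[(next_state, a)] for a in ACTIONS)
                                       - q[(state, action)])
        state = next_state

# After enough episodes the learned policy should be "step right" in every cell.
print([best_action(s) for s in range(N_STATES - 1)])
```

Nothing in this loop resembles comprehension, and yet, by the definition quoted above, behavior has changed through experience. Whether we want to call that “learning” is exactly the terminological question at stake.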

Another instance to consider is implicit learning, like when someone picks up a language purely from exposure, without formal instruction. They might speak fluently without understanding the underlying grammar or structure of the language. So, while learning might not always necessitate a deep understanding, understanding typically requires a level of consciousness — something machines currently lack.

Intelligence and the Cycle of Hedonic Adaptation

The other term under scrutiny is “intelligence.” Many argue that these AI systems aren’t truly intelligent because they sometimes give confidently incorrect answers and because they lack consciousness. I agree with the second point, and frankly, I think we should be grateful that we haven’t yet crossed that frontier. The issue with the first point is again about definitions. A few decades ago, the Turing Test was the gold standard for machine intelligence. Now that some machines have passed it, the goalposts have moved, and people demand even more advanced criteria for “intelligence.”

Why do our expectations keep changing? I believe this phenomenon can be attributed to what’s known as the “hedonic treadmill” or “hedonic adaptation.” This concept from positive psychology posits that people’s happiness levels tend to remain stable over time, even if certain events temporarily elevate or reduce their happiness. Think about finally acquiring that luxury car you’ve always wanted. At first you’re ecstatic, but after some time the thrill fades: the car becomes a normal part of your life, and your happiness reverts to its default level. Similarly, when it comes to technological advancements like AI, society quickly adjusts to new norms. A few months ago, everyone was buzzing about the transformative power of AI, considering it a leap in machine intelligence. Now, people are disillusioned because AI hasn’t lived up to the utopian visions of replacing human labor entirely or solving all our problems.

So when evaluating technological progress, especially in the short term, it’s useful to remember this cycle of hedonic adaptation. Our quickly shifting baseline for what constitutes “intelligence” or “advancement” may not be a fair measure of the actual progress being made.

Machines Lack Fear, But We Don’t

A striking observation during the initial rise of ChatGPT and similar tools was the palpable sense of fear among many. Every day, there seemed to be a new report forecasting which professions would become obsolete due to AI’s rapid advancements. Even roles we thought were safe, such as software engineers, were being portrayed as potential casualties. While I’ve delved deeper into this specific topic in another blog post, the widespread fear was undeniable.

Interestingly, as people started to see that AI tools might not be as flawless or dominant as initially thought, there was an almost collective sigh of relief. I can’t help but feel that, to some extent, people secretly want AI to have limitations. It brings comfort, appeasing our anxieties about the unknown. However, underestimating AI could be our pitfall. Technology is advancing at an exponential rate, and our linear way of thinking may not fully grasp its pace. Dismissing AI’s potential or asserting it will never achieve true intelligence might blind us to genuine advancements. We shouldn’t let our fears overshadow the reality of AI’s capabilities. Instead of being scared, we should understand and use the current technology, so we’re ready for what’s coming next.

Conclusion

We’re in the early stages of learning how to work with AI tools. I believe that while these tools aren’t flawless, they serve a purpose. Remember, they’re just tools! If you choose to use ChatGPT, be aware it’s a chatbot and might not know everything you expect it to. There’s a variety of AI tools emerging, and it’s crucial to remember that none are universally adept at all tasks. Use them wisely, managing your expectations, but also acknowledging their potential.

A good way to think of our current AI systems is to compare them to children: they’re in their infancy, still growing and learning, aiming for intelligence but with much left to absorb and understand. Would you expect a 5-year-old to drive a car without any mishaps? Likewise, it’s unfair to label these systems as unintelligent or foolish when they make an error. Instead, we should recognize their potential, identify their strengths and weaknesses, and support their development, keeping their immaturity in mind when we hand them challenges and responsibilities. As the often-quoted saying goes, “Everyone is a genius. But if you judge a fish by its ability to climb a tree, it will spend its whole life believing it’s stupid.” This perspective is crucial when diving into the world of AI, both in using the tools available today and in critically assessing their performance and capabilities.
