What truth can AI tell?

ChatGPT and other conversational agents (chatbots) based on artificial intelligence (AI) have grown rapidly in recent years. Their applications and impact are being widely discussed, and so are the importance of regulating them and the ways to do so. When Geoffrey Hinton left Google citing concerns about spam, government abuse, and super-intelligent machines, more attention was drawn to the possible risks. Chatbots' shortcomings in giving an answer that can be considered “the truth” have been questioned to the point that Elon Musk has proposed an alternative to existing products, referred to as TruthGPT. Such efforts aim at an AI that is not trained on incorrect, biased, or limited data and so can give true answers.


In a short article, Mark Bailey (National Intelligence University, USA) and Susan Schneider (Florida Atlantic University, USA) argue that chatbots shouldn’t decide what’s true. Their argument is based on the black-box nature of artificial neural networks, which prevents transparency, a main principle of truth. With billions of parameters buried in multiple layers of a network, it is hard to know how a chatbot decides what the true answer is. Not to mention that what the network has learned is based on the data it has received, with all its biases. Bailey and Schneider state: “Artificial intelligence already helps us solve many of our daily troubles—from face recognition on smartphones to fraud detection for our credit cards. But deciding on the truth should not be one of its assignments.”


Many of these concerns are legitimate, and it is wise to doubt any single product aiming to be the single source of “truth.” But it seems that, despite all these conversations, fundamental issues in the basic approach (creating content vs. knowledge) and the underlying philosophy (dualism vs. pluralism) of these chatbots are dangerously overlooked by many end users, developers, and regulators.


Modern Western society and its technological developments are commonly rooted in Cartesian dualism, which sees the world as objective/subjective, body/mind, true/false, and promotes a single, often biased, worldview. In his 2018 book “Designs for the Pluriverse,” Arturo Escobar discusses the role of ontology and worldview in the design process and supports the notion of ontological design, i.e., design based on acknowledging multiple ontologies. Interestingly, his argument applies to the design of AI chatbots. An AI agent replacing the search tool, in my opinion, promotes a single, convenient answer, assumed to be objective and true. So, the main question should not be “Can AI tell the truth?” but “What truth can AI tell?” or “Is there a single truth that anyone, AI or not, can tell?” An ontological design approach would answer no, and would avoid solutions that offer convenient but single-minded information. In that sense, a convenient chatbot (even if supported by a TruthGPT-like engine) is inferior to a traditional Google search, as it takes away options and opinions. Of course, this statement should not be taken to mean that web searches are unbiased or that convenience and automation are not helpful. The point is that we should not rely on AI to tell “the truth, the whole truth, and nothing but the truth” because there is no single truth. Truth, in many cases, is a social construct: a non-binary spectrum or a more complicated non-linear web of options.


In addition to this ontological issue, there is a methodological one. Generative AI tools identify patterns in data and use them to predict new content. They build a model of language (not knowledge) from biased data and rules. While predicting text can result in correct (or seemingly correct) answers, those answers are not based on understanding the subject. This means that, at some point, they are doomed to get it wrong, as they literally don’t know what they are talking about. Together, these ontological and methodological issues can turn AI into an expert-looking agent that lacks both fundamental knowledge and diverse thinking.
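As a rough illustration of this point (a deliberately simplified sketch, not how ChatGPT or any real chatbot is built), consider a toy next-word predictor that learns only which word tends to follow which in its training text. It produces fluent-looking continuations with no notion of whether they are true:

```python
# A toy "language model": it counts which word follows which, then
# generates text by sampling from those counts. It models patterns
# in its (possibly biased) training text, not facts about the world.
import random
from collections import defaultdict, Counter

training_text = (
    "the earth orbits the sun . "
    "the sun orbits the earth . "   # a false statement present in the data
    "the moon orbits the earth . "
)

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def generate(start, length=6):
    """Continue a sentence by repeatedly picking a likely next word."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        next_words, counts = zip(*options.items())
        out.append(random.choices(next_words, weights=counts)[0])
    return " ".join(out)

print(generate("the"))  # fluent output, but it may well say "the sun orbits the earth"
```

The output always looks like language because it reproduces patterns, but whether a given continuation is true depends entirely on what happened to be in the training data; scaling this idea up to billions of parameters changes the fluency, not the underlying content-vs.-knowledge problem.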


Generative AI tools are increasingly used, possibly taking over human jobs. From writing essays and reports to designing multimedia content, a wide range of skills are being either replaced or significantly assisted by AI. This has raised practical concerns about privacy, job security, skill development, and copyright, as well as more fundamental ones, such as smarter-than-human AI dominating the world. These concerns are legitimate, and proper attention to AI is a significant priority for our society. So, it is essential to understand how AI tools are created (which determines what they can do) and the philosophy behind their operation (which determines what they will be used for). Such an understanding can raise new concerns and can also help define more effective solutions that address fundamental problems. AI regulations and funding should go beyond where and how AI is deployed, and should support research on better knowledge representation and on more pluralistic and inclusive operation.

