The Limits of Artificial Intelligence

Updated: Jan 25


What is human intelligence? What does it mean to be intelligent?


These are the fundamental questions at the heart of artificial intelligence (AI) research. It is hard to escape discussions of AI lately, given the rise of ChatGPT and generative AI. Generative AI has proven useful for simple information requests and content production. At the same time, ChatGPT's outputs are often inaccurate, especially when it creates and cites academic work. Despite this, generative AI is here to stay for the time being.


Does this mean that we will be living in I, Robot soon?


Not so fast. Generative AI does not have the capacity for human intelligence, despite Geoffrey Hinton's claims of "sentient AI." ChatGPT, for instance, is built on a large language model (LLM). Simply put, LLMs use algorithms and large datasets of information to readily generate (hence, generative AI) text.


LLMs are predicated on reading and shuffling large datasets. The data used by these models is supposed to represent facts, or objective features of the natural world. Following the rationalist logic of mainstream AI, these objective features are supposedly detached from human interpretation. In other words, they exist in a pure state, sans human interpretation.
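To make the "generate from patterns in data" idea concrete, here is a toy sketch. Real LLMs are neural networks trained over billions of tokens; this little bigram model is only an illustration of the underlying statistical principle, that the next word is chosen from patterns observed in training text, and the tiny corpus here is invented for the example.

```python
import random
from collections import defaultdict

# Toy corpus, invented for illustration only.
corpus = "the model predicts the next word the model generates text".split()

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample a next word in proportion to how often it followed `prev`."""
    options = counts[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

# The model can only ever emit words it has seen after "the".
print(next_word("the"))  # one of: "model", "next"
```

The point of the sketch: the model has no notion of what "the" means; it only reproduces statistical regularities in its data. That is the sense in which LLMs "deal with facts" rather than interpret them.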


For example, say ChatGPT's training data includes a dataset showing racial disparities in health outcomes in the US. ChatGPT leverages this dataset to give us information on health disparities. In the dataset, an individual's "race/ethnicity" is labeled with one response from a pre-defined set of possible responses (e.g., White, Black, Hispanic, Other). However, race and ethnicity cannot be reduced to simple "facts" or objective features. There is no biological, rational, or exhaustive list of possible races/ethnicities that a person can be.
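A minimal sketch of that encoding problem, with a hypothetical schema (the category list and function name are inventions for this example, not drawn from any real dataset): any self-identification the schema did not anticipate gets collapsed into "Other", and the person's own interpretation is discarded before the model ever sees it.

```python
# Hypothetical pre-defined response set, as in many survey datasets.
ALLOWED = {"White", "Black", "Hispanic", "Asian", "Other"}

def encode_race_ethnicity(self_identification: str) -> str:
    """Force a free-form self-identification into the fixed schema."""
    # The schema cannot represent answers it did not anticipate.
    if self_identification in ALLOWED:
        return self_identification
    return "Other"

print(encode_race_ethnicity("Black"))               # "Black"
print(encode_race_ethnicity("Afro-Latina"))         # collapsed to "Other"
print(encode_race_ethnicity("White and Hispanic"))  # multiracial identity also lost: "Other"
```

The lossiness is baked in before any model training happens: whatever the LLM later reports about "race/ethnicity" is a report about these labels, not about how people actually understand themselves.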


This is because of interpretation and the human experience. Human beings are nothing without the inherent ability to be reflexive about their own experience.


Hubert Dreyfus, the late philosopher from MIT and Berkeley, was perhaps the most famous critic of AI. Much of his work leverages continental philosophy (e.g., existentialism, Heidegger) in discussing the limits of various approaches to AI.


Dreyfus, in his book What Computers Still Can’t Do, states:


“Could we then program computers to behave like children and bootstrap their way to intelligence? This question takes us beyond present psychological understanding and present computer techniques. In this book I have only been concerned to argue that the current attempt to program computers with fully formed Athene-like intelligence runs into empirical difficulties and fundamental conceptual inconsistencies” (p. 290).


He continues:


“Computers can only deal with facts, but man, the source of facts, is not a fact or set of facts, but a being who creates himself and the world of facts in the process of living in the world” (p. 291).


All this to say, Dreyfus’ work pokes necessary holes in traditional AI approaches.


To date, no AI model takes Dreyfus’ philosophical arguments seriously. Until AI research grapples with the reality of the human condition and bodily experience, there is no reason to fear incoming robot overlords.


-M
