A New Epistemology: Humanity in the Age of AI
I've recently finished reading The Age of AI, by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocker. While it was a mixed bag on the whole, there were new insights I gleaned from the book, and I can't help but appreciate a book that at least tries to talk about these things. The most prominent insight is the argument that AI is not merely a new technology, but a new way of knowing, in addition to the traditional faith and reason. We first experienced the rise of the Age of Reason in the form of the Enlightenment, which brought massive scientific and philosophical change. This book argues that, now, we are entering the Age of AI. To complete my initial trilogy of AI-related posts (I cannot promise that there won't be more), I'm going to look at AI as a way of knowing, and how it can affect us and our sense of identity.
Can it Be Explained?
The argument for treating AI as a separate way of knowing is that humans cannot explain exactly why a particular model gives the result it does. Sure, a scientist working on it can probably provide a mathematical justification, based on trained parameters and the model structure, but how those numerical values translate into non-numerical reasoning cannot be explained. In a sense, it's a lot like how our own brains work. For all intents and purposes, AI cannot be explained, and its reasoning is a black box that consumes inputs and generates outputs.
Consider a simple AI that needs to look through photos and decide whether one of them is of an apple. (It's an easy task that most Computer Science undergraduates would be able to do after fiddling around with a few open-source libraries.) Now, consider that the training is done, and the AI correctly identifies whether a photo is of an apple or not 99% of the time. After processing 1000 images, the human user encounters a false positive: a bear is wrongly identified as an apple. It's easy for the user to dismiss this as an occurrence of low probability. After all, tests even unrelated to AI show false positives all the time. However, when an AI clearly capable of showing some reasoning determines that a photo of a bear is a photo of an apple, is there something else that we're missing? Does the AI know that there's something "apple-like" about the photo that we, as humans, cannot understand?
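To make the black-box point concrete, here is a deliberately tiny, hypothetical sketch of such a classifier. The feature names, weights, and numbers are all invented for illustration (a real model would learn millions of parameters from labelled photos), but it shows the core oddity: every number is fully inspectable, yet the "reason" a bear scores as apple-like is nowhere to be read off.

```python
# Toy apple/not-apple classifier, pure Python.
# All features and weights are invented for illustration only.
import math

WEIGHTS = {"redness": 2.0, "roundness": 1.5, "stem_like": 1.0}
BIAS = -2.5

def apple_score(features):
    """Logistic score in (0, 1): higher means more 'apple-like'."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def is_apple(features, threshold=0.5):
    return apple_score(features) >= threshold

apple = {"redness": 0.9, "roundness": 0.8, "stem_like": 0.7}
# A brown bear curled up: roundish shape, warm reddish-brown fur.
bear = {"redness": 0.7, "roundness": 0.9, "stem_like": 0.2}

print(is_apple(apple))  # True
print(is_apple(bear))   # also True: a false positive
```

Even in this three-parameter toy, the only "explanation" for the bear misfire is arithmetic: its feature values push the weighted sum over the threshold. Scale that up to a deep network and the arithmetic is still there, but any human-readable account of *why* has vanished.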
AI reasoning already cannot be fully explained at the most basic level, and, as the scale of AI grows, this problem increasingly looks unsolvable. There are scientists working on XAI (Explainable AI), and that might be a step in the right direction.
Trusting the Unknown
Sure, maybe you don't care about photos of apples and whether or not they contain apples. Fair enough. But AI is increasingly being applied to processes that do, in fact, matter. Consider resume screening: manually reading through hundreds upon hundreds of resumes is a time-consuming and repetitive task, and seems like the easiest contender for automation. At first glance, AI is well-suited for the job. However, when we consider its lack of interpretability, it is not possible to audit an AI beyond its results. And what results are obtained? Regardless of how many capable candidates were rejected, or what biases might be present, the resumes get screened quickly and the job gets filled. To the company, everything seems fine and there's no need to look into it further.
This is just one example of how applying unexplainable AI to a business process can negatively affect our lives in order to benefit the bottom line. I have a gut feeling that, if you ask the average person on the street, they will be shocked that AI, instead of humans, is making such important decisions. Will we in the future have AI judges deciding on appropriate forms of punishment? How about AI priests and preachers providing their own interpretations of scripture?
One might argue that humans are also imperfect. We also cannot fully explain how our brains function. Oftentimes, the trust we place in other humans is misplaced, and it might be better to trust an AI instead. AIs are worked on by hundreds of people and it would be easier to audit them for biases and errors, after all. What's so special about humans?
Hybrid Theories
That's something we need to ask ourselves. What's so special about humans? The Age of AI talks of reason as a historically human domain. The Age of Reason was, in its own ways, revolutionary.
Reason not only revolutionized the sciences, it also altered our social lives, our arts, and our faith. Under its scrutiny, the hierarchy of feudalism fell, and democracy, the idea that reasoning people should direct their own governance, rose. Now AI will again test the principles upon which our self-understanding rests. (pg. 179)
In the face of non-human reason, reason is, by itself, insufficient for determining the special place humans hold in our shared consciousness. Either we decide on something new, or we accept that humans are nothing special, which would result in unconstrained cooperation between AI and humans and a rejection of all of human history. Regardless of what we choose, the Age of AI will be as revolutionary as the Age of Reason that came before. Are we ready for it?
AI is here to stay. AI has been critical in human enjoyment and information intermediation for a while, especially when we consider recommendation systems and short-form video algorithms that seek to maximize attention (and minimize deep concentration). Many of us are already living a life of hybrid reasoning. The insanity of this fact just hasn't hit us yet.
I don't have all the answers. No one person does. But it's important that we think of these things. The Age of AI is an interesting book that attempts to add to the conversation and ask the right questions, and it puts it best:
Created by humans, AI should be overseen by humans. But in our time, one of AI's challenges is that the skills and resources required to create it are not inevitably paired with the philosophical perspective to understand its broader implications. Many of its creators are concerned primarily with the applications they seek to enable and the problems they seek to solve: they may not pause to consider whether the solution might produce a revolution of historic proportions or how their technology may affect various groups of people. The AI age needs its own Descartes, its own Kant, to explain what is being created and what it will mean for humanity. (pg. 215)
In my post on AI art (Irreplaceable: Unraveling the Subtleties of AI, Art, and AI Art), I explained the cultural conflict between the Artist and the Engineer and how that's causing some of the reactionary sentiments among artists. It's nice to see something similar expressed in The Age of AI. Again, it's a conflict of different creativities, and how one is being left unchecked by the other. Knowing how the Age of Reason resulted in an explosion of philosophy and science, the conversation will only grow from here, and the Age of AI will end up having its own champions. Let's just hope that they have flesh and blood.