The original draft of this post was written on May 26, 2013 as part of my unpublished (and as yet unfinished) work, The New Era of Tech: How Emergent Virtual Constructs are Reshaping the World. As our civilization stands today on the precipice of fully embracing a world of algorithms, machine learning, artificial intelligence, and neural engines, I find now to be an appropriate time to publicly share this and other writings I’ve previously held as works in progress. I ask that the casual reader forgive the formality with which I’ve chosen to write, as this is the voice and ethos of my training in the discipline of Philosophy.
These articles, and my thesis more broadly, are primarily grounded in the dialectic principles of Hegel, as observed through a specific understanding of historical progression. It’s my hope that as I continue to write and publish, it will become evident how emergent virtual constructs (EVCs) in their many forms pull us ever closer to the inevitable moment of Singularity, perhaps best articulated by Ray Kurzweil, and that I can illustrate the myriad other ways in which EVCs have fundamentally changed our world for good.
It’s difficult to identify one achievement alone for which Kurzweil is best known, but his work expounding upon the Law of Accelerating Returns (LOAR) shines bright among many. Although Kurzweil may be credited with progressing one of the most formal and well-known articulations of the LOAR in his book The Singularity is Near, he’s not the first to make note of the increasing pace and significance of technological development that underlie the law itself. As he acknowledges in his 2012 book How to Create a Mind:
“A year after his [John von Neumann’s] death in 1957, fellow mathematician Stan Ulam quoted him as having said in the early 1950s that ‘the ever accelerating progress of technology and changes in the mode of human life give the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.’ This is the first known use of the word singularity in the context of human technological history” (194).
Understanding our Biology in Context with Technology
On April 2, 2013, just five months after Kurzweil published How to Create a Mind, President Barack Obama announced the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative: a government-funded project aimed at mapping the brain. The BRAIN Initiative is precisely the sort of project Kurzweil argues is needed to further unlock the mysteries of the human brain, and more specifically of the biological neocortex.
Using the neocortex first as a basis for understanding human intelligence and creativity, then as a model for replicating that intelligence, primarily through cloud-based computational processing power, Kurzweil believes we will soon augment human biology to such an extent as to achieve transcendent capabilities.
Evidence presented by early mathematicians and computer scientists (von Neumann, Moore, Turing, et al.) supports the theory that the human brain processes information in ways similar to primitive computing machines, but as technology has advanced it has become clear that there are several key differences between biological human intelligence and technological computational power. For example, increases in processing capability and memory capacity within supercomputers have resulted in vast improvements to the overall computational power of machines, making them capable of tasks far beyond the scope of a human brain.
It has been posited by other modern thinkers such as Kevin Kelly that there are multiple and different types of intelligence, and that the best kind may in fact be the combination of human intelligence with supercomputer brainpower. Today’s AI excels at automating the duties of household appliances, suggesting solutions to scheduling conflicts among groups, and intelligently routing us around traffic accidents on our daily commute, but it does not do well at nurturing children the way human parents can, or at catalyzing creativity and innovation in students the way an engaging teacher can. When we combine these intelligences, we see great advances in efficiency, safety, creativity, and happiness in the home and in schools.
The growing chasm of capability between machine and human intelligence suggests that the creation of new and uniquely significant human knowledge without the aid of AI has come increasingly close to its limit. This isn’t to say that we’re approaching a point of absolute omniscience in which we will know all there is to know. This is only to say that very soon, the primary task of the creative human mind will be to develop insight into that which is already known: to make meaning from knowledge already made by humans and information already indexed by machines, through the exploration and expression of human experience.
Qualia and Consciousness
Not only does computational processing differ from human intelligence in scope by virtue of its capacity for infinite expansion, but it differs in method as well. As Kurzweil states, “There is considerable plasticity in the brain, which enables us to learn. But there is far greater plasticity in a computer, which can completely restructure its methods by changing its software. Thus, in that respect, a computer will be able to emulate a brain, but the converse is not the case” (193). Personally, I would append but one word to this claim: yet.
The human brain has no formal or automatic method for weeding out inconsistencies in thought or contradictions of belief. This can result in a range of undesired phenomena, from irrational behavior to cognitive dissonance. Although humans are capable of what has been called “critical thinking,” Kurzweil cites this faculty only as a “weak mechanism,” and a skill “not practiced nearly as often as it should be.” For as he writes in Chapter 8 of How to Create a Mind, “In a software-based neocortex, we can build in a process that reveals inconsistencies for further review” (197). In other words, computer scientists can integrate superior methods of data processing and error-correction into the foundations of consciousness for artificially intelligent machines.
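To make the contrast concrete, here is a toy sketch of my own devising (not anything Kurzweil specifies): a small “belief base” that automatically flags a direct contradiction the moment it is asserted, rather than quietly holding both beliefs the way a human mind can.

```python
# Toy illustration (my own, not Kurzweil's design): a belief store that
# surfaces direct contradictions for review instead of silently holding them.

class BeliefBase:
    def __init__(self):
        self.beliefs = {}  # maps a proposition to its asserted truth value

    def assert_belief(self, proposition, value):
        """Record a belief; return a description of any contradiction found."""
        if proposition in self.beliefs and self.beliefs[proposition] != value:
            return (f"Inconsistency: '{proposition}' held as both "
                    f"{self.beliefs[proposition]} and {value}")
        self.beliefs[proposition] = value
        return None

base = BeliefBase()
base.assert_belief("the meeting is at noon", True)
conflict = base.assert_belief("the meeting is at noon", False)
# 'conflict' now holds a description of the contradiction for further review
```

A biological neocortex has no such built-in hook; a software-based one could run a check like this on every new belief, which is the asymmetry Kurzweil is pointing at.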
With the potential for superior error-correction built into AI, the question then arises whether an artificially intelligent machine can, and eventually will, replicate the workings of a biological human brain, and to what extent such a creation will resemble true human intelligence.
The first part of this question can very likely be answered via scientific inquiry: through experimentation and observation, along with proper interpretation and wise application of the results. The second part asks us to shift from the brain as biological substance to the mind and consciousness as philosophical concepts.
Kurzweil continues, “Consciousness, and the closely related question of qualia are a fundamental, perhaps the ultimate, philosophical question” and “I maintain that these questions can never be fully resolved through science. In other words, there are no falsifiable experiments that we can contemplate that would resolve them, not without making philosophical assumptions” (205).
The Stanford Encyclopedia of Philosophy has this to say on the topic of qualia:
Philosophers often use the term ‘qualia’ (singular ‘quale’) to refer to the introspectively accessible, phenomenal aspects of our mental lives. In this broad sense of the term, it is difficult to deny that there are qualia. Disagreement typically centers on which mental states have qualia, whether qualia are intrinsic qualities of their bearers, and how qualia relate to the physical world both inside and outside the head. The status of qualia is hotly debated in philosophy largely because it is central to a proper understanding of the nature of consciousness. Qualia are at the very heart of the mind-body problem.
Although there are a number of compelling theories that attempt to define the point at which a being is fully endowed with true consciousness, Kurzweil believes that in the end there is a fundamental need for a leap of faith on our part when assessing the (non)consciousness of machines. Whether or not they are in fact conscious, “machines in the future will appear to be conscious and that they will be convincing to biological people when they speak of their qualia” (209). Kurzweil’s leap of faith is that once this convincing occurs, they [machines] “will indeed constitute conscious persons.”
I believe this leap of faith to be quite rational, as it follows from the claim that although not all conscious beings are capable of convincing others of their consciousness, any being capable of convincing others of its consciousness is, in fact, conscious.
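The asymmetry in this claim can be rendered in one line of predicate logic (my own formalization, taking Convinces and Conscious as primitive predicates):

```latex
\forall x\,\bigl(\mathrm{Convinces}(x) \rightarrow \mathrm{Conscious}(x)\bigr),
\qquad
\neg\,\forall x\,\bigl(\mathrm{Conscious}(x) \rightarrow \mathrm{Convinces}(x)\bigr)
```

The leap of faith is the first conditional; the second merely records that its converse is not assumed, since many conscious beings (infants, for instance) cannot argue for their own consciousness.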
The key to understanding the thought experiment of machine consciousness is to invest fully in the “convincing” itself. For if we are in fact convinced by a nonbiological, artificially intelligent being’s narrative of self-reflection and description of individual qualia, what difference does it make whether or not a true consciousness lies behind the eyes? Indeed, the bulk of this conclusion may translate to life in general: if you are fully convinced of anything yet act the opposite, where is your integrity? The feminist philosopher bell hooks once said in a lecture I attended that integrity is the congruence of what we believe, think, and do.
The emergence, identification, and recognition of this consciousness will each undoubtedly stand as epochal moments in the history of what Kurzweil and others term the human-machine civilization. It may sound like the stuff of Battlestar Galactica’s Cylons and Westworld’s Hosts to some, and they would be right to reflect upon the problem as such.