Microsoft co-founder Paul Allen caused quite a stir among transhumanists and singularitarians this week when he penned an article titled, “The Singularity Isn’t Near.” In it, he and co-author Mark Greaves argue that while a Kurzweil-esque technological singularity “might one day occur,” it is a long way off – certainly further out than Kurzweil’s predicted date of 2045.
The authors’ argument rests on the claim that humans have barely begun to understand how our own brains work, and therefore cannot create a human-equivalent (or smarter-than-human) AI without massive, revolutionary advances in neuroscience and/or AI research in the near future – advances Allen and Greaves consider unlikely.
But I think this is the fatal flaw in Allen and Greaves’ argument – those who believe the Singularity will occur in the next 20–50 years (including thinkers like Ray Kurzweil and Vernor Vinge, who coined the term) do not argue that a smarter-than-human AI needs to be modeled after the human brain, or employ human-like cognition. Indeed, there is a far better chance this AI will be totally alien – at this point, however, we simply don’t know. As the Singularity Institute notes, the reason we don’t know is that “we’re not that smart.” In other words, our inherent cognitive limitations make it difficult for humans to imagine how a vastly smarter alien intelligence would behave or operate.
That said, technology marches on, and advances in computing speed and power continue to escalate at an exponential rate. Allen and Greaves even note that we’re on the verge of developing exaflop-class computers that “could probably deploy the raw computational capability needed to simulate the firing patterns of all of a brain’s neurons, though currently it happens many times more slowly than would happen in an actual brain.”
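To see why exaflop-class machines are often cited as the relevant benchmark, here is a rough back-of-envelope sketch. All of the figures below (neuron count, synapses per neuron, firing rate, operations per synaptic event) are illustrative assumptions drawn from commonly cited ballpark estimates, not from Allen and Greaves’ article:

```python
# Rough estimate of the compute needed to simulate a brain's neuron
# firing patterns. Every constant here is an assumption for illustration.

NEURONS = 1e11               # ~100 billion neurons (common rough estimate)
SYNAPSES_PER_NEURON = 1e4    # ~10,000 synapses per neuron (assumption)
FIRING_RATE_HZ = 10          # average spikes per second (assumption)
OPS_PER_SYNAPTIC_EVENT = 10  # assumed arithmetic cost per synaptic update

ops_per_second = (NEURONS * SYNAPSES_PER_NEURON
                  * FIRING_RATE_HZ * OPS_PER_SYNAPTIC_EVENT)

EXAFLOP = 1e18  # one exaflop machine: 10^18 operations per second

print(f"Estimated requirement: {ops_per_second:.0e} ops/s")
print(f"Fraction of one exaflop: {ops_per_second / EXAFLOP:.0%}")
```

Under these (debatable) assumptions, the requirement lands around 10^17 operations per second – an order of magnitude below one exaflop, which is why hardware of that class is plausibly in the right territory for real-time neural simulation, even if raw speed alone says nothing about producing intelligence.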
Given this dramatic increase in raw computational power and the advances likely to continue in the decades to come, is it so unreasonable to think humans will see the birth of human-equivalent, or even smarter-than-human, AI? As plenty of thinkers have argued, Kurzweil among them, it is not.