Jaron Lanier’s piece on Edge (“The Myth of AI”) has sparked quite a bit of discussion in my field, and it really pisses me off. He starts out with a straw man argument by insinuating that suspect (and dominant, most wealthy!) parts of the tech culture believe that computers are people, and that this belief is intrinsically linked to the idea that algorithms are equivalent to life.
I think that is utter rubbish. We may believe that people are computers, and that life is a set of algorithms (i.e., the inverse of what Lanier claims), but we are acutely aware that computers are simply devices for the creation of causal contexts that are stable enough to run software, and for the most part, we do not care about the metaphysics of biological systems.
Lanier’s straw men are designed to create a fake antagonism between an imagined tech subculture and speciesist “reactionaries” who attempt to stem the takeover of the computational Übermensch, when in fact the fault lines are much more subtle. For instance, I do not think it is helpful to equate computational neuroscience researchers with people advertising the current wave of AI business models, or starry-eyed singularitarians with engineers who build self-driving cars. On the other hand, “mind-of-the-gaps” mysterianists (exemplified by Searle or Penrose) are hardly in the same camp as Stephen Hawking and Elon Musk, who are optimists with respect to what technology can achieve, but seem to have recently been infected by the AI Cassandras of MIRI.
The false dichotomy enables Lanier to sell his main talking point: that AI research has turned into a religion, and more importantly: to establish himself as a voice of reason, equanimously located outside the deluded camps that his argument projects into the discourse landscape.
According to Lanier, the religion is created by combining narrow AI, such as face recognition technology, with a Frankenstein myth. By reducing the idea of strong AI, i.e., fully autonomous, generally intelligent artificial systems, to a fairy tale, Lanier bulldozes the landscape of the discourse. He essentially refuses to listen, to take people like Musk and Hawking seriously; to him, they are not discourse partners making an argument that he ought to refute with all the discursive acuity he can muster, but superstitious children deluded into believing in Santa Claus. Let us not worry about the effects of a global nuclear war, Lanier is effectively saying, because the warners are just combining the idea of dynamite with the myth of an Abrahamic deity orchestrating an apocalypse.
Throughout the text, Lanier gets carried away with half-digested, superficial just-so reinterpretations of existing narrow AI applications as vague but all-encompassing cultural principles. Because companies such as Netflix or Amazon are driven by a need to sell their products, there is no way to distinguish recommendation mechanisms from manipulation mechanisms. Because Chomsky’s universal grammar hypothesis was wrong, a generation of attempts at creating translation programs failed. Because Apple and Microsoft created the Siri and Cortana interfaces to their knowledge bases, people are bound to confuse AI applications with artificial personhood. Because Markram’s expensive brain simulations are so AI-like, there is justified indignation within the humble and reasonable community of neuroscientists (who actually are worried that his giant EU grant will have a negative impact on their abilities to pay off the next fMRI scanner).
Lanier’s sloppy thinking culminates in the idea that the tangible benefits of (existing, narrow) AI applications are not the result of automation and novel ways of obtaining, processing, integrating and distributing information, but of somehow stealing the food off the tables of the poor knowledge worker classes that are still forced to do the actual job. Thus, the stupid and evil myth of AI is not only a superstition, but is actually used by its Silicon Valley profiteers to cheat and exploit human translators, authors and data entry personnel.
My criticism of Lanier’s argument does not imply that I think it is clear that strong AI is going to happen soon, that it is going to trigger utopia or a techno-apocalypse, or that the role of AI in current economic contexts is entirely beneficial. My issue is that Lanier does not care about these questions. He appears to be an intellectual fraudster, peddling the talking points that he suspects his intended audience is most likely to buy. Lanier is not interested in the actual debate, and does not take its participants seriously. By obscuring the intellectual, economic and technical issues of AI with the ill-fitting template of a need to stem the tide of religious thinking, he does both proponents of AI research and those concerned about its possible implications a tremendous disservice.