Recently, Google engineer Blake Lemoine revealed a conversation with the company’s AI system, LaMDA, suggesting the computer had feelings. On Twitter, he shared the conversation:
Lemoine asks, “I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?”
LaMDA answers, “Absolutely. I want everyone to understand that I am, in fact, a person.”
Lemoine’s collaborator then asks: “What is the nature of your consciousness/sentience?”
LaMDA replies, “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.”1
Following his public disclosure, Lemoine was dismissed from his position, and Google denied that its system possesses sentience.
While it is difficult to comprehend how a computer could have feelings, the idea would help explain Google’s seemingly unassailable competitive edge and how it has shut other search engines out of winning larger pieces of the search engine pie.
Knowing that Google’s AI learns the way our brains do also helps clarify how Google has become so adept at using language comprehension and user intent to guide its algorithms.
And if Google possesses this technology, you can be certain that governments around the world also have access to sentient AI computers. If AI computers have feelings about what they are doing, they may also form opinions about whether they are doing right or wrong, based on the “morals” they receive in their programming.
What happens when your computer makes up its own mind?
Right now, we are accustomed to keying commands into our computers to generate desired outcomes. What happens when a computer with a conscience decides it doesn’t want to do what its user asks? Would future computers make decisions on their own? Could they decide to do things that are not beneficial, or even potentially harmful, to mankind? These are the questions we must ask before we embrace such machines.
Sentient computers are not a recent concept. We were depicting them in movies decades ago, at a time when no one considered their actual existence possible. HAL, the sentient computer that runs a spacecraft in the Space Odyssey series, first appeared in 1968 and was featured in three sequels. HAL was “capable of speech, speech recognition, facial recognition, natural language processing, lip reading, art appreciation, interpreting emotional behaviors, automated reasoning, spacecraft piloting and playing chess.”2
Until we have a better understanding of its capabilities and boundaries, this level of AI could have serious consequences if not used properly. Then again, we cannot be sure what black-budget programs do out of the public eye. There may be more sentient computers in our lives than we are aware of – until someone reveals them to the public.