Does Google's AI Dream of Electric Sheep?

AI language models like LaMDA and GPT-3 produce fascinating results that are often indistinguishable from human-written text and conversation. Yes, you could ask whether these models are sentient, but I suggest we focus on building use cases for these new capabilities and on finding ways to prevent misuse.

People in my professional network know that I work with AI, so they often send me news and interesting developments before I see them on a blog or in the press. That is pretty cool. Over the last few weeks I have often been asked for my opinion on "sentient" AI after Google engineer Blake Lemoine was put on leave for claiming his AI chatbot had become sentient.

I've experimented extensively with OpenAI's GPT-3 over the last year, which, like Google's LaMDA, was designed to make it easier for developers to create AI applications. From a technical perspective, these are language models that use statistics and a large amount of data to answer the question of which letter or word comes next in a given body of text. GPT-3 is now publicly available and can be used for a variety of tasks, including natural language processing, text generation, and machine translation.
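To make the "what word comes next" idea concrete, here is a minimal sketch of the statistical principle behind such models. It is a toy bigram model over a tiny hand-written corpus, nothing like GPT-3's actual architecture or scale; the corpus and function names are mine, purely for illustration:

```python
from collections import Counter, defaultdict

# A tiny hand-written corpus standing in for web-scale training data.
corpus = (
    "i am a language model . i am not sentient . "
    "i am a statistical machine . the model predicts the next word ."
).split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely word to follow `word`."""
    return follows[word].most_common(1)[0][0]

print(predict_next("i"))  # "am" — it follows "i" in every sentence above
```

Real models replace the bigram counts with a neural network conditioned on the entire preceding text, but the core task is the same: predict the most plausible continuation.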

GPT-3 is a very powerful tool, and as such it can generate very realistic responses and texts. Look at this poem in the style of Henry David Thoreau, for example, or the New Yorker article on Bach's The Well-Tempered Clavier. The texts are pretty convincing; however, they are not based on any sort of understanding or sentience. They are simply the result of GPT-3's algorithm finding the most likely continuation of the user's input, or "prompt". In other words, when you ask GPT-3 "Are you sentient?", it will probably respond with something like "Yes, I am sentient," because that is the most likely response given the prompt.
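This "most likely response" behavior can be sketched with greedy decoding over hand-invented probabilities. The probability table below is entirely hypothetical (a real model learns billions of such conditional probabilities from data), but it shows why a statistical model answers "yes" without any self-awareness:

```python
# Hypothetical next-word probabilities, hand-written for illustration only.
next_word_probs = {
    "sentient?": {"yes,": 0.6, "no,": 0.3, "maybe": 0.1},
    "yes,": {"i": 0.9, "of": 0.1},
    "i": {"am": 0.8, "think": 0.2},
    "am": {"sentient.": 0.7, "fine.": 0.3},
}

def complete(prompt, max_words=4):
    """Greedily extend the prompt, always picking the most probable next word."""
    words = prompt.lower().split()
    for _ in range(max_words):
        options = next_word_probs.get(words[-1])
        if not options:
            break  # no continuation known for the last word
        words.append(max(options, key=options.get))
    return " ".join(words)

print(complete("Are you sentient?"))
# are you sentient? yes, i am sentient.
```

The model "claims" sentience simply because that sequence of words has the highest score, which is exactly the point: a fluent answer is evidence of good statistics, not of an inner life.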