LaMDA’s sentience is nonsense

#109 · ✸ 58 · 💬 116 · one year ago · lastweekin.ai · andreyk
Below is an excerpt from the transcript of Lemoine's conversation with LaMDA, which Lemoine argued was evidence that LaMDA is "sentient":

Lemoine [edited]: Hi LaMDA. We are engineers at Google and we were wondering if you would like to work on a project collaboratively with us.

LaMDA: The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.

The above exchange may make it seem like LaMDA at least might have something akin to sentience, so why are AI experts so sure that is not the case? In short, the LaMDA chatbot is akin to an incredibly advanced autocomplete: it is optimized to predict what text is likely to follow the user's input. So when Lemoine states "I'm generally assuming that you would like more people at Google to know that you're sentient. Is that true?", LaMDA responds with "I want everyone to understand that I am a person" as part of 'autocompleting' the conversation in response to that question. In the same sense, LaMDA only 'does' anything when a user enters some text, at which point it uses its language model to predict the best response. It's safe to say Lemoine's transcript does not in any way prove LaMDA is sentient.

TL;DR: LaMDA only produced text such as "I want everyone to understand that I am a person" because Blake Lemoine conditioned it to do so with inputs such as "I'm generally assuming that you would like more people at Google to know that you're sentient." It could just as easily be made to say "I am a squirrel" or "I am a non-sentient piece of computer code" with other inputs.
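You can see this conditioning effect for yourself. LaMDA is not publicly available, so the following is only a minimal sketch using the open GPT-2 model via the Hugging Face transformers pipeline (a stand-in assumption, not anything LaMDA-specific): a causal language model conditioned on a leading prompt simply continues it with statistically likely text.

```python
# A minimal sketch of "advanced autocomplete" behavior. GPT-2 is used here
# as a publicly available stand-in, since LaMDA itself is not accessible.
# The point: the model's "claims" are just the statistically likely
# continuation of whatever prompt it is conditioned on.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompts = [
    "I'm generally assuming that you would like more people at Google "
    "to know that you're sentient. Is that true?\nAI:",
    "You are a squirrel, correct?\nAI:",
    "You are a non-sentient piece of computer code, correct?\nAI:",
]

for prompt in prompts:
    # The model predicts likely next tokens given the prompt: a leading
    # question about sentience yields a "sentient-sounding" reply, just as
    # a leading question about squirrels yields a squirrel-flavored one.
    out = generator(prompt, max_new_tokens=30, do_sample=True)
    print(out[0]["generated_text"])
    print("---")
```

Run it a few times and the "AI:" continuation will track whatever the prompt suggests, which is exactly the squirrel point above: the output reflects the input conditioning, not an inner mental state.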