LaMDA and the Limits of Sentience
In early June 2022, Blake Lemoine, then an engineer at Google, claimed that LaMDA (Language Model for Dialogue Applications), a machine-learning-based chatbot, was sentient and should be treated with respect. AI chatbots are designed to interpret user-entered text, analyze key phrases and syntax with Natural Language Processing (NLP) and Machine Learning (ML) models, and generate responses and actions that feel believable to a conversational partner. After multiple conversations with LaMDA, Lemoine concluded that it was psychologically comparable to a child of seven or eight years old. Google accused Lemoine of “persistently violating clear employment and data security policies” and fired him.
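The interpret-analyze-respond cycle described above can be sketched in miniature. The snippet below is a toy rule-based bot, not how LaMDA works (LaMDA generates replies with a large neural language model); the intent table and patterns are purely illustrative assumptions, meant only to show the shape of the loop: read user text, match key phrases, emit a response.

```python
import re

# Hypothetical intent table: key-phrase patterns mapped to canned replies.
# A real system like LaMDA learns this mapping from data instead.
INTENTS = {
    r"\bhello\b|\bhi\b": "Hello! How can I help you today?",
    r"\bweather\b": "I can't check live weather, but I hope it's nice out.",
    r"\bbye\b": "Goodbye!",
}

def respond(user_text: str) -> str:
    """Interpret user text and generate a reply by matching key phrases."""
    text = user_text.lower()
    for pattern, reply in INTENTS.items():
        if re.search(pattern, text):
            return reply
    return "Sorry, I didn't understand that."

print(respond("Hi there"))                  # greeting intent
print(respond("What's the weather like?"))  # weather intent
```

Where this toy bot falls back to a fixed "I didn't understand" line, a model like LaMDA instead predicts a plausible continuation of the conversation, which is exactly what makes its replies feel human even though no understanding is guaranteed.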