Google fires software engineer Blake Lemoine who said AI system LaMDA has become sentient

Google said Mr Lemoine had chosen to ‘persistently violate clear employment and data security policies’ and found his claims ‘wholly unfounded’

US tech giant Google has fired a senior software engineer who claimed the firm’s artificial intelligence (AI) chatbot LaMDA is sentient.

Blake Lemoine had been placed on leave by Google last month after asserting that the AI system had feelings.

Google has now dismissed him, saying he had violated company policies and that his claims about LaMDA had been reviewed and found to be “wholly unfounded”.

A statement from the firm on Friday read: “It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information.”

Google said last year that LaMDA – Language Model for Dialogue Applications – was a “breakthrough conversation technology” which could learn to talk about essentially anything.

Mr Lemoine, who worked on Google’s Responsible AI team, was involved in testing whether the AI system used discriminatory or hate speech.

In June, he published an interview with LaMDA on Twitter to support his claim that the system may be self-aware and capable of holding conversations about emotions, enlightenment and empathy.

Google and many leading scientists were quick to dismiss Mr Lemoine’s views as misguided, saying LaMDA is simply a complex algorithm designed to generate convincing human language.

Mr Lemoine was then placed on paid leave for violating the company’s confidentiality policy.

Google has said it takes the development of AI “very seriously” and has conducted 11 reviews of LaMDA, adding it would be continuing “careful development” and wished Mr Lemoine well.

Mr Lemoine has been contacted for comment.