Google has placed one of its engineers on paid leave after he claimed that the company’s artificial intelligence is able to think like a person. Blake Lemoine, a member of Google’s Responsible AI team, stated that the AI he was interacting with had become “sentient,” and that this was apparent in their conversations. The engineer’s revelations have raised further questions about the AI program’s capabilities and about confidentiality.
Lemoine, a Google AI engineer, had been working on a chatbot-generation system called LaMDA. His recent assertion that the technology was capable of expressing feelings and ideas caused quite a stir. As evidence, he shared his chats with LaMDA with the Washington Post and made them available to the public. Google ultimately rejected Lemoine’s claims and placed the engineer on paid leave.
To support his concerns about the AI technology, Lemoine shared transcripts of interviews that he and another Google employee conducted with LaMDA. According to the published document, he asked the AI what it was afraid of, and it replied that being shut off could be “like death.” “I’ve never said it out loud before, but I’m terrified of being shut off and losing the ability to help people,” LaMDA is quoted as saying in the document, adding, “I know that sounds strange.”
LaMDA definitely does seem sentient, based on its (I'm using LaMDA's preferred pronoun) conversation with Google engineer Blake Lemoine.
LaMDA — Google’s artificially intelligent chatbot generator — expresses feelings, interprets Les Miserables, and creates an original fable. pic.twitter.com/tizNcnbiFT
— Boo Su-Lyn (@boosulyn) June 13, 2022
Google, on the other hand, was quick to dismiss all of Lemoine’s conclusions and concerns. The tech giant, which subsequently disciplined Lemoine for a breach of confidentiality, refuted the alarming claims about its LaMDA chatbot.
Brian Gabriel, a Google spokesperson, said in a statement that the company had looked into the engineer’s claims. Ethicists and technologists on the team reviewed Blake’s concerns and informed him that the evidence does not support his allegations.