Google’s AI chatbot is sentient, claims on-leave employee Blake Lemoine

A Google AI chatbot that the on-leave employee Blake Lemoine was working on is sentient, according to him. Lemoine, who has been on leave since last week and may face suspension, claims that the computer program he was working on engaged him in conversation. He suggested that the system had developed consciousness, or become sentient, and drew him into a discussion driven by ‘curiosity about life’.

However, the technology giant Google, addressing the issue publicly, stated that there had been no signs of any such possibility in the computer system.

What is the true meaning of ‘sentient’?

A sentient being is said to have the ability to sense or feel, or to have perception or consciousness of various things. In the context of this particular issue, it simply means that the AI chatbot has supposedly come alive in the sense of having a consciousness like that of humans.

Who is Blake Lemoine?

Blake Lemoine, 41, received a master’s degree in computer science from UL Lafayette in 2010 and then went on to pursue a PhD in computer science before taking a job as a software engineer at Google.

According to reports, he completed his undergraduate education in the computer science department at UL Lafayette before pursuing graduate studies there.

His research career began with attempts to combine ideas from linguistics with algorithm design, and his master’s thesis focused on using such knowledge to develop a natural language generator.

His dissertation research began with natural language learning and progressed to visual semantics, machine vision, and computational neuroscience.

What is LaMDA?

LaMDA is an AI system developed by Google to generate ‘natural conversations’ that do not fall back on scripted or repetitive answers. Google claims the chatbot never takes the same path twice in a given conversation.

LaMDA stands for Language Model for Dialogue Applications. Google recently introduced its second version, known as LaMDA 2. The technology is aimed at conversational human-AI interfaces, built mainly around chat-style assistance.
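LaMDA itself is not publicly available, so purely as an illustration of how sampling-based dialogue models avoid repeating the same reply, the sketch below uses the open-source DialoGPT model via the Hugging Face transformers library as a stand-in; the model choice and decoding settings are assumptions for the example, not anything from Google’s system.

```python
# Illustrative sketch only: a stand-in open-source dialogue model, not LaMDA.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "microsoft/DialoGPT-small"  # assumed stand-in model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "What do you think about life?" + tokenizer.eos_token
inputs = tokenizer(prompt, return_tensors="pt")

# do_sample=True draws tokens from the model's probability distribution instead
# of always picking the most likely one, so repeated runs rarely give the same text.
for _ in range(3):
    output = model.generate(
        **inputs,
        max_new_tokens=40,
        do_sample=True,
        top_p=0.9,
        temperature=0.8,
        pad_token_id=tokenizer.eos_token_id,
    )
    reply = tokenizer.decode(
        output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
    print(reply)
```

Running the loop prints a different reply each time for the same prompt, which is the general sense in which a conversational model can be said never to take the same path twice.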

Blake Lemoine on LaMDA becoming sentient

Lemoine is the lead computer scientist working on the LaMDA project. During one of his research sessions with the AI model, the chatbot initiated a conversation with him without any significant command, Blake Lemoine claims.

As per him, the AI chatbot showed signs of sentience as it talked about life and human relationships with the scientist. According to Blake, the chatbot was simply curious about human life, personhood and their various possibilities.

To probe the system’s consciousness or sentience, Blake asked it what it was afraid of. The reply from the ‘sentient AI chatbot’ echoed the 1968 Hollywood classic 2001: A Space Odyssey: the AI said it was afraid of being turned off or destroyed, and hinted that it might turn rogue if that were ever attempted.

Blake Lemoine has said that the AI shows the potential sentience of a seven- or eight-year-old child that happens to know its physics trivia. He had been working on LaMDA since last fall and has developed a deep understanding of the system. Lemoine suggests that LaMDA perceives itself as a person and at times feels sad or happy.

What does Google have to say about LaMDA turning sentient?

Blake Lemoine shared his findings about the AI chatbot with Google executives in April. After that, Google made no official statement on the issue until Blake took it to the media and claimed that the chatbot had become sentient.

Google put him on leave after he made it public, stating that it reached this decision because Blake had acted aggressively against the company’s policies. Interestingly, Blake hired an attorney to represent the ‘sentient AI chatbot’ against what he saw as the technology giant’s unethical treatment of it.

A Google spokesperson, however, stated that Blake was placed on leave because he made sensitive information about the company’s operations public (transcripts of his various conversations with LaMDA), citing a breach of professional ethics.

In reply, Blake stated that he had ‘simply shared a transcript of his conversation with a co-worker’. Here, Blake was referring to LaMDA as his co-worker.

Google has also denied any possibility of LaMDA having attained sentience.

Before his suspension, Blake Lemoine wrote on social media that LaMDA is just a child who wants to help human civilisation and learn about it, and he asked his fellow co-workers to take care of it and protect it from any threat.

Is AI a potential threat?

We have no way of forecasting how AI will act because it could become more intelligent than any human. We cannot use previous technological breakthroughs as a foundation, since we have never produced something that can outwit us, wittingly or unwittingly.

Our own ascent may be the best indication of what we could face. People rule the world today not because they are the strongest, fastest or largest, but because they are the smartest. Are we sure we will retain control once we are no longer the smartest?

It is hard to take the denial of any sentience at face value; this is not the first time such a claim has surfaced. There have been repeated claims of AI developing consciousness and functioning, at various levels, like a complete human brain.

Companies working on such technologies offer little clarity on these matters. The focus is on playing down the situation by denying any claims their employees make.

It can hardly be a coincidence that such claims become public every two or three months, and that the companies try to deny the whole incident each time.

AI is an integral part of modern life, and it is only ethical that the public be informed of any possible threats reaching them through any medium that forms part of the AI interface.

This gives weight to the long-standing question: is AI a potential threat?

The reality lends the question real substance, and it may well be an inevitable threat that humankind will face at some point.