
Google's AI is like a human! Why worry about Lambda?

Blake Lemoine, along with a colleague, tried to present evidence that Lambda is sentient. But Google vice president Blaise Agüera y Arcas and head of Responsible Innovation Jen Gennai dismissed that evidence. That is why Lemoine decided to make the matter public.


Google introduced its new artificial intelligence (AI) chatbot Lambda to the world last year, claiming that Lambda would be integrated into services like Google Search and Assistant for the convenience of users. Everything seemed to be going well until this year, when an interview with Blake Lemoine, one of the software engineers behind Lambda, caused a stir. Lemoine told The Washington Post in June that Lambda is no ordinary bot: its artificial intelligence is conscious, or sentient, just like a human's.


Lemoine lost his job a few days ago because of that interview. According to Google, he was fired for violating the company's employment rules and information security. The Washington Post published the report in question on June 21, based on Lemoine's interview. What explosive information did it contain? Rubaid Iftekhar summarized the report for Newsbangla readers.

Google engineer Blake Lemoine opened his laptop one day and began talking to Lambda, the company's artificially intelligent chatbot.


Google has developed special software for building chatbots based on language models, which ingest hundreds of billions of words from the Internet and learn to imitate them. Lambda stands for Language Model for Dialogue Applications.
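
In other words, a Lambda-style chatbot is essentially a large language model wrapped in a dialogue interface. As a rough, illustrative sketch only, the snippet below wires a small, publicly available dialogue model into a text chat loop. LaMDA itself is not public, so the DialoGPT model, the turn count and the generation settings used here are stand-in assumptions, not Google's implementation.

```python
# A minimal sketch of a chatbot built on top of a pretrained language model.
# LaMDA is not publicly available, so the open DialoGPT model is used here
# purely as a stand-in; the model name and settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

history = None
for _ in range(3):  # three turns of conversation
    user_text = input("You: ")
    # Append the user's message (plus an end-of-turn token) to the conversation.
    new_ids = tokenizer.encode(user_text + tokenizer.eos_token, return_tensors="pt")
    input_ids = new_ids if history is None else torch.cat([history, new_ids], dim=-1)
    # The model simply predicts a likely continuation of the dialogue so far.
    history = model.generate(input_ids, max_length=200,
                             pad_token_id=tokenizer.eos_token_id)
    reply = tokenizer.decode(history[0, input_ids.shape[-1]:], skip_special_tokens=True)
    print("Bot:", reply)
```

The bot in this sketch stores no facts or intentions; each reply is simply the continuation the model finds most probable given the conversation so far.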

Lemoine typed on the Lambda interface, 'Hi Lambda, this is Blake Lemoine...'. Lambda's chat screen looks a lot like Apple's iMessage.

Lemoine, a 41-year-old computer engineer, told The Washington Post about Lambda, 'If I didn't know beforehand that I was talking to a computer program we built ourselves, I would have thought I was talking to a 7- or 8-year-old who happens to know physics pretty well.'

Lemoine is a member of Google's Responsible AI organization. He started talking to Lambda last fall. His main task was to find out whether artificial intelligence spreads hateful messages.

While talking to Lambda about religion, he noticed that the bot was also quite vocal about its own rights and personhood. In one discussion, Lambda even managed to change Lemoine's mind about Isaac Asimov's Third Law of Robotics.


Legendary science fiction writer Asimov laid down three laws for robotics in his writings, which are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey all human commands, except those that conflict with the First Law.

3. A robot will protect its own existence at all costs, as long as doing so does not conflict with the First and Second Laws.


Lemoine, along with a colleague, tried to prove to Google that Lambda is sentient. But Google vice president Blaise Agüera y Arcas and head of Responsible Innovation Jen Gennai dismissed his evidence. That is why Lemoine decided to make the matter public.

Lemoine believes that people should have a say in shaping technologies that affect their lives. He is excited about Lambda, but he also has some apprehensions.


He said, "I think it's going to be a great technology. It will be useful for everyone. And many may not like it. And we who are Google workers should not decide the choices of all people.

Lemoine is not the only engineer who claims to have seen something strange in artificial intelligence. A number of technologists believe AI models are not far from achieving full consciousness.

Google vice president Agüera y Arcas has acknowledged as much himself. In an article published in The Economist, he wrote that the neural networks used in AIs like Lambda are striding toward consciousness, and that these networks mimic the workings of the human brain.


He added, 'It felt like the ground was shifting under my feet. I increasingly felt like I was talking to something intelligent.'


However, in an official statement, Google spokesperson Brian Gabriel rejected Lemoine's claim. 'In response to Lemoine's claim, our team of technologists and ethicists reviewed the matter in accordance with our AI Principles,' the statement said. Lemoine was then informed that his claim was not substantiated. He was also told that there is no evidence that Lambda is conscious, and plenty of evidence against it.

Zuckerberg's Meta, on the other hand, opened its language model to academics, civil society and government agencies last May. Joelle Pineau, managing director of Meta AI, believes companies need to be more transparent about their technology.


He said, "It is not right to be stuck in the hands of big companies or labs with big language models."

Sentient robots have been part of science fiction for years; now reality is starting to resemble it. Two programs developed by OpenAI, a well-known AI research company, are worth mentioning. One is GPT-3, a text generator that can even write a movie script. The other is DALL-E 2, which can create images from any text description it is given.

There is no shortage of funding for companies aiming to build AI smarter than humans, and many technologists working at them believe it is only a matter of time before machines become conscious.


Most academics and AI experts say that the words and images produced by artificial intelligence software like Lambda are drawn from what humans have already posted on Wikipedia, Reddit, bulletin boards and elsewhere on the Internet. That does not mean the machine understands the meaning of what it generates.

Emily Bender, a professor of linguistics at the University of Washington, said: 'We now have machines that can mindlessly generate words, but we have not learned how to stop imagining a mind behind them.' She added that terms used with language models, such as 'learning' or 'neural net', create a false analogy with the human brain.

Humans acquire their first language by interacting with caregivers in childhood. Machines, by contrast, 'learn' language by being shown huge amounts of text and predicting which word comes next, or by filling in words that have been dropped from the text.
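
To make those two training signals concrete, here is a small, illustrative sketch using freely available models from the Hugging Face transformers library. GPT-2 and BERT are arbitrary public stand-ins chosen only because anyone can run them; they are not the models behind Lambda.

```python
# Toy illustration of the two learning signals described above.
# GPT-2 and BERT are arbitrary public stand-ins, not Google's models.
from transformers import pipeline

# 1) Next-word prediction: given a prefix, the model guesses what comes next.
generator = pipeline("text-generation", model="gpt2")
result = generator("The robot looked at the child and", max_new_tokens=8)
print(result[0]["generated_text"])

# 2) Fill-in-the-blank: the model predicts a word that was dropped from the text.
fill = pipeline("fill-mask", model="bert-base-uncased")
for guess in fill("Machines learn language by reading a lot of [MASK]."):
    print(guess["token_str"], round(guess["score"], 3))
```

A model trained this way has absorbed so many human-written sentences that its completions sound natural, which is exactly why, as the experts quoted above point out, fluent output is easy to mistake for understanding.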

Google spokesperson Gabriel drew a distinction between the recent debate and Lemoine's claims.

"Certainly, some of those working on AI are considering the long-term potential of sentient AI," he said. But today's conversational models do not make sense compared to humans. They are neither conscious nor sentient. These systems simulate millions of sentences and can match any cool topic.'


In fact, Google argues, with the amount of data it has, the AI does not need to be sentient to feel real.

Large language model technology is already in wide use; examples include Google's conversational search queries and email auto-complete. When Google CEO Sundar Pichai unveiled Lambda at the company's 2021 developer conference, he said Google planned to integrate it into everything from Search to Assistant.


Meanwhile, users already tend to talk to Siri or Alexa as if they were human. In 2018, after criticism of a human-sounding voice feature for Google Assistant, the company promised to add a disclosure.

Google has acknowledged the safety concerns of making machines seem more human. In a January paper about Lambda, Google warned that people could end up sharing personal thoughts with chat agents that imitate humans, and that users sometimes do not even realize they are not talking to a human.
