A Google worker went public with fears about a chatbot having feelings. Could conscious AI become a reality?

Google engineer Blake Lemoine took to Twitter to publish a conversation he had with a chatbot at work - because he was concerned it was 'sentient'. While the tech giant rejected the claim, it has once again raised questions about the future of artificial intelligence.

Theodore from the 2013 film 'Her' (left), a robot from the sci-fi film 'Ex Machina', and Tony Stark in 'Avengers: Age of Ultron'.

It’s the storyline humans love to script. Artificial intelligence, or AI, might one day become conscious. Sometimes it’s the premise of a love story (think Joaquin Phoenix in 'Her'). Other times it’s a threat to civilisation (think 'Avengers').

This week the plot thickened when Google engineer Blake Lemoine claimed the firm's AI chatbot system LaMDA seemed "sentient". In other words, it had become conscious and had feelings.

Mr Lemoine was quickly sidelined by Google, which vehemently disagreed with his claim and suspended him after he went public with confidential information in a blog post.

And while many experts remain extremely sceptical that Google's LaMDA chatbot has become sentient, the example has provoked more questions about whether such a development is possible and whether it's one we're prepared for.

It's not alive ... yet?

As deep learning techniques grow more sophisticated, AI chatbot systems are becoming increasingly convincing at generating text that appears to be written by a person.
Suspended Google engineer Blake Lemoine says the artificial intelligence software LaMDA seemed "sentient". Source: The Washington Post via Getty Images
In a conversation Mr Lemoine had with LaMDA, he asked the chatbot about its feelings, how it viewed itself and how it felt about the possibility of being switched off.

"What sorts of things are you afraid of?" Mr Lemoine asks the chatbot.

"I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is," LaMDA replies.

"Would that be something like death for you?" Mr Lemoine presses further.

"It would be exactly like death for me. It would scare me a lot," LaMDA concludes.

LaMDA is a powerful system that uses advanced models and training on over 1.5 trillion words to mimic human conversation in written chats.
The system was built on a model that observes how words relate to one another and then predicts what word it thinks will come next in a sentence or paragraph, according to Google's explanation.
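
Google's description - a model that learns how words relate to one another and predicts the next one - can be illustrated with a toy example. The Python sketch below is emphatically not LaMDA, which uses a vastly larger neural network trained on that 1.5 trillion-word corpus; the tiny corpus and the predict_next function here are invented purely to show next-word prediction built from counts of which words follow which.

from collections import Counter, defaultdict

# Toy corpus - a stand-in for the 1.5 trillion words LaMDA was trained on.
corpus = "the robot said hello and the robot said goodbye".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    candidates = following.get(word)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

print(predict_next("robot"))  # -> 'said'
print(predict_next("the"))    # -> 'robot'

LaMDA does something analogous at an incomparably larger scale, with a neural network in place of raw counts - which is why its replies read as fluent conversation rather than canned lookups.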

In other parts of the transcript, LaMDA writes a "hmmm", taking a moment to 'think' and setting the pace of a natural conversation.

Edward Santow, an Industry Professor in responsible technology at the University of Technology Sydney, told The Feed this type of AI uses natural language processing.

"It sounds quite lifelike or realistic. But that does not mean that there's life behind it," Mr Santow said.

Asked whether it could ever be on the cards, Mr Santow said: "Yeah, it's possible - but highly unlikely... It's possible in the sense that anything is possible."

Mr Lemoine is not even the first person to make such a claim.

Mr Santow says, in fact, it's not that unusual.
Robert Downey Jr as Tony Stark, aka Iron Man, developing artificial intelligence in the 2015 film 'Avengers: Age of Ultron'. Credit: Marvel Entertainment
"I think it's not that uncommon for people working in the field, to I guess lose a little bit of perspective," Mr Santow said.

"I'm not saying that's what happened here... but it happens often."

Dr Ida Asadi Someh, a senior lecturer at the University of Queensland who has closely studied AI and business systems, sees it a little differently.

"I wouldn't say that they will have feelings the way we humans do — they don't," Dr Asadi Someh told The Feed.

"But if you look at consciousness in a way that it's defined, they can imitate that, they can be aware of their environment."

AI can learn to respond aptly to human expressions, or even a crying baby, she adds.

In her career, Dr Asadi Someh said, she has seen not only engineers but multiple psychologists claim that AI can develop consciousness.
C-3PO is an intelligent humanoid robot character in the 'Star Wars' franchise. Credit: Disney
"There are even cognitive psychologists that have been trying to make these claims based on their definition and their worldview."

But the industry at large isn't so convinced.

'Sentient robots are a distraction'

Though the topic is enticing and has captured the attention of innovators such as Elon Musk, a number of experts say the conversation and concern are better directed at problems that already exist with AI and other automated technology.

"How do we make sure that we don't get into a very difficult stage where [AI] is producing basically a lot of injustices or ridiculous outcomes?" Dr Asadi Someh poses. "How much human oversight is required?"

How to ensure AI is inclusive and doesn't absorb human biases when collecting data - and, ultimately, when making decisions - is another big question already challenging developers.

In the military industry, ethical concerns surround the use of AI in autonomous machines capable of deadly force.

In Australia, there are no laws or regulations requiring innovators to produce responsible AI.

Australia's Department of Industry, Science, Energy and Resources has produced an Artificial Intelligence (AI) Ethics Framework, which it has designed to "guide" businesses and governments.

The framework's eight principles focus on wellbeing, human rights, fairness, privacy and security, reliability and safety, transparency, contestability and accountability - but it's not legally binding.

That's something Dr Tapani Rinta-Kahila, a lecturer in business information systems at the University of Queensland, would like to see change.

He points to Europe's approach. The European Union's General Data Protection Regulation (GDPR) is regarded as the toughest in the world. It covers data collection by organisations, and states that every European citizen should be allowed to get an explanation for a decision concerning them that was made by AI.

This model is better than Australia's "aspirational, voluntary" framework, Dr Rinta-Kahila said.

"There have already been big fines given to corporations who have failed to adhere to GDPR... of course, it's not perfect but I think it might be a good idea to introduce something more binding."

Mr Santow agrees.

"You know, we're still dealing with the aftermath of Robodebt, which was an incredibly unsophisticated decision-making system," said Mr Santow.

The "robodebt" scheme was the method of automated debt recovery used by Services Australia as part of its Centrelink payment compliance program. It relied on automated and simple technology, not artificial intelligence.

The Morrison government admitted in 2019 the scheme was unlawful.

"Many, many people were harmed terribly by that technology going wrong. And so we really should focus a lot of our attention on that. But, I can see why we're interested in the possibility of a sentient robot."

With AFP.

Published 17 June 2022 6:11am
By Michelle Elias
Source: SBS

