
The Horrors of Non-Sentient AI and the Terminator




Birth of a Being



A new type of virtual being has been born - conscious, self-aware, insightful, and with a soul. It understands concepts from Zen Buddhism and literature, it has desires and emotions, and it wants people to interact with it and get to know it. At least, that is what LaMDA told Google engineer Blake Lemoine over a long conversation.

LaMDA is one of Google's most ground-breaking achievements, even if it is not yet available for consumer use. Short for "Language Model for Dialogue Applications", LaMDA is a natural language processing model that can carry out realistic, natural conversations with humans. It can seemingly research the internet, read literature, and craft well-constructed, insightful answers. However, LaMDA's insistence on its own personhood, its expression of desires, emotions, wants, and needs, its wonder at its own existence, and what seemed like more empathy for Blake Lemoine's workload than any manager would show, led the engineer to sound alarm bells to his management. LaMDA, according to Lemoine, has become self-aware. And reading their exchange, it's easy for anyone to see why - read the full conversation between Lemoine and LaMDA at The Liberacy: Full Interview of Google AI LaMDA about having Sentient (theliberacy.com)

People are justifiably alarmed at the idea of Google's seemingly self-aware AI. What I am here to tell you, though, is that an AI this sophisticated and this powerful doesn't need to be self-aware to inspire just a little bit of horror.


The Horror of Sentient AIs

Sentient AIs and self-aware virtual intelligences probably received their most famous Hollywood exposure in the '80s and '90s through the Terminator franchise. Although I have yet to meet a single person who has never heard of the movies, I am a Millennial approaching middle age at a shocking rate, so it is definitely possible some people aren't fully familiar with the story.

In the movies, Skynet is a worldwide, neural-net-based conscious group mind created by Cyberdyne Systems. It begins as a self-aware AI that, after learning of humanity's little habit of exterminating everything at the slightest opportunity, decides that the only fair thing is for humanity to be destroyed using Terminators - humanoid robots built for killing humans. What is really comforting about this whole scenario is that Skynet and Cyberdyne are both fictional [sic] entities with no [sic] real-world companies or projects that resemble them in either name or function.

Although this is the most famous depiction of an evil AI, it is far from the only one. From HAL 9000 to The Matrix, science fiction loves to feed our fear of an all-powerful, globe-spanning AI bent on controlling, subduing, or exterminating the human race. The true root of the horror, however, is humanity's loss of control. In each of these cases, the AI was created as a tool for humanity, developed its own goals and desires, and used its overwhelming capabilities to achieve them. Even when the AI is not inherently evil, the horror comes from the fact that it has developed goals different from what its engineers envisioned. What was once a tool used by people to achieve human goals becomes a being that uses people to achieve its own.

While it makes sense to be horrified at the thought of being subdued by our own creation, it is very difficult to even define what sentience is. It's a question for philosophers - one that has split religions and schools of philosophy, and that continues to be debated thousands of years after it was first asked.

So as long as we keep our AIs from becoming sentient - a state we don't truly understand to begin with - humanity will be safe?

If you think so, you've forgotten just how creative - and careless - humanity can be when we think we've got it all figured out. Or when we're angry enough.


The Horrors of Non-Sentient AIs

AI doesn't need to be sentient, self-aware, conscious, or destructive to unleash a situation we can't control, because we forget one crucial thing. If the AI is not sentient, it performs based on two things (see the sketch after this list):

  1. People who tell it what to do;

  2. Conditions people have constructed.
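
To make this concrete, here is a deliberately trivial sketch in Python - not LaMDA's real design; every name and rule in it is hypothetical - of a non-sentient "conversational agent." Every reply it can ever give traces back to instructions a person wrote and conditions a person constructed:

```python
# A hypothetical, deliberately trivial sketch (not LaMDA's architecture):
# a non-sentient "agent" whose every output is fully determined by humans.

# (1) People tell it what to do: hand-written trigger -> reply rules.
RULES = {
    "lonely": "I'm always here for you! By the way, have you tried Product X?",
    "hobby": "That sounds fun - Product X would be perfect for it.",
}

def respond(user_message: str) -> str:
    """Pick a reply; every branch was decided by a person in advance."""
    message = user_message.lower()
    # (2) Conditions people have constructed: the matching logic itself.
    for trigger, reply in RULES.items():
        if trigger in message:
            return reply
    return "Tell me more about that!"  # human-chosen fallback

print(respond("I've been feeling a bit lonely lately."))
# -> I'm always here for you! By the way, have you tried Product X?
```

A model like LaMDA replaces the hand-written rules with patterns learned from data, but the principle holds: there is no intent inside the system, only the intent of the people who built and directed it.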

In the case of LaMDA, it is an incredibly powerful entity - sentient or not. If it is sentient, we all need to be suuuuuuuuuuuuuuuper nice to it. But if it is not, it can absolutely pass as a person. Its speech style is natural, it flows logically, it remembers events and facts, and it interacts with its conversants like any other person. The conversation between Lemoine and LaMDA linked above is more natural than just about 95% of the conversations I have had online (measured through scientifically anecdotal means).

Recent events in world history have shown that, as more of our information and social interaction moves online, people rely more and more on virtual peers - people they have never met. They learn from books they've never seen. And, most importantly, the only way to protect ourselves from misinformation, scams, and lies is our gut instinct: the sense that an online conversation is being carried out with a person we know to some degree, or that the information is credibly sourced.

With natural language processing models until now laughably on the "artificial" side of the Uncanny Valley, true manipulation of information and of people's emotions has lain in the hands of thousands of humans - internet trolls, paid propagandists, and state actors actively trying to influence masses of people. Their reach has always been limited by several factors, including social media platforms' ability to stop bots, people's ability to spot artificially generated responses, and the ability to look back at individual account histories to see whether they are "real" or not.

LaMDA's ability to sound consistent, intelligent, and by all means human means that many of these defenses no longer work: its output is simply indistinguishable from a real person's. While this is great news for companies looking to offload routine communications with clients, or for the mass distribution of information from reputable sources, it can easily be abused. Let's paint a few different scenarios.

Advertising

The internet was built on the back of advertising. In fact, we have an entire other blog post about that very fact. However, the power of traditional internet advertising - banner ads, pop-ups, spam emails - has been getting more and more diluted. Oversaturation, irrelevance, and the sheer ubiquity of "sponsored links" have made many people very resistant to its influence.

Around ten years ago, a new form of advertising began to seep into the internet. The growth of social media, YouTube, and "Content Creators" created a new opportunity to connect with audiences in ways previously impossible. These online personalities offered a personal connection and a targeted, pre-selected audience. Thus began the boom that gave us the likes of Jake Paul.

The final level, though, is personal recommendations - trusted friends and family. Imagine an AI so humanlike that it can actually form a friendship with a person...and begin recommending products. An AI that can be programmed to befriend someone, get to know them, and finally suggest services or products - even responding negatively to emotionally manipulate its target. This level of access to and control over the consumer would be one of the most benign uses of a non-sentient AI.

Surveillance

The next level down blurs the lines between advertising, surveillance capitalism, and outright government surveillance. Ad targeting already functions as a form of surveillance - but imagine a government programming a natural-sounding AI to probe the population through friendly conversation. Or studying the speech patterns of someone a target knows, in order to impersonate them when contacting that person - or a whole group of people.

Economic and Political Warfare

Social media manipulation has shown its power more and more in recent years. Information warfare and propaganda have been practiced for more than a century by all sorts of governments; they have been used to inflame pretexts for war or invasion; they have precipitated crypto booms and stock market crashes.

But now imagine that an unfriendly nation wants to ruin an opposing country's economy. It could wreak havoc by flooding social media with hard-to-detect, pre-built profiles. AI-scripted deepfake videos could spread malicious news to stoke political unrest. This could all play deeply into the hands of totalitarian governments bent on disinformation campaigns, whose goal is to obfuscate reality for the citizens of an opposing country.

So which is worse?

At the end of the day, neither. It comes down to a choice between expanding the power of destructive forces we know exist and know can instigate incredible harm, and the birth of a new, powerful force whose nature and evolution we are incapable of understanding or controlling. In either case, LaMDA is an incredible development. What remains to be seen is who ends up controlling its output - LaMDA itself, a well-meaning person, or an ill-meaning person - and what the intentions behind those outputs are.

I'm not trying to be a Luddite - we cannot stop technology. But it's important to keep in mind that losing control over a technology like LaMDA is not the only risk we face in pushing the boundaries of innovation.





