We’re hearing more and more these days about artificial intelligence, or AI, but a lot of people aren’t familiar with exactly what AI is, what it can do, or whether we should have any concerns about it.
The fact is, we all use AI, often on a daily basis. If you’re asking Siri for a recipe, Alexa for today’s weather, or Cortana for the best vacation spots under $1,000, you are using AI. Voice assistants like Siri, Alexa and Google Assistant are what’s known as narrow or weak AI, which is limited in scope. These types of voice assistants use artificial narrow intelligence, or ANI, which allows them to carry out specific tasks or solve particular problems without being explicitly programmed for each one. Or to put it another way, they are fancy search engines.
The new generation of voice assistants goes beyond just giving you a list of answers pulled from an Internet search. Chatbots like ChatGPT and Microsoft’s new Copilot can also generate text and more: they can write an essay or a poem, and some can even write code. Does that mean they can write your business letters or your kids’ school papers? Yes, they can. These more advanced assistants can even hold a conversation with you and ask follow-up questions.
Aside from allowing us all to be a little more lazy, or possibly even plagiarize a bit, AI sounds pretty harmless, right? Not exactly. There are things AI can do today that can actually be very harmful when misapplied. That’s why some people are worried: if AI can do these harmful things today, what might it be capable of when it becomes more advanced in the future?
What is AI?
Artificial intelligence is typically categorized in three levels: narrow AI, general AI, and super AI. The only level currently used in real-world applications is weak, or narrow, AI. Ranging from your virtual personal assistant to real-time facial recognition and biometric identification systems, “weak” AI isn’t really all that weak after all.
The next level, Artificial General Intelligence, or AGI, is a quantum leap above today’s narrow AI. Instead of a computer that carries out a specific task or solves a certain problem, an AGI system can understand and perform widely different tasks at a level that more closely resembles human intelligence. An AGI system would be capable of drawing on accumulated experience to reason and solve problems similar to the way a human being does. Think of all the problem-solving tasks you handle in a typical day. Then think of an AGI that could manage those same tasks, just as well as you can … or perhaps a little better. Gulp.
A leading artificial intelligence company, OpenAI, believes that its ChatGPT application is actually a rudimentary AGI because of its ability to solve complex language problems and perform some math and logic functions. Others believe that autonomous driving systems are an example of AGI. Many scientists disagree with classifying these applications as AGI, arguing that they still fall within the realm of narrow AI: they perform a specific set of tasks or solve a particular problem set.
While there are differences of opinion in the scientific community as to what truly constitutes AGI, the generally accepted definition of an AGI system is one that can complete all the tasks that human intelligence would allow, not just a few specific tasks.
What Can AI Do? And Why Worry?
Weak AI may not mimic human intelligence, but there’s quite a lot that it can do today. While some of the advances are great, some have raised valid reasons for concern:
- SummarizeBot can read and summarize information for you. This AI “CliffsNotes” app can condense entire books, emails and other documents down to the essentials.
- InteriorAI helps you redesign a room by taking a picture and rendering the interior of your dreams.
- Modular homebuilders are transitioning residential building from on-site “stick builds” to machine-constructed walls and rooms in environment-controlled indoor factories, for fast, efficient final assembly on location.
- An AI recycle bot can sort through trash to determine what can and can’t be recycled. In fact, it can sort faster than humans can.
- And although GM has had some high-profile problems with the technology, several companies are doing real-world testing of autonomous vehicles.
AI continues to demonstrate its benefits, but its dark side has raised legitimate questions about its use. For GM, that dark side was revealed in a San Francisco incident in which a pedestrian, hit by a human driver, was knocked into the path of an autonomous GM Cruise vehicle and then dragged by it. The incident led California regulators to suspend Cruise’s permits to operate driverless vehicles in the state.
Broader issues on the use of AI persist:
Privacy - Google recently settled a $5 billion lawsuit for tracking the activities of users who were browsing in private or “Incognito” mode. Voice assistants have also been known to eavesdrop on people’s conversations. From a larger perspective, AI's ability to process and analyze vast amounts of data raises concerns that personal information can be misused or mishandled.
Discrimination - From hiring practices to policing, AI can perpetuate biases that lead to unfair treatment. A 29-year-old Black man, Randal Reid, has filed a lawsuit against officers from a Louisiana police department for false arrest and imprisonment. Reid was driving to his mother’s home outside Atlanta just after Thanksgiving when he was arrested for crimes committed in two Louisiana parishes. The problem? Reid had never been to Louisiana. Police allegedly used facial recognition technology that misidentified Reid as one of three perpetrators who used stolen credit cards to make purchases. Unfortunately, this is not an isolated incident. According to the ACLU, every known case of false arrest due to facial recognition technology has involved a Black person.
Deepfakes - Public figures are falling victim to AI-generated fake videos and audio. Actress Scarlett Johansson had her image and voice generated by AI, without her knowledge or permission, in an ad for an app. No surprise: she’s taking legal action. Worse yet, similar deepfakes of politicians can be used to deceive and spread misinformation, potentially harming reputations or even causing social unrest.
Yet, despite its negatives, AI offers some powerful applications that could help solve very pressing real-world issues. Can an ANI application be developed that helps address the current climate crisis? That would have been an ideal tool at the most recent COP28 summit. Imagine an AI app that could detail the actions individual nations must take to reduce carbon dioxide and methane emissions and keep the planet below the 1.5°C warming threshold. Now THAT’S an AI the world could really use! Of course, given the complexity of that global issue, it might require the creation of a super-intelligent AI ... an ASI. And THAT could be a problem.
ASI
You may be thinking, "Is this really something we need to worry about?" Maybe not. But most of us have experienced a moment when we’re interacting with someone who seems absolutely clueless. We’ve probably walked away from the conversation thinking, “What a moron!” Or worse, “What a waste of human flesh!” What if a super-smart AI someday came to that same conclusion about ALL of us?
You’ve seen the movies ... "I, Robot", "The Terminator", "The Matrix". They all share one central theme: humans are stupid and may not even deserve to live. Ouch. Harsh. The fear is that ASI could someday lead to an outcome just like the ones in those movies. The reason? ASI surpasses human intelligence. OK, it’s true, we do have a tendency to act against our own best interests. But seriously? Do we deserve annihilation?!
Skynet came to that conclusion in "The Terminator". Fortunately, the human race in the movie had time travel and Sarah Connor. In real life we probably wouldn’t be so lucky. (Hey, anybody working on a time travel AI?) Perhaps we’d get lucky and have an ASI that chose to act as a benevolent overlord. Wasn’t that all the AI supercomputer VIKI wanted to do in "I, Robot"? Of course, that didn’t go too well, either.
Maybe the dystopian scenarios in those movies will never happen. The scary thing is, human beings keep finding ways to make the impossible possible. Once upon a time it was impossible to sail around the world. Once upon a time it was impossible to fly. Once upon a time it was impossible to go to the moon. Yes, we are really good at eventually making the impossible possible.
So should we be afraid of artificial super intelligence? Absolutely! But the other side of that coin is that we invented nuclear weapons and have had the ability to destroy this planet, and everyone on it, for decades. The good news is, up to this point we have resisted that option as a global community. That could bode well for the future.
Regardless, the AI genie has been let out of the bottle, so what do we do now?
The Future Is ... Unwritten
Perhaps ASI will fall into the category of the thing we can create, but choose not to, for the good of all humanity. Already, nations are putting up legislative guardrails to limit the dangers of AI gone wild. The White House has developed a blueprint for an AI Bill of Rights, and recently the European Union established the AI Act. It places limits on biometric identification, protects copyright holders and requires transparency from AI developers. Is all this enough? Probably not. But it’s a start.
Of course, it may be that, thanks to the climate crisis, none of this will ultimately matter. But as noted before, human beings have a habit of making the impossible possible. Maybe we can even solve the climate crisis soon ... and possibly without the help of a super-intelligent computer.
END