“I've come up with a set of rules that describe our reactions to technologies:
- Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
- Anything that's invented between when you’re fifteen and thirty-five is new and exciting and revolutionary, and you can probably get a career in it.
- Anything invented after you're thirty-five is against the natural order of things.” –Douglas Adams
I recently read Tim Urban’s blog post “The AI Revolution: The Road to Superintelligence,” and it blew my mind. You see, if artificial intelligence (AI) evolves to the point described in Urban’s story, then AI machines will become way more intelligent than the smartest humans on the planet.
This notion gives some people nightmares (myself included) while others say it’ll change our lives for the better.
Urban argues that it’s impossible to predict what will happen after the AI superintelligence revolution has arrived because our limited human brains can’t quite gauge it. Just for fun, let’s discuss what the experts believe might happen.
Two Sides to Every Story
Imagine it’s the year 2030. You hop in your fully licensed, AI driverless car and ask it to make a right at the next stop sign to take you to the nearest Burger King. But instead, it goes all sentient on you and turns left to take you to SaladsRUs – because it can tell that you’re a little heavier than usual and it wants you to eat healthier. Kind of a jerk move, if you ask me.
Now, that’s not exactly a nightmare-inducing scenario. But some highly intelligent scientists and technologists – including Stephen Hawking, Bill Gates, and Elon Musk, who like to hang out in what Urban calls “Anxious Alley” – believe that far worse things (think the Terminator movies) could happen in the future.
Perhaps many people’s anxious reactions to new AI technologies simply fit with Douglas Adams’ theory described above?
However, Urban’s post also discusses the theories of optimists like Ray Kurzweil (who is also over the age of 35), Director of Engineering at Google, who prefer to spend their time over in “Confident Corner” – believing everything is going to be AI-OK.
There will always be two ways to look at new technologies. The invention of broadband internet gave us a new way to stay connected with colleagues, friends, and family – and it gave trolls and cyberbullies a platform upon which to unleash their fury of hate, harassment, and death threats.
So, let’s look at both sides of the AI argument.
Are We ‘Summoning the Demon?’
According to Urban’s article, there are three levels of AI, each of which can seem more terrifying than the last:
1. ANI (Artificial Narrow Intelligence)
This is the phase we are in right now. ANI includes technologies like Siri, Amazon recommendation algorithms, and even Google Search.
So far, none of this stuff is scary – this level of AI isn’t yet on par with human intelligence.
Image via: The Guardian
Right now, computer scientists are working on ways to train “artificial neural networks” to become more human in their thinking. For example, Google is training an image recognition neural network to identify objects in images. But it’s still being perfected, as you can see from the trippy example (above) that the neural network spits back when trying to emphasize features in an image.
Nicholas Mulder, Engineering Lead at Shopify Waterloo, explains:
“This is the way children learn. But we don’t yet understand how it works. Although we do know that most of this learning is done through inference. For example, a child doesn’t have to see every cup in the world to be able to identify drastically different cups. They may call a mug a cup at first, but after a couple of corrections, they learn to spot the difference.”
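That correction-driven learning loop can be sketched as a toy single-neuron classifier – the simplest kind of artificial neural network. This is purely illustrative (it is not Google’s image-recognition network, and the “cup” features are invented): each time the neuron is corrected on a labeled example, it nudges its weights toward the right answer, much like the child learning to tell a mug from a glass.

```python
# Toy "learning by correction": a single artificial neuron (a perceptron)
# adjusts its weights whenever its prediction is corrected.
# Features and labels here are hypothetical, for illustration only.

def train(examples, epochs=20, lr=0.1):
    """examples: list of (features, label) pairs, label is 0 or 1."""
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            # Predict, then nudge the weights by the size of the error.
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction          # the "correction"
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, features):
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

# Toy features: (has_handle, is_tall). Label 1 = "mug", 0 = "not a mug".
examples = [((1, 0), 1), ((1, 1), 1), ((0, 1), 0), ((0, 0), 0)]
weights, bias = train(examples)
print(predict(weights, bias, (1, 0)))  # a new, slightly different mug → 1
```

After a few rounds of corrections, the neuron classifies mug-like objects it has never seen before – the inference step Mulder describes, in miniature.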
2. AGI (Artificial General Intelligence)
We still have a long way to go to reach this next level – although optimistic estimates suggest we can get there by 2030. At this point, the technology will be as powerful as the human brain – assuming we first figure out how the human brain works.
In this new reality, you would be able to have snarky conversations with AGI technologies, just like how Tony Stark interacts with Jarvis from the Iron Man movie franchise.
However, the “anxious alley” dwellers warn that before we even get to this point, we need to be careful about how we teach computers to think like humans. Because once the technology reaches this level, it won’t be long until computers can make the final leap on their own, possibly within a matter of hours or days, to the most mind-blowing level.
That’s why in January of last year, a group of leading technological experts signed “an open letter on artificial intelligence” which calls for further research into how AI could impact our society in the future.
3. ASI (Artificial Superintelligence)
This is where computers become exceedingly more intelligent than human beings.
While the technology could help us cure cancer, solve global warming and more, it is also possible (if we are not careful) for machines to evolve into demi-gods who may either play nice with humans or make us their slaves (or worse).
I say “may” because humans tend to project our personalities and characteristics onto everyday objects (as seen in many Disney movies like Cars, Planes and more).
So, an ASI OS or robot might not act out at all. Just because it can think like a human doesn’t mean it’ll have the same quality of intelligence or emotional reactions.
However, because ASI won’t necessarily have “feelings” or a conscience, Urban explains there is a threat that an ASI-level machine could destroy us in pursuit of completing its assigned task as efficiently as possible. It would only be “ambitious” because we told it to be – without educating it to respect human life (and, hopefully, worship us as its maker).
Elon Musk, who famously likened AI research to “summoning the demon,” warns in the video below that “the emphasis needs to be on safety” to avoid a worst-case scenario.
However, even if an AI apocalypse never happens, there is also major concern that new technologies could cause a global economic implosion.
“An Oxford University survey suggests 47 percent of the world’s jobs will be taken by robots in the coming decades,” explains Simon Worrall in this National Geographic article.
Worrall goes on to describe how fewer jobs mean less demand for products and, inevitably, increased poverty.
On the other hand, optimists argue we’ve been here before. For example, after the advent of early computers, people were afraid we would all be out of work today. But humanity evolved, right?
What if AI Just Becomes Freakin’ Cool?
“The future of AI can be terrifying, but it can also be really cool,” says Shopify’s Nicholas Mulder. If all goes well, and we successfully develop AI technologies that can peacefully co-exist with humans, the upside is so profound that life as we know it will be unrecognizable 30 or 40 years from now.
What if ASIs could help us cure diabetes or heart disease? Kurzweil argues it’s all possible.
In fact, he believes advancements in AI and nanotechnology (the manipulation of matter at the molecular scale) could help us achieve immortality by the year 2045. “Human biology could one day be treated as software,” he says.
We’ll eventually be able to re-program that software to do things that would seem like a miracle today. Here’s a video explanation to store in what Kurzweil calls your “mind file” which he believes you will be able to back up at the end of each day in the future.
Meanwhile, humans will soon have AI personal assistants that can cater to our every whim. Mulder explains that the movie Her is an interesting example of what is possible in the near future. Although hopefully the AI OSes won’t decide to ditch us once they become superintelligent.
“We’re teaching something to learn on its own, and it’s getting faster and faster. In the future, multiple machines will do this in parallel,” he explains. “So, what if an OS like Siri could plug into more intelligent APIs around the globe? Then we’d have a personal assistant to which we can outsource all of our tedious tasks.”
For example, if you like to get products on sale, “you’d have to search the whole internet of things to find all of the deals for the day,” says Mulder. There are companies that do it through email right now. But that’s not the best experience, and you end up with a lot of marketers trying to push messages to you through those emails.
But Mulder says very soon, we will see AI-driven bots start to interact with us in a chat-based system (e.g. instant messenger, text messages or platforms like WeChat). Ideally, some of these will be bots users can control and teach.
“Having an interruption there might be more appropriate,” says Mulder. “You could then educate the bot to alert you when something you want is on sale and is going fast. If there is enough trust built up, the bot could initiate small purchases for you (like automatically buying you more milk or toilet paper when you are running low at home) and might even have access to your digital wallet.”
Likewise, the bot could subscribe to specific feeds for you, versus a company trying to market to you.
“Building trust will be the key to its success,” he explains. “But we’re not there yet – we’ll need to buy bots from reputable vendors that we trust. Likewise, we’ll have to trust that our data is safe, and that the bot is able to make decisions in ways we understand.”
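The teachable bot Mulder describes could look something like the sketch below. Everything here is hypothetical – the class name, the price-feed hook, and the trust threshold are invented for illustration, not any real product’s API – but it captures the two behaviors he outlines: alert the user about a taught deal, and auto-purchase only small, pre-trusted items.

```python
# A toy sketch of a "teachable" shopping bot: the user registers rules,
# and the bot only auto-purchases when the item is cheap enough AND
# the user has granted it that level of trust.
# All names and APIs here are hypothetical.

class ShoppingBot:
    def __init__(self, auto_buy_limit=0.0):
        # auto_buy_limit: max price the bot may spend without asking.
        self.auto_buy_limit = auto_buy_limit
        self.watches = {}   # item -> alert price taught by the user

    def teach(self, item, alert_price):
        """The user 'educates' the bot: alert me below this price."""
        self.watches[item] = alert_price

    def on_price_update(self, item, price):
        """Called by a (hypothetical) deals feed; returns the bot's action."""
        target = self.watches.get(item)
        if target is None or price > target:
            return "ignore"
        if price <= self.auto_buy_limit:
            return "buy"      # small, pre-trusted purchase (e.g. milk)
        return "alert"        # bigger deal: interrupt the user instead

bot = ShoppingBot(auto_buy_limit=5.00)
bot.teach("milk", 4.00)
bot.teach("headphones", 80.00)
print(bot.on_price_update("milk", 3.50))         # prints "buy"
print(bot.on_price_update("headphones", 75.00))  # prints "alert"
```

The key design point matches Mulder’s trust argument: the bot’s spending authority is an explicit, user-set limit, so its decisions stay understandable.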
However, new machine learning tools are already popping up to give us a glimpse of what is possible.
Check out Digit, an app you can connect to your checking account, giving it permission to learn your income and spending habits so it can find small amounts of money to safely set aside for your long-term savings plan.
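The idea behind that kind of tool can be sketched with a simple heuristic. Digit’s actual model is proprietary; this toy version (invented numbers, invented function) just shows the shape of the problem – skim a small fraction of whatever sits safely above a cushion, judged by the lowest recent balance so it never risks an overdraft.

```python
# A toy version of the "find safe money to save" idea.
# Purely illustrative; not Digit's real algorithm.

def safe_to_save(daily_balances, cushion=100.0, fraction=0.1):
    """Set aside a fraction of whatever sits above a safety cushion,
    judged by the LOWEST recent balance so we never overdraw."""
    worst_case = min(daily_balances)
    surplus = worst_case - cushion
    return round(max(0.0, surplus) * fraction, 2)

print(safe_to_save([420.0, 385.5, 510.0]))  # 10% of (385.50 - 100) = 28.55
print(safe_to_save([90.0, 120.0]))          # balance dipped too low: 0.0
```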
A Perfect Storm of Data and Trust
Mulder is seeing a lot of students in hackathons build decentralized peer-to-peer (P2P) chat clients – enabling your personal data to be held in bots, instead of closed platforms like Facebook. The trend is also flowing into the startup world.
In fact, this Verge article predicts that “within 5 years, every business will be programming its own bots” to create “Slackbot-style messaging systems to update their hours of operation, current menu, or inventory, and so on.”
However, beyond everything we can imagine today, “there is no way to know what ASI will do or what the consequences will be for us,” explains Urban.
In my humble opinion, the best-case scenario is that the AI community takes its time to get it right. Yes, according to optimistic estimates, the technology could reach its potential in 30 or 40 years. But is that enough time to iron out all the kinks? What’s the rush?
Like most of the people over in Anxious Alley, I am cautiously optimistic and hope I am around to see how it all transpires.
About The Author
Andrea Wahbe is a freelance B2B marketing strategist and corporate storyteller who writes about Canadian SMEs, marketing, and digital media trends. Follow her on Twitter.