Cover Story

The Future of AI, According to this Mad Scientist

by Lisa Wirthman

The brainchild of 1950s researchers, AI is now maturing. How can we responsibly lead through the next wave of intelligence?

8 min read


Twenty years ago, IBM’s Deep Blue computer shocked the world by defeating world chess champion Garry Kasparov, who experienced firsthand the beginning of competition between humans and machines. Standing 6 feet 5 inches tall and weighing close to 4 tons, Deep Blue was a formidable competitor. Yet what was remarkable to the general public was not Deep Blue’s size but the machine’s intelligence. Magazines around the world wondered, “Was this the brain’s last stand?”

Today, “Mad” Max Tegmark, an MIT physicist known for his unorthodox ideas, isn’t quite convinced that robots will overpower humans — at least, not yet.

“AI is better than any human at narrow tasks such as arithmetic, but it can’t even do what any healthy child can: learn to get quite good at any one thing with sufficient effort,” he explains.

Still, Tegmark encourages all of us to start thinking about a future where super-intelligent machines can outsmart us at every task. The brainchild of computer scientists in the 1950s, AI is going through a massive growth spurt, reinventing how we think, live and work. It is also fueling widespread concerns that machines will take over human jobs, or worse, turn into a robot super-species.

Co-founder and president of the Future of Life Institute, author of “Life 3.0: Being Human in the Age of Artificial Intelligence,” and the brains behind more than 200 technical papers, Tegmark is an authority on all things AI. For him, the question remains, “Will AI one day be smarter than its parents?”


A Mind of Its Own

Currently, AI — the capability of a machine to imitate intelligent human behavior — allows computers to perform a broad range of tasks.

AI allows people to track and enhance their sleep, schedule business meetings and order new socks through a voice assistant. For companies, it is making decisions about stock market trades, ordering inventory and redefining the hiring process (no more sifting through paper resumes). AI is also advancing science and healthcare, including discovering new planets and diagnosing eye disease.

In 2018, what’s known as narrow AI is designed to complete the specific tasks it’s assigned, like operating a self-driving car or feeding more relevant content to a social media feed, but it’s already beginning to do much more. Developments like deep learning (a subset of AI that uses data processing and pattern recognition to make unscripted decisions) give us a glimpse into the future of AI.

The step beyond deep learning is a concept called “artificial general intelligence” (AGI), where machines can sense, reason and learn a broad range of tasks all on their own. With AGI, computers don’t just do what they’re told; they become adaptive, assigning themselves new tasks based on applied learning. (If you can tie your shoes, for example, general intelligence says you will be able to tie other types of knots, say, on a boat or to wrap a present.)

According to Tegmark, machines are far from making this leap and thinking on their own. Until artificial intelligence picks up cognitive functions such as adaptive learning, Tegmark notes that today’s technology is more likely to manipulate than annihilate you.

“Instead of fearing assassin robots that try to terminate us,” writes historian Yuval Noah Harari in a review of Tegmark’s book, “[Tegmark shows us] we should be more concerned about hordes of bots who know how to press our emotional buttons better than our mother … .”

In addition to manipulating our feelings, this advanced AI can be used to redirect our spending patterns, alter our news intake and even influence our philosophical and political ideals. Yet for AI to reach adolescence, Tegmark says, it will first need to rely on its deep-learning neural capabilities to develop a mind of its own.


“Everything I love about civilization is the product of intelligence. If we can amplify our human intelligence with AI and solve today’s greatest problems, humanity might flourish like never before.”

“Mad” Max Tegmark

Life 3.0

Perhaps part of why Mad Max Tegmark has earned his nickname lies in his belief that intelligence is not limited to biological organisms.

“I define intelligence very inclusively, simply as the ability to accomplish complex goals, because I want to include both biological and artificial intelligence,” Tegmark says. “I want to avoid this carbon chauvinism idea that you can only be smart if you’re made of meat.”

In his book, Tegmark maps three evolutions of intelligence. If Life 1.0 is a basic biological form, like bacteria, that is preprogrammed, then Life 2.0 is a human whose physical “hardware” takes years to evolve but whose intelligence is constantly reprogrammed through learning.

From the printing press to the internet to developments in medicine and leisure, Life 2.0 has proven its ability to adapt during the course of a lifetime. At the same time, humans’ slow biological evolution, our hardware, fundamentally limits our growth. No one can live for a million years, memorize all of Wikipedia or enjoy spaceflight without a spacecraft.

The most sophisticated stage of intelligence, says Tegmark, is Life 3.0, which can not only design its own software through learning, but also update its hardware. Implanting artificial knees and pacemakers may classify humans as Life 2.1, but we can’t dramatically change our DNA, Tegmark says. Free of the limitations of a biological body and therefore a lifespan, the next level of life, in theory, will be able to continuously learn and physically adapt.

“Life 3.0 is the master of its own destiny,” Tegmark writes, “finally fully free from its evolutionary shackles.”

Many AI researchers believe Life 3.0 will arrive in this century, which is why Tegmark thinks it’s critical for innovators to consider their role in this transition. “I’m optimistic that we can create an inspiring future with AI, but it won’t happen automatically, so we need to plan and work for it,” he says. “If we get it right, AI might become the best thing ever to happen to humanity.”

For Tegmark, getting AI right means developing the wisdom to manage evolving technology. Rather than scaremongering, Tegmark advocates for understanding potential technological dilemmas.

An example: Let’s say a large civilian aircraft is programmed not to fly into stationary objects, to keep passengers safe. But what if it is programmed to do something devastating instead? Or what if, one day, super-intelligent machines reinterpret our goals? For instance, what if an AI controller decides the best way to avoid colliding with a stationary object is to destroy the aircraft entirely?

It’s for this reason Tegmark and other AI leaders emphasize the role of ethics — teaching machines to adopt goals that benefit humanity.

“Now that our machines are getting smarter, it’s time for us to teach them limits,” Tegmark says. “Any engineer designing a machine needs to ask if there are things that it can but shouldn’t do and consider whether there’s a practical way of making it impossible for a malicious or clumsy user to cause harm.”

There’s a potential point of no return in AI’s childhood, says Tegmark, which is why we must teach machines how to learn, adopt and retain our goals before they surpass us as unruly teenagers. “A super-intelligent AI will be extremely good at accomplishing its goals,” he says, “and if those goals aren’t aligned with ours, we’re in trouble.”

While it’s impossible to determine how different companies and countries will evolve their own sets of ethics, what’s critical for Tegmark is that we try.

“We also don’t know whether our kids will retain our goals if they grow up to be smarter than us,” he says. “That doesn’t mean we shouldn’t be responsible parents and try our best to teach them our best values.”

Creating an Ethical Future

For now, one of the greatest challenges in AI is that the technology develops more quickly than safety research and regulation. That’s why Tegmark believes it’s best to focus our energy on catching up with, rather than halting, innovation.

“It’s much easier to win the race by accelerating AI safety research, which [today] gets vastly less attention and funding,” he says.

The Future of Life Institute is doing its part to contribute to this research and consensus. Tegmark’s organization sponsored two conferences, one in 2015 and another in 2017, to bring together academic and industry leaders such as Elon Musk and Larry Page to discuss AI safety. At the 2017 conference, more than a thousand AI researchers from around the globe signed the Asilomar AI Principles document. The agreement offers 23 guiding principles for AI, including the need to build safety standards into AI that align with human values.

And while Tegmark definitely thinks robots will someday be smarter than humans, he’s more focused on what robots will do than on what they will feel. Worries about whether robots have consciousness (à la “Westworld”), he says, are irrelevant when evaluating AI risks. It’s critical to remember that you don’t need consciousness to have a goal. (A heat-seeking missile has a mission and is a safety risk but has no consciousness.)

Of course, the treatment of future sentient robots is a question philosophers and human rights organizations have debated and will continue to debate. The Nonhuman Rights Project, a Florida-based nonprofit that advocates for chimpanzees, gorillas and orangutans, believes the same principles should apply to robots.

Steven Wise, from the organization’s legal team, told NBC News, “We should have the same sort of moral and legal responsibilities toward [robots] that we’re in the process of developing with respect to nonhuman animals.”

Nonetheless, Tegmark advises not leaving all the decisions to AI researchers, nonprofits or philosophers: We need to start collectively talking about our own goals for AI’s future with our families, friends, colleagues and elected officials. Important policy considerations for AI include funding AI safety research, negotiating an international treaty to condemn deadly AI weapons, and considering how wealth created by AI advances can be equitably shared.

By sparking a mainstream debate about AI’s future, we can all use our collective cognitive and emotional human intelligence to create a beneficial future for AI, Tegmark says.

“Everything I love about civilization is the product of intelligence,” he says. “If we can amplify our human intelligence with AI and solve today’s greatest problems, humanity might flourish like never before.”