AI probably won’t surpass human intelligence anytime soon

Ricky S
4 min read · Oct 22, 2021


As we head into the third wave of major investment in AI, now is a good time to reflect on the field's achievements from a historical viewpoint. In the 1960s, early AI researchers frequently predicted that human-level intelligent machines would arrive within just ten years. That era's AI was based on symbol-based logical reasoning, implemented on what today look like excruciatingly slow digital computers. Those same researchers examined neural networks and decided against them.

AI's second era, in the 1980s, was fueled by rule-based expert systems, a more heuristic form of symbol-based logical reasoning, and by a resurgence of neural networks driven by new training algorithms (notably backpropagation). Once again there were feverish prophecies of the end of human dominance in intelligence.

The third and present age of AI arrived in the early 2000s, driven by new symbolic-reasoning systems built on algorithms for a class of logic problems known as 3SAT, and by another advance, simultaneous localization and mapping (SLAM), a technique for incrementally building a map of the world while a robot moves through it.
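Full SLAM is a deep subject, but the incremental-mapping idea at its core can be sketched in a few lines. The toy Python snippet below (the grid size, the sensor returns, and the `integrate_scan` helper are all illustrative inventions, not any shipping robot's code) shows how repeated sensor scans, taken as a robot moves, accumulate into one shared map. It also deliberately simplifies: it assumes the robot's pose is already known, whereas real SLAM must estimate the pose and the map simultaneously.

```python
import numpy as np

# Toy occupancy-grid mapping: the "mapping" half of SLAM.
# Hypothetical sketch only; assumes the robot's pose is known
# at every step, which real SLAM must itself estimate.

grid = np.zeros((50, 50))  # occupancy evidence for each map cell

def integrate_scan(grid, pose, hit_offsets):
    """Accumulate evidence that cells hit by the range sensor
    are occupied, relative to the robot's current (x, y) pose."""
    x, y = pose
    for dx, dy in hit_offsets:
        cx, cy = x + dx, y + dy
        if 0 <= cx < grid.shape[0] and 0 <= cy < grid.shape[1]:
            grid[cx, cy] += 1.0  # more evidence this cell is occupied

# As the robot moves, each new scan refines the same shared map.
trajectory = [(10, 10), (11, 10), (12, 10)]
for pose in trajectory:
    fake_hits = [(3, 0), (0, 2)]  # stand-in sensor returns, for illustration
    integrate_scan(grid, pose, fake_hits)

print(grid[13, 10])  # 1.0: evidence from the scan taken at (10, 10)
```

The incremental character is the point: the map is never rebuilt from scratch; each scan simply adds evidence to the cells it touches, which is why a robot can keep mapping while it keeps moving.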

With the rise of neural networks trained on enormous data sets in the early 2010s, this wave gained powerful new momentum. It quickly grew into a tidal wave of promise, hype, and lucrative applications.

A timeline of AI milestones from 1950 to 2020. Source: Google Ngrams

Regardless of your feelings about AI, the reality is that almost every successful deployment has one of two features: a person somewhere in the loop, or a cost of failure so low that it hardly matters when the system gets things wrong. In 2002, iRobot, a company I cofounded, introduced the Roomba, the first mass-market autonomous home-cleaning robot, at a price point that severely limited how much AI we could give it. That limited AI wasn't a problem, though. In our worst failure scenarios, the Roomba missed a patch of floor and failed to pick up a dust ball.

That same year, we began deploying the first of tens of thousands of robots to Afghanistan and Iraq to help troops disarm improvised explosive devices. Failures there could cost lives, so there was always a human in the loop giving supervisory commands to the robot's AI systems.

Today, AI systems decide on their own which advertisements to show us on the Web. Poorly chosen ads are not just common; they are ubiquitous. Search engines, also powered by AI, present us with a range of options so that we can quickly spot and route around their errors. On dating sites, AI algorithms choose whom we see, though thankfully these services do not arrange our marriages without our consent.

Regardless of what the marketing people say, every self-driving system so far deployed on production automobiles is Level 2: it requires a human driver to keep their hands on the wheel and stay alert at all times, ready to take over the instant the system makes a mistake. And when drivers have failed to pay attention, the consequences have already been deadly.

Almost every successful AI implementation has one of two features: a person in the loop or a minimal cost of failure if the system fails.

Those aren't the only disastrous failures of AI systems operating with no one in the loop. People have been wrongfully arrested because face-recognition systems misidentify members of ethnic minorities, making errors that no attentive human would make.

Sometimes we are in the loop even when the consequences of failure aren't dire. AI technologies power the voice and language understanding of our smart speakers and of the entertainment and navigation systems in our cars. We, the customers, swiftly adapt our language to each AI agent, quickly learning what it can and cannot grasp, much as we do with our children or elderly parents. The agents are cleverly engineered to give just enough feedback on what they've heard without becoming tiresome, while still flagging any critical issues that need our attention. We, the users, are the ones kept in the loop. Call it, if you will, the ghost in the machine.
