
AI: Moving from Recognition to Understanding, Plus Heavy Hitters Invest a Billion

Millennials seem to think that algorithms can do anything.

AI Graduating from Recognition to Understanding

It would probably scare the hell out of us to truly understand how thoroughly algorithms are, invisibly, part of our daily lives. Algorithms, according to this seemingly good, interesting (and non-commercial) Wikipedia article, can of course calculate, process data, and automate ‘reasoning.’ Will algorithms lead to AI that could potentially turn humans into pets? Perhaps, but—programmatically—I would think we’re going to need something beyond an algorithm, or at least a new definition of what an algorithm is. The Wikipedia piece continues with a bit more about how an algorithm’s instructions work…

Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing ‘output’ and terminating at a final ending state.

Successive states are finite and there’s a final ending state, which is a lot different from biological intelligence. Sure, humans and other biological intelligences learn and reason along the lines of A+B=C as a start, but A or B often turn out to be a little (or a lot) different from what we’ve previously known. So, the output might turn out to be something completely fresh and previously unaccounted for. In my view, anyway, that’s what makes biological intelligence unique—its ability to adapt, change, and perhaps reach different and more valuable conclusions after seemingly identical sets of As and Bs are encountered. Algorithms are, by default, limited to the knowledge, experience, logic, and beliefs of their programmers. (Things get particularly scary when you think about that part.)
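To make that definition concrete, here is a minimal sketch in Python using Euclid’s GCD procedure, a textbook example of an algorithm: an initial state, a finite number of well-defined successive states, and a final ending state that yields the output. Note the rigidity the quote describes: identical inputs always produce the identical output.

```python
# A minimal illustration of the Wikipedia definition quoted above:
# Euclid's GCD algorithm. From an initial state (a, b) it steps through
# a finite number of well-defined successive states and terminates.

def gcd(a: int, b: int) -> int:
    state = (a, b)                 # initial state, from the initial input
    while state[1] != 0:           # each loop pass is one successive state
        state = (state[1], state[0] % state[1])
    return state[0]                # final ending state: the 'output'

print(gcd(48, 18))  # 6 -- the same A and B yield the same C, every time
```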

So (as a start) adaptability, reason, and understanding—and the ability to evolve, and essentially match or exceed our own capabilities—would have to be baked in. At that point, critical, non-algorithmic elements come into play. For example, the meaning of a seemingly basic children’s fable can change radically depending upon one’s environment and beliefs. An algorithm, for instance, could identify and discern the difference between a little pig and a big, bad wolf—and offer thousands of examples of which is which and the variations between them, the environments in which they live, their diets, lifespans, etc.—but it’s not going to understand the moral of the story—or morality.
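To see that gap in code, here is a hypothetical sketch (the classifier, scene IDs, labels, and scores are all invented for illustration): a recognition system reports what is in a scene, with confidence scores, and nothing more.

```python
# Hypothetical sketch -- the scene IDs, labels, and scores are invented.
# A recognition system can say WHAT is in a scene, but its output has
# no slot for deceit, motive, or a moral.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # what the model recognizes
    confidence: float  # how confident it is

def recognize(scene_id: str) -> list[Detection]:
    # Stand-in for a trained classifier; real systems return the same
    # kind of thing: labels plus scores, nothing more.
    canned = {
        "fable_scene_7": [
            Detection("pig", 0.97),
            Detection("wolf", 0.94),
            Detection("brick_house", 0.91),
        ],
    }
    return canned.get(scene_id, [])

for d in recognize("fable_scene_7"):
    print(f"{d.label}: {d.confidence:.2f}")
# Labels and scores come out -- nowhere does 'the wolf is deceitful'
# appear, which is the understanding the story actually turns on.
```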

Heck, an algorithm wouldn’t understand a wolf in sheep’s (or a pig’s) clothing. (Imagine trying to get an AI to understand Christmas. Or even Star Wars: The Force Awakens. Good luck.) That’s why I don’t think the robot/AI apocalypse is going to happen for a long time. We’ll have to imbue a ‘thing’ with the ability to understand—and understanding means lots and lots of extremely variable context and nuance. We’re a long way off from that. Or, are we?

According to this fascinating read at Technology Review…

Today, that starts to change thanks to the work of Makarand Tapaswi at the Karlsruhe Institute of Technology in Germany and a few pals, who have put together a database about movies that should serve as a test arena for deep learning machines and their abilities to reason about stories.

Connecting my basic yammering above, Tapaswi and friends are starting with something we all know and love: movies…

Movies clearly show information that can answer questions of the type “Who did what to whom?” But they do not always contain the information to answer questions about why things happen, for which additional knowledge of the world is sometimes necessary.
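That split shows up directly in how such a benchmark is built. Tapaswi’s MovieQA-style setup pairs each question with multiple-choice candidate answers; the two hypothetical items below (questions and answers invented for illustration) contrast a ‘who did what to whom’ question, answerable from the frames alone, with a ‘why’ question that requires outside knowledge of motives.

```python
# Hypothetical records in the spirit of the MovieQA benchmark described
# above (the real dataset pairs each question with candidate answers;
# these two items are invented for illustration).

movie_qa_items = [
    {   # 'Who did what to whom?' -- answerable from what is on screen
        "movie": "Star Wars: The Empire Strikes Back",
        "question": "Who cuts off Luke's hand?",
        "candidates": ["Han Solo", "Darth Vader", "Yoda",
                       "Boba Fett", "Lando"],
        "correct": 1,
    },
    {   # 'Why?' -- requires knowledge of motives the frames alone lack
        "movie": "Star Wars: The Empire Strikes Back",
        "question": "Why does Vader want Luke taken alive?",
        "candidates": ["Revenge", "To turn him to the dark side",
                       "A bounty", "The Emperor's orders", "He doesn't"],
        "correct": 1,
    },
]

for item in movie_qa_items:
    print(item["question"], "->", item["candidates"][item["correct"]])
```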

The Turing Test (a machine’s ability to converse naturally with—and fool—a person so they can’t tell it’s a machine) comes to mind.

Alan Turing was so far ahead of his time in 1950 that, apparently, we’re only beginning to understand what needs to be done in order to create a machine that can consistently pass the Turing Test. According to this article at the BBC, for instance, a machine has, in fact, passed the Turing Test, which…

…is successfully passed if a computer is mistaken for a human more than 30% of the time during a series of five-minute keyboard conversations.

Last year, ‘Eugene Goostman,’ a computer simulation of a 13-year-old Ukrainian boy, made 33% ‘of the judges at the Royal Society in London’ believe that it was human.
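Since the criterion is just a threshold, it reduces to a one-line check. A minimal sketch in Python (the 10-of-30 judge split is assumed for illustration; it is consistent with the reported 33%):

```python
# The BBC's stated criterion as a one-line check: a program passes if it
# is mistaken for a human more than 30% of the time across a series of
# five-minute keyboard conversations.

def passes_turing_test(judges_fooled: int, total_judges: int) -> bool:
    return judges_fooled / total_judges > 0.30

# Eugene Goostman reportedly fooled 33% of the judges; a 10-of-30 split
# (assumed here for illustration) is consistent with that figure.
print(passes_turing_test(judges_fooled=10, total_judges=30))  # True: ~33% > 30%
```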

Was that one a fluke or something valuable and remarkable? I’m thinking the former, though Eugene did pass. I’m also thinking that Turing set the bar at a realistic starting point. It seems Tapaswi is developing ways to make AIs better and ‘smarter’—and at least beginning to define the size and other parameters of the related databases—in order to pass the Turing Test far more effectively and consistently. Sure, fooling 33% of those judges is important, but it’s not going to change the world. An AI that could fool, say, 51% of anyone, anywhere could change the world.

A Billion Dollars Pledged to Human-Positive AI

Prominent scientists and technologists including Elon Musk, Steve Wozniak, and Stephen Hawking have already warned about the real possibility of AI going bad and, well, making the world, um, difficult for humanity. (Think risks, threats, and Terminator.) Instead of focusing on, for instance, legislation (which needs to happen sooner rather than later, considering the many Dr. Evils of the real world), the newly formed OpenAI research group aims to…

“…advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return,”

OpenAI’s introductory statement continues…

Since our research is free from financial obligations, we can better focus on a positive human impact. We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.

So far, OpenAI’s founders have committed a billion dollars to the effort…

OpenAI’s research director is Ilya Sutskever, one of the world experts in machine learning. Our CTO is Greg Brockman, formerly the CTO of Stripe. The group’s other founding members are world-class research engineers and scientists: Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba. Pieter Abbeel, Yoshua Bengio, Alan Kay, Sergey Levine, and Vishal Sikka are advisors to the group. OpenAI’s co-chairs are Sam Altman and Elon Musk.

Peter Thiel and Amazon Web Services have also donated.

No concerns about generating a financial return, huh? This’ll be interesting.
