AI Graduating from Recognition to Understanding
It would probably scare the hell out of us to truly understand how transparently algorithms have become part of our daily lives. Algorithms, according to this seemingly good, interesting (and non-commercial) Wikipedia article, can of course calculate, process data, and automate "reasoning." Will algorithms lead to AI that could potentially turn humans into pets? Perhaps, but, programmatically, I would think we're going to need something beyond an algorithm, or at least a new definition of what an algorithm is. The Wikipedia piece continues with a bit more about how an algorithm's instructions work…
Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state.
Successive states are finite and there's a final ending state, which is a lot different than biological intelligence. Sure, humans and other biological intelligences learn and reason along the lines of A+B=C as a start: but A or B often turn out to be a little (or a lot) different than what we've previously known. So, the output might turn out to be something completely fresh and previously unaccounted for. In my view, anyway, that's what makes biological intelligence unique: its ability to adapt, change, and perhaps reach different and more valuable conclusions after seemingly identical sets of As and Bs are encountered. Algorithms are, by default, limited to the knowledge, experience, logic, and beliefs of their programmers. (Things get particularly scary when you think about that part.)
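To make that Wikipedia definition concrete, here's a minimal sketch in Python of Euclid's greatest-common-divisor procedure, the textbook example of an algorithm: an initial state, a finite number of well-defined successive states, and a guaranteed final ending state. (The example is mine, not Wikipedia's.)

```python
def gcd(a, b):
    """Euclid's algorithm: a finite sequence of well-defined states.

    Initial state: the pair (a, b). Each loop iteration is a
    successive state; termination is guaranteed because b strictly
    decreases toward zero.
    """
    while b != 0:          # well-defined transition rule
        a, b = b, a % b    # next successive state
    return a               # "output" at the final ending state

print(gcd(48, 18))  # -> 6; the same inputs always yield the same output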
So (as a start) adaptability, reason, and understanding, plus the ability to evolve and essentially match or exceed our own capabilities, would have to be baked in. At that point, critical, non-algorithmic elements come into play. For example, understanding the meaning of seemingly basic children's fables could change radically depending upon one's environment and host of beliefs. An algorithm, for instance, could identify and discern the difference between a little pig and a big, bad wolf (and offer thousands of examples of which is which and the variations between them, the environments in which they live, their diets, lifespans, etc.) but it's not going to understand the moral of the story, or morality.
Heck, an algorithm wouldn't understand a wolf in sheep's (or a pig's) clothing. (Imagine trying to get an AI to understand Christmas. Or even Star Wars: The Force Awakens. Good luck.) That's why I don't think the robot/AI apocalypse is going to happen for a long time. We'll have to imbue a "thing" with the ability to understand, and understanding means lots and lots of extremely variable context and nuance. We're a long way off from that. Or, are we?
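To put the pig-versus-wolf point in code: here's a purely illustrative sketch (no real model; the labels and confidence scores are made up) of what a classifier actually hands back. It's a label distribution, nothing more; there's no slot anywhere for a moral.

```python
# Purely illustrative: a classifier's output is a label distribution.
# A real model would compute these scores from the image pixels;
# here they're hard-coded to show the shape of the output.
def classify(image_path):
    return {"little pig": 0.91, "big bad wolf": 0.07, "sheep": 0.02}

scores = classify("three_little_pigs_still.jpg")
best = max(scores, key=scores.get)
print(best)  # -> "little pig"
# Nowhere in this output is there anything about huffing, puffing,
# or why you shouldn't build your house out of straw.
```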
According to this fascinating read at Technology Review…
Today, that starts to change thanks to the work of Makarand Tapaswi at the Karlsruhe Institute of Technology in Germany and a few pals, who have put together a database about movies that should serve as a test arena for deep learning machines and their abilities to reason about stories.
Connecting my basic yammering above, Tapaswi and friends are starting with something we all know and love: movies…
Movies clearly show information that can answer questions of the type "Who did what to whom?" But they do not always contain the information to answer questions about why things happen, for which additional knowledge of the world is sometimes necessary.
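For a feel of what such a database might contain, here's a hypothetical record sketched in Python. The field names and structure are my invention for illustration, not the actual schema of Tapaswi's dataset; the point is the pairing of a clip with a "why" question that pixels alone can't answer.

```python
# Hypothetical movie question-answering record; the schema is
# illustrative, not the dataset's actual format.
qa_record = {
    "movie": "The Wizard of Oz",
    "clip": "scene_042",  # the video segment the question refers to
    "question": "Why does Dorothy want to return to Kansas?",
    "choices": [
        "She misses her family",          # needs knowledge of what 'home' means
        "She is afraid of the Scarecrow",
        "She wants new shoes",
    ],
    "answer_index": 0,
}
# A "who did what to whom" question can be answered from the frames;
# a "why" question like this one needs knowledge of the world.
```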
The Turing Test (a machine's ability to converse naturally with, and fool, a person so they can't tell it's a machine) comes to mind.
Alan Turing was so far ahead of his time in 1950 that, apparently, we're only beginning to understand what needs to be done in order to create a machine that can consistently pass the Turing Test. According to this article at the BBC, for instance, a machine has, in fact, passed the Turing Test, which…
…is successfully passed if a computer is mistaken for a human more than 30% of the time during a series of five-minute keyboard conversations.
Last year, "Eugene Goostman," a computer simulation of a 13-year-old Ukrainian boy, made 33% "of the judges at the Royal Society in London" believe that it was human.
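The arithmetic behind that headline is simple enough to sketch; here it is in Python, assuming a hypothetical panel of 30 judges (the actual count isn't in the quote above):

```python
def passes_turing_test(fooled, total, threshold=0.30):
    """True if the machine is mistaken for a human more than 30% of the time."""
    return fooled / total > threshold

# Hypothetical numbers: 10 of 30 judges fooled is Eugene's reported 33%.
print(passes_turing_test(10, 30))  # -> True; 33.3% clears the 30% bar
```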
Was that one a fluke or something valuable and remarkable? I'm thinking the former, though Eugene did pass. I'm also thinking that Turing set the bar at a realistic starting point. It seems Tapaswi is developing ways to make more AIs better and "smarter" (and at least begin to define the size and other parameters of related databases) in order to much more effectively and consistently pass the Turing Test. Sure, fooling 33% of those judges is important, but it's not going to change the world. An AI that could fool, say, 51% of anyone, anywhere, could change the world.
A Billion Dollars Pledged to Human-Positive AI
Prominent scientists and technologists including Elon Musk, Steve Wozniak, and Stephen Hawking have already warned about the real possibilities of AI going bad and, well, making the world, um, difficult for humanity. (Think risks, threats, and Terminator.) Instead of focusing on, for instance, legislation (which needs to happen sooner rather than later, considering the many Dr. Evils of the real world), the newly formed OpenAI research group aims to…
"…advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return."
The OpenAI group's introductory statement continues…
Since our research is free from financial obligations, we can better focus on a positive human impact. We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.
So far, OpenAI's founders have committed a billion dollars to the effort (links intact)…
OpenAI's research director is Ilya Sutskever, one of the world experts in machine learning. Our CTO is Greg Brockman, formerly the CTO of Stripe. The group's other founding members are world-class research engineers and scientists: Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba. Pieter Abbeel, Yoshua Bengio, Alan Kay, Sergey Levine, and Vishal Sikka are advisors to the group. OpenAI's co-chairs are Sam Altman and Elon Musk.
Peter Thiel and Amazon Web Services have also donated.
No concerns about generating a financial return, huh? This'll be interesting.