Machines that understand human goals

Building machines that better understand human goals is an idea that has brewed in the minds of computer scientists for a while. They wanted to develop an algorithm smart enough to deduce goals and adapt to human planning.

The observer and the agent

Psychologists Felix Warneken and Michael Tomasello conducted an experiment involving an 18-month-old toddler and an adult. The toddler, the observer, watched the adult, the agent, as he tried to put a few books away in a cabinet. But as the agent approached, he kept banging the books against the closed cabinet door. To the experimenters' surprise, the toddler rose to help the man, opening the door so he could place the books inside. The remarkable fact here was that someone as inexperienced as an 18-month-old could observe the scene, understand the problem, and infer a goal: helping the man put the books away. This led scientists to create algorithms that would enable machines to do something similar.

Food for algorithms

Now, the most important part of being human is that we are fallible. We do things, make mistakes, learn, infer, follow our short-term plans, and execute them. Our everyday mistakes steadily shape us into better and smarter humans, and that is exactly what computer scientists want algorithms to do. A machine should observe and absorb our mistakes just as the toddler did, infer our goals, and assist us readily.

MIT's CSAIL on building machines that understand human goals

To induce such intelligence in machines, computer scientists from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) created an algorithm that infers plans even when those plans fail; the failed plans, too, help machines and algorithms get better. This kind of inferential learning could improve many digital assistants, such as Siri and Alexa. Tan Zhi-Xuan, a researcher in MIT's Department of Electrical Engineering and Computer Science (EECS), adds that future algorithms should identify our mistakes and help us rectify them rather than reinforce them.

Bayesian Inference

The team used Gen, an AI programming platform, together with Bayesian inference to build their model for machines that understand human goals. Bayesian inference combines prior uncertainty with new evidence to update beliefs about unknown variables, and it is used extensively in fields ranging from financial risk evaluation to election forecasting.
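To make the idea concrete, here is a minimal sketch of Bayesian goal inference in Python. This is not the team's actual Gen model; the candidate goals, the uniform prior, and the likelihood numbers are all hypothetical, chosen to echo the book-and-cabinet experiment:

```python
# Minimal Bayesian goal inference (illustrative, not the MIT team's model).
# An observer watches an agent's action and updates a belief over which
# goal the agent is pursuing.

def bayesian_goal_update(prior, likelihoods):
    """Apply Bayes' rule: posterior is proportional to likelihood * prior."""
    unnormalized = {g: prior[g] * likelihoods[g] for g in prior}
    total = sum(unnormalized.values())
    return {g: p / total for g, p in unnormalized.items()}

# Hypothetical candidate goals with a uniform prior belief.
prior = {"shelf": 1 / 3, "cabinet": 1 / 3, "table": 1 / 3}

# Hypothetical likelihoods of the observed action (walking toward the
# cabinet with an armful of books) under each goal hypothesis.
likelihoods = {"shelf": 0.2, "cabinet": 0.7, "table": 0.1}

posterior = bayesian_goal_update(prior, likelihoods)
print(posterior)  # probability mass shifts toward "cabinet"
```

After a single observation, the belief already concentrates on the cabinet, which is exactly the kind of update an observer like the toddler performs intuitively.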

A professor at the University of California at Berkeley also reveals that researchers are progressively moving from the standard algorithms to a newer kind. In the standard version, a fixed, already-known objective is given to the machine; in the newer model, the machine does not know in advance what is expected of it. Working out how to infer goals thus becomes an imperative discussion for computer scientists. The new model also gave 75 percent more accurate results than the existing one.

Example of machines that understand human goals

The inspiration for the new algorithm lies in observing human tendencies, especially how humans plan sub-optimally. We chalk out partial plans, execute them, and then plan further actions from there. An important aspect of such partial planning is that, though it is partial and prone to mistakes, it eases the analytical or cognitive load on our brains. Similarly, when trying to make a machine intelligent, transferring the entire load at once may not produce the desired results. Planning sub-optimally, executing, inferring, and learning at the same time is a more feasible approach. A detailed example of this can be studied at MIT.

SIPS: Sequential Inverse Plan Search

An algorithm called SIPS, or Sequential Inverse Plan Search, infers goals from partial plans and eliminates unlikely hypotheses in the very early stages of an event. It considers both fallible and infallible future actions at the same time, so detecting possible failures well in advance becomes crucial for offering good assistance. Researchers Vikash Mansinghka and Joshua Tenenbaum also maintain that one does not have to think far ahead of the doer's actions to infer goals. This research may be only a small step towards building machines that understand human goals, but future algorithms are expected to handle a broader spectrum of goals and serve broader purposes. The researchers presented their paper at the Conference on Neural Information Processing Systems (NeurIPS 2020).
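The core loop of sequential goal inference with early pruning can be sketched as follows. This is an illustrative toy, not the published SIPS implementation; the `toy_likelihood` function and the pruning threshold are assumptions made for the example:

```python
# Illustrative sketch of sequential goal inference with pruning:
# maintain weighted goal hypotheses, reweight them after each observed
# action, and eliminate unlikely hypotheses in the early stages.

def sequential_goal_filter(goals, observations, likelihood_fn, threshold=0.05):
    """Sequentially reweight goal hypotheses and drop unlikely ones."""
    weights = {g: 1 / len(goals) for g in goals}
    for action in observations:
        # Reweight each hypothesis by how well it explains the action.
        weights = {g: w * likelihood_fn(g, action) for g, w in weights.items()}
        total = sum(weights.values())
        weights = {g: w / total for g, w in weights.items()}
        # Prune: eliminate unlikely goals early, then renormalize.
        weights = {g: w for g, w in weights.items() if w >= threshold}
        total = sum(weights.values())
        weights = {g: w / total for g, w in weights.items()}
    return weights

# Hypothetical likelihood: actions toward a goal are much more probable
# under that goal than under any other.
def toy_likelihood(goal, action):
    return 0.8 if goal == action["toward"] else 0.1

obs = [{"toward": "cabinet"}, {"toward": "cabinet"}]
result = sequential_goal_filter(["shelf", "cabinet", "table"], obs, toy_likelihood)
print(result)  # only "cabinet" survives the pruning
```

After two consistent observations, the shelf and table hypotheses fall below the threshold and are discarded, mirroring how SIPS rules out unlikely plans early rather than evaluating every possibility to the end.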


Conclusion

Computer scientists and machine learning experts are working towards building machines that better understand human goals. They intend to go beyond existing algorithms by building ones that plan sub-optimally and infer goals as they go. Unlike existing algorithms, which are fed a set objective, the newer algorithms know little in advance, which makes research based on human behavioral patterns crucial for the field of Artificial Intelligence.
