When we speak of robots thinking, we are usually talking about programming that enables a robot to gather information and use it to carry out functions. It may need to gather information about its immediate environment, perhaps to identify objects in front of it. Driving on a street, working in a factory, flying over a landscape and recording images, vacuuming a rug, picking up items and moving them in a warehouse: each of these requires the robot to carry out a complex sequence of operations. Making a robot capable of them requires writing solid code and finding new, efficient ways of solving problems, some of which are hardware obstacles as well.

Let’s begin by considering how thinking and learning occur, starting with the most basic interactions with one’s environment. Examples from the biological world provide patterns of functioning that have proven useful.

Biological models that offer clues to how robotic systems might be developed include the amoeba, with its “basic irritability” response to a surface or substance. When an amoeba physically encounters an object or substance, a reaction is triggered immediately; no signal, and no operation more complex than what can begin right at the point of contact, is needed. It bumps into something and moves away in response, as if this ability were a built-in quality of the cell envelope.
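
In robotics terms, this purely local response is a reactive control loop: the sensor reading triggers the action directly, with nothing resembling a model or a plan in between. Here is a minimal sketch in Python, with simulated stand-ins for whatever bump sensor and motor interface a real robot would actually expose:

```python
import random
import time

def read_bump_sensor():
    # Hypothetical stand-in for a real contact-sensor driver;
    # here we simply simulate an occasional collision.
    return random.random() < 0.1

def drive(speed):
    # Hypothetical stand-in for a real motor driver.
    print(f"drive at speed {speed:+.1f}")

# Amoeba-style reflex loop: the reaction is triggered directly
# at the point of contact, with no model or planning in between.
for _ in range(20):           # a few iterations of the control loop
    if read_bump_sensor():
        drive(-0.5)           # bumped something: back away
    else:
        drive(+0.5)           # otherwise keep moving forward
    time.sleep(0.05)          # roughly a 20 Hz loop rate
```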

Human bodies employ both local and remote functioning. Our skin reacts to cold locally, often creating goosebumps in just a small area. Or we may respond to a situation by deciding to pick up a pencil and write on paper, which involves thoughts taking place in the brain and signals sent out to the muscles that perform the action. The human spinal cord also takes on some reaction/response tasks, using neural circuits that handle reflexes.
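
A robot architecture can mirror this split between local reflexes and central deliberation. In the sketch below (illustrative names, not any particular framework), a fast reflex layer gets first claim on the motors, much as a spinal reflex acts before the brain has weighed in, while a slower deliberative layer decides what to do otherwise:

```python
def reflex_layer(sensors):
    # Fast, local response, analogous to a spinal reflex:
    # it can act without consulting the planner at all.
    if sensors["bumper_pressed"]:
        return {"motor": -0.5}     # immediate withdrawal
    return None                    # no reflex fired

def deliberative_layer(goal):
    # Slow, "brain-level" decision making: choose an action from a goal.
    return {"motor": +0.5} if goal == "explore" else {"motor": 0.0}

def control_step(sensors, goal):
    # The reflex layer overrides the planner whenever it fires.
    return reflex_layer(sensors) or deliberative_layer(goal)

print(control_step({"bumper_pressed": True}, "explore"))   # {'motor': -0.5}
print(control_step({"bumper_pressed": False}, "explore"))  # {'motor': 0.5}
```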

In robots, too, neural networks take on the higher-level tasks, such as deep learning and complex decision-making. A robot’s ability to recognize objects or images depends heavily on such a network. If the robot is intended to interact with those objects, whether to count them, record images of them, or pick one up and move it, it must first see or “find” each object. Online, robot-like software performs the same kind of image identification and classification, which likewise calls for good object recognition.

For this image classification to happen, the computer or robot must be able to distinguish one image from many others and identify it with a high degree of accuracy. The software typically relies on a database holding hundreds of thousands of images of each single item, and every new image must be compared against that huge collection, which makes identification a complex and lengthy process, at least in computer time. The stored images must include examples that vary in size, orientation and positioning, lighting, and so on, and each image may be further refined by rules about the object.
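
One way to picture this brute-force approach is as a nearest-neighbor search: every stored image is reduced to a feature vector, and each new image is compared against all of them to find the closest labeled match. A minimal sketch, with invented feature vectors standing in for a real image database:

```python
import math

# Toy stand-in for the huge labeled database: each entry pairs a label
# with a feature vector extracted from one stored image. A real system
# would hold hundreds of thousands of these per item, covering
# variations in size, orientation, lighting, and so on.
database = [
    ("chair",  [0.9, 0.1, 0.3]),
    ("chair",  [0.8, 0.2, 0.4]),
    ("kitten", [0.1, 0.9, 0.7]),
    ("kitten", [0.2, 0.8, 0.6]),
]

def classify(features):
    # Compare the new image's features against every stored example and
    # return the label of the closest one. Scanning the whole database
    # is exactly why identification is lengthy in computer time.
    label, _ = min(database, key=lambda entry: math.dist(entry[1], features))
    return label

print(classify([0.85, 0.15, 0.35]))   # -> "chair"
```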

“They learn from labeled training examples. Essentially, a human points at, say, a picture of a chair, and says ‘chair.’ Then the human must repeat this at least 10,000 times with 10,000 different chairs in order to teach the machine what a chair looks like. And then the human must repeat the process for kittens, marmalade, binoculars, and everything else in existence.” — Gamalon.com

Instead of using this cumbersome process, the company Gamalon has developed a system that uses probabilistic programming. This type of program uses far fewer images as data points. It works backwards, as it were, from the image it wants to identify, extracting that image’s features and comparing them to the features of its stored examples. In effect it performs a best match, weighted by external probabilities such as how likely a given object is to appear in a given setting. A driverless car can thus identify a person standing at the side of the road in Alaska and be reasonably sure it is not a cactus. Likewise, an image with triangular ears, whiskers, and a tail can usually be identified as a cat, and the rest of the image can usually be ignored without losing accuracy.
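
One simple way to see the idea is through Bayes’ rule, with invented numbers: the likelihood that the observed features match each object hypothesis is weighted by a context prior (how plausible that object is in this setting), and the best posterior wins. A sketch:

```python
# Invented likelihoods: how well a tall, thin roadside silhouette
# matches each hypothesis (shape alone is ambiguous).
likelihood = {"person": 0.60, "cactus": 0.50, "cat": 0.05}

# Invented context prior for a roadside in Alaska: a person is far
# more plausible there than a cactus.
prior = {"person": 0.600, "cactus": 0.001, "cat": 0.399}

# Bayes' rule, up to normalization: posterior is likelihood * prior.
posterior = {h: likelihood[h] * prior[h] for h in likelihood}
total = sum(posterior.values())
posterior = {h: p / total for h, p in posterior.items()}

best = max(posterior, key=posterior.get)
print(best, round(posterior[best], 3))   # -> person, by a wide margin
```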

Ignoring most of the data turns out to be not unlike human thinking. In 2015, Georgia Tech researchers observed in a study that people use less than 1 percent of the data available about an object to identify it. They postulated that the mechanism could be random projection, which Sanjoy Dasgupta, professor of computer science and engineering at UC San Diego, described as “a creative combination of insights from geometry, neural computation, and machine learning.” Some of that insight could be anticipation and assumption built up through many viewings of an object: people in the study could identify a known object from a tiny corner of it. Other recognition, of abstract images, was considerably harder to understand. When the researchers compared machine neural networks to the human tests, they got similar results: machines and humans were thinking in similar ways.
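
Random projection itself is straightforward to sketch: a high-dimensional feature vector is multiplied by one random matrix, collapsing it to a handful of numbers that still roughly preserve the distances between objects, so recognition can proceed on a tiny fraction of the original data. A minimal illustration using NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend each object is described by 10,000 features...
x1 = rng.normal(size=10_000)
x2 = x1 + 0.1 * rng.normal(size=10_000)   # a slightly different view of x1
x3 = rng.normal(size=10_000)              # an unrelated object

# ...then project everything down to 50 dimensions (0.5% of the data)
# with a single random matrix, scaled to roughly preserve distances.
k = 50
R = rng.normal(size=(k, 10_000)) / np.sqrt(k)
p1, p2, p3 = R @ x1, R @ x2, R @ x3

# The similar pair stays close and the unrelated pair stays far apart,
# even though 99.5% of the original numbers were discarded.
print(np.linalg.norm(p1 - p2))   # small
print(np.linalg.norm(p1 - p3))   # large
```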

At any rate, probabilistic robot thinking and programming is much more efficient than the old method of recognition, which combed through and compared millions of images. Taking the most probable result is also a way of combining the data and arriving at the single answer the robot proceeds with.