
You would think the Robot Revolution would have happened by now, with self-aware robots taking over the world. In a sense, the Robot Revolution already is happening, and it has been happening for decades. It's just been a lot less bloody than the movies made us think it would be, and the robots aren't self-aware yet. Nowadays most advanced manufacturing is done by robots, there are robots helping spacecraft dock at the International Space Station, and you might even have one vacuuming your living room floor.

Some of the problems that robotics researchers were struggling with fifty years ago still haven't been solved, so there's a whole lot of history behind the robots we rely on today.

If we’re going to talk about the history of robotics, we first have to discuss what a robot really is, and for such a common term, it’s surprisingly tricky to define. At its simplest, a robot is just a machine designed to accomplish a task. That’s it. Now, that might sound like it covers everything from a four-function calculator to NASA’s Pleiades supercomputer, but that’s not what we’re talking about here. When we talk about robots, we really mean machines that use their programming to make decisions. For example, if you, a human, decide to pick up a coin from the ground, there are three main steps you go through. First, your eyes see the coin and send that information to your brain. Then, your brain processes that input and draws on things like past experience to decide to pick the coin up. Finally, your brain sends signals to your body to grasp it. Robots go through a similar process, just without the biological wiring. They can do this because, most of the time, they have the components that let them carry out each step: sensors for input, control systems for decision-making, and end effectors for output. Sounds simple enough, but developing each of those components is a challenge. Sensors have to detect things like images and sound accurately, effectors have to be flexible and fast enough to do what we need them to do, and the control system has to make all of the decisions required to get the sensors and effectors working together. Of course, there are so many different kinds of robots that these components can vary enormously, but that’s the basics of robot anatomy.
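The sense–decide–act loop described above can be sketched in a few lines of code. This is only an illustrative toy, not any real robot's software: the `sense`, `decide`, and `act` functions and the coin-finding scenario are all hypothetical stand-ins for a robot's sensors, control system, and end effectors.

```python
# A minimal sketch of the sense-decide-act loop. All names here are
# hypothetical illustrations, not a real robot's API.

def sense(world):
    """Sensor: report whether a coin is visible, and where."""
    return world.get("coin_position")  # None if no coin is in view

def decide(coin_position):
    """Control system: choose an action based on sensor input."""
    if coin_position is None:
        return ("search", None)
    return ("grasp", coin_position)

def act(action, target):
    """End effector: carry out the chosen action."""
    return f"{action} at {target}" if target else action

world = {"coin_position": (3, 5)}
action, target = decide(sense(world))
print(act(action, target))  # grasp at (3, 5)
```

The hard part in a real robot isn't this loop itself; it's making each stage reliable, which is exactly where the components discussed above come in.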

To understand how we’ve gotten this far and why robots haven’t taken over the world yet, we first have to talk about how the development of industrial, humanoid, and military robots got the field where it is today. Industry is a good place to start, because that’s where robots first became useful. Since factory work can be repetitive and often involves lifting lots of heavy stuff, it’s a perfect fit for a machine. The world’s first industrial robot, Unimate, was installed on a General Motors production line in New Jersey, USA in 1961. Weighing in at nearly a metric ton, it was basically a giant robot arm. Its instructions, stored on a huge magnetic drum, told the arm to stack and weld hot pieces of metal over and over again. Soon, other car companies got in on the game, installing their own robotic arms. But this first generation of robots was still in its awkward stage. The arms weren’t particularly flexible, they were often powered by clunky hydraulics, and they were ultimately difficult to program.

After Unimate, a robotic arm called the IRB-6 came along in 1974, and it was a pretty big deal. This was the first electric industrial robot controlled by a microcomputer. It had 16 KB of RAM, it was programmable, and it could display four whole digits with its LEDs. Developed by the Swedish engineering firm ASEA (now part of ABB), this robot was used to perform unglamorous tasks like polishing steel tubes, but it was a crucial step toward developing robots that were easier to program. But while controlling robotic arms was getting simpler, another issue came up. You can give a robot as much programming as you want, but if it can’t see, it’s not going to be able to do even seemingly simple things, like figure out which box should go where on a pallet. Crude visual scanners had been around since the ’50s; they could only see black and white, and the resolution was worse than what you get from a flip phone camera. But to give vision to industrial robots, engineers had to tap into another field that would completely change the robotics game: artificial intelligence.

Artificial intelligence, or AI, is another broad, vague term used to describe any attempt to make a computer do something that we would normally associate with human intelligence, like translate languages or play chess or recognize objects. In the ’60s, the problem was that even though AIs were getting better at complex reasoning tasks, like playing chess and proving mathematical theorems, it was incredibly difficult to actually get the programs to interact with the real world.

There’s a difference, for example, between figuring out the ideal placement of wooden blocks in a theoretical model and actually moving those blocks into place, because moving them involves a whole series of discrete decisions and actions, and the robots at the time simply couldn’t manage that. For robots, vision isn’t just about taking pictures, it’s also about recognizing objects so that they can react to things and situations in real time. By the late 1970s, engineers had developed new algorithms that allowed cameras to recognize edges and shapes by using visual cues like highlights and shadows, but these programs were still just experimental, stuck in research labs.
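The core idea behind those early vision algorithms can be shown with a toy example. This is not a reconstruction of the actual 1970s programs, just a minimal sketch of the principle they relied on: edges show up as sharp changes in brightness between neighboring pixels, like the boundary between a shadowed object and a bright background.

```python
# Toy edge detector: an edge is wherever brightness jumps sharply
# from one pixel to its left-hand neighbor. The image and threshold
# are made-up illustrations, not historical data.

def find_vertical_edges(image, threshold=50):
    """Return (x, y) positions where brightness changes sharply."""
    edges = []
    for y, row in enumerate(image):
        for x in range(1, len(row)):
            if abs(row[x] - row[x - 1]) >= threshold:
                edges.append((x, y))
    return edges

# A 4x6 grayscale image: a dark object (20) on a bright background (200)
image = [
    [200, 200, 20, 20, 200, 200],
    [200, 200, 20, 20, 200, 200],
    [200, 200, 20, 20, 200, 200],
    [200, 200, 200, 200, 200, 200],
]
print(find_vertical_edges(image))
# [(2, 0), (4, 0), (2, 1), (4, 1), (2, 2), (4, 2)]
```

The detected positions trace the object's left and right boundaries, which is the first step toward recognizing its shape. Real systems also scan vertically, smooth out noise, and link edge points into contours, but the brightness-difference idea is the same.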

That all changed in 1981, when the first industrial robot got the gift of vision. A General Motors factory was once again the guinea pig, implementing a system called Consight, in which three separate robots could use visual sensors to pick out and sort six different kinds of auto parts as 1,400 parts per hour moved by on a conveyor belt. For the next two decades, technology kept improving — industrial robots were able to see better, move faster, carry heavier loads, and handle more decisions. These days, industrial robots are advanced enough that it’s totally normal for a factory to install a robotic assembly line that handles nearly all of its production.

Some industrial robots are heading in the direction of more general-purpose use: the humanoid industrial robot. The Wabot I is usually considered to be the first full-scale humanoid robot. Developed by researchers at Waseda University in Japan in 1973, it had arms, legs, and a vision system. It could walk, it could pick things up with its hands, it could even talk. Except that it could only reply with pre-recorded responses to very specific statements, and it took 45 seconds to take one step. This bot and its successor Wabot II were a really big deal in their day, but they also pointed out an important fact: it’s just much easier to design robots to do one task at a time.

So recently, the thinking has been that if general-purpose humanoid robots are out of our grasp, we might as well focus on making something that can do at least one useful task. That’s why in the past 10 years, there have been more household robots in use than ever, programmed to perform a single function like vacuuming the floor, mowing the lawn, washing windows, or cleaning the pool. They’re not quite Rosie from The Jetsons, but they were all made possible by the advances that came before them, like the ability to sense their surroundings and make decisions in order to navigate the world. And it’s not like researchers have given up on the humanoid front. There are humanoid robots in development that can perform some impressive feats, including the work we have done here at Nue Robotics.

