The science fiction of yesterday is becoming the reality of today. All the components necessary to make robots that belong in your average sci-fi dystopian nightmare are quickly coming together.
Robots have had a rough gig. In the 1950s and 60s we were promised faithful mechanical servants by the 1980s, and those predictions often became a source of ridicule for how dated they ended up looking. The concept models produced at the time were laughable, and yet we still haven't managed to build one that fulfils the original idea: an autonomous assistant that could do all our house cleaning and chores.
But that could all be about to change, and quickly. Technological development has a habit of moving very slowly, then suddenly ramping up so fast that it's difficult to keep track of things. Here are a few new developments that could lead to very human-like machines within our lifetimes.
AI
Okay, stop with the groaning, you there at the back! Yes, it does seem like we're on AI overload right now, and you could be forgiven for wanting a break from it. However, we cannot ignore the incredible exponential developments going on in AI when it comes to creating a robot that is truly autonomous. Even in the space of one year, the difference between GPT-4 and its new point release, GPT-4o, is absolutely huge.
In fact, we've already seen what ChatGPT can do when placed in control of a robot body: Figure 01. The result was a robot that could perceive its environment and reason about the uses of the objects it saw. In one instance it was told that the person in front of it was hungry. Without any prompting or pre-programming, it identified an apple on the table in front of it and handed it to the human, and it explained precisely why it did this when asked. In fact, it was asked to clear up some dishes that were also in front of it whilst it gave its explanation.
That demonstration used a less capable version of ChatGPT than 4o. The new 4o version of ChatGPT can respond to speech almost as quickly as a human, as well as reading the environment around it. Add to this, research that uses generative AI to allow robots to move much more like humans, and it's easy to see where this is going.
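Figure hasn't published the details of how the model was wired into the robot, but the general pattern behind demos like this is a simple perceive-reason-act loop. Here's a rough sketch of that loop in Python; every name in it (capture_image, query_model, and so on) is a hypothetical placeholder standing in for real camera, model, speech, and motion subsystems.

```python
# A minimal sketch of the perceive-reason-act loop behind demos like
# Figure 01. All names here are hypothetical placeholders: Figure has not
# published its stack, and query_model() stands in for a real multimodal
# LLM call (camera frame + conversation history in, speech + action out).

def capture_image():
    # Placeholder for a camera frame; a real system returns pixels.
    return "frame: apple and dishes on a table"

def query_model(image, request):
    # Placeholder for a multimodal LLM call. A real call would send the
    # image and the spoken request to a hosted model and parse its reply.
    if "hungry" in request:
        return ("Here is an apple, the only edible item I can see.",
                "pick up apple; hand to person")
    return ("Let me tidy those up.", "collect dishes; place in rack")

def perceive_reason_act(request):
    image = capture_image()                       # perceive
    speech, action = query_model(image, request)  # reason
    print(f"[robot says] {speech}")               # respond in speech
    print(f"[robot does] {action}")               # act via the motion planner

perceive_reason_act("I'm hungry, can you help?")
perceive_reason_act("Can you clear up those dishes?")
```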
Human-like dexterity and feel
Researchers at Uppsala University have been working on artificial limbs that can process touch as quickly as a human. The main intention is to produce prosthetic limbs that could give amputees, or those born without limbs, the ability to touch and feel like any other human. Think of Luke Skywalker's hand in The Empire Strikes Back. The researchers say the technology could also be used to help stroke patients rehabilitate.
But there's another use, and that's in robots. The researchers claim that the system can identify objects by touch as quickly as a blindfolded person. As an example, they say it can tell the difference between a tennis ball and an apple and identify each by touch alone.
In effect, what they've done is emulate the way the human nervous system reacts to touch, using electrical pulses. The system consists of an artificial skin layer with pressure-sensitive sensors; a set of artificial 'neurons' that convert the analogue touch signals into electrical pulses; and a processor that interprets those pulses to identify objects.
Current artificial skin, sometimes referred to as e-skin, has only a very limited number of receptors. This new e-skin, on the other hand (see what I did there?), contains millions of receptors. The researchers say the system could in principle identify an unlimited number of objects, although so far it has been trained to recognise 22 different objects for grasping and 16 different surfaces. And they aren't stopping at simple object identification: they are also working on expanding the system to feel heat and pain, and to tell apart surfaces such as wood and metal.
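To make that three-stage pipeline a little more concrete, here's a toy sketch of it in Python. The pulse encoding and the nearest-neighbour matching are simple stand-ins of my own, not the Uppsala team's actual algorithms, and the sensor readings are synthetic.

```python
import numpy as np

# Illustrative sketch of the three-stage e-skin pipeline described above:
# (1) a grid of pressure sensors, (2) 'neurons' that convert analogue
# pressure into discrete pulses, (3) a processor that identifies objects.
# The encoding and classifier are simple stand-ins, not the researchers'
# actual algorithms, and the sensor data is synthetic.

rng = np.random.default_rng(0)

def read_sensors(obj):
    # Stage 1: each object presses a characteristic pattern into a
    # 16x16 grid of pressure-sensitive receptors (synthetic data).
    patterns = {"tennis ball": 0.8, "apple": 0.5}
    return patterns[obj] + 0.05 * rng.standard_normal((16, 16))

def to_pulses(pressure, threshold=0.1, levels=8):
    # Stage 2: convert analogue pressure into counts of discrete pulses,
    # roughly how mechanoreceptors encode intensity as spike rate.
    return np.clip((pressure / threshold).astype(int), 0, levels)

def identify(pulses, templates):
    # Stage 3: match the pulse pattern against stored templates
    # (nearest-neighbour stand-in for the real classifier).
    scores = {name: np.abs(pulses - t).sum() for name, t in templates.items()}
    return min(scores, key=scores.get)

templates = {n: to_pulses(read_sensors(n)) for n in ("tennis ball", "apple")}
print(identify(to_pulses(read_sensors("apple")), templates))  # -> apple
```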
In fact, Libo Chen, who is working on the project, has said that he wants to develop an e-skin of this type to cover an entire robot. Not at all like a Cyberdyne Systems T-600, then...
Flexible, organic-like muscles
If the idea of a robot that can think for itself and feel the environment around it wasn't advanced enough, how about ditching all those cumbersome limb hydraulics? If it were possible to create human-like artificial muscles, robots could become much, much lighter, easier to manufacture, and far more power-frugal.
Recently, physicists from Virginia Tech, in a paper published in Physical Review Letters, described a new microscopic phenomenon that could pave the way for robots that are as fast and as agile as a human.
The research pertains to what are known as hydrogels. Currently, hydrogels expand and contract extremely slowly, over the course of several hours. However, the researchers at Virginia Tech have discovered a mechanism by which hydrogels can be made to expand or contract much more quickly.
Now, take a deep breath for a brief description of how it works. According to the paper abstract, "We develop a continuum poroelastic theory that explains the experiments by introducing a “gel diffusiophoresis” mechanism: Steric repulsion between the gel polymers and released ions can induce a diffusio-osmotic solvent intake counteracted by the diffusiophoretic expansion of the gel network that ceases when the ion gradient vanishes."
Now, if that didn't make any sense to you, you're not alone. The long and short of it is that 'soft robots' with artificial muscles may well be on the way. But of all the new discoveries mentioned here, this is, surprisingly, the one that will take the longest to become a practical reality. While getting hydrogels to respond more quickly is a real step forward, they are currently only suitable for micro-sized robots.
You see, speed is relative. The gel's response is limited by how quickly solvent can diffuse through it, so the response time grows roughly with the square of its size. Once the new method is scaled up to, say, the size of a human hand, the gel can still take minutes to change form. That said, it's an interesting area of research that goes beyond robotics, and could have ramifications for things like drug and medicine delivery into the human body as well.
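You can sanity-check that square-law scaling with some back-of-the-envelope Python. The diffusivity figure below is an illustrative ballpark of my own, not a number from the paper:

```python
# Back-of-the-envelope check on why scale matters: swelling is limited by
# solvent diffusing through the gel, so response time grows roughly as
# L^2 / D. D here is an illustrative ballpark, not a value from the paper.

D = 1e-9  # effective solvent diffusivity in the gel, m^2/s (illustrative)

for label, L in [("10 um (micro-robot)", 10e-6),
                 ("100 um", 100e-6),
                 ("1 mm (finger-pad scale)", 1e-3)]:
    t = L ** 2 / D
    print(f"{label:>24}: ~{t:,.1f} s")

# Every 10x increase in size costs roughly 100x in response time, which
# is why the technique currently suits only micro-sized robots.
```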
Shape-shifting robots
The story doesn't end there, though. In separate research, scientists at MIT have been working on ways to finely control soft, shape-shifting robots. The researchers created a control algorithm that can autonomously learn how to move, stretch, and reshape a reconfigurable robot to complete different tasks and move through different environments. Again, a primary use case would be something like medical intervention, where a micro-robot makes its way through the human body to a specific place to remove an object.
In the simulator, the algorithm managed to perform a multifaceted task that involved the robot reducing its height and growing two tiny legs to squeeze through a pipe, before 'un-growing' the legs and expanding its torso to open a lid. So, now we're into T-1000 territory.
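MIT's actual algorithm is far more sophisticated, but the core idea, treating the robot's shape as something to be learned rather than fixed, can be sketched with a toy example. Here the 'robot' is just a height and a width with a roughly conserved amount of material, and a naive hill-climbing search learns a shape that fits through the pipe. Purely illustrative, and not the researchers' method:

```python
import numpy as np

# Toy flavour of 'morphology as action': the robot is an elastic blob
# with height h and width w, its material (area h*w) is roughly
# conserved, and a random-search learner finds a shape that squeezes
# through a pipe. Illustrative only -- not MIT's actual algorithm.

rng = np.random.default_rng(1)
AREA = 0.16          # conserved amount of 'material'
PIPE_HEIGHT = 0.25   # gap the robot has to squeeze through

def score(h, w):
    squeeze = max(0.0, h - PIPE_HEIGHT)   # penalty: too tall for the pipe
    material = abs(h * w - AREA)          # penalty: material not conserved
    return -(squeeze + 10 * material)

h, w = 0.8, 0.2                           # start tall and thin
best = score(h, w)
for _ in range(5000):
    nh = float(np.clip(h + 0.02 * rng.standard_normal(), 0.05, 1.0))
    nw = float(np.clip(w + 0.02 * rng.standard_normal(), 0.05, 2.0))
    if (s := score(nh, nw)) > best:       # greedy hill climbing
        h, w, best = nh, nw, s

print(f"learned shape: height={h:.2f}, width={w:.2f} (pipe gap {PIPE_HEIGHT})")
```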
Boyuan Chen, an electrical engineering and computer science (EECS) graduate student and co-author of the paper detailing the research, said, "When people think about soft robots, they tend to think about robots that are elastic, but return to their original shape. Our robot is like slime and can actually change its morphology. It is very striking that our method worked so well because we are dealing with something very new."
Generative AI that imitates human motion
Actually making a robot that can move as smoothly as a human has always been another stumbling block. Even the best current robots, such as those made by Boston Dynamics, still move like, well, robots. Very advanced and mobile robots, yes, but despite their capabilities you could never mistake their movements for those of a human or an animal.
Tohoku University recently released a new paper by an international group of researchers that combines central pattern generators (CPGs) with deep reinforcement learning (DRL). The new method can apparently not only mimic human motions such as running and walking, but also generate movement at frequencies for which no movement data exists. Furthermore, it can account for movement over rough terrain, as well as performing smooth transitions between running and walking.
The problem so far is that AI has only been good at generating a single solution, or a small set of solutions, for adapting to unknown environments. According to the research, DRL is great for developing flexible learning capabilities, but at the expense of computational cost. Imitation learning, on the other hand, is good for learning a set type of movement, but it can't adapt to new or unknown situations on the fly.
According to Mitsuhiro Hayashibe, a professor at Tohoku University's Graduate School of Engineering, "We overcame many of the limitations of these two approaches by combining them. Imitation learning was used to train a CPG-like controller, and, instead of applying deep learning to the CPGs itself, we applied it to a form of a reflex neural network that supported the CPGs."
A CPG, or central pattern generator, produces rhythmic patterns of muscle activity. CPGs drive most of our motor behaviours, such as walking, chewing, and breathing. They don't require any stimulus from higher brain areas, but their outputs are not fixed, so they are highly adaptable to sensory input.
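CPGs are often modelled as coupled oscillators, and the sketch below shows the idea in minimal form: two phase oscillators, one per leg, lock half a cycle apart and output rhythmic hip-angle commands. The reflex layer is left as a comment; this illustrates the CPG concept in general, not the Tohoku group's actual controller.

```python
import numpy as np

# Minimal CPG sketch: two coupled phase oscillators, one per leg, held in
# anti-phase so the legs alternate. A trained reflex network (as in the
# Tohoku work) would modulate the output; here it is just a comment.

dt = 0.01                       # integration step, s
freq = 1.5                      # stride frequency, Hz
coupling = 2.0                  # strength pulling the legs into anti-phase
phases = np.array([0.0, 0.1])   # slightly desynchronised start
anti = np.array([-np.pi, np.pi])

for _ in range(500):
    # Each oscillator advances at the base rate, plus a coupling term
    # that locks the two legs half a cycle (pi radians) apart.
    diff = phases[::-1] - phases + anti
    phases = phases + dt * (2 * np.pi * freq + coupling * np.sin(diff))

    # Rhythmic output: joint angle command for each hip (radians).
    hip_angles = 0.4 * np.sin(phases)
    # A reflex layer would adjust hip_angles here using sensor input.

gap = (phases[1] - phases[0]) % (2 * np.pi)
print(f"final phase gap: {gap:.2f} rad (should settle near pi)")
print(f"final hip angles: {np.round(hip_angles, 2)}")
```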
Applying this sort of adaptability to a human- or animal-like robot would mean it could move through difficult and unstable environments with far more fluidity and agility than has been possible until now.