"Robot, point to the screwdriver next to the clamp."
Daniel Pendergast, a graduate student in CU Boulder's ATLAS Institute, issues the command, and a few feet away a four-foot-tall robot obeys. The machine whirs to life, bending and twisting its one arm to hover over a table crowded with assorted tools, where it points its claw at a screwdriver right next to a clamp.
The action might seem simple, something that people do every day, but in the field of robotics, Pendergast's pointing system is a big step forward. That's because it's not easy for robots to understand the messy and often vague nature of human language, said Daniel Szafir, Pendergast's advisor and an assistant professor at ATLAS.
What, for example, does a person mean when they say "next to"?
In trying to answer questions like that, Szafir and his colleagues belong to a rapidly growing area of study called human-robot interaction. The field addresses the huge gulf that seems to exist between people and their robot helpers: Robots don't always understand people, and people often don't want to be around moving, learning machines.
There's a lot to be gained from helping the two get along, Szafir said. In the case of the screwdriver-locating robot, Szafir's goal is to design automated machines that could help people take on a range of tasks, from caring for elderly relatives to assembling toy castles for their kids on Christmas morning.
"There was always something that fascinated me about this idea of automated assistants," said Szafir, also in the Department of Computer Science. "It seems like such a powerful way to improve the quality of life for people at all stages. It can help out in healthcare and rehabilitation. It can help us around the house and free us up for pursuits that we'd really like to be doing."
Flying eyes
If the idea of a world filled with robotic assistants wigs you out, Szafir acknowledged that you're not alone. Many people feel uncomfortable around robots, in part because humans are used to working with beings with expressive eyes and complex body language.
"The robot in our lab only has one arm," he said. "You can do certain kinds of gestures with that, but people have two arms."
Szafir, who was named to the Forbes 30 Under 30 list in 2017, is trying to cross that valley. He has experimented, for example, with using augmented reality headsets to help people understand what robots are going to do next. In one case, he made it easier for humans to anticipate the movements of flying robots.
He imagines that similar technologies could help disaster responders fight wildfiresâusing augmented reality displays to track and manage fleets of drones flying around blazes. Szafir and his colleagues recently landed a $1.1 million grant from the U.S. National Science Foundation to experiment with how workers in dangerous fields could use those sorts of tools.
But he also focuses on designing robots that can better interpret human gestures and language. As Szafir put it, in the field of human-robot interaction, "the human is just as important as the robot."
That's not easy. Take the task of building a toy castle on Christmas morning. If you're working with a human assistant, you can signal that you want a screwdriver in many different ways: you might say "hand me that," grunt and point, or just direct your gaze.
"People are so good at interpreting highly ambiguous statements and gestures," Szafir said. "So while I can tell a person, 'Can you pass me that thing,' for a robot, it would be really hard to know what that meant."
Helping hands
To get to that point, Szafir and his colleagues took an unusual approach: they asked people to teach their robotic system for them.
They solicited human volunteers to describe the locations of objects in a series of illustrations of messy workbenches, similar to the one in Szafir's lab. The team then fed those sentences into a computer algorithm that analyzed and learned the speech patterns people use when they want something but can't reach it.
The claw isn't perfect. So far, it points to the right objects about 70 percent of the time. And it can't understand certain types of descriptions, such as those involving negatives: "Hand me the screwdriver that isn't next to the clamp." But, Szafir said, it's a leap beyond existing systems of this kind.
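The grounding problem the article describes can be illustrated with a deliberately toy sketch: given object positions on a workbench, a phrase like "the screwdriver next to the clamp" is resolved by picking the screwdriver closest to the clamp. The workbench layout, object names, and the nearest-neighbor reading of "next to" here are all illustrative assumptions, not the team's actual learned model.

```python
import math

# Hypothetical workbench: object names mapped to (x, y) positions in cm.
# (Illustrative data, not the researchers' real setup.)
workbench = {
    "screwdriver_a": (10, 5),
    "screwdriver_b": (60, 40),
    "clamp": (12, 8),
    "wrench": (55, 35),
}

def resolve_next_to(target_type, landmark, objects):
    """Toy reading of 'the <target_type> next to the <landmark>':
    pick the object of that type nearest to the landmark."""
    landmark_pos = objects[landmark]
    candidates = [name for name in objects
                  if name.startswith(target_type) and name != landmark]
    return min(candidates,
               key=lambda n: math.dist(objects[n], landmark_pos))

# "Point to the screwdriver next to the clamp."
print(resolve_next_to("screwdriver", "clamp", workbench))  # screwdriver_a
```

Note that even this toy resolver shares the limitation mentioned above: a negated phrase like "the screwdriver that isn't next to the clamp" would need extra logic to invert the distance ranking.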
The researchers presented that work in Boulder.
And the team hasn't stopped at spoken words. In related research, Szafir and his colleagues are working to develop robots that can understand the language of human shrugs, head scratching and pointing.
They have designed a system that scans people as they complete a basic assembly task, say, building a tower out of wooden blocks and screws. Based on how the builders move and where their eyes are pointing, the robot tries to guess at the tools those people might need next.
"It would recognize when they wanted to fasten things together and it would hand them a screwdriver," Szafir said. He presented the results of that research recently in Madrid.
There's a lot of work to be done, but Szafir hopes that automated assistants will be coming to workplaces and homes near you in the decades ahead. Such feats of engineering may seem mundane in a world where drones can fly over the surface of Mars and run on treadmills.
But, Szafir said, the pursuit of everyday robot coworkers is about conserving something that all humans cherish: "The one limited resource that we all have is our time."