
Coexistence with robots

~The robot map: application of human movement history~

Mihoko Niitsuma
Associate Professor, Faculty of Science and Engineering, Chuo University
Areas of specialization: Interactive communication between humans and robots, creating intelligent space, human interfaces


Introduction

A little over three years have passed since I wrote my first article for Chuo Online in the autumn of 2011. That article, entitled “Coexistence between humans and robots ~ An approach based on intelligent space and ethology ~,”[1] summarized my research. This time, I would like to give some more detailed examples of the projects we are pursuing toward coexistence between humans and robots. For robots to operate safely and effectively in everyday human living environments, they must observe space in much the same way humans do.

For example, let’s revisit the classroom that I mentioned in my previous article.

When someone enters a classroom and wants to reach the blackboard, he or she will identify where an aisle is and move along it towards the blackboard. If another person is approaching the same aisle, he or she might choose a different aisle or wait at the near end until the other person has passed. This ability to adjust behavior to the circumstances is part of what makes people clever. People know from experience that the person at the far end of the aisle may want to use the same aisle, and they can plan their next move by predicting a number of possible outcomes.

In this article I want to introduce an example of what needs to be done to allow robots to achieve this same feat.

Making robot maps

When people visit a place for the first time and try to get somewhere without any information, they are likely to get lost unless they have a particularly good sense of direction: they won’t know where they are or where their destination is. At times like these, people use a map. A map tells you where you are and where the routes are. When we look at maps we are not usually very conscious of the routes, but it is actually very important to know which routes you can move along and which you cannot, because in most cases you cannot simply move in a straight line toward your destination.

Robots also have maps that they use in a similar way to learn about their environments. Just like humans, robots use these maps to find out where they are and which routes they can move along. Human maps and robot maps differ in that robot maps are expressed in robot language (Note 1). What’s more, while humans would almost never use a map to move around a room, robots need a map wherever they are (although there are times when robots operate without maps). Let me give an example of an indoor map. The areas shaded in blue in Diagram 1 (b) show where objects are placed, signifying the areas the robot cannot enter. There is a corner near the door, and the shape of the cylinder appears at the cylinder’s location on the map. These features allow the robot to guess where the obstacles are.

Diagram 1: Robot map

(a) The indoor environment used to model the map (LRF stands for Laser Range Finder, which is a sensor used to obtain data for the map.)

(b) The environment map created for the robot (This type of map is called a grid map. It divides a space into equally small areas and shows the extent to which each area is occupied by obstacles.)
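To make the grid-map idea concrete, here is a minimal sketch of how an occupancy grid like the one in Diagram 1 (b) might be stored and queried. All names (GridMap, mark_obstacle, is_free) and the cell size are my own illustrative assumptions, not the system described in this article.

```python
# Minimal occupancy-grid sketch (illustrative; names and cell size are
# assumptions, not the article's actual implementation).

class GridMap:
    def __init__(self, width, height, cell_size=0.05):
        self.width = width          # number of cells along x
        self.height = height        # number of cells along y
        self.cell_size = cell_size  # metres per cell
        # occupancy per cell: 0.0 = free, 1.0 = occupied
        self.occ = [[0.0] * width for _ in range(height)]

    def mark_obstacle(self, x, y):
        """Record that a range-finder return fell at world coords (x, y)."""
        col = int(x / self.cell_size)
        row = int(y / self.cell_size)
        self.occ[row][col] = 1.0

    def is_free(self, row, col, threshold=0.5):
        """The robot may plan through a cell only if its occupancy is low."""
        return self.occ[row][col] < threshold

m = GridMap(100, 100)
m.mark_obstacle(1.0, 2.0)   # e.g. one LRF return at (1.0 m, 2.0 m)
print(m.is_free(40, 20))    # -> False: that cell now holds an obstacle
print(m.is_free(0, 0))      # -> True: no obstacle recorded here
```

A real system would accumulate many LRF returns and update occupancy probabilistically rather than setting cells to 1.0 outright.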

Inputting human movement history into robot maps

Robots can use this map to find out where the aisle is. Now let’s reconsider our first scenario: how would a robot know that somebody else wants to move down the same aisle? Two pieces of information are needed to interpret this situation. The first is that “there is someone on the far side of the aisle.” The second is that “he or she wants to move along the aisle.”

The first piece of information can be identified by measuring physical objects, just as the sensor measured objects when the map was created. However, the second piece of information cannot be measured without knowing that “there is an aisle (or a location with the potential to serve as an aisle)” and also knowing the “frequency with which people walk through that location.” In other words, it is not possible to guess this second piece of information based purely on the knowledge that “there is a person there.”

Diagram 2 A grid map showing the frequency of people’s movements. (The black areas show where the robots cannot enter (areas where there are obstacles); the areas changing from light blue to yellow show the history of people’s movements in each area; and the white areas show where there are no obstacles and no history of people’s movement.)

Our idea was to write into the map which locations have the potential to be passed through by people, and how frequently.[2] An example can be seen in Diagram 2. This map records people’s movements by frequency and speed on a grid map; the greater the history of movement in an area, the more yellow it becomes. By referring to this map, a robot that detects a person in a yellow area can guess which directions that person is likely to move in. In an environment with limited routes, such as a classroom, guessing people’s movements is relatively easy because the movements themselves are limited. However, as Diagram 2 shows, this kind of map allows robots to guess the direction of people’s movements even in open spaces where people could move in any direction.
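The idea of writing movement history into the map can be sketched as follows: each tracked position increments a counter for the grid cell it falls in, and the robot can then look at the traffic counts of neighbouring cells to guess where a person is likely to move next. The cell size, function names, and the simple nearest-neighbour heuristic are all assumptions for illustration.

```python
from collections import defaultdict

CELL = 0.25  # metres per grid cell (an assumed resolution)

def to_cell(x, y):
    return (int(x // CELL), int(y // CELL))

# How often tracked people have passed through each cell.
freq = defaultdict(int)

def record_trajectory(points):
    """points: a list of (x, y) positions from a person tracker."""
    for x, y in points:
        freq[to_cell(x, y)] += 1

def likely_next_cell(cell):
    """Guess where a person standing in `cell` may move next: the
    neighbouring cell with the heaviest traffic (a crude heuristic)."""
    cx, cy = cell
    neighbours = [(cx + dx, cy + dy)
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                  if (dx, dy) != (0, 0)]
    best = max(neighbours, key=lambda c: freq[c])
    return best if freq[best] > 0 else None

# One person walks straight along an aisle at y = 0.1 m:
record_trajectory([(i * 0.1, 0.1) for i in range(30)])
print(likely_next_cell(to_cell(0.5, 0.1)))  # a neighbouring aisle cell
```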

Applications of robot maps containing data of the history of people’s movements

So what can robots actually do by using robot maps that allow them to learn how people might move? I want to introduce a few research examples to you.

First, they are able to use this movement history to plan a robot movement route.

Diagram 3 Movement history and robot movement routes (the green area shows the history of people’s movements; and the black dots show the robot’s movement track)

(a) They can move using a route that completely avoids the green paths used by people.

(b) They can try to move in an efficient way, while avoiding people coming from the other side of the path when necessary.

(c) If nobody comes, they can move the shortest distance.
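Behaviors (a)–(c) can be reproduced with an ordinary cost-based planner: cells with heavy human traffic are given a higher traversal cost, so the robot detours around them when a cheap detour exists, but will still cross them when that is the only efficient route. The sketch below uses Dijkstra’s algorithm on a small cost grid; the article does not specify the actual planner, so this is only one plausible realization.

```python
import heapq

def plan(grid_cost, start, goal):
    """Dijkstra over a 2-D cost grid; grid_cost[r][c] is the price of
    entering cell (r, c). Human-traffic cells get a higher price, so
    the planner detours around them when the detour is cheap enough."""
    rows, cols = len(grid_cost), len(grid_cost[0])
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid_cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Walk back from the goal to recover the route.
    path, cell = [], goal
    while cell != start:
        path.append(cell)
        cell = prev[cell]
    path.append(start)
    return path[::-1]

# 3x3 grid: the middle row is a busy aisle (cost 10), all else cost 1.
grid = [[1, 1, 1],
        [10, 10, 10],
        [1, 1, 1]]
print(plan(grid, (0, 0), (2, 2)))
```

Lowering the aisle’s cost when nobody is around makes the same planner produce the shortest route, as in Diagram 3 (c).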

Diagram 4 Multi-resolution grid map based on movement history

What’s more, if we look at Diagram 3, we can see areas with a history of people’s movement alongside areas that have neither a movement history nor any obstacles. We therefore changed the size of the area divisions on the map according to the history of people’s movement.[3] This approach lets the robot carry out finely-tuned movements in areas that people often move through, and plan coarser routes in areas they do not. As the size of the unused areas increases, the number of cells making up the map decreases, which makes it easier to find a route.
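One simple way to realize such a multi-resolution map is to merge blocks of cells that contain no movement history into single coarse cells, while keeping fine cells where people move. The one-level merging below is my own illustrative simplification; the method in [3] may differ.

```python
def coarsen(history, block=2):
    """Merge block x block regions with no movement history into single
    coarse cells; keep fine cells wherever history is present.
    Returns a list of (row, col, size) cells covering the grid.
    (A one-level sketch of a multi-resolution map, not the paper's
    actual algorithm.)"""
    rows, cols = len(history), len(history[0])
    cells = []
    for r in range(0, rows, block):
        for c in range(0, cols, block):
            used = any(history[r + dr][c + dc]
                       for dr in range(block) for dc in range(block))
            if used:
                # keep fine resolution where people move
                cells += [(r + dr, c + dc, 1)
                          for dr in range(block) for dc in range(block)]
            else:
                cells.append((r, c, block))  # one coarse cell
    return cells

hist = [[0, 0, 3, 5],
        [0, 0, 2, 4],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
cells = coarsen(hist)
print(len(cells))  # -> 7: four fine cells plus three coarse cells
```

A planner searching over these 7 cells instead of the original 16 has fewer nodes to expand, which is why the route search becomes easier.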

The last application I want to introduce is slightly different: robot maps built by paying close attention to the areas where people often come to a halt. In the previous cases, we entered data into the map on the assumption that the faster people moved through an area, the more that area was used for movement. This map instead assigns a higher probability of activity to areas where people’s speed drops or where they often come to a halt.
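A halt map of this kind can be sketched by thresholding the speed estimated between consecutive tracker samples: slow segments add weight to the cell where they occur. The threshold value, cell size, and names below are assumptions for illustration.

```python
import math

CELL = 0.25   # assumed grid resolution in metres
SLOW = 0.3    # m/s; below this speed the person is treated as halting

halt_score = {}

def record(track):
    """track: a list of (x, y, t) samples from a person tracker.
    Cells where the person moved slowly accumulate a halt score."""
    for (x0, y0, t0), (x1, y1, t1) in zip(track, track[1:]):
        speed = math.hypot(x1 - x0, y1 - y0) / (t1 - t0)
        if speed < SLOW:
            cell = (int(x1 // CELL), int(y1 // CELL))
            halt_score[cell] = halt_score.get(cell, 0) + 1

# A person walks at about 1 m/s, then stands still near (2.0, 1.0):
track = [(i * 0.1, 1.0, i * 0.1) for i in range(21)]       # walking
track += [(2.0, 1.0, 2.1 + i * 0.1) for i in range(10)]    # halted
record(track)
print(max(halt_score, key=halt_score.get))  # -> (8, 4), near (2.0, 1.0)
```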

Diagram 5 Robot map showing where people often come to a halt [4]

On a slightly different topic, we are also researching how to turn electric wheelchairs into robots that can move to a destination simply by having users indicate where they want to go. The aim is to track the route of the electric wheelchair and move it by combining a direction indicated by the user with route plans similar to those shown in Diagrams 3 and 4. (I hope you can imagine how this would work.) However, the challenge is whether accurate instructions can be given for where the user wants to go. It is difficult to point at a specific destination if your hands are shaking, and even if gaze direction is used instead of the hands, a straight line alone rarely identifies a precise destination. This is one application where a map like Diagram 5 would help: it identifies places where people often come to a halt or carry out activities, so an appropriate destination for the wheelchair could be guessed from only a rough direction instruction.
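One way such a destination guess could work: take the rough direction indicated by the user and pick the frequently-used stopping place that lies roughly along it. Everything below (names, tolerance angle, the example stopping places) is hypothetical, not the actual wheelchair system.

```python
import math

def infer_destination(position, direction_deg, halt_spots,
                      tolerance_deg=30):
    """Pick the most-used stopping place lying roughly in the direction
    the user indicated (hypothetical sketch).

    halt_spots: dict mapping (x, y) -> halt frequency."""
    px, py = position
    best, best_score = None, 0
    for (x, y), freq in halt_spots.items():
        angle = math.degrees(math.atan2(y - py, x - px))
        # smallest absolute angular difference, wrapped to [0, 180]
        diff = abs((angle - direction_deg + 180) % 360 - 180)
        if diff <= tolerance_deg and freq > best_score:
            best, best_score = (x, y), freq
    return best

spots = {(3.0, 0.2): 12,    # desk where people often stop
         (0.0, 3.0): 7,     # shelf
         (-2.0, 0.0): 20}   # doorway
# The user gestures roughly "forward" (0 degrees) from the origin:
print(infer_destination((0.0, 0.0), 0.0, spots))  # -> (3.0, 0.2)
```

Even a shaky or purely gaze-based direction thus only needs to be accurate to within the tolerance cone, because the halt map supplies the precise endpoint.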

In this article, I have introduced research that aims to let robots change their behavior depending on the circumstances, and select behavior appropriate to a specific location, by representing a history of human movement and activity in the form of a robot map. My goal is to continue building environments and robot behavior so that people and robots can live side by side.

(Note 1) There may be mechanisms that allow robots to read the same maps that humans read, but generally speaking robots are provided with robot maps.

References:
  1. Chuo Online
  2. S. Hiroi et al., The Robotics and Mechatronics Conference, 2013.
  3. T. Furuyama et al., The 32nd Annual Conference of the Robotics Society of Japan, 2014.
  4. A. Takimoto et al., IEEE INDIN, 2014.
Mihoko Niitsuma
Associate Professor, Faculty of Science and Engineering, Chuo University
Areas of specialization: Interactive communication between humans and robots, creating intelligent space, human interfaces
Professor Niitsuma was born in Iwaki City, Fukushima Prefecture in 1979.
She earned her Ph.D. in Electrical Engineering from The University of Tokyo in 2007.
After working as a postdoctoral researcher at the Institute of Industrial Science, The University of Tokyo, and as an assistant professor in the Department of Precision Mechanics, Faculty of Science and Engineering, Chuo University, she was appointed associate professor in the Faculty of Science and Engineering, Chuo University in April 2013.
Her current research topics include communication between humans and robots with the aim of supporting human activities through robotics (application of dog attachment behavior to robots, smart electric wheelchairs) and human interfaces (presentation of wall dimensions using relative pitch difference, support for remote operation of industrial robots, etc.)
http://www.mech.chuo-u.ac.jp/~hslab/