Factors That Influence the Outcome of an Introductory Conversation Between a Robot and a Child

Abstract

Over the last couple of years, the amount of research on child-robot interaction has grown steadily. In this paper we present a partial overview of this research by answering the question: what are the factors that influence the outcome of the initial conversation between a child and a robot?

We categorize these factors by regarding the CRI process as consisting of a user, a robot and a context component. We then present a set of recommendations for researchers who aim to design a robot that has a successful first conversation with a child.

Because children interact better with embodied robots, we recommend these over virtual robots. When physical contact is important, animal robots should be preferred over humanoid robots. If a humanoid robot is used, it should be human-like but clearly distinguishable from a human. During the conversation, girls prefer the robot to approach them, while boys appreciate more frequent and longer gazing. The robot should be able to sense the child’s emotion and express its own emotions in ways other than facial expressions alone. Additionally, it should be able to align its interests with those of the child and adjust its behavior to the child’s affective state.

Keywords: robotics, children, interaction, conversation, introduction, social robot, CRI, HRI

1 Introduction

As the field of robotics advances, so does the viability of using robots in various fields such as health care, education and therapy. However, special care needs to be taken when designing robots for interaction with children. As the amount of research on child-robot interaction (CRI) steadily increases, it becomes increasingly difficult for researchers to maintain a clear overview of all the past and recent findings in this field. This paper first aims to provide at least a partial overview by answering the following question: what are the factors that can influence the outcome of the initial interaction between a child and a robot? From this overview we also derive recommendations for engineers who are designing robots for children.

We focus specifically on the initial interaction because this is usually when humans gradually establish common ground. Just as with humans, we believe that the initial interaction between a child and a robot has a large impact on the quality of their relationship and of subsequent interactions. To answer our research question we conducted an extensive literature review of papers written by other researchers within the field of CRI.

For illustrative purposes, we can model CRI as a process that roughly consists of three interacting components: the robot, the child and the context. For each of these components we identified, in a short initial study, factors that influence how children and robots interact. For the robot component, physical appearance and behavior have been studied most extensively. Aspects of the child that have received much research attention are age, gender and culture. When it comes to context, researchers often study the effects of additional contextual cues and of aligned interests between robot and child. With these areas identified, we examined the effects of those factors on the interaction between robot and child in our full literature study, in order to give an informed answer to our main research question.

We structure our paper using our human-robot-context model of factors influencing the CRI process. We begin with human factors in section 2. We present robot factors in section 3 and contextual factors in section 4. In section 5 we discuss our recommendations for robot engineers followed by a short conclusion in section 6.

2 Human Factors

In this section we discuss three human factors that influence the interaction between child and robot: the age, gender and cultural background of the child.

Age

Children of different ages respond in different ways to a robot that interacts with them, as shown by several studies [19, 20, 36, 52]. We will briefly discuss the methods and results of these studies.

Kahn et al. [20] studied the interaction between a robot and children between 2 and 6 years old, while Kahn et al. [19], Melson et al. [36] and Tung [52] performed their studies with children between 7 and 15 years old. Kahn et al. [20] let the children interact with a robotic dog and with a stuffed dog. After the interaction, the children were asked several yes/no questions to rate the interaction. Children of both age groups shared the same thoughts about the robotic dog and the stuffed dog, and both groups perceived the robotic dog as more real than the stuffed dog. In another study by Kahn et al. [19], children interacted with a humanoid robot by having a conversation and by playing a game. Afterwards, the children were interviewed to find out what they thought about the robot in terms of its mental, social and moral state.

In this research the youngest of the subjects believed that the robot had a mental, social and moral state. Melson et al. [36] used the same robotic dog as Kahn et al. [20], but compared the interaction between a child and the robotic dog to the interaction with a real dog. They interviewed the children about the same topics as Kahn et al. [19]. The younger children were more likely to believe that the robotic dog was real, but all children preferred the real dog over the robotic one.

Tung’s [52] research consisted of two experiments: in the first, children sorted pictures of robots in order of realism, and in the second, children answered questions about robots presented on paper. The results showed that younger children were more socially attracted to humanoid robots than older children. Physical attraction, however, was not affected by the age of the children.

Kahn et al. [19], Melson et al. [36] and Kahn et al. [20] agree in their conclusion that younger children are more likely to believe that the robot is real. The results of Tung’s [52] study do not line up with those of the other studies. This can possibly be explained by the methodology: in Tung’s experiment the children did not interact with robots, but were only shown pictures.

Gender

Another factor that influences the interaction between a robot and a child is the gender of the child, as shown by several studies [23, 36, 52].

According to Melson et al. [36], the interaction between a child and a robotic dog is not gender dependent, but boys and girls do think differently about the robot: boys are more likely to believe that the robotic dog is like a real dog. Tung [52] claims that girls find a more human-like robot socially and physically more attractive than boys do. Kanda et al. [23] studied how friendships form between children and a robot. They noted that the robot has to behave differently to establish a friendship with boys than with girls: for instance, the robot has to move towards girls in order to build a friendship, whereas with boys it has to look at their face.

Kanda et al. [23] conclude that girls were more attracted to the robot and that friendship was more easily established with girls than with boys. Kanda et al. [23] and Tung [52] agree in their conclusions, while Melson et al. [36] disagree, having found that gender has no influence on the interaction between robot and child. A possible explanation is that Melson et al. [36] used a robotic dog, whereas a humanoid robot was used in the other two studies; further research is needed to verify this.

Culture

The final human factor we will discuss is the influence of culture on child-robot interaction. To gauge this influence, we use two different studies [40, 47]. Shahid et al. [47] performed an experiment with Dutch and Pakistani children. The experiment consisted of playing a card game in which the children had to guess whether the next card in a row would be lower or higher than the previously drawn card. The children played this game individually, in pairs and together with a robot (iCat). When the game was played with the iCat, the robot gave instructions and explanations to the children. The Pakistani children had more fun while playing with the robot than the Dutch children. Both groups of children had the most fun when they played with a friend, closely followed by playing with the robot, and they had the least fun when they played alone.

Pearson and Borenstein [40] argue that the common opinion that the Japanese have a more positive attitude towards robots than Americans is incorrect. They argue that the attitudes of Japanese and American children towards robots do not differ much. With this in mind, they state that it is more important to focus on the behavior and movement of the robot in general than to adjust the robot to a certain culture.

3 Robot Factors

The robot itself is the main factor in the interaction between robot and child. Many factors contribute to the child’s perception of the robot, and these can be divided into two categories. First we discuss the factors that determine the appearance of the robot, followed by the influence of the robot’s behavior on the interaction.

Appearance

In this section, we examine the influence of robot appearance on child-robot interaction by looking at the different aspects that compose the appearance of the robot.

Uncanny Valley

Mori [38] described the correlation between how human a robot looks and how test subjects perceive its appearance. He discovered the uncanny valley effect: when the robot becomes too close in appearance to a real human, any imperfection is viewed very negatively. Children experience this uncanny effect as well, in a similar way as adults do [55]. A humanoid robot for children must therefore steer clear of the valley in order for the child to perceive its appearance positively. This can be done by exaggerating human characteristics, as seen on the NAO robot [48]. The last paragraph of this appearance section discusses these human characteristics for humanoid robots in more depth.

Different types of robots

Robots come in different shapes and sizes, each with its own purpose. This section looks at the effects these different types have on the interaction with a child. Although well researched, this area of human-robot interaction lacks a unifying overview or consensus. We will therefore sometimes use multiple studies to show different results on nearly identical topics.

Animal robot For children, animal robots are mostly used as pets. Fujita was one of the developers at Sony who created AIBO, a robotic dog that serves as a toy but also has educational purposes.

Fujita [13] states in his article about AIBO that children pay more attention to the behavior of the robotic dog than to its appearance. He also states that, compared to humanoid robots, children tend to make more physical contact with animal robots; this was concluded from research with child patients in a hospital.

Virtual robot Belpaeme et al. [3] researched the difference between children’s interaction with a physical robot and with a virtual robot by asking children to play a quiz game with each. The researchers tracked the kind of attention the child gave to different parts of the robot and concluded that children give more attention to physical robots.

A robot can also combine aspects of physical and virtual robots. Blomberg et al. [5], for example, created a robot head called “Furhat”, which has a physical head but a virtual (projected) face. In this way the developers steer away from the ‘Mona Lisa effect’ [43]. This effect occurs when three-dimensional objects are displayed on two-dimensional surfaces: such objects are perceived in the same way from every angle from which the perceiver looks at the surface. One consequence is that a virtual robot cannot use gaze direction to direct its attention [5]. Virtual robots can also suffer from the uncanny valley effect; Paetzel et al. [39] researched this for the Furhat with children and concluded that they do not experience this effect with this robot.

Humanoid robot Ishiguro et al. [17] developed a humanoid robot called Robovie, which was (in 2001) a high-end robot in the field of communication. Robovie was used in a field trial by Kanda et al. [21]. Their conclusion was to prefer a humanoid robot over other types of robots, because a “robot that possesses a humanoid body will be more successful at sustaining interaction because people see it as similar to themselves and that it interacts as they do” [22]. However, they also stated that further research would be needed to form a final conclusion about the best appearance for a robot.

Choice between virtual and physical robots Ligthart and Truong [33] concluded that the choice between a physical and a virtual robot depends on several aspects of the robot. When the robot’s task is physical, a physical robot is preferred. When the task is social in nature, the sociability of the robot is more important than the body type. The user’s initial attitude towards robots also has an influence: positive users have no significant preference between physical and virtual robots, whereas relatively negative users find the virtual robot more socially attractive.

The human characteristics for humanoid robots On a humanoid robot, multiple human characteristics can be magnified compared to the way they appear on humans. Examples are the big eyes and the big head that most humanoid robots have. Geller [14] states that, for animators, the big eyes tell the perceiver that the robot is not human. He also states that “realistic eyes are the key to avoiding the facial ‘creepiness’ associated with the uncanny valley” [14]. Kato et al. [24] state that the robot they created has big eyes for its emotional expressions. Rosenthal-von der Pütten and Krämer [44] found in their research that robots with these characteristics are considered “likable, non-threatening, submissive, not particularly human-like and somewhat mechanical” [45]. Following these findings, we conclude that these exaggerated characteristics have a positive effect on adults, and we assume that the same positive effect applies to children as well.

Behavior

In this section we give an overview of some of the ways that researchers have found to express emotion with a robot. We also briefly discuss why a designer would use emotion in the first place and when a robot should express it. Afterwards we discuss different ways researchers model robot behavior and the techniques to implement those models. Finally we provide some insight into control strategies for robots.

Improving social interaction by showing emotions

Before we explore the situations in which a robot should display emotion and how a robot could accomplish this, we first ask why we would do this at all. As emotions are a key part of behavior, they are an important topic to take into account when designing a robot. Breazeal [6] states that emotions are an important motivational system and play an important role in determining the behavioral reaction of the subject. In addition, Breazeal and Brooks [8] state that the emotion system of the robot they developed (Kismet) helps it behave and interact with people in a socially acceptable and natural manner. Furthermore, according to Kozima et al. [29], empathy plays an indispensable role in our social interaction and communication.

Because of the importance of empathy, one wonders how emotion and empathy relate and how a robot could use this. Kim et al. [25] found that robotic emotional expression using colored LEDs is effective. They changed the color of the robot’s head to that of a bruise in some spots when the robot was emotionally hurt, and when the robot received positive stimuli from the user it showed a neutral color or blushes on its cheeks. This means that the robot can influence the attitude of the user by using color as a way of expressing emotion. Following this, Kim et al. [26] concluded that humans felt the same empathy for their robot when it showed a bruise as when it used speech to convey its emotions.
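
A minimal Python sketch of such a color-based expression scheme is shown below. The robot interface (set_head_color) and the specific colors and thresholds are our own illustrative assumptions, not the implementation of Kim et al. [25].

    # Map the valence of the last user stimulus to a head color, loosely
    # following the cues described above: a bruise-like purple when the
    # robot is "hurt", a neutral tone or a blush otherwise.
    EMOTION_COLORS = {
        "hurt": (90, 60, 120),      # bruise-like purple
        "neutral": (245, 245, 240),
        "happy": (250, 200, 200),   # blush
    }

    def express_emotion(robot, stimulus_valence):
        """stimulus_valence in [-1, 1]: negative for hurtful stimuli."""
        if stimulus_valence < -0.3:
            emotion = "hurt"
        elif stimulus_valence > 0.3:
            emotion = "happy"
        else:
            emotion = "neutral"
        robot.set_head_color(*EMOTION_COLORS[emotion])  # hypothetical robot API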

Moreover, research by Hindriks et al. [16] suggests that children respond more positively to a socio-cognitive robot, which takes the mood of the child into account when making its decisions, than to an ego-responsive robot, which does not. A similar effect was found by Leite et al. [31]: their research suggests that children perceive an empathic robot more positively than a neutral robot.

The opportune moment to show emotions

Now that we have looked at why showing emotion during social interaction can yield positive results, we turn to when the robot should do this according to research. In order for a social robot to react to the child in an appropriate way, it has to be able to sense, process and interpret not only the information that the child gives directly, but also any verbal and non-verbal social cues that the child expresses [11].

Furthermore, it could be beneficial to increase the trustworthiness of the robot. Van den Brule et al. [53] found that signaling trustworthiness with behavioral warning cues is a promising, yet challenging, way to calibrate trust. Their research suggests that when the robot is uncertain about an action and communicates this to the user, it is seen as more trustworthy than when it does not give this information.
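
As an illustration, such a behavioral warning cue can be implemented as a simple confidence check before the robot acts. The sketch below is hypothetical: the robot interface (say, perform) and the threshold value are our assumptions, not taken from Van den Brule et al. [53].

    CONFIDENCE_THRESHOLD = 0.7  # illustrative value, not from the study

    def act_with_warning(robot, action, confidence):
        """Verbalize uncertainty before performing low-confidence actions."""
        if confidence < CONFIDENCE_THRESHOLD:
            robot.say("I am not entirely sure this is right, but I will try.")
        robot.perform(action)  # hypothetical robot API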

Next to that, people tend to react to their conversation partner by mimicking the emotion of that partner [15]. Although it is not clear why we do this, a robot could use this to its advantage during a conversation by mimicking the user’s emotion at the right time, potentially using social cues and affect recognition to detect the right moments.

A lot of research has been done into systems for affect recognition. Castellano et al. [11] found that eye gaze and smiles can be used to determine the level of engagement between a child and an iCat and whether this engagement was positive or negative. After humans identified specific nonverbal behaviors and their duration, the researchers were able to create a model that was well over 90% accurate [12].
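
The sketch below illustrates the general idea of such a feature-based engagement model: simple nonverbal observations collected over a time window are combined into a single engagement score. The features, weights and data structures are our own illustrative choices, not the actual model of Castellano et al. [11, 12].

    from dataclasses import dataclass

    @dataclass
    class Observation:
        looking_at_robot: bool  # gaze directed at the robot in this frame
        smiling: bool

    def engagement_score(window):
        """Combine gaze and smile frequencies over a window into a score in [0, 1]."""
        if not window:
            return 0.0
        gaze = sum(o.looking_at_robot for o in window) / len(window)
        smile = sum(o.smiling for o in window) / len(window)
        return 0.7 * gaze + 0.3 * smile  # illustrative weighting

    # Example: a mostly attentive child who smiles occasionally
    window = [Observation(True, False), Observation(True, True), Observation(False, False)]
    print(engagement_score(window))  # about 0.57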

Zhang et al. [56] have developed a system that uses facial expressions to detect basic emotions in real time, achieving slightly more than 70% accuracy. Speech can also be used for affect recognition: Tahon et al. [49] achieved between 50% and 60% accuracy in distinguishing anger, positive and neutral states based on short segments of children’s speech. However, since their corpus was relatively small, these results might be biased.

While there are systems that can detect basic emotions, better accuracy would be preferable. In addition, these systems can only distinguish between basic emotions. More research is necessary to determine if it is worthwhile to distinguish between more complex emotional states. There is also more research to be done into using these systems for autonomous recognition and decision systems in robots, and their effectiveness in child-robot interaction.

Possible ways to show emotion

Now that we have established that it can be beneficial for a robot to show emotion, we look at a few ways to let a robot convey emotion to a subject. Facial expression is one of the most studied ways of robotic emotional expression [10, 27, 46]. Other options for expressing emotion include, but are not limited to, posture [9, 51], gaze direction [6], body movements [7, 9, 37], color [25, 26, 51], sound [9] and voice [2, 26, 51]. Bethel and Murphy [4] give a more extensive overview of the factors in affective expression.

In addition, Kozima et al. [28] found that rhythmic play helps nonverbal communication and establishes engagement between child and robot. According to Bethel and Murphy [4], facial expression has been shown to be quite effective in the expression of affect, but some researchers feel that body movement and posture may reveal underlying emotions that would otherwise stay hidden. It may therefore be worth including one or more features other than facial expression to express emotion with, such as speech or color. This is strengthened by the apparent difficulty of implementing facial expressions; while other features may be even more difficult to implement, they might still be worth considering.

Modeling the behavior

There are multiple ways to create the behavior of a robot. The methods can be divided into three main groups: a scripted conversation, user modeling and the ’Wizard of Oz’ method. These methods are not mutually exclusive and can be combined; for example, a modeled conversation can contain scripted sections.

A scripted conversation usually responds to the environment in a very limited way, because creating a flexible script is very hard. Due to this limited responsiveness the child’s attention is easily lost, as the limits of the script are quickly discovered [3]. Because not all topics are predetermined in a dynamic environment and the conversation might not have a definite endpoint, this domain is not well scriptable [32]. This makes it hard to construct a fully scripted conversation in a complex, non-static environment; the method is therefore best applied to a conversation in a static environment or as part of a more advanced dialog system.
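
To make the limits of a script concrete, the sketch below implements a small scripted conversation as a table of states with fixed utterances and fixed transitions; any answer the script does not anticipate falls through to a default. The quiz content is invented for this example.

    # Each state maps to (robot utterance, transitions keyed by expected answers).
    SCRIPT = {
        "greet": ("Hi! Do you want to play a quiz?", {"yes": "question", "no": "goodbye"}),
        "question": ("What is the largest planet?", {"jupiter": "praise", "other": "retry"}),
        "retry": ("Not quite, try again!", {"jupiter": "praise", "other": "goodbye"}),
        "praise": ("Well done!", {}),
        "goodbye": ("Okay, maybe next time!", {}),
    }

    def run_script(get_user_input):
        state = "greet"
        while True:
            utterance, transitions = SCRIPT[state]
            print("Robot:", utterance)
            if not transitions:  # terminal state
                break
            answer = get_user_input().strip().lower()
            # Anything unanticipated falls back to "other" or ends the conversation.
            state = transitions.get(answer, transitions.get("other", "goodbye"))

    # run_script(input) runs the conversation on the console.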

Various studies have shown that adaptation of the robot’s behavior to the environment results in a better response of the child towards the robot [3]. This adaptation can be achieved using user models, which decide on the actions to take based on the input from the user and data gathered before the conversation.

The ’Wizard of Oz’ method, in which a human simply puppeteers the robot, produces even better social interaction, because robots are not yet advanced enough to have autonomous social interaction at a human level [41]. When this method is used, few children realize that the robot is controlled by a human [42]. Furthermore, it realizes a robot with near-human social skills, as it is a human limited only by the hardware of the chosen robot. The method also has the advantage that no time needs to be spent on creating complex behavioral models, which makes it very useful in the first phases of a project. The ’Wizard of Oz’ method is, however, not the best method in the later phases of a project and, ultimately, the production phase. In these phases, manually controlling the robot can be impractical or even unwanted, and one might question why an interactive robot was created in the first place if the final product is directly controlled by a human. This makes behavior based on an, often complex, model the best practice for creating a good dynamic conversation.

Control strategies

Many strategies exist that can be used to model the robot’s behavior. In its most simple form the conversation is linear and the behavior can be modeled as a simple procedural program. Many conversations, however, are non-linear and can be influenced by multiple factors. Allen et al. [1] created a classification hierarchy of techniques for modeling dialog. This hierarchy, shown in Table 1, sorts the techniques by the complexity of the tasks they are best suited for. As Lemon et al. [32] concluded, finite-state and frame-based dialog modeling cannot be used for a conversation in a dynamic environment, as such conversations are open-ended and must adapt to environmental changes. For a simple task, however, these simple models can be perfectly valid ways to control the robot, even though they only contain inflexible behavior patterns.
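
To illustrate the second row of Table 1, the sketch below shows a minimal frame-based dialog: the robot keeps asking for whichever slot of a task frame is still empty, regardless of the order in which information arrives. The train-information slots and prompts are invented for this example.

    def frame_dialog(ask):
        """Fill a task frame by prompting for each missing slot; ask(prompt) -> str."""
        frame = {"departure": None, "destination": None, "time": None}
        prompts = {
            "departure": "Where do you want to leave from?",
            "destination": "Where do you want to go?",
            "time": "When do you want to travel?",
        }
        while any(value is None for value in frame.values()):
            slot = next(s for s, v in frame.items() if v is None)
            frame[slot] = ask(prompts[slot])
        return frame

    # frame_dialog(input) keeps asking until all three slots are filled.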

Besides the methods listed by Allen et al. [1], it is also possible to incorporate self-learning in the behavior of the robot. Through operant conditioning the robot can learn to select suitable behavior autonomously [18]. This method is not suited for dialog modeling, but it can be used to show appropriate behavior during a dialog based on what the robot has learned about the user’s preferences.
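
The sketch below illustrates the general principle of operant-conditioning-style behavior selection: behaviors that elicit positive reactions from the child become more likely to be selected again. The behaviors, learning rate and reward values are illustrative assumptions, not the model of Itoh et al. [18].

    import random

    class BehaviorSelector:
        def __init__(self, behaviors, learning_rate=0.2):
            self.weights = {b: 1.0 for b in behaviors}  # start with equal preference
            self.learning_rate = learning_rate

        def select(self):
            """Sample a behavior with probability proportional to its weight."""
            total = sum(self.weights.values())
            probabilities = [w / total for w in self.weights.values()]
            return random.choices(list(self.weights), probabilities)[0]

        def reinforce(self, behavior, reward):
            """reward > 0 for a positive reaction from the child, < 0 for a negative one."""
            new_weight = self.weights[behavior] + self.learning_rate * reward
            self.weights[behavior] = max(0.05, new_weight)  # keep some exploration

    selector = BehaviorSelector(["wave", "nod", "tell_joke"])
    chosen = selector.select()
    selector.reinforce(chosen, reward=1.0)  # e.g. the child smiled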

4 Context Factors

Besides the factors related to the child and the robot, there are factors that originate from the context of the conversation. The following sections elaborate on the effects of relevance to the user (off-activity talk), contextual cues and group size, respectively. We also show some implications that the group size has for the capabilities needed by the robot.

Technique Used | Example Task | Task Complexity | Dialogue Phenomena Handled
Finite-state script | Long-distance calling | Least complex | User answers questions
Frame based | Getting train arrival and departure information | – | User answers questions
Sets of contexts | Travel booking agent | – | Shifts between predetermined topics
Plan-based models | Kitchen design consultant | – | Dynamically generated topic structures, collaborative negotiation subdialogues
Agent-based models | Disaster relief management | Most complex | Different modalities (for example, planned world and actual world)

Table 1: Table by Allen et al. [1] listing different techniques and the related task complexity.

Relevance to the user

One might expect that when a robot shows the same personality traits as the user, a bond between human and machine can be established more easily. This effect has, however, not been conclusively found in the literature. A study by Tapus and Matarić [50] found that users preferred a robot that matched their personality, but another study, by Woods et al. [54], did not find such a strong relation. It must be noted that these studies were done with adults; a study that tests the effect of a robot with a similar personality on children might yield different and more conclusive results. Children, however, assign themselves different personalities than adults assign to them [3], which makes it harder to select a robot personality that matches theirs.

A method that does have a significant effect on the bond between a robot and a child is to make the robot show interest in the factors that influence the child’s life. In a study by Kruijff-Korbayová et al. [30], a robot showed interest, during the main conversation, in the child’s diabetes by asking questions about it. This off-activity talk (OAT) about the life of the child was appreciated: the researchers noticed that the children were willing to share their experiences with their disease with the robot. Despite the interest shown by the robot, the OAT group did not perceive the robot more positively than the non-OAT group. The OAT group did, however, book a next session with the robot more often than participants in the non-OAT group, even though both groups indicated that they wanted a next session. This shows that a robot that shows interest in the factors that are relevant in a child’s life can improve the engagement the child feels toward the robot, and therefore the bond between them.

Contextual cues

By providing the child with more (social) cues about the capabilities of a robot, the interaction can be improved. This tactic can be applied more easily to some types of robots, for example animal-shaped ones, than to others. Robins et al. [42] used this technique with a robotic pet dog. They found that providing the child with extra cues such as a dog house, a ball, a bone and a bowl increased the level of interactivity between the child and the robot dog. They also suspect a relation between how natural these cues are and how much they influence the interaction: for example, two bones were provided, but all children solely used the natural-looking bone over the pink-colored one. Providing more contextual cues can also raise the child’s expectations about the capabilities of the robot [42]. It is therefore advised to limit the provided cues to ones that are supported by the behavior of the robot, as other cues will only raise expectations without the robot delivering a reaction.

Individual versus group interaction

Another contextual factor that influences the interaction between a child and a robot is the number of people that engage in the interaction. In a question-answer conversation, interacting with a group instead of an individual helps the conversation reach answers more effectively and therefore makes it more natural [5].

The effect of group size on the interaction between a robot and humans (and more specifically children) is a topic that is not yet covered by existing research. Existing studies all focus on either individual or group interactions, which makes this comparison a topic for future research. From the studies that focus on interaction with a group, we can see which factors need to be addressed in this type of conversation.

To engage in a group conversation, a robot needs the capability to determine who is speaking (and to whom) and must use this information to respond correctly to the situation [35]. Kanda et al. solve this by looking at the distances between the people in the crowd and the robot. Another way to direct the robot’s attention is sound localization via a head-related transfer function [34], although this may be hard in crowded areas with much background noise.
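
The distance-based approach can be illustrated with a small sketch that picks the closest tracked person as the robot’s current addressee; the positions are assumed to come from the robot’s people tracker, which is hypothetical here.

    import math

    def closest_person(robot_position, people):
        """people: mapping of person id -> (x, y) position in meters."""
        def distance(position):
            return math.hypot(position[0] - robot_position[0],
                              position[1] - robot_position[1])
        return min(people, key=lambda pid: distance(people[pid]))

    # Example: "ben" stands closest to the robot at the origin.
    print(closest_person((0.0, 0.0), {"anna": (1.2, 0.5), "ben": (0.4, 0.3)}))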

5 Recommendations

In this section we will give an overview of the factors that are most important for the first interaction between a child and a robot, followed by a recommendation for further research.

In the area of human factors, age and gender are the most influential, while cultural differences do not seem to have a large impact on the interaction. In terms of age, younger children are more socially attracted to a humanoid robot than older children. This could be because current robots are not mature enough and lack the capabilities to be attractive to older children, which would make younger children a better target audience at this point in time.

When it comes to gender, the robot should act slightly differently when trying to become friends with boys than with girls. Research shows that girls prefer the robot to approach them, while boys prefer the robot to look at their face much more. There are probably more of these subtle differences that all affect how comfortable the interaction is, but the two factors mentioned provide a good starting point. More research is required to build a more complete model of the differences in interaction.

In the area of robot factors, there are two key factors to keep in mind: the appearance of the robot and the behavior of the robot.

For the appearance of the robot, the decision between a virtual and a physical robot depends on the task the robot will perform, the sociability of the robot and the users’ initial attitude towards robots. The expressive capabilities of a virtual robot are greater than those of current physical robots; for example, a virtual robot can easily show many different facial expressions, while current physical robots offer far less control. When physical contact is the main focus, animal robots can be preferable. In all other cases, a humanoid robot should be preferred. Facial features of the robot should be realistically human-like, but the robot should be clearly distinguishable from a human to avoid the uncanny valley effect.

In terms of behavior, research shows that the robot should be able to express emotions, for which facial expressions are deemed most important. We found that it is preferable to combine facial expressions with at least one other technique for expressing emotions. For best results, the robot should adapt its behavior based on the environment and on the affective state of the child. For this, the robot needs to be able to perceive this affective state, for which several systems exist. The most promising is facial expression recognition, but a multi-modal combination of systems would be best.

Finally, there are also large contextual factors that are important for child-robot interaction: relevance to the user, contextual cues and group size. Children like the robot more if it shows interest in them. It is not yet clear from the literature whether matching the robot’s personality to that of the child has a significant positive effect, as conflicting results have been found.

Contextual cues do seem to have a positive effect on the interaction, but providing more contextual cues also raises the children’s expectations. The number of contextual cues should therefore be matched to the capabilities of the robot. When the robot is meant to interact with a group, it needs additional capabilities to detect which member(s) of the group are speaking in order to engage in social interaction with the group.

A lot of research into child-robot interaction features children playing a game with a robot (chess, Go Fish, etc.). We believe that there can be a lot of value in experiments where robots are placed in a real-life context. Furthermore, topics such as exaggerated human characteristics and shared personality traits have not yet been studied with a focus on children. The effects found for adults need not be the same for children, especially younger children. Therefore, more research must be done on those topics.

We expect that robots that can autonomously adapt their behavior based on their own interaction models, and that use the affective state of the child in their decision making, will perform much better than robots that do not. However, more research is necessary, as this aspect of the field is still in its infancy. There are already various systems for perceiving affective states, but few studies actually apply these systems in a (semi-)autonomous decision system for the robot.

6 Conclusion

In this paper we elaborated on the findings of our literature study in the field of child-robot interaction. We identified many factors, with respect to the robot, the child and the context of the interaction, that affect the interaction between a child and a robot. From our findings we created a set of recommendations for developers of robots that need to interact with children, giving them an overview of the factors that are important to consider. Our survey also identified some aspects that are not yet thoroughly covered by research or that have only been studied with adults. These areas can therefore benefit from more research.

References

[1] James F. Allen, Donna K. Byron, Myroslava Dzikovska, George Ferguson, Lucian Galescu, and Amanda Stent. 2001. Toward conversational human-computer interaction. AI Magazine 22, 4 (2001), 27.

[2] A. Batliner, S. Steidl, B. Schuller, D. Seppi, T. Vogt, J. Wagner, L. Devillers, L. Vidrascu, V. Aharonson, L. Kessous, and N. Amir. 2011. Whodunnit – Searching for the most important feature types signalling emotion-related user states in speech. Computer Speech and Language 25, 1 (2011), 4–28. DOI:http://dx.doi.org/10.1016/j.csl.2009.12.003

[3] Tony Belpaeme, Paul E. Baxter, Robin Read, Rachel Wood, Heriberto Cuayáhuitl, Bernd Kiefer, Stefania Racioppa, Ivana Kruijff-Korbayová, Georgios Athanasopoulos, Valentin Enescu, and others. 2012. Multimodal child-robot interaction: Building social bonds. Journal of Human-Robot Interaction 1, 2 (2012), 33–53.

[4] C. L. Bethel and R. R. Murphy. 2008. Survey of Non-facial/Non-verbal Affective Expressions for Appearance-Constrained Robots. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 38, 1 (Jan 2008), 83–92. DOI:http://dx.doi.org/10.1109/TSMCC.2007.905845

[5] Mats Blomberg, Gabriel Skantze, Samer Al Moubayed, Joakim Gustafson, Jonas Beskow, and Björn Granström. 2012. Children and adults in dialogue with the robot head Furhat – corpus collection and initial analysis. In WOCCI. 87–91.

[6] Cynthia Breazeal. 2003. Emotion and sociable humanoid robots. International Journal of Human-Computer Studies 59, 1–2 (2003), 119–155. DOI:http://dx.doi.org/10.1016/S1071-5819(03)00018-1

[7] C. Breazeal. 2004. Social interactions in HRI: The robot view. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews 34, 2 (2004), 181–186. DOI:http://dx.doi.org/10.1109/TSMCC.2004.826268

[8] Cynthia Breazeal and Rodney Brooks. 2005. Robot emotion: A functional perspective. Who Needs Emotions (2005), 271–310.

[9] Cynthia Breazeal and Paul Fitzpatrick. 2000. That certain look: Social amplification of animate vision. In Proceedings of the AAAI Fall Symposium on Society of Intelligence Agents – The Human in the Loop.

[10] L. Canamero and J. Fredslund. 2001. I show you how I like you – can you read it in my face? IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans 31, 5 (Sep 2001), 454–459. DOI:http://dx.doi.org/10.1109/3468.952719

[11] Ginevra Castellano, Iolanda Leite, André Pereira, Carlos Martinho, Ana Paiva, and Peter W. McOwan. 2010. Affect recognition for interactive companions: challenges and design in real world scenarios. Journal on Multimodal User Interfaces 3, 1 (2010), 89–98. DOI:http://dx.doi.org/10.1007/s12193-009-0033-5

[12] Ginevra Castellano, André Pereira, Iolanda Leite, Ana Paiva, and Peter W. McOwan. 2009. Detecting User Engagement with a Robot Companion Using Task and Social Interaction-based Features. In Proceedings of the 2009 International Conference on Multimodal Interfaces (ICMI-MLMI ’09). ACM, New York, NY, USA, 119–126. DOI:http://dx.doi.org/10.1145/1647314.1647336

[13] M. Fujita. 2004. On Activating Human Communications With Pet-Type Robot AIBO. Proceedings of the IEEE 92, 11 (2004), 1804–1813.

[14] T. Geller. 2008. Overcoming the Uncanny Valley. (2008), 12–15.

[15] Ursula Hess and Sylvie Blairy. 2001. Facial mimicry and emotional contagion to dynamic emotional facial expressions and their influence on decoding accuracy. International Journal of Psychophysiology 40, 2 (2001), 129–141.

[16] Koen Hindriks, Mark A. Neerincx, and Mirek Vink. 2012. The iCat as a Natural Interaction Partner. Springer Berlin Heidelberg, Berlin, Heidelberg, 212–231. DOI:http://dx.doi.org/10.1007/978-3-642-27216-5_14

[17] H. Ishiguro, T. Ono, M. Imai, T. Maeda, T. Kanda, and R. Nakatsu. 2001. Robovie: an interactive humanoid robot. (2001), 498–504.

[18] Kazuko Itoh, Hiroyasu Miwa, Munemichi Matsumoto, Massimiliano Zecca, Hideaki Takanobu, Stefano Roccella, Maria Chiara Carrozza, Paolo Dario, and Atsuo Takanishi. 2005. Behavior model of humanoid robots based on operant conditioning. In Humanoid Robots, 2005, 5th IEEE-RAS International Conference on. IEEE, 220–225.

[19] P. H. Kahn, T. Kanda, H. Ishiguro, N. G. Freier, R. L. Severson, B. T. Gill, J. H. Ruckert, and S. Shen. 2012. “Robovie, you’ll have to go into the closet now”: Children’s social and moral relationships with a humanoid robot. Developmental Psychology 48, 2 (2012), 303–314. DOI:http://dx.doi.org/10.1037/a0027033

[20] Peter H. Kahn, Jr., Batya Friedman, Deanne R. Perez-Granados, and Nathan G. Freier. 2004. Robotic Pets in the Lives of Preschool Children. In CHI ’04 Extended Abstracts on Human Factors in Computing Systems (CHI EA ’04). ACM, New York, NY, USA, 1449–1452. DOI:http://dx.doi.org/10.1145/985921.986087

[21] Takayuki Kanda, Takayuki Hirano, Daniel Eaton, and Hiroshi Ishiguro. 2004. Interactive robots as social partners and peer tutors for children: A field trial. Human-Computer Interaction 19, 1 (2004), 61–84.

[22] Takayuki Kanda, Takayuki Hirano, Daniel Eaton, and Hiroshi Ishiguro. 2004. Interactive robots as social partners and peer tutors for children: A field trial. Human-Computer Interaction 19, 1 (2004), 79.

[23] Takayuki Kanda, Shogo Nabe, Kazuo Hiraki, Hiroshi Ishiguro, and Norihiro Hagita. 2008. Human friendship estimation model for communication robots. Autonomous Robots 24, 2 (2008), 135–145. DOI:http://dx.doi.org/10.1007/s10514-007-9052-9

[24] S. Kato, S. Ohshiro, H. Itoh, and K. Kimura. 2004. Development of a communication robot Ifbot. (2004), 697–702.

[25] Eun Ho Kim, Sonya S. Kwak, Kyung Hak Hyun, Soo Hyun Kim, and Yoon Keun Kwak. 2009. Design and development of an emotional interaction robot, Mung. Advanced Robotics 23, 6 (2009), 767–784.

[26] E. H. Kim, S. S. Kwak, and Y. K. Kwak. 2009. Can robotic emotional expressions induce a human to empathize with a robot?. In RO-MAN 2009 – The 18th IEEE International Symposium on Robot and Human Interactive Communication. 358–362. DOI:http://dx.doi.org/10.1109/ROMAN.2009.5326282

[27] H. Kobayashi and F. Hara. 1995. A basic study on dynamic control of facial expressions for Face Robot. In Proceedings of the 4th IEEE International Workshop on Robot and Human Communication. 275–280. DOI:http://dx.doi.org/10.1109/ROMAN.1995.531972

[28] Hideki Kozima, Marek P. Michalowski, and Cocoro Nakagawa. 2009. Keepon. International Journal of Social Robotics 1, 1 (2009), 3–18. DOI:http://dx.doi.org/10.1007/s12369-008-0009-8

[29] Hideki Kozima, Cocoro Nakagawa, and Hiroyuki Yano. 2004. Can a robot empathize with people? Artificial Life and Robotics 8, 1 (2004), 83–88. DOI:http://dx.doi.org/10.1007/s10015-004-0293-9

[30] Ivana Kruijff-Korbayová, Elettra Oleari, Anahita Bagherzadhalimi, Francesca Sacchitelli, Bernd Kiefer, Stefania Racioppa, Clara Pozzi, and Alberto Sanna. 2015. Young Users’ Perception of a Social Robot Displaying Familiarity and Eliciting Disclosure. In International Conference on Social Robotics. Springer, 380–389.

[31] Iolanda Leite, Ginevra Castellano, André Pereira, Carlos Martinho, and Ana Paiva. 2012. Modelling empathic behaviour in a robotic game companion for children: an ethnographic study in real-world settings. In Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction. ACM, 367–374.

[32] Oliver Lemon, Anne Bracy, Alexander Gruenstein, and Stanley Peters. 2003. An information state approach in a multi-modal dialogue system for human-robot conversation. Pragmatics and Beyond New Series (2003), 229–242.

[33] Mike Ligthart and Khiet P. Truong. 2015. Selecting the right robot: Influence of user attitude, robot sociability and embodiment on user preferences. (2015), 682–687.

[34] Yosuke Matsusaka, Tsuyoshi Tojo, Sentaro Kubota, Kenji Furukawa, Daisuke Tamiya, Keisuke Hayata, Yuichiro Nakano, and Tetsunori Kobayashi. 1999. Multi-person conversation via multi-modal interface – a robot who communicates with multi-user. In EUROSPEECH, Vol. 99. 1723–1726.

[35] Yosuke Matsusaka, Tsuyoshi Tojo, and Tetsunori Kobayashi. 2003. Conversation robot participating in group conversation. IEICE Transactions on Information and Systems 86, 1 (2003), 26–36.

[36] Gail F. Melson, Peter H. Kahn, Jr., Alan Beck, Batya Friedman, Trace Roberts, Erik Garrett, and Brian T. Gill. 2009. Children’s behavior toward and understanding of robotic and living dogs. Journal of Applied Developmental Psychology 30, 2 (2009), 92–102. DOI:http://dx.doi.org/10.1016/j.appdev.2008.10.011

[37] H. Miwa, T. Okuchi, H. Takanobu, and A. Takanishi. 2002. Development of a new human-like head robot WE-4. IEEE International Conference on Intelligent Robots and Systems 3 (2002), 2443–2448.

[38] Masahiro Mori. 2005. The Uncanny Valley. Energy 7, 4 (2005), 33–35.

[39] M. Paetzel, C. Peters, I. Nyström, and G. Castellano. 2016. Effects of Multimodal Cues on Children’s Perception of Uncanniness in a Social Robot. In Proceedings of the 18th ACM International Conference on Multimodal Interaction. 297–301.

[40] Yvette Pearson and Jason Borenstein. 2014. Creating “companions” for children: the ethics of designing esthetic features for robots. AI & Society 29, 1 (2014), 23–31. DOI:http://dx.doi.org/10.1007/s00146-012-0431-1

[41] Laurel D. Riek. 2012. Wizard of Oz studies in HRI: a systematic review and new reporting guidelines. Journal of Human-Robot Interaction 1, 1 (2012).

[42] Ben Robins, Kerstin Dautenhahn, Chrystopher L. Nehaniv, Naeem Assif Mirza, Dorothée François, and Lars Olsson. 2005. Sustaining interaction dynamics and engagement in dyadic child-robot interaction kinesics: Lessons learnt from an exploratory study. In Robot and Human Interactive Communication, 2005 (RO-MAN 2005), IEEE International Workshop on. IEEE, 716–722.

[43] S. Rogers, M. Lunsford, L. Strother, and M. Kubovy. 2003. The Mona Lisa effect: Perception of gaze direction in real and pictured faces. In Studies in Perception and Action VII. 19–24.

[44] Astrid M. Rosenthal-von der Pütten and Nicole C. Krämer. 2014. How design characteristics of robots determine evaluation and uncanny valley related responses. Computers in Human Behavior 36 (2014), 422–439.

[45] Astrid M. Rosenthal-von der Pütten and Nicole C. Krämer. 2014. How design characteristics of robots determine evaluation and uncanny valley related responses. Computers in Human Behavior 36 (2014), 430.

[46] Jelle Saldien, Kristof Goris, Bram Vanderborght, Johan Vanderfaeillie, and Dirk Lefeber. 2010. Expressing Emotions with the Social Robot Probo. International Journal of Social Robotics 2, 4 (2010), 377–389. DOI:http://dx.doi.org/10.1007/s12369-010-0067-6

[47] Suleman Shahid, Emiel Krahmer, and Marc Swerts. 2014. Child-robot interaction across cultures: How does playing a game with a social robot compare to playing a game alone or with a friend? Computers in Human Behavior 40 (2014), 86–100. DOI:http://dx.doi.org/10.1016/j.chb.2014.07.043

[48] S. Shamsuddin, H. Yussof, L. Ismail, F. A. Hanapiah, S. Mohamed, and H. A. Piah. 2012. Initial Response of Autistic Children in Human-Robot Interaction Therapy with Humanoid Robot NAO. Signal Processing and its Applications (CSPA) 8 (2012), 188–193.

[49] Marie Tahon, Agnes Delaborde, and Laurence Devillers. 2011. Real-life emotion detection from speech in human-robot interaction: Experiments across diverse corpora with child and adult voices. In Interspeech.

[50] Adriana Tapus and Maja J. Matarić. 2008. User personality matching with a hands-off robot for post-stroke rehabilitation therapy. In Experimental Robotics. Springer, 165–175.

[51] M. Tielman, M. Neerincx, J.-J. Meyer, and R. Looije. 2014. Adaptive emotional expression in robot-child interaction. In ACM/IEEE International Conference on Human-Robot Interaction. 407–414. DOI:http://dx.doi.org/10.1145/2559636.2559663

[52] Fang-Wu Tung. 2011. Influence of Gender and Age on the Attitudes of Children towards Humanoid Robots. Springer Berlin Heidelberg, Berlin, Heidelberg, 637–646. DOI:http://dx.doi.org/10.1007/978-3-642-21619-0_76

[53] Rik van den Brule, Gijsbert Bijlstra, Ron Dotsch, Daniël H. J. Wigboldus, and Pim Haselager. 2013. Signaling Robot Trustworthiness: Effects of Behavioral Cues as Warnings. LNCS 8239 (2013), 583–584.

[54] Sarah Woods, Kerstin Dautenhahn, Christina Kaouri, R. Boekhorst, and Kheng Lee Koay. 2005. Is this robot like me? Links between human and robot personality traits. In Humanoid Robots, 2005, 5th IEEE-RAS International Conference on. IEEE, 375–380.

[55] S. Woods, K. Dautenhahn, and J. Schultz. 2004. The design space of robots: Investigating children’s views. In Proceedings of the 2004 IEEE International Workshop on Robot and Human Interactive Communication. 20–22.

[56] Li Zhang, Ming Jiang, Dewan Farid, and M. Alamgir Hossain. 2013. Intelligent facial emotion recognition and semantic-based topic detection for a humanoid robot. Expert Systems with Applications 40, 13 (2013), 5160–5168.
