This section presents the results of our review of scientific contributions dealing with intelligent systems that adopt politeness strategies in interactions with humans. Such works have been grouped into two major categories: the first contains works that primarily study politeness strategies as factors influencing human trust toward intelligent machines; the second includes works that study the role of politeness in the acceptance of intelligent systems in different social contexts.
5.1 Trust
Advances in AI are making intelligent machines increasingly able to perform complex tasks. Trust represents a crucial factor in determining humans’ propensity to rely on such systems in situations characterized by high uncertainty. Lee and See (2004) define trust as “the attitude that an agent will help achieve an individual’s goals in a situation characterized by uncertainty and vulnerability,” an attitude that can be influenced by several factors.
Several researchers have shown that the capacity of intelligent machines to exhibit polite behaviors in appropriate situations is among the factors that may lead human partners to recognize them as trustworthy collaborators.
Miller (2005) and Parasuraman and Miller (2004) argue that etiquette is closely tied to what is traditionally interpreted as polite or rude behavior. Thus, machine etiquette also has the power to elicit positive or negative reactions in individuals. Such works are grounded in the affective basis of trust (Lee and See 2004), which is closely related to the affect generated by and toward an entity: people tend to be trustful toward something that is perceived to be pleasant. The experiments conducted by Miller explored the role of politeness by considering two communication styles (polite and impolite) that a system (a flight simulator) may adopt during an interaction with an individual, under both high- and low-reliability system conditions. When adopting the polite style, the system is patient and not interruptive: it only issues a new query once the user has finished the current one. The system behaves the opposite way when adopting the impolite style. The results showed that a polite communication style improves user trust under both low and high system reliability. Conversely, impolite communication increased the perceived user workload under both high- and low-reliability conditions.
Spain and Madhavan (
2009), similarly to Miller, conducted a study to determine the effects of machine etiquette on perceived system trust. Etiquette was manipulated by tuning the politeness level adopted by the device across three conditions: polite, neutral, and rude. The study showed that participants trusted the polite system more than the rude one. Participants also perceived the polite system as more reliable than the neutral and rude ones, even though all systems were equally reliable.
Lee and Lee (2022) propose the use of politeness to encourage driver-vehicle interaction by improving drivers’ trust in autonomous vehicles. The study outlines several practical implications for vehicles’ speech-interaction strategies. By including politeness in speech interface design, vehicles were perceived as more sociable and trustworthy, and their requests were more actionable and more readily accepted by drivers. The higher the perceived politeness of the vehicle, the higher the trust in the vehicle’s objective performance and personal kindness.
Terada et al. (2021) investigated whether differences in the politeness strategies used by a virtual agent impact negotiated outcomes. The participants engaged in an online negotiation with one of three agents that adopted positive, off-record, or no politeness strategies, respectively. Results showed that agents using the off-record strategy obtained greater concessions from their human counterparts. In contrast, positive politeness, which does not threaten the other’s face, led to fairer negotiated agreements.
Firdaus et al. (2022) proposed an approach for inducing politeness in dialogue generation based on the user profile. The work presents a response generation framework that employs a reinforced deliberation decoder to fine-tune responses so that politeness is introduced according to the personalized user information and the dialogue context. Results show that politeness levels differ across age groups and genders in a personalized dialogue agent, for which politeness is essential for obtaining the needed information.
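To make the idea of profile-conditioned politeness concrete, the following minimal Python sketch mimics the two-pass intuition behind such architectures: a first pass drafts the content of a reply, and a second pass re-renders it under a politeness level derived from the user profile. It is an illustrative assumption of ours, not Firdaus et al.’s reinforced deliberation decoder; the profile fields, control tokens, and phrasings are invented for the example.

from dataclasses import dataclass

@dataclass
class UserProfile:
    # Illustrative attributes; the personalized information actually used by
    # Firdaus et al. (2022) may differ.
    age_group: str   # e.g. "young", "adult", "senior"
    gender: str

def politeness_level(profile: UserProfile) -> str:
    """Toy heuristic mapping a user profile to a politeness control token."""
    return "<polite_high>" if profile.age_group == "senior" else "<polite_mid>"

def draft_response(context: str) -> str:
    """First pass: produce the bare content of the reply (stubbed here)."""
    return "your appointment is at 3 pm"

def polish_response(draft: str, token: str) -> str:
    """Second pass: re-render the draft under the politeness control token."""
    prefixes = {
        "<polite_high>": "Could I kindly remind you that ",
        "<polite_mid>": "Just a reminder: ",
    }
    return prefixes[token] + draft + "."

def generate(context: str, profile: UserProfile) -> str:
    return polish_response(draft_response(context), politeness_level(profile))

print(generate("reminder request", UserProfile("senior", "female")))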
Jucks et al. (
2018) explored how students assess the communicative behavior of a smart speaker that implements both polite and rude social strategies. The study’s participants were relatively young and had technical expertise. Moreover, a simulated indirect communication scenario was employed in the experiment: the participants were passive observers of the system and did not engage in direct interaction with it; their engagement was limited to listening to the system’s generated responses. Under these conditions, the study revealed that polite responses were regarded as more appropriate and pleasant than rude ones. The polite speaker was also evaluated as more accurate, albeit not as having a superior level of expertise compared to the impolite system. Moreover, nearly all social assessments, including likability and the goodwill aspect of trustworthiness, were judged more positively for the polite speaker. Nevertheless, no differences were found in evaluations of expertise, which is viewed as a more content-related aspect of trustworthiness, even though the system’s accuracy, also a content-related aspect, was deemed higher for the polite system.
Similar results have been found in social robotics. Social robots (Fong et al. 2003) are autonomous robots, often endowed with an anthropomorphic appearance, that interact and communicate with humans by following social behaviors and rules designed for the work they have to perform. They have been conceived to be introduced into daily life to perform several kinds of activities in different application domains. The intuitive trust people feel toward a robot is a determining factor in whether an interaction between the individual and the machine can be established. Several researchers argue that humans are more willing to trust, cooperate, and interact with robots that behave politely toward them.
Tsui et al. (2010) investigated the level of trust an observer perceives toward a robot in a corridor-passing scenario. The study aims to understand how a robot should behave in front of a human who needs to move through the same space. In particular, the study evaluates the bystander’s expectations about the robot’s adherence to social protocol and the overall trust in the robot to do the right thing. The experimental results showed that people found the robot’s stop behavior to be the most polite. Participants considered the stop and slow behaviors more trustworthy than the neutral or fast ones. People expect robots to move in the same polite manner as people do: robots should continue at a relatively constant speed unless there is a need to yield, in which case the robot should slow down or stop.
Hendriks et al. (
2011) investigated what type of personality people desire for a robotic cleaner. Since the autonomy of such robots allows them to move around homes without instructions, people need to trust them. The authors found that people prefer a calm, polite, cooperative robot vacuum cleaner that works efficiently and follows routines. Once users get to know their robot’s personality, they can interact with it appropriately and predict how it will respond.
Srinivasan and Takayama (2016) examined the effectiveness of different politeness strategies adopted by a robot to obtain human help in performing its tasks. Following politeness theory, the experiment considered three factors that can influence helping behavior: the robot’s status, familiarity, and the size of its request. The authors found that people were more willing to comply with robot requests that required less effort and that used a positive politeness strategy.
Inbar and Meyer (
2015,
2019) conducted experiments using static images of a robot that performs an access-control task and interacts with people (younger and older, male and female) by applying polite or impolite behavior. The study showed that politeness can be a crucial determinant of people’s perception of peacekeeping robots. The results also highlighted that the age and gender of the people interacting with the robot had no significant effect on participants’ impressions of the robot’s attributes.
Ramachandran and Lim (
2021) investigated the design of a robot performing nursing tasks in hospitals. The robot is designed with animated eyes, a voice with a local accent, and context-appropriate polite phrases to mimic the behavior of nurses engaging with patients. Results show that users are more likely to perceive the robot as trustworthy if it communicates politely.
The study in Babel et al. (
2022b) aimed to investigate how an autonomous service robot can assert itself when making a request in a human–robot conflict situation. In particular, the authors examine whether polite and assertive request sequences that follow human politeness norms lead to higher compliance than sequences that do not, and whether they are rated as more acceptable, polite, and trustworthy. Regarding compliance, participants were reluctant to follow the robot’s request in the domestic context: most participants considered their own tasks more important than those of the robot. Regarding the ratings, strategies that adhered to human politeness norms were more accepted and rated as more polite and trustworthy than those contradicting them in the first trial. During a second trial, the evaluations of the strategies violating human politeness norms became more positive; as the authors claim, this could be due to habituation to the robot’s requests. Novelty effects are common in HRI experiments, especially when participants have limited prior experience with robots.
Finally, the study proposed in Kumar et al. (
2022) focuses on the impact of social robots’ politeness during interactions with humans, using Lakoff’s politeness rules (Lakoff 1973), namely: (i) refraining from coercively imposing one’s actions or perspectives on others, (ii) providing alternative options to enable individual decision-making, and (iii) generating a sense of parity among the involved parties. The study’s primary objective is to assess the impact of polite robot behaviors on users’ subjective perceptions of enjoyment, trust, and satisfaction while using distinct categories of non-humanoid robots, namely a robot arm and a mobile robot, under diverse conditions (video and live settings) and among diverse age groups, including both young and older adults. According to the findings, robots that adhered to Lakoff’s politeness rules obtained superior ratings in all three dependent variables (enjoyment, satisfaction, and trust) compared to the other politeness levels. Moreover, participants were more satisfied with the interpersonal engagement in the live mode and perceived it as significantly more trustworthy than the video format. Furthermore, participants experienced greater satisfaction when engaging with the mobile robot than with the manipulator. A notable disparity was found between younger and older participants’ preferences: the robots following all three rules were the most favored option among the younger participants.
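As a rough illustration of how Lakoff’s three rules can be operationalized in a robot’s requests, the following Python sketch wraps a base request with phrasings that respect each rule, alongside a direct baseline. It is an illustrative assumption of ours, not Kumar et al.’s implementation; the wordings are invented for the example.

def lakoff_polite_request(action: str) -> str:
    """Compose a request that (i) does not impose, (ii) offers options,
    and (iii) signals parity between robot and user."""
    no_imposition = f"If it is not too much trouble, could you {action}?"  # rule (i)
    give_options = "Feel free to say no or to do it later."                # rule (ii)
    camaraderie = "We can sort this out together."                         # rule (iii)
    return " ".join([no_imposition, give_options, camaraderie])

def direct_request(action: str) -> str:
    """Baseline phrasing that follows none of the three rules."""
    return f"{action.capitalize()} now."

print(lakoff_polite_request("hand me the red block"))
print(direct_request("hand me the red block"))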
5.2 Social acceptance
Acceptance of technology is a concept that indicates the willingness within a group of users to employ IT tools for the tasks they are designed to support (Dillon
2001). The more pervasive AI systems become, the more critical their acceptance is, especially for technology that has no direct and immediate utility for an individual. Therefore, to deploy AI systems effectively, it is crucial to understand the factors that motivate potential users to embrace these technologies in their day-to-day routines. In particular, human–machine interactions have been transformed over the past decades: from simple interactions for achieving functional objectives, we have moved toward interactions incorporating emotional responses, thus providing positive experiences related to the use of technology (Norman
2004).
Nowadays, utilitarian and hedonic views are considered equally important for studying technology acceptance. Utilitarian variables are attributes connected to the usability of a technological device, whereas hedonic variables are connected to the experience of using it. Acceptance models such as the Technology Acceptance Model (Davis
1998) and the Unified Theory of Acceptance and Use of Technology (Venkatesh et al.
2003) initially identified utilitarian factors (such as usefulness and ease of use) as the critical elements of user acceptance of technology. Such models have been extended to incorporate additional variables such as subjective norms, computer playfulness, and enjoyment (Venkatesh and Davis
2000; Venkatesh and Bala
2008). In particular, several studies in human–robot interactions point to the relevance of including the hedonic view in evaluating robots (Heerink et al.
2010; Weiss et al.
2011; De Graaf and Allouch
2013). Such works show that enjoyment is a crucial variable for social robot acceptance, directly influencing the intention to utilize a robot and its perceived ease of use. The perception of engaging with a social entity is pivotal to the positive reception of technology.
Notably, it has been demonstrated that adopting politeness strategies can further enhance the acceptance of robotic systems. Several works have studied the effect of robot politeness during user interactions. As in human–human interaction, variations in the robot’s politeness may cause considerable differences in how the robot is perceived.
Smith et al. (
2022) claim that, to be successfully integrated into society, social robots (especially those designed as sociable partners) are expected to behave according to human social norms, especially politeness norms. Hence, their work focuses on understanding how people adhere to different politeness norms in intention- and context-sensitive ways, in order to use these insights to design social robots. In particular, the authors focused on determining when it is appropriate to use indirect language, i.e., Indirect Speech Acts, in which, as previously said, the utterance’s literal meaning does not match its intended meaning. The results of this study indicate that speakers typically use direct speech if any of the following conditions are met (and employ indirect speech in all other scenarios): (i) the utterance is an acknowledgment; (ii) there is potential for harm; (iii) the utterance is directed at oneself; (iv) the utterance requests (rather than provides) information.
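The reported pattern can be summarized as a simple decision rule. The Python sketch below returns “direct” when any of the four conditions holds and “indirect” otherwise; the boolean utterance features are our own illustrative abstraction, not an artifact of the original study.

from dataclasses import dataclass

@dataclass
class Utterance:
    is_acknowledgment: bool     # e.g. "thanks", "got it"
    potential_for_harm: bool    # e.g. a safety-critical warning
    directed_at_self: bool      # the speaker talks about their own action
    requests_information: bool  # asks for rather than provides information

def speech_style(u: Utterance) -> str:
    """Direct speech if any condition (i)-(iv) is met, indirect otherwise."""
    if (u.is_acknowledgment or u.potential_for_harm
            or u.directed_at_self or u.requests_information):
        return "direct"
    return "indirect"

# A request for information is phrased directly; a mere offer is not.
print(speech_style(Utterance(False, False, False, True)))   # -> direct
print(speech_style(Utterance(False, False, False, False)))  # -> indirect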
Mutlu (
2011) presented a study exploring how robots can use verbal and non-verbal cues to improve their proxemic relationship with people, affecting outcomes such as distancing and rapport. During the experiment, the gaze behavior (from gaze following to gaze aversion) and the verbal politeness cues (polite or impolite robot introduction) were varied to shape the participants’ physical and psychological distancing from the robot. The robot that used politeness cues was considered significantly more likable than the one that gave the impolite introduction. The gaze cues influenced how much physical distance people maintained from the robot, mainly when they did not interact with it. Politeness cues influenced people’s rapport with the robot and their tendency to disclose personal information.
En and Lan (
2012) conducted experiments about politeness maxims in a dialogue between humans and robots, demonstrating improved human–robot engagement. The experimental study conducted by Salem et al. (
2013) with a receptionist robot suggested that the interaction context also significantly impacts participants’ perception of the robot, depending on the adopted politeness strategies.
Castro-González et al. (
2016) presented a preliminary study assessing the effects of the robot’s verbal attitude on several subjects during rock-paper-scissors games. The robots behaved politely or impolitely during the game by changing the content of their utterances. Compared with rude robots, polite robots were considered more engaging and likable.
In Westhoven et al. (
2019), results from web- and video-based studies on the perception of a robot’s help request are reported. Eye expressions and politeness of speech were the variables considered for evaluating effects on hedonic user experience, perceived politeness, and help intention. The study showed that politeness can increase the success of help requests. According to the results, a robot asking humans for help using sad or fearful eye expressions in combination with polite language improves perceived politeness and yields a significantly better hedonic user experience and higher help intention than the other tested combinations.
Kaiser et al. (
2019) studied the efficacy of kinesic courtesy cues on people’s approval of non-humanoid mobile robots. Inspired by the non-verbal communication that humans perform through body language (posture, gesture, movement), Kaiser et al. showed that body language also conveys social messages of universal significance in machines, leading to people’s approval of robot behavior. For instance, a robot that emulates polite human social behavior, such as stopping and moving out of a bottleneck (e.g., a doorway), is more readily accepted and regarded as more courteous by humans than a technology that behaves in a less human-like manner.
In Ribino and Lodato (2018) and Ribino et al. (2018), a norm-based approach for improving robot social acceptance is proposed. In particular, the normative approach exploits the advantages of goal modeling to make social robots able to reason proactively about dynamic situations. An experiment was conducted with a Nao robot endowed with the ability to choose appropriate social norms in interaction scenarios with older adults.
A different point of view is provided by the research conducted by Babel et al. (
2022a). The purpose of this study was to examine how a robot can effectively and appropriately request priority in resource conflict situations, taking into account the robot’s type (humanoid, zoomorphic, or mechanoid), the level of politeness of the request (polite appeals or assertive commands), and the modality employed (verbal or displayed). The findings indicate that polite appeals were more socially acceptable than assertive commands for all robot types; however, this did not necessarily translate into greater effectiveness in achieving the desired outcome. For the humanoid robot, a verbal appeal proved to be a more effective communication method than a verbal command, because impolite verbal communication, such as a command, may be perceived as contradicting the participants’ expectations. The displayed command was deemed more acceptable than the verbal command across all robot models. According to the findings, the effectiveness of the humanoid robot is enhanced when it displays the command instead of saying it. Distancing from the assertive request could be most effective for the humanoid robot, since non-humanoid robots are unlikely to cause face threats due to their non-humanlike design. Moreover, the humanoid robot’s use of polite verbal requests did not increase effectiveness or acceptability compared to other robots using visual requests. The results also support the position that there is a marked contrast in compliance rates between the humanoid and the mechanoid robot, with the former obtaining substantially lower compliance. According to the authors, a non-anthropomorphic design can confer a strategic advantage on public service robots when the task demands a heightened level of assertiveness, as with security robots.
Other works instead focus on the impact of cultural differences in robotic acceptance. The study in Salem et al. (
2018) showed that participants, especially Arab ones, displayed a more positive attitude toward the robot. The use of positive politeness strategies, among others, affected the people who took part in the human–robot experience.
Nomura and Saeki (
2010) also analyzed the effects of a robot acting politely on the human perception of the robot. They conducted a psychological experiment in Japan with a small humanoid robot that performed four types of motion corresponding to different politeness levels within Japanese society. The experimental results showed the effects of the robot’s polite motions on human impressions of the robot and some relationships between these impressions and behaviors toward the robot. In particular, the results highlighted some gender differences: in both males and females, the politeness of the robot’s behaviors influenced the impressions of extroversion and politeness toward the robot; in females, the extroversion impression influenced behaviors toward the robot, while in males, the politeness impression did.
Adopting politeness strategies also seems promising for robotic assistants for the elderly, as it contributes to establishing a positive emotional and social relationship between the user and the system. Hammer et al. (
2016) investigated linguistic variations for a robotic elderly assistant to convey different levels of politeness in human–robot interaction. The authors found that wordings such as requests and actions formulated as shared goals were perceived as polite and persuasive; thus, they can be used as standard strategies for improving the acceptability of a recommendation. There is also evidence that older adults respond positively to robotic companions if they display social behavior that matches the seriousness of the situation (Goetz et al.
2003). Moreover, there are also studies showing that recommendations would be more persuasive if they are provided by a polite robot that motivates the elderly to maintain a healthy lifestyle (Buoncompagni et al.
2021). However, if robots look too human-like but do not match the high expectations in terms of behavior, people tend to get disappointed and distrustful of them (Walters et al.
2008).
In Kim et al. (
2022), the authors explore the perception of a robot within a group of people. Specifically, in human–human interactions within a group, individuals display a greater propensity to evaluate their peers positively or negatively based on considerations of the social relations within the group. The authors investigate whether such a phenomenon also occurs in human–robot interaction. In this study, a human–robot interaction within a heterogeneous cohort of individuals across different age groups was designed, and the authors examined the impact of respecting social relations on users’ evaluation of the robot. The study’s participants deemed the robot that behaved against social relations comparatively less useful and perceived it as impolite. The negative assessment was due to the fact that the robot catered to the younger individuals first instead of prioritizing the elderly in a setting that encompassed individuals of varying ages. This result indicates that a robot that fails to follow social norms is evaluated negatively by users.
Further studies have examined the positive relationship between politeness and human willingness to use robots, particularly in healthcare contexts. In this field, patients’ compliance, i.e., the extent to which they follow healthcare practitioners’ recommendations, is essential: recommendations that are followed more closely contribute to better patient health status and higher satisfaction with healthcare services. Adopting politeness strategies is another way to enhance compliance. In this direction, the work proposed in Lee et al. (
2017) investigated the perceived level of robot politeness as an element that improves patients’ adherence to the guidelines provided by healthcare assistants. The study found that using a lower politeness level makes a recommendation resemble a command rather than a suggestion. Conversely, polite behavior adopted by a social robot when making recommendations to patients positively influences users’ compliance with them, although it does not ensure the patients’ intention to adhere.
The research conducted in Ajibo et al. (
2021) sought to explore the efficacy of robotic behavior in fostering adherence to the COVID-19 guidelines outlined by the World Health Organization (WHO), specifically regarding social distancing and the use of masks. The primary objective was to explore the subjective assessment of three attitudinal behaviors a robot displays, namely being polite or gentle, displaying disapproval, and exhibiting anger, when engaging with individuals who consistently violate the established guidelines. Notably, the research examined the impact of participants’ compliance awareness on their impressions of the robot’s behavior, particularly concerning its social acceptance. Individuals with limited awareness of adherence to the WHO guidelines preferred polite and gentle admonishments as a viable and effective method of addressing violations of the guidelines. Individuals with a heightened sense of compliance awareness also showed a greater inclination toward polite and gentle behaviors when judging the appropriateness of actions toward violators; nevertheless, they evaluated displeased and angry behaviors as more effective in enforcing adherence to social norms.
The work proposed in Zhu and Kaber (
2012) aimed to explore the potential impact of various etiquette strategies on the task performance of both human and robot participants in a simulated medicine delivery task. A humanoid robot and a mechanical-looking robot were employed to deliver medication reminders to participants concurrently involved in a primary cognitive task of solving a puzzle. The study’s findings indicate that the participants were not sensitive to the positive language being communicated through robots, which included expressions of appreciation for human values. Furthermore, this particular strategy did not yield desirable outcomes in terms of reinforcing or augmenting the positive self-image of human users. Utilizing a direct communication style lacking in linguistic courtesy and a combination of positive and negative face-saving strategies to minimize user imposition resulted in moderate perceived etiquette scores among users. On the other hand, implementing a negative face-saving approach centered on advocating for users’ freedom of choice positively impacted user task execution and robot performance. Furthermore, there was evidence that humanoid robot features can supply additional social cues for patients, thus contributing to enhancing both human and robot performance, albeit not improving user-perceived etiquette.
In Jackson et al. (
2020), instead, politeness is investigated as a means to reject immoral commands that humans may give to robots. Despite the inappropriateness of such commands, the robot’s rejection must be handled with tact to maintain its acceptability and likeability. By studying the interrelationship between gender and politeness, the authors investigated the effects of gender stereotypes on human perceptions of robotic non-compliance. They found that robots were perceived more positively when their gender coincided with that of their human interactants.
While the previously cited studies have explored politeness strategies for robot acceptability and confirmed their effectiveness in human–robot interaction, recent studies have also been conducted in the field of intelligent voice assistants to understand the factors motivating individuals to use such devices. Indeed, AI voice assistants, including Amazon Echo, Google Assistant, Apple’s Siri, and Microsoft’s Cortana, have profoundly changed how people consume content and make requests. Academic researchers have explored the factors behind individuals’ broad acceptance of voice technology. Beyond the primary utilitarian aspects, there is also evidence that individuals are motivated to interact with such devices by their ability to convey social benefits in the form of social presence and attractiveness (McLean and Osei-Frimpong
2019).
Moreover, the impact of cultural differences on smart speaker acceptance has been considered in Ouchi et al. (
2019). In this study, two speaking styles for a smart speaker are adopted: normal and polite. In the polite style, the smart speaker uses honorific words; in the normal style, it talks informally, which is not considered polite in Japan. The participants’ impressions of their conversations with the smart speaker were analyzed. The results show that the normal style was considered friendlier, although the polite style was considered kinder than the normal one.
Gupta et al. (
2007) introduced POLLy (Politeness for Language Learning), a system that integrates a spoken language generator with an artificial intelligence planner to implement Brown and Levinson’s politeness theory in task-oriented dialogues. The authors conducted an empirical investigation to examine how individuals’ perceptions of politeness differed across discourse contexts. The study involved a cohort of native English speakers from two distinct cultural backgrounds, British and Indian. In several situations, the participants were asked to evaluate utterances automatically generated by POLLy, as if they were addressed to them by a friend or by a stranger. The findings indicate that the way POLLy communicates aligns with B&L’s expectations regarding language use in a given context. Statements from a friend were perceived as more courteous than a stranger’s statements for all four B&L strategies, indicating that when the gap between individuals is significant, it is more fitting to use a genteel expression; however, if the expression implies excessive distance, it may be perceived as overly courteous. Moreover, Indian participants tended to consider the statements considerably more courteous than the British participants did. This was particularly noticeable when the person making the request was a friend, whereas there was little difference in perceptions when the individual was a stranger. Concerning cultural disparities, the study revealed notable differences in the assessment of courteousness between Indian and British individuals in situations with a high degree of imposition, such as requests, as well as in circumstances where the social distance, as measured by B&L, was less pronounced, such as when conversing with a friend.
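Generators in the POLLy tradition typically rank politeness strategies by Brown and Levinson’s face-threat weight, Wx = D(S,H) + P(H,S) + Rx, where D is the social distance between speaker and hearer, P the hearer’s relative power, and Rx the ranking of the imposition. The Python sketch below illustrates this selection step; the normalization and thresholds are arbitrary values chosen for illustration, not those used by Gupta et al. (2007).

def face_threat_weight(distance: float, power: float, imposition: float) -> float:
    """Wx = D(S,H) + P(H,S) + Rx, with each factor assumed normalized to [0, 1]."""
    return distance + power + imposition

def choose_strategy(weight: float) -> str:
    """Map the face-threat weight to one of B&L's super-strategies."""
    if weight < 0.8:
        return "bald on-record"       # e.g. "Pass the salt."
    if weight < 1.6:
        return "positive politeness"  # e.g. "Be a dear and pass the salt."
    if weight < 2.4:
        return "negative politeness"  # e.g. "Could you possibly pass the salt?"
    return "off-record"               # e.g. "This soup is a bit bland."

# A small favor asked of a friend vs. a large request made to a stranger.
print(choose_strategy(face_threat_weight(0.1, 0.2, 0.2)))  # -> bald on-record
print(choose_strategy(face_threat_weight(0.9, 0.6, 0.9)))  # -> off-record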
Politeness has also been of particular interest in the research about virtual agents. To evaluate the impact of politeness in learning settings, Wang et al. (
2008) conducted an experiment in which students completed a learning task with the help of an on-screen agent. The agent responded to the students’ queries by offering polite or direct suggestions. The polite agent positively affected the students’ learning outcomes compared to the direct agent. This politeness effect was stronger for learners who needed more help and consequently had a more productive interaction with the agent. Thus, pedagogical agents can be based on Brown and Levinson’s politeness model to achieve the same benefit as real tutors do by respecting students’ social face.
The authors in Yang and Dorneich (
2018a,
b) investigated adapting the interaction style of intelligent tutoring system (ITS) feedback based on human–automation etiquette strategies. The results demonstrated that systematically adapting the interaction style based on etiquette strategies significantly influenced motivation, confidence, satisfaction, and performance. Hence, if virtual tutors can effectively use the interaction style to mitigate negative emotions, ITS designers may implement affect-aware adaptations that provide the proper responses in situations where human emotions affect the ability to learn.
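A minimal sketch of such affect-aware adaptation could select the etiquette strategy of the next tutoring hint from estimated learner affect. The heuristic below is our own illustrative assumption, not Yang and Dorneich’s actual mechanism; the thresholds and example hints are invented.

def feedback_style(frustration: float, confidence: float) -> str:
    """Pick an etiquette strategy for the next hint from affect estimates in [0, 1]."""
    if frustration > 0.7:
        return "negative politeness"  # soften imposition: "Perhaps you could retry step 2?"
    if confidence < 0.3:
        return "positive politeness"  # encourage: "Nice effort, let's look at step 2 together."
    return "bald on-record"           # efficient: "Redo step 2."

print(feedback_style(frustration=0.8, confidence=0.5))  # -> negative politeness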
In Wu and Miller (2010) and Wu et al. (2009), the authors evaluated the effects of politeness in interactions with virtual agents, demonstrating that variations in etiquette have consequences for human performance. In particular, their results show that users found polite and familiar virtual agents more likable and trustworthy, and users perceived less workload when interacting with polite virtual agents. They also showed that gender impacts the perception of polite utterances and influences polite behavior between genders.
The study presented in Rana et al. (
2021) examined the impact of politeness in chatbot conversations according to gender, age, and personality. The findings revealed gender-based differences in emotional expression in adulthood, whereby women demonstrated higher overall emotional sensitivity and intelligence than men. Females were more sensitive to the chatbot’s standardized response, giving it a lower rating than its polite alternative. Moreover, individuals between 18 and 24 years of age were less able to discern the politeness level of the response than those aged 25 and above. From a politeness perception perspective, the combined influence of age and gender determines the optimal evaluator of the chatbot’s politeness performance: the analysis reveals that a female from the higher age group is best positioned to judge the chatbot’s level of politeness accurately, whereas a male of the same age group detects only a slight variation in politeness. The younger age cohort exhibits comparable variation in this regard; however, their perception of politeness is measurably lower than that of the older demographic. Participants with introverted, disciplined, and innovative traits showed superior perception of the chatbot’s politeness, whereas individuals with extroverted and unimaginative attributes were unlikely to discern substantial differences between the two chatbots.
Hu et al. (2022) examined the potential employment of Politeness Theory in designing conversational interfaces for smart display devices intended to improve the user experience and alleviate the cognitive burden of older people. A field deployment study was conducted, in which a sample of elderly individuals was instructed to use either a more direct or a more polite version of a smart display. The empirical findings established four classifications of users, providing guidelines for a tailored design. The research indicates that individuals classified as
Socially-Oriented Followers tend to view technological devices as friends and prefer politeness over directness.
Utility-Oriented Followers perceive the functionality of a device to be analogous to that of a human. Consequently, these individuals conform to the guidance provided by the system and articulate positive feedback about the system’s overall politeness.
Socially-Oriented Leaders perceive the device in a non-human role. They exhibit a predominantly positive or neutral attitude towards politeness in general. However, they do not expect human-like behavior to be reflected in the system. Finally,
Utility-Oriented Leaders perceive the system as a machine. They exhibit a neutral, if not adverse, attitude concerning politeness, opting for directness over politeness. They consider politeness odd and unnecessary.
Finally, other recent works extend politeness research to human–vehicle interaction and explore the effects of politeness strategies on improving the user experience and the acceptability of the technology. In Miyamoto et al. (
2019), the authors used RoBoHoN to develop a driving support agent. RoBoHoN is a small, easily portable, robot-shaped phone with several functions and capabilities. Used as a driver agent, RoBoHoN selects an utterance based on politeness theory, considering the driver’s age, gender, and driving characteristics. The experiment focused on the difference between end-of-sentence styles (honorific vs. non-honorific words), which represent psychological distance in Japanese conversation. The results suggested that an agent using positive politeness strategies is more effective for improving familiarity than an agent using negative politeness strategies. However, the results refer to a group of university and graduate students and thus generalize only to a narrow category of drivers.
In Miyamoto et al. (
2021), the same authors question whether a driving support agent (DSA) should provide explicit instructions. Through a video-based study, they evaluated the acceptability of politeness strategies in DSA utterances. The experimental results showed that the negative politeness strategy (NPS) with a formal sentence-final style scored significantly higher on evaluation items related to the functionality of the DSA. Although the authors expected the off-record strategy to be evaluated highly, the results show the usefulness of a DSA that provides explicit instructions while taking linguistic aspects into account.
In Lee et al. (2019), the authors described how vehicle politeness can positively influence drivers’ experience and promote their intention to cooperate with the vehicle. Vehicles with polite instructions are highly evaluated for social presence, politeness, satisfaction, and intention to use, especially in typical working situations. However, the adverse effects of politeness in failure situations reveal that a selective politeness strategy works better than a permanently polite strategy. In failure situations, drivers perceive polite messages as unnecessary excuses; they prefer immediate solutions when complex traffic environments cause failures, and fast recovery is a critical determinant of drivers’ positive experiences. The results also showed that the effects of vehicle politeness remained stable after controlling for demographic factors such as age and gender.
The study presented by Lanzer et al. (
2020) explored the impact of an autonomous delivery vehicle’s courtesy on individuals’ adherence to its instructions, trust, and acceptance in two distinct settings (a pedestrian crossing and a street) among participants from Germany and China. The polite communication strategy positively impacted compliance, acceptance, and trust, and the results showed cultural differences between German and Chinese participants. The polite strategy yielded superior outcomes in both groups; however, variations were observed among scenarios. In the Chinese sample, the outcome of the crosswalk scenario differed from the German sample, exhibiting an opposing trend. In the street scenario, when the same polite strategy was applied, almost everyone in the German sample complied with the autonomous delivery vehicle’s request to move out of the way, whereas many people in the Chinese sample did not. Both behaviors are congruent with the road traffic regulations and customs of the respective cultures: some German participants stated that they did not comply with the AV’s request to wait at the crosswalk because a traffic rule obliges all vehicles to stop at a zebra crossing, whereas Chinese pedestrians may be accustomed to giving priority to vehicles at zebra crossings, since vehicles there rarely stop at crosswalks, even though Chinese drivers are also required by law to yield to pedestrians. Furthermore, the findings demonstrate that the implemented strategy significantly altered the vehicle’s acceptance: in both samples, a notable discrepancy in acceptance was observed between the dominant and polite communication strategies, with the latter exhibiting the highest level of acceptance and the former the lowest. The use of a polite rather than a dominant mode of request communication was associated with a significantly higher degree of perceived utility and satisfaction with the autonomous delivery vehicle. The same holds for trust: the autonomous delivery vehicle was trusted more when it communicated its request politely rather than dominantly.