2.1 Cryptocurrency
Blockchain and the current proliferation of cryptocurrencies signal oncoming changes in governance structures and what is envisioned as the future of financial inclusion (Swan 2015; Tapscott and Tapscott 2016; Hayes 2017; Freund 2018; Muralidhar et al. 2019). The circulation of cryptocurrencies is enabled by decentralized, anonymous transactions that do not depend on central banks or governments (Luther 2016). To understand the technology and the accompanying social change, we first elaborate on how cryptocurrencies work.
Cryptocurrencies are digital cash based on blockchain technology. The most famous is Bitcoin, created in 2008 by Satoshi Nakamoto, a pseudonym for one person or a group of people (Nakamoto 2008). Simply put, a cryptocurrency “coin” does not exist on any server, i.e., it is not a saveable file, but is a record of a transaction, i.e., of one party sending a coin to another, in a distributed ledger. If traditional cash is a physical node (an object) and one transaction represents an edge (a from-to relationship), blockchain is built on records of edges (countless from-to relationships) rather than on the nodes themselves. In this analogy, traditional banking systems are organized around nodes (money), whereas cryptocurrencies are built on edges (from-to relations). There is no “thing”, no money, to steal or mutate. No intermediaries like banks are needed to convert, send, or store money; the system is purely peer-to-peer. Transaction records are public, i.e., each block is added to a chain of records that all can see. But for a block to be added to the chain, it undergoes a validation process, typically involving computationally demanding cryptographic puzzles that miners solve, or other consensus-based methods that can be unique to each type of blockchain; different cryptocurrencies have their networks’ preferred methods of validation (Swan 2015; Tapscott and Tapscott 2016).
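The mechanics just described can be sketched in a few lines of code. The following is a minimal, purely illustrative sketch, not any real cryptocurrency’s protocol: all names (e.g., `mine_block`) and the toy `DIFFICULTY` target are invented for illustration. The ledger stores from-to transfers (edges), each block commits to its predecessor’s hash, and a simple proof-of-work search stands in for the validation that miners perform.

```python
import hashlib
import json

DIFFICULTY = 2  # toy proof-of-work target: a valid block hash must start with this many zeros


def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()


def mine_block(transactions: list, prev_hash: str) -> dict:
    """'Validate' a block by searching for a nonce that meets the difficulty target."""
    block = {"transactions": transactions, "prev_hash": prev_hash, "nonce": 0}
    while not block_hash(block).startswith("0" * DIFFICULTY):
        block["nonce"] += 1
    return block


# The ledger records edges (from-to transfers), not coin objects.
chain = [mine_block([{"from": "genesis", "to": "alice", "amount": 50}], "0" * 64)]
chain.append(mine_block([{"from": "alice", "to": "bob", "amount": 10}],
                        block_hash(chain[0])))

# Each block commits to its predecessor's hash, linking the records into a chain.
assert chain[1]["prev_hash"] == block_hash(chain[0])
```

Because each block embeds its predecessor’s hash, the record of edges is effectively append-only: a block counts as valid only once the network’s consensus method, here the toy nonce search, has been satisfied.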
Peer-to-peer payments as edges in a network are reliable because of cryptographic techniques that support each cryptocurrency’s security protocol (Hayes 2017). Thus the sender’s and receiver’s identities are anonymized, but records of transactions are public. Challenges for cryptocurrencies include high market volatility, hacking, and the lack of chargebacks (refunds) when disputes over a transaction arise, i.e., transactions are normally irreversible (Hayes 2017) in that the entire blockchain would have to be re-written to change one block’s relationship to its prior and following blocks. The bigger picture is that cryptocurrencies are the first working application of blockchain, i.e., “blockchain 1.0”. Blockchain’s principle of a decentralized registry of contracts can be applied in many ways, and thus the next phase is known as “blockchain 2.0” (Swan 2015). Blockchain 2.0 can track contracts like birth or marriage certificates, physical assets like housing, and patents and trademarks, since both private and public records can be added to the blockchain, to list a few examples (Swan 2015). Due to the immutable, distributed, and public nature of blockchain technology, it is often described as not requiring trust between people, for the technology embeds trust in its design through transparent and secure transactions (Tapscott and Tapscott 2016; Freund 2018). However, designing trustworthy technology is different from how users form trust (or distrust (Muir 1987)) in any technology. Especially since the cryptocurrency ecosystem is still in its infancy, there are many uncertainties about whom and what to trust, and why.
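The irreversibility noted above can be illustrated with a toy hash chain (illustrative only; the record format and function names are invented for this sketch): because each stored hash commits to everything before it, mutating one historical record invalidates every subsequent link, which is why changing one block would require re-writing the chain.

```python
import hashlib


def link(prev_hash: str, record: str) -> str:
    """Each entry's hash commits to the previous hash and to the record itself."""
    return hashlib.sha256((prev_hash + record).encode()).hexdigest()


records = ["alice->bob:10", "bob->carol:4", "carol->dave:1"]

# Build the chain of hashes from a fixed genesis value.
hashes = ["0" * 64]
for r in records:
    hashes.append(link(hashes[-1], r))


def verify(records: list, hashes: list) -> bool:
    """Recompute every link; any mismatch with a stored hash reveals tampering."""
    h = hashes[0]
    for r, stored in zip(records, hashes[1:]):
        h = link(h, r)
        if h != stored:
            return False
    return True


assert verify(records, hashes)       # the intact chain checks out
records[0] = "alice->bob:9999"       # mutate one historical record...
assert not verify(records, hashes)   # ...and every later link breaks
```

Reversing a single transaction would thus mean recomputing every hash after it, which the distributed, public copies of the ledger make impractical.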
2.2 Trust in and through blockchain technology
Blockchain is said to be “trustless”: involved parties do not need to trust each other for their transactions to take place, as a technical feat. Trust that individuals have in each other and in organizations, e.g., banks, is not necessary for a blockchain network to operate. Only stakeholders’ trust in the blockchain itself is needed. The assumption is that one cannot trust or distrust people one does not know. People’s identities are unknown and unnecessary, since public ledgers are built through the distributed workload of cryptographic problem-solving (Swan 2015; Christidis and Devetsikiotis 2016). The ethos is that blockchain technology is the object of trust by design, and therefore is a mediator of trust by default.
Two non-exclusive distinctions matter. Technology can be a mediator of trust between people, and it can also be an object of trust for people (Friedman et al. 2000; Kelton et al. 2008; Kiran and Verbeek 2010). For instance, a messaging app mediates trust between oneself and the people one talks to through the app, and it can also be an object of one’s trust, i.e., a trusted app (Nickel 2013, 2015). Blockchain is particularly “disruptive” as a technology (Mendoza-Tello et al. 2019) because it disrupts trust as a notion and an experience; the conjoined trust in blockchain as an object and mediator of trust replaces the need to have trust in involved stakeholders. Yet this is precisely why trust comes to the fore of our social fabric: people cannot easily understand novel situations, and the other stakeholders therein, amidst technological changes. Precisely when blockchain asks us to do away with making sense of trust in the first place, trustless technology emphasizes the importance of trust as a social phenomenon. The technical implementation of trust reveals rearranged social orders, in which power dynamics between all affected parties stand to be negotiated and renegotiated (Strauss 1978). People thus struggle to contextualize social order when trust in (and knowledge of) other stakeholders does not seem necessary, technically. The messy, nuanced social reality of trust is illustrated by stakeholders’ interactions in the cryptocurrency market, the current upshot of blockchain.
Cryptocurrencies can be used to buy and sell goods within our existing financial system, but they are also traded and saved as a new form of virtual financial system in itself, with stakeholders continuously negotiating the emerging norms of defining, gaining, and maintaining trust. Previously identified stakeholders of cryptocurrency networks are users, exchanges, miners, and merchants (Sas and Khairuddin 2015; Shcherbak 2014). Users sign up on exchanges in order to buy, trade, or manage cryptocurrency investments. Miners validate cryptocurrency payments by solving cryptographic puzzles, and merchants accept users’ cryptocurrency payments for goods that they sell (Shcherbak 2014). We add that information distributors, such as Brokerbot, are distinct stakeholders that are yet to be fully accounted for. The latest news on cryptocurrency and other relevant data are dispersed by information distributors, meaning other stakeholders heavily rely on them to understand the “pulse” of the market. Hence, people’s trust in an app like Brokerbot matters because of its functionalities and the information it shares, but Brokerbot is not “trustless” by design; it operates in the risky cryptocurrency domain as a named stakeholder.
Blockchain can provide technological trust that people’s transactions are valid and secure, i.e., that they reliably and transparently go through, but social trust between stakeholders (Sas and Khairuddin 2015) is still important for decentralized systems to work. Reputation-based trust between individuals, e.g., on eBay (Cabral 2012), is not possible in anonymized networks, and may even be unhelpful. People can and do trust entities like exchanges, merchants, or information distributors, but these are frequently new and unknown entities (in contrast to some well-known banks and brokers). Further, users are increasingly asked to exercise self-reliance and manage the risks of cryptocurrency-related activities (Gao et al. 2016). Users therefore handle their own risks, which they will do if the primary focus is on cryptocurrencies’ usefulness (Mendoza-Tello et al. 2019).
The need for trust becomes more salient when perceived risk is high, and trust becomes less important when perceived risk is low (Nickel and Vaesen 2012). Justifying trust in highly risky scenarios becomes difficult. Conversely, nowhere is trust more salient and important than in high-risk domains. We do not yet know if trust will only increase once general adoption of cryptocurrencies takes place, as we saw with web-based personal finance systems. Previously, people’s perceived trust in online banking and transactions depended on the perceived risks that new technologies introduced (Mukherjee and Nath 2003). As general use of and knowledge about digital financial systems increased, perceived risks lessened (Mukherjee and Nath 2003). Cryptocurrencies introduce new potential risks due to the limited knowledge and experience people have with cryptocurrencies and blockchain technology.
There is a danger in attempting to spell out trust purely in terms of risks. Trust is often translated into a reduction of risks by means of user surveillance or a checklist of a system’s safety features, which replaces, rather than encourages, trust (Nickel 2013, 2015). While it is commonplace to define trust through its negation, i.e., risk, a more holistic understanding is to see “trust as a presumptive element in all concerted action irrespective of its ‘risky’ character” (Watson 2009, p. 484). Hence, trust is a dynamic process (Nickel 2013, 2015), for situational factors heavily influence our willingness to trust (Snijders and Keren 2001). Trust is socially generated over time, contextualized to how various stakeholders’ roles are created, legitimized, and negotiated (Strauss 1978) in the wake of disruptive technology.
A social ecology that emerges through blockchain brings together a previously unexpected collection of ecologies. Brokerbot, for instance, combines social media, cryptocurrency news channels, and exchanges. Cryptocurrency-related information, as well as distributors and consumers of that information, exist across various social ecologies. Hence, if the types of information and people we trust depend on how social situations change (Harper et al. 2017), blockchain introduces novel social situations. Decentralized networks carve out a new “ecology of ecologies”, and many cryptocurrency stakeholders’ roles are in flux.
Information distributors like Brokerbot often have to build up users’ trust over time, known as “slow trust”, which comes from long-term usage and from reliably delivering consumable, accurate information. This differs from “swift trust”, in which relationships are speedily established and lost (Corritore et al. 2003). A challenge is that decision-making based on cryptocurrency information is often swift, as is the value of such information itself, while stakeholders’ needs constantly change; the process of trust can have an uncertain timescale within the cryptocurrency network. Shared sense-making involves the time it takes for users’ psychological states and technology to develop together (Nickel 2013, 2015).
The issue is that the social backdrop as a shared space is incomplete due to blockchain’s novelty. Trust is a shared background disposition for any interaction between people to take place; interactants in a social exchange “trust in other parties’ ability and motivation to make similar sense of a situation, using similar sense-making methods” (Watson 2009, p. 481). “Sense-making” is a tacit process in that when trust is questioned, doubted, or betrayed, we try to make sense of the absence of trust. But when local sense-making is working and allows parties to have shared trust, we rarely notice its presence. Trust is thus not an outcome measure, but a taken-for-granted disposition that underlies all social interactions (Garfinkel 1963; Watson 2009). However, on “trustless” blockchain networks, there is no central governing body to regulate cryptocurrency market fluctuations among rapidly changing stakeholders; many are on their own to make sense of trust. With anonymized transactional processes, making sense of trust between stakeholders is a new, unfamiliar endeavor. We can thus examine the evolution of trust as changing moral motivations between stakeholders (Muir 1987; Kiran and Verbeek 2010) when the supposedly trustless technology requires us to grasp trust in a new light.
2.3 Trust in and through bots
We saw that in the context of the cryptocurrency ecosystem, trust is often defined by its negation, i.e., risk. When it comes to chatbots, trust is often defined by a replacement, i.e., transparency (of systems or agents). When we face trust squarely rather than leaning on its negation or replacement, the common denominator between the cryptocurrency domain and the usage of a chatbot is the social nature of trust. Trust in and through blockchain showed that beyond engineering efforts, we should see trust as foregrounding our social interactions (Garfinkel 1963; Watson 2009), given the risky cryptocurrency market. Similarly, trust in and through bots zooms in on the social, dyadic level of trust, in which there is a tendency to conflate trust with transparency. We first cover transparency and why it frequently replaces trust in discussions of agents, before turning to trust in and through bots that are perceived to be social.
Transparency stands for how a technological system communicates its inner workings to users, e.g., why it arrived at a certain decision, and in whose hands responsibilities for those inner workings lie (Floridi and Cowls 2019; High-Level Expert Group on Artificial Intelligence 2019). Prior research indicates a strong link between agents that transparently communicate information to users and people’s resulting trust in these agents, especially in collaborative settings (Muir 1987; Lim et al. 2009; Mercado et al. 2016). People are more likely to trust bots that give clear, rational reasoning via suitable modalities, like visual GUIs, text, or speech, than those that do not give explanations (Wang et al. 2016; Lyons et al. 2017; de Visser et al. 2012; Jian et al. 2000).
Problems arise when transparency leads to over-trust or under-trust: people can over-rely on agents when they should not (Skitka et al. 1999; Schaffer et al. 2019), or they may under-trust agents they should trust, when agents’ information sharing feeds doubt rather than trust (Springer and Whittaker 2018). The missing point is that when studies rely on transparency cues to arrive at trust, trust is treated purely as an outcome measure, without taking into account its social nature (Garfinkel 1963; Watson 2009) and its malleability as a process (Nickel 2013, 2015).
The divide between trust as a technical outcome and trust in its social nature persists if one believes that risk (at the meso to macro levels for cryptocurrency) and transparency (at the micro level for chatbots) can be controlled to bring about or replace trust. Hence, our interactions with many forms of technology reveal the social-technical gap: agents can support users in their technical capacities, but such capacities cannot be a priori designed to meet users’ social needs (Ackerman 2000). The pro and con of chatbots is that they appear social and are treated in social ways (Nass et al. 1994), yet they are technically unable to be social at the same level as human beings. Why, then, do people still attempt to socialize with bots?
Chatbots as social interfaces can seem trustworthy to people (Nordheim et al. 2019), especially when bots can potentially support people’s emotions (Zamora 2017; Lee et al. 2019). For long-term use, trust in a chatbot is important for building relationships (Clark et al. 2019). Even for short-term interactions with functional bots, trust makes a difference. Five factors contribute to people’s trust in a chatbot that operates in low-risk settings, e.g., customer support: a chatbot’s (1) perceived expertise and (2) responsiveness; (3) perceived risk and (4) brand perception, which pertain more to the chatbot’s environment; and (5) a user’s tendency to trust technology (Nordheim et al. 2019; Følstad et al. 2018). Many of these factors hint at transparency about a bot’s functionality, not its social ability.
A chatbot can be an all-around assistant like Siri or Alexa, or it can serve a specific function, like Swelly, a bot for conducting polls on messaging channels. But as of now, many chatbots do not meet user expectations, even as they grow in number and specialties (Luger and Sellen 2016; Clark et al. 2019). People go through much trial and error to orient themselves to which tasks conversational agents are capable of in which contexts, and to how to be properly understood by these agents (Luger and Sellen 2016; Porcheron et al. 2018; Cowan et al. 2019). Hence, most bots are more transparent about their functions than about how socially intelligent they can be.
Chatbots exist on social platforms that people already use, like Facebook or Slack. Even with text-based, functional bots that respond to commands (writing “save” as a command rather than writing “save this for me” to the chatbot), people may expect a bot to communicate in “human language” when it lives on a chat platform (Lee et al. 2017). The expectation of social intelligence may be attributed to chatbots’ long history. The most famous chatbot is Weizenbaum’s Eliza, to which people attributed anthropomorphic traits when it posed as a therapist without complex AI (Weizenbaum 1983). Since the publication on Eliza in 1966, we have seen growing use of and interest in bots, with a recent upsurge (Dale 2016). Examples like Microsoft’s Xiaoice are built to be anthropomorphic or to pass the Turing test (Wang 2016).
Weizenbaum’s paper dates back more than 50 years, while blockchain is a relatively new technology, with Bitcoin, the most famous cryptocurrency, dating to 2008 (Nakamoto 2008). While how long a technology has been present in the public consciousness is not trivial, the more important difference is that a chatbot, unlike blockchain, is inherently easier to grasp. Humankind has seen conversation partners in inanimate objects and nature for millennia, technology notwithstanding. Bots that appear socially capable are more familiar to us than blockchain technology, which promises trust as a technological default rather than as a social default. When a blockchain network prioritizes anonymity, knowable social actors and social trust between stakeholders can be rare, and may become more prized. Trust cannot be erased as the familiar backdrop of our social bonds. We now briefly explain how Brokerbot worked before turning to our methodology and results.