1 Introduction

The aim of this thought piece is to explore the interface between AI and disability, and the ethical dilemmas which this raises. To do so, we shall use narrative accounts, in the form of diaries kept by two disabled people, to analyse how AI is used as part of our daily lives, and the promise, support, and frustrations this brings. We shall also undertake a brief literature review of academic and professional articles on the topic of AI, disability, and ethics. Finally, we shall draw conclusions as to how developers might better approach the construction of AI software and technology.

2 Literature review

Within this thought piece, we shall use the social model of disability [19], which originated within the UK in the 1970s. In the document Fundamental Principles of Disability [22], UPIAS (Union of Physically Impaired Against Segregation) defined disability not as an impairment of the body or brain, but as a “relationship between people with impairment and a discriminatory society.” The influence of Marxist thought and labour movement traditions is clear in the work of UPIAS; in Capital, Karl Marx [15] defined capital and labour not as things but as relationships. That is, the social model implies that it is society which disables individuals, through the constructs it places around us, and that individuals are disabled because society is not inclusive.

More than a billion people live with disability, and there is a need to explore how AI technologies can affect this diverse group. AI research can be a force for good for disabled people as long as they are not marginalised. According to the Alan Turing Institute [1], a roadmap which includes AI and ethical issues has yet to be developed. The creation of a network of experts and resources for AI and inclusion could help to address the “unmet need of assistive products crucial … to implement the UN Convention on the rights of persons with disabilities” [25].

A number of authors have written on the topic of disability, AI, and ethics (justice, or fairness). We shall summarise recent developments in this area below. It is to be hoped that these developments lead to a more inclusive, and hence, in my view, more ethical, approach to the design of AI systems for those with disabilities, including myself.

Bennett and Keyes [2] present two case studies, one on decision-making and the other on AI for the visually impaired, to demonstrate how, through failures to consider structural injustices in their design, such systems are likely to result in harms not addressed by a “fairness” framing of ethics. They call on researchers into AI ethics and disability to “move beyond simplistic notions of fairness, and towards notions of justice.”

White [24] discusses fairness for people with disabilities, identifying some of the central problems, and takes a philosophical perspective motivated by a concern for social justice, emphasising the role of ethics. Lillywhite and Wolbring [14] identify many ethical issues within AI and machine learning as fields and within individual applications. They also identify problems in how ethics discourses engage with disabled people.

Coeckelbergh [4] proposes four objections to introducing AI in health care. First, a robot is able to deliver care, but it will never really care about the human. Second, AI cannot provide “good care”, as true care requires empathetic contact with humans. Third, AI may be able to provide care, but in doing so violates the principle of privacy, “which is why they should be banned”. Finally, AI technologies such as robots provide “fake care” and are likely to “fool” people by making them believe that they are receiving genuine care.

Trewin [20] argues that fairness for disabled people is different from fairness with respect to other protected attributes such as age, gender, or race, because of the extreme diversity of disabilities, and suggests ways of ensuring fairness for disabled people in AI applications.

Floridi et al. [10] report the findings of AI4People, an initiative designed to lay the foundations for a “Good AI Society”. They introduce the opportunities and risks of AI for society and present ethical principles that should underpin its development and adoption. If adopted, these recommendations would “serve as a firm foundation for the establishment of a Good AI Society.” In 2019, Techshare Pro held a panel on “Ethics, Machine Learning and Disabilities”, which was chaired by Ability Net and included the Head of Public Engagement at the Ada Lovelace Institute [21].

In [8], the High-Level Expert Group on AI presented Ethics Guidelines for Trustworthy Artificial Intelligence. According to the Guidelines, trustworthy AI should be: “lawful—respecting all applicable laws and regulations, ethical—respecting ethical principles and values, and robust—both from a technical perspective while taking into account its social environment”. OpenAI is an AI research and deployment company based in San Francisco. Their mission is to ensure that AI benefits all of humanity. In [17], OpenAI released a charter intended to guide its AI development so that it acts in the best interests of humanity.

Therefore, it seems that there are many initiatives striving to solve the problems, and address the issues, inherent in developing AI systems which are fair and ethical for, and serve the needs of, those with disabilities.

It is true that AI technologies have the potential to dramatically impact the lives of people with disabilities. However, widely deployed AI systems do not yet work properly for disabled people, or worse, may actively discriminate against them. Guo et al. [12] identify how AI may “impact particular disability constituencies if care is not taken in their design, development, and testing.” This is something which Laura and I have both experienced in our day-to-day lives and in our interactions with, and use of, simple AI systems. The diaries which we have prepared and presented below demonstrate this.

Closer to home, in terms of my own disability, Yozbatiran et al. [26] use data from one subject to demonstrate the feasibility, safety, and effectiveness of robotic-assisted training of upper extremity motor functions after incomplete spinal cord injury. Developments such as this give me some hope that advanced AI technology may yet assist in my further recuperation.

3 Methodology

The methodology employed in this thought piece is a mixture of autoethnography and reflection. We present two narrative accounts based on our own experiences of the use of day-to-day AI technology, such as speech technology on mobile phones, and technology to help the visually impaired [16]. We have employed autoethnographic research approaches [3] and techniques of reflection [5, 18] in this work.

“Autoethnography is an approach to research and writing that seeks to describe and systematically analyse personal experience in order to understand cultural experience” [6, 7, 13]. Autoethnography involves analysing your own experiences and feelings, preferably as they occur, relating them to the academic literature, and using those experiences to draw wider conclusions, resulting in lessons for others.

Reflection [18, 23] has many similarities with autoethnography. Reflection involves analysing particular occurrences, again preferably as they occur, and thinking about what is learned from these occurrences and what decisions are taken as a result. One method often used in reflection, and applied in this paper, is the critical incident approach [9]. The critical incident approach involves identifying and analysing particular incidents which occur and which make the individual question their own beliefs or practices. To help us do so, we have each prepared a diary of our day-to-day interactions with simple, readily available AI systems. These diaries appear below, in the form in which they were written, using the first person as they are personal accounts.

4 Peter

Critical incident On 29 April 2016 at 5 am, I was returning to my bedroom in the dark. I found myself plummeting down the stairs. I landed awkwardly, with my head hanging over the stairwell. I realised immediately that I had broken my neck; I could not feel my arms or legs. I shouted for my wife, Marie, who telephoned for an ambulance. I was rushed into intensive care at the Royal Victoria Infirmary, Newcastle, UK. I was later transferred to the Spinal Injuries Unit at James Cook Hospital, Middlesbrough, UK. I spent 6 months in hospital learning how to speak, eat, and breathe again. I started physiotherapy and regained some mobility. The damage to my spinal cord is incomplete, which means that I have some mobility, but none which is really functional in that I cannot feed myself or walk.

4.1 Peter’s diary (including a series of critical incidents)

7 AM. I am awoken by my carer, who has been watching over me and moving me in the bed during the night. The carer gives me a snack, usually toast, and my morning medication, and then a second carer arrives to wash me and prepare me for the day.

9 AM. I have been washed and showered by my carers. I check my medication with my carers, and I realise that I need to order some painkillers, so I use speech-activated software to telephone my GP surgery. After a few attempts, my mobile phone picks up the correct number and dials. Unfortunately, no one answers, and I am unable to end the call. I am asked to leave a message, which I do not really want to do, but I cannot stop the call either. My carer arrives from the dining room after finishing the ironing and switches off my mobile for me. I ring again and this time I get through straight away and order my medication.

10 AM. I decide to answer my emails. To do so, I use speech technology and the assistance of a carer. The carer needs to switch on my computer, load my browser, and log into my email account. When I was in hospital, I was trained to use a device which involved a camera which recognised a silver dot placed on my forehead or on a pair of glasses. Unfortunately, I found it so difficult to use, and ultimately annoying, that I prefer to ask a carer to use the mouse for me. I use speech technology to help me answer my emails. The technology is trained to recognise my voice and most of the time it does okay; however, on several occasions, it spells words incorrectly, and sometimes fails altogether. In the time it has taken me to type and correct this paragraph (including asking my carer to retype several of the words), I could previously have typed two pages. Dictating words such as “Word” is likely to open the word processing software Word. As another example, asking the speech software to type the word “dot” resulted in a “full stop” being typed. In many ways the speech software is a tremendous help; in others it is very frustrating.

12 noon. I need to text one of my carers to bring some frozen meals in from the town for me. It takes me several attempts to get the text right. First, I have to identify the correct person to send the text to. The speech software on my mobile phone often gets confused between similar names, so I have to be careful that I am not sending a text to the wrong person. When I have identified the correct person, the software does not give me much time to speak the text. If I break for a few seconds, it stops, reads the text back to me, and asks if it is okay to send it. As I have not finished the text, I have to start again. There is probably a way of continuing with the same text, but I have not been able to find it. Therefore, I start the text again and, in the end, although it may not be totally correct, it is close enough to be understandable, so I send it anyway. My carers are used to garbled texts and can usually decipher them! Sometimes, my texts are somewhat embarrassing; for example, I tried to send a text to one carer with a kiss, in the form of a letter “X”, and it came out as “sex” and was sent before I had the chance to realise what I had done!

2 PM. I have a Zoom meeting with two former colleagues. My carers manage to launch the meeting for me successfully. However, after about 20 minutes, I must have said something which resembles the words “wake up” and which is the command to activate my speech software. Annoyingly, a dictation box appears in the middle of my screen. Although my colleagues cannot see it, I can no longer see them. I need to shout the command “go to sleep” at my screen. Sometimes this works, sometimes it does not. More often than not, I have to shout for my carer, who comes from the next room, closes the dictation box, and switches off my microphone which operates the speech software. Another minor irritation over.

5 PM. I like to listen to some music; mostly old hits from the 1960s. I ask my home hub to play a particular song. It does not recognise the title (or does not like my voice) and plays a completely different song. I try and get it to stop, but it cannot recognise my voice over the music. I have to wait until the track finishes and ask it to play the song I originally wanted to hear.

9 PM. My evening carer arrives and, with the help of the carer who has been at work during the afternoon, turns me over in my bed. I have a snack, the carer gives me my evening medication, and I tell my home hub to “go to sleep”, which takes several attempts as it often ignores me, and we settle down to watch some television. Another day over. A similar routine begins the next morning.

5 Laura

Critical incident Marie and I are in the Sunderland Eye Infirmary, UK, with our 3-year-old daughter, Laura. Laura has developed juvenile arthritis, which has started to affect her sight. The consultant has just looked into her eyes and asked his colleague to also have a look. They are both concerned about the damage to her eyes. I ask how serious the damage is and the consultant tells us that he is quite concerned about it and how it might affect her sight. I ask, “Could she go blind?” The consultant replies, “Yes, it is possible that she might.” Marie and I look at each other, in shock. Subsequent to this, Laura has many excellent treatments at Sunderland Eye Infirmary, at St Thomas’s Hospital, London, UK, and at the Prince Charles Eye Unit, Windsor, UK. Sadly, they are unable to save her sight and she is blind by the time she is 5 years old.

5.1 Laura’s diary (including a series of critical incidents)

5.45 AM. I’m awoken by my 3-year-old daughter. I shout out to my phone, asking my speech technology to tell me the time. Definitely too early to be awake! I then ask my speech technology to set an alarm for 7 AM.

7.15 AM. I’m making breakfast and want to find a particular cereal from the many boxes on the shelf. I use an app which, after taking a photograph, can identify and describe an object. I select a box, take a photo, and wait. It describes the colour of the packaging. I rotate the box to show another side. This time, the app lets me know I’m holding a box. I try a few more attempts, turning the box and moving the phone up and down to focus on different areas of the cereal packet. After around six tries, the phone finally lets me know it is not the box I’m looking for. I select another packet. Three attempts in, I decide to go and ask my partner instead.

8.00 AM. My daughter brings me a selection of printed books. “Can I have a story, Mammy?” she asks. We choose a book and I am pleased to see it’s one I’ve previously labelled. There is a sticker on each page which, when tapped with a special pen-shaped device, triggers a recording of my partner reading the story. After a few pages, my daughter asks, “What’s that?”, pointing to one of the pictures. I again use the photograph app to describe the image. Because my daughter has placed my finger on the picture in question, I’m able to get an accurate photo and, therefore, the required description first time. Hurray!

8.30 AM. My daughter wants to wear her yellow tights today. My partner has now left the house and the fabric on each pair feels the same, not giving any clues as to which colour they are. I use a colour identification app, again taking a photo of the clothing before the technology lets me know its colour. “Orange”, the app replies. My daughter only has yellow, blue, or white tights, so I guess that “orange” probably means yellow and get her dressed.

9.00 AM. The post has arrived. I lay a letter down flat on my scanner and press the scan key. The scanner’s voice lets me know that the document is blank, which tells me I need to flip it over to the other side. Once I’ve done this, I do not need to worry about positioning, as the scanner will read the document even if it is placed upside-down. As expected, once the letter is turned over and I have again pressed the scan key, the scanner immediately reads it out in a clear voice. The letter is a hospital appointment. I ask my speech technology to make a note of the date in my calendar.

10.00 AM. I am on the way to the community centre, my guide dog in one hand and my daughter holding the other. I’m using a GPS app on my phone. I started off using headphones, with only one earpiece in, keeping the other ear free to listen to traffic, my daughter, etc. This still did not feel totally safe, so I take out the earphones, set the phone’s volume to high, and place it in my pocket. I can just about hear the phone as we walk; however, occasionally, a passing car or my daughter’s voice blocks out what the phone is telling us. At most of the junctions, the phone lets me know which street we have arrived at; however, sometimes this happens after we have already crossed over and begun moving away from said street. I mainly use this app for reassurance on routes with which I am already very familiar. Experience using it in lesser-known areas has shown me that it can be slow to pick up where I am standing, resulting in missed turnings, and any unexpected noise can result in my missing the app’s audio prompts.

12.00 Noon. It is raining, so we have decided to get the bus home. I am again using the GPS app, which also lets me know which bus stop we are at. This can be incredibly useful, compared with relying on the driver remembering to tell me when I’m at my stop or trying to track the bus’s movements to ascertain where we are. In truth, I use a combination of the latter method and the app. Again, there can be a slight lag with the app’s information, so I try to ensure I have some understanding of where on the route we are and use the app for confirmation purposes. Using these methods, we arrive at the correct stop and successfully disembark from the bus.

12.30 PM. I text my partner to let him know we had a happy and successful morning. I use a screen reader on my phone which reads out whatever my finger has highlighted on the touch screen. This enables me to quickly locate my partner’s number and open a new text. I also sometimes use this method for typing out the message, however, today for extra quickness, I press the dictation button on the phone and speak out my message. Once finished, I then use the screen reader to read back the message. One or two words are not quite right. Sometimes, this would result in me deleting the whole message and trying again; however, my partner is used to working out dictation errors, so I press send.

2.00 PM. I am checking my emails. As I use the arrow keys to move around the screen, my screen reader reads out whatever the cursor is on. I successfully check and reply to my emails for the day.

2.30 PM. A friend has sent me a document to look at. I use my screen reader to find it in my downloads file and open it. Disappointingly, the file is a PDF and my screen reader is unable to read the text. I email my friend back and ask for it in another format.

3.00 PM. My daughter is having some time on her tablet, whilst my partner and I cook our evening meal. I ask my speech technology to activate the screen reader software on the tablet. Once this is enabled, I am able to enter the passcode, again listening to the screen reader’s feedback as I scroll around the screen. My daughter asks to play a particular game, so I locate it and open it up for her. I then ask my speech technology to disable the screen reader, so my daughter can use the tablet without accessibility mode.

3.10 PM. My daughter would like a different game on the tablet. I repeat the process of enabling the screen reader to open up her preferred app. An in-app voice lets me know that this app is not compatible with my screen reader and that I need to turn on the app’s own accessibility programme. I do this, wanting to locate the particular game which my daughter wants to play. The app’s accessibility mode works differently from my screen reader and it is difficult to scroll around the screen, although it does audibly identify whatever my finger is on. It also selects items as soon as I touch them, resulting in my opening lots of unwanted games. I ask my partner for help.

6.00 PM. I ask my speech technology to call my dad. After our conversation, I then scroll around the screen to locate the “end call” button, which my screen reader audibly identifies. My speech technology is unable to hang up a call, so a combination of approaches is needed when making and ending calls.

7.00 PM. The house is quiet, and I decide to read some of my book. I use an eBook app on my phone, open it up (using my screen reader software), and then scroll to the book I am reading. I then enable the speech setting, so that the eBook can be read out by the in-app reader. This works well, although some dedicated eBook devices are not accessible with screen readers and do not have options to enable speech.

8.30 PM. My speech technology has something to say. It is letting me know that I have an appointment tomorrow. I had set a reminder several days ago. Thanks for the reminder, my speech technology.

10.00 PM. I ask my speech technology to set an alarm for 7 AM the next day.

6 Conclusions

There is much activity in the area of AI, disability, and ethics. This is very commendable and offers hope and promise to disabled individuals, such as Laura and me. However, our own experiences, as detailed in our diaries, demonstrate how, on a daily basis, AI technology can both assist and frustrate us. Sometimes AI technology lives up to its promise; on other occasions, it lets us down. Nevertheless, overall, our own experiences of AI technology are positive.

The main lesson to be learned from the literature, and from our own experiences, is the importance of involving disabled people in the design of AI software and technology which is intended for use by those with disabilities. It is no good waiting until the testing or evaluation stage to involve disabled people; this needs to happen as soon as design begins. What is needed is true co-design, where disabled people are part of the design team and of the process of design. This should include a representative group of people with a diverse range of disabilities. Of course, this is not easy to do; however, it is also vital if AI technology is to achieve its true potential. We should be guided by the disability movement’s mantra, “Nothing about us without us” [11].