
Open Access | Original Paper | Published: 01 June 2024

Use case cards: a use case reporting framework inspired by the European AI Act

Authors: Isabelle Hupont, David Fernández-Llorca, Sandra Baldassarri, Emilia Gómez

Published in: Ethics and Information Technology | Issue 2/2024


Abstract

Despite recent efforts by the Artificial Intelligence (AI) community to move towards standardised procedures for documenting models, methods, systems or datasets, there is currently no methodology focused on use cases aligned with the risk-based approach of the European AI Act (AI Act). In this paper, we propose a new framework for the documentation of use cases that we call use case cards, based on the use case modelling included in the Unified Modelling Language (UML) standard. Unlike other documentation methodologies, we focus on the intended purpose and operational use of an AI system. The framework consists of two main parts: firstly, a UML-based template, tailored to allow implicitly assessing the risk level of the AI system and defining relevant requirements, and secondly, a supporting UML diagram designed to provide information about the system-user interactions and relationships. The proposed framework is the result of a co-design process involving a team of EU policy experts and scientists. We have validated our proposal with 11 experts with different backgrounds, with reasonable knowledge of the AI Act as a prerequisite. We provide the five use case cards used in the co-design and validation process. Use case cards allow framing and contextualising use cases in an effective way, and we hope this methodology can be a useful tool for policy makers and providers for documenting use cases, assessing the risk level, adapting the different requirements and building a catalogue of existing usages of AI.
Notes
The views expressed are purely those of the authors and may not in any circumstances be regarded as stating an official position of the European Commission.


Introduction

Nowadays, Artificial Intelligence (AI) is experiencing a groundbreaking moment from many perspectives, including the technological, societal and legal ones. On the one hand, increasingly powerful and technologically mature AI systems are being used by the general public on a daily basis, including recommender systems, decision-support systems, content generation systems, person identification and object recognition systems, and conversational systems. On the other hand, policy makers around the world are making progress in creating legal bases for regulating the trustworthy use of AI. One recent example is the US Executive Order (EO) on Safe, Secure, and Trustworthy Development and Use of AI (White House, 2023), which serves as a tool for creating industry standards, guidelines, practices, and future regulations. On the same day, the G7 leaders published the International Code of Conduct (G7, 2023) under the Hiroshima AI Process. However, at present, the most substantial policy initiative, and notably the sole legislative proposal, is the EU’s proposal on harmonised rules on AI, commonly referred to as the AI Act. The AI Act establishes a harmonised regulation directly applicable throughout the EU single market. In December 2023, a provisional agreement was reached between the European Parliament and the Council (European Commission, 2023), but the definitive regulatory text is not yet available. Currently, there are three different versions of the text: the Commission proposal, presented in April 2021 (European Commission, 2021), the general approach presented by the Council in December 2022 (Council of the EU, 2022), and the amendments adopted by the European Parliament in June 2023 (European Parliament, 2023). All versions follow a risk-based approach, implementing stricter rules for AI systems with higher risks.
With this exponential trend in the daily use of AI, there is a pressing need to establish robust mechanisms to foster a better understanding of AI systems by all affected stakeholders –both experts and non-experts– in order to help ensure their trustworthy, safe and fair use. Indeed, several studies have acknowledged that the issue of how to communicate the functioning and potential limits of increasingly complex AI systems remains an open challenge (Laato et al., 2022). In particular, transparency in the form of well-structured documentation practices is considered a key step towards ethical and trustworthy AI, as outlined by the High Level Expert Group (HLEG) on AI in its “Ethics Guidelines for Trustworthy AI” (European Commission, 2019). Transparency and documentation are also fundamental elements of the AI Act (Panigutti et al., 2023), which is considerably grounded in these guidelines.
Some methodologies for AI documentation have emerged and been rapidly adopted in recent years. Nevertheless, their target audience is typically AI technical practitioners (e.g. AI developers, designers, data scientists), leaving aside other important personas such as policy makers or citizens (Hupont et al., 2023). Moreover, the focus is mainly on technical characteristics (e.g. performance, representativity) of the data used for training (Gebru et al., 2021) and/or general-purpose AI models (Mitchell et al., 2019). When it comes to documenting more specific use cases of AI systems, i.e. a real-world deployment of an AI system in a concrete operational environment and for a particular purpose, documentation is generally limited to a brief textual description without a standardised format (Louradour and Madzou, 2021). This is particularly relevant in the context of the AI Act, where the specific use case, delimited by the intended purpose of the AI system, will mainly determine the risk profile and, consequently, the set of legal requirements that must be met. Hence, the AI Act’s approach further reinforces the need to adequately document AI use cases, which are directly related to the intended purpose of an AI system.
The technique of use case modelling has been used for decades in classic software development (Cockburn, 2001). Use cases modelled in this way provide insights into how different actors interact with a software system, the user interface design and the system’s main components. The technique allows developers to identify the system’s boundaries and required functionalities, ensuring that all stakeholders are satisfied and have a shared understanding of the system’s expected behaviour (Fantechi et al., 2003). Use case modelling therefore serves as a common means of communication between stakeholders, including developers, designers, testers, business analysts, clients and end users, allowing for effective collaboration and reducing misunderstandings with respect to functional requirements.
Building upon some preliminary work focusing on the affective computing domain (Hupont and Gomez, 2022), this study explicitly focuses on iterating on a classic software use case modelling methodology: the widely-used Unified Modelling Language (UML) specification (Object Management Group, 2017). The proposed use case cards represent an evolution and adaptation of elements present in UML, offering a practical and standardised template for documenting the intended use of AI systems. By building on UML, our approach provides an accessible and user-friendly iteration that specifically addresses the concrete documentation requirements of AI use cases. Moreover, the proposed use case cards effectively frame and contextualise the operational and intended purpose of the AI system. They are therefore conceived to serve as a preliminary tool for assessing the level of risk under the AI Act.
To ensure that use case cards cover all the information needs required for the assessment of use cases through the lens of the European AI Act, the methodology has been developed following a co-design process involving European Commission AI policy experts, AI scientific officers and an external UML and User Experience (UX) expert. Several examples of use case cards are then validated in a user study to check for adequacy, completeness and usability. The use case card template and all implemented examples are publicly available at the GitLab repository https://gitlab.com/humaint-ec_public/use-case-cards.
The remainder of the paper is organised as follows. Section 2 reviews the central role of use cases within the AI Act, identifies the needs in terms of information elements for their documentation, and reflects on how current AI documentation methodologies fail to cover these needs. Section 3 presents the use case card documentation methodology and details its completion process. Section 4 elaborates on the co-design process and validation of use case cards with key stakeholders. Finally, Sect. 5 concludes the paper.

Background

The central role of use cases in the AI policy context

An AI model is a mathematical construct that generates an inference, or prediction, based on input data (Estevez et al., 2022). It can either result from training based on a machine learning algorithm, or be the outcome of other approaches based on symbolic or knowledge-based AI methods (OECD, 2023a). Popular examples of AI models include object detectors, language/image generation models or content search algorithms. While some AI models are designed for specific purposes (a.k.a. narrow AI models), many others are conceived as general-purpose AI models that can eventually be adapted and deployed in multiple application areas (Gutierrez et al., 2023). The level of generality varies, and models can range from versatile to highly specialised depending on their intended use case. For instance, an object detector can be embedded in a car’s software system to recognise vehicles, road signs and pedestrians (Gupta et al., 2021), or be used for automatic people counting during a demonstration for surveillance purposes (Sánchez et al., 2020). Similarly, the same large language model can be adapted to function as a chatbot system for e-commerce (Zhou et al., 2023) or in the medical domain (Li et al., 2023).
An AI system is typically built by combining one or more AI models. The compromise agreement of the AI Act (Council of the EU, 2023) aligns the definition of AI system with the approach recently proposed by the OECD (OECD, 2023b)1. Bringing one or more AI models to a real-world application is not straightforward, as it requires integrating them into a functional system (i.e., an AI system), including the necessary infrastructure, user interfaces, data pipelines, and other components required for the application to operate effectively in a production environment (Hupont et al., 2022). Furthermore, it is important to consider in this process the use cases, i.e. the variety of scenarios in which the resulting system can be deployed. Use cases illustrate how users can utilise the AI system to accomplish their goals and therefore provide a key user-centric perspective on its functionality.
The EU AI Act supports precisely this human-centric approach, putting the concept of intended purpose at the centre of regulation. The definition provided for the intended purpose is the same across the three versions of the AI Act, that is:
“[...] the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation”
According to the proposed regulation, the system’s intended purpose determines its risk profile, which can be, from highest to lowest (European Commission, 2023): (1) unacceptable risk, covering harmful uses of AI or uses that contradict European values; (2) high risk, covering uses identified through a list of high-risk application areas that may create an adverse impact on safety and fundamental rights; (3) limited risk, covering uses that pose risks of manipulation and are subject to a set of transparency obligations (e.g. systems that interact with humans such as conversational agents, that are used to detect emotions, or that generate or manipulate content such as deep fakes); and (4) minimal risk, covering all other AI systems. Figure 1 illustrates this risk-level approach. It is important to note that the risk categorisation is also consistent across the three versions of the AI Act, although the specific use cases included in each risk level slightly differ among them.
The AI Act establishes a set of harmonised rules that associate use cases with risk levels, which in turn imply different legal requirements. Of particular significance are AI systems classified as high-risk, which are further subjected to conformity obligations. The rules to categorise an AI system as high-risk are provided in Article 6 in all three versions of the AI Act. Despite minor nuances between the three legal texts, an AI system is classified as high-risk in two cases. The first case covers AI systems intended to be used as safety components of a product, or that are themselves products, covered by the Union harmonisation legislation listed in Annex II (e.g. machinery, toys, medical devices regulations), if the product requires undergoing a third-party conformity assessment. The second case covers AI systems falling under one or more of the critical areas and use cases referred to in Annex III (e.g. remote biometric identification systems, AI systems used to prioritise the dispatch of emergency services, those used as polygraphs by law enforcement). The risk level therefore depends on a series of key information elements that are essential to document the system’s intended purpose. We have compiled them in the list presented in Table 1. As can be observed, the system should be put into context by providing information on the operational, geographical, behavioural and functional contexts of use that are foreseen; who the users and impacted stakeholders will be; and what the system’s inputs and outputs are. In addition, it is equally important to clearly specify the intended use of the system as well as its foreseeable potential misuses. Lastly, three elements are particularly important when it comes to identifying an AI system’s risk level. The first one is the type of product, which is linked with Annex II and the possible need for a third-party conformity assessment. The second is whether the AI system is a safety component or not. The third is the application area, which is linked with Annex III.
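To make the classification logic concrete, the following minimal Python sketch mirrors the two Article 6 cases described above. It is purely illustrative: the set contents are short excerpts of the annexes, and all names are our own assumptions, not part of the AI Act or of the use case cards methodology.

```python
# Hypothetical sketch of the Article 6 high-risk triage described above.
# The set contents are short excerpts; the full lists are in the AI Act's
# Annexes II and III (cf. Appendix A's Tables 4 and 5).
ANNEX_II_PRODUCT_TYPES = {"machinery", "toy", "medical device"}
ANNEX_III_AREAS = {
    "remote biometric identification systems",
    "dispatch of emergency first response services",
}

def is_high_risk(product_type: str,
                 requires_third_party_assessment: bool,
                 application_areas: set[str]) -> bool:
    """Return True if either of the two Article 6 cases applies."""
    # Case 1: (safety component of) an Annex II product that requires a
    # third-party conformity assessment.
    case_1 = (product_type in ANNEX_II_PRODUCT_TYPES
              and requires_third_party_assessment)
    # Case 2: the use case falls under one or more Annex III areas.
    case_2 = bool(application_areas & ANNEX_III_AREAS)
    return case_1 or case_2

# Example: a software product that includes remote biometric identification,
# an Annex III area, is flagged high-risk.
print(is_high_risk("software", False, {"remote biometric identification systems"}))  # True
```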
Having all these information elements adequately covered in a single use case documentation methodology would be a valuable tool for both policy makers and AI system providers to better navigate the AI Act, properly assess the risk level of AI systems and tailor the different requirements. However, current AI documentation approaches fail to provide full coverage, as we will see in the next section.
Table 1
Key information elements related to use cases under both the Commission Proposal (European Commission, 2021) and the Council Mandate (Council of the EU, 2022) of the AI Act. The related legal text is given in brackets
  • Intended purpose: Use for which an AI system is intended by the provider, including the specific context and conditions of use [Art. 3(12)]
  • User: Any natural or legal person, public authority, agency or other body, under whose authority the system is used [Art. 3(4)]
  • Stakeholders: Persons or group of persons on which the system is intended to be used and/or that are impacted by the AI system [Art. 7, Annex IV(2b)]
  • Input data: Data provided to or directly acquired by the system on the basis of which the system produces an output [Art. 3(32)]
  • Outputs: Expected outputs of the AI system [Art. 3(32), Art. 13(3vi)]
  • Foreseeable misuse: Use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems [Art. 3(13)]
  • Type of product: Type of product or service of which the AI system is a component, or the product itself. It can be a machine (e.g. industrial machine, robot, motor vehicle), a device (e.g. sensor, medical device), some other hardware (e.g. equipment) or software (e.g. standalone application, software service) [Art. 6, Annex II]
  • Safety component: Component of a product or of a system which fulfils a safety function for that product or system, or the failure or malfunctioning of which endangers the health and safety of persons or property [Art. 3(14)]
  • Application area: Area in which the AI system is intended to be applied (e.g. law enforcement, employment, marketing, education, healthcare) [Art. 6, Annex III]

Existing approaches for AI documentation

In recent years, key academic, government and industry players have proposed methodologies aimed at defining documentation approaches that increase transparency and trust in AI. Table 2 summarises the most popular ones and analyses the extent to which they cover the use case-related information needs identified in the previous section. Note that the table exclusively considers documentation methodologies focusing on AI models, systems or services. For instance, it does not include works tackling only dataset documentation, such as Datasheets for Datasets (Gebru et al., 2021), The Dataset Nutrition Label (Chmielinski et al., 2022) or Data Cards (Pushkarna et al., 2022).
Firstly, the table shows the importance the AI community places on documentation, as big tech companies (Google, IBM, Microsoft, Meta) and influential institutions such as the Organisation for Economic Co-operation and Development (OECD) are behind the most adopted methodologies. For instance, Google’s Model Cards (Mitchell et al., 2019) can now be automatically generated from the widely used TensorFlow framework,2 which is strongly fostering their adoption by AI practitioners.
Nevertheless, as anticipated in the Introduction, the majority of methodologies have a strong technical focus. They are generally conceived as tools for AI developers and providers to demonstrate AI models’ performance and accuracy. More recently proposed methodologies, including the Framework for the Classification of AI Systems by the OECD (OECD, 2022), AI Usage Cards (Wahle et al., 2023) and System Cards (Meta, 2023), are expanding to cater to other audiences such as policy makers and end users. Even though some methodologies do explicitly ask about the intended use of the AI system (e.g. “What is the intended use of the service output?” in Arnold et al. (2019), the “Intended Use” section in Mitchell et al. (2019) and “Task(s) of the system” in OECD (2022)), they do so in very broad terms, and the provided examples lack sufficient detail to address complex legal concerns. Moreover, none of these methodologies is based on a formal standard or specification. In summary, to date there is no unified and comprehensive AI documentation approach focusing exclusively on use cases and covering information elements such as type of product, safety component and application area. Our proposed use case cards aim to bridge this gap.
Table 2
Comparison of state-of-the-art AI documentation approaches to our proposed use case cards. In the original table, one symbol denotes good coverage of the information element, another symbol is used for elements only covered from a technical perspective, and \(\times\) means no coverage. The methods have been assessed based on publicly available examples.
[Table 2 is provided as an image in the online version of the article.]

The use case card documentation approach

Revisiting UML for AI use case documentation

Among use case modelling methodologies, the one proposed in the Unified Modelling Language (UML) specification is the most popular in software engineering (Koç et al., 2021). It has the advantage of being an official standard with over 25 years of use, supported by a strong community (Object Management Group, 2017). Further, it is user-friendly, offering a highly intuitive and visual way of modelling use cases by means of diagrams and a set of simple graphic elements (Fig. 2).
UML use cases capture what a system is supposed to do without delving into technical details (e.g. concrete implementation details, algorithm architectures). Instead, they focus on the context of use, the main actors using the system, and actor-actor and actor-system interactions. A use case is triggered by an actor, which might be an individual or group, who is referred to as the primary actor. The use case describes the various sets of interactions that can occur among the different actors while the primary actor is pursuing a goal. A use case is considered successfully completed when the associated goal is reached. Use case descriptions also include possible extensions to this sequence, e.g., alternative sequences that may also satisfy the goal, as well as sequences that may lead to failure in completing the goal.
Once the use case has been modelled in diagrammatic form (Fig. 2-right), the next step is to describe it in a brief and structured written format. Although the UML standard does not impose this step, it is commonly carried out in the form of a table. The most widely-used layout is the one proposed in Cockburn (2001) and shown in Fig. 2-left.
The information elements related to use cases under the AI Act (cf. Table 1) were found to closely correspond with those of software use case documentation under UML, e.g.: context of use and scope \(\longleftrightarrow\) intended purpose; primary actor \(\longleftrightarrow\) user; stakeholders and interests \(\longleftrightarrow\) stakeholders; open issues \(\longleftrightarrow\) foreseeable misuses; and main course \(\longleftrightarrow\) inputs/outputs. For this reason, we decided to ground our proposed use case cards in UML. The process of transforming classic UML use case diagrams into use case cards was carried out in a co-design workshop with stakeholders, which is detailed further in Sect. 4.1. In the next sections, we focus on presenting the final use case card design and explaining how to fill it in.

Use case cards

The designed use case card template is shown in Fig. 3. It is composed of two main parts: a canvas for visual modelling (right) and an associated table for written descriptions (left). Both stay very close to the UML standard, with a few additional information elements inspired by European AI policies. The canvas contains the following visual elements (a minimal machine-readable sketch of these elements is given after the list):
  • AI system boundary: It delimits the functionalities of the AI system. It is represented by a rectangle that encloses all the use cases.
  • Actors: They represent users or external systems that interact with the AI system. They are depicted as stick figures placed outside the AI system’s boundary. Actors can be individuals, groups, other software systems or even hardware devices. Each actor has a unique name to identify their role.
  • Use Cases: They represent specific functionalities or behaviours of the AI system. They describe the interactions between actors and the AI system to achieve a specific goal. Use cases are represented as ovals within the system boundary. Unlike in traditional UML, we distinguish between AI use cases (with blue background) and non-AI use cases (with white background). Each use case has a name that reflects the action or functionality it represents.
  • Relationships: They show the associations and dependencies between actors and use cases. Associations are depicted by solid lines connecting an actor to a use case, indicating that the actor interacts with or participates in that particular use case. Associations can also exist between use cases to represent dependencies between different functionalities. “Include” and “extend” relationships are depicted with dashed arrows. “Include” shows that one use case includes the functionality of another use case. “Extend” indicates that a use case can extend another one with additional behaviour. Generalisation is depicted by a solid arrow pointing from the specialised actor to the generalised actor (i.e. the specialised actor inherits the characteristics and interactions of the generalised actor).
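As a complement to the visual notation, these canvas elements map naturally onto a small data model. The following Python sketch is purely illustrative, assuming hypothetical class and field names; it is not part of the UML standard or of the use case card specification.

```python
from dataclasses import dataclass, field

# Hypothetical data model for the use case card canvas; class and field
# names are illustrative, not part of the paper or the UML standard.

@dataclass
class Actor:
    name: str  # unique role name, e.g. "Person with visual impairments"
    generalises: list[str] = field(default_factory=list)  # specialised -> generalised actors

@dataclass
class UseCase:
    name: str
    is_ai: bool  # blue (AI) vs. white (non-AI) background
    includes: list[str] = field(default_factory=list)  # "include" dependencies
    extends: list[str] = field(default_factory=list)   # "extend" dependencies

@dataclass
class Canvas:
    system_name: str              # the AI system boundary
    actors: list[Actor]
    use_cases: list[UseCase]
    associations: list[tuple[str, str]]  # (actor, use case) solid lines
```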
As later discussed in the validation process (Sect. 4.2), a common issue for stakeholders who are not familiar with the UML methodology is understanding the distinction between a system (in our case, an AI system) and a use case. The system perspective considers the AI system as a whole and helps in understanding its components (both AI and non-AI) and their relationships. Use cases, on the other hand, represent the specific interactions that actors have with the system and the functionalities the system provides them. By distinguishing systems from use cases, UML provides a modular and flexible modelling approach, making it possible to focus on different aspects of the system at different levels of abstraction and granularity. Also note that, for a system to be considered an AI system in a use case card, it has to contain at least one AI use case.
The table layout has some changes with respect to the one proposed in Cockburn (2001). First, the intended purpose of the system encompasses three fields. Two of them already appeared in the original table, namely context of use and scope. Both are to be filled in with a short text description; we recommend a maximum of 100 words. The remaining field is Sustainable Development Goals (SDGs), whose values should be picked from the official United Nations list presented in Appendix A’s Fig. 12. Note that the purpose of this field is to state the SDGs to which the use case contributes (i.e. on which it has a positive impact).
In addition, three new fields have been added, as they are essential to determine the use case’s risk level –and thus that of the AI system containing it– according to the AI Act. Their description can be found in Table 1, and below we comment on their possible values:
  • Type of product: It must be one value from the list in Appendix A’s Table 4. The top rows in the list correspond to types of products that might be subject to other EU regulations and, as such, be high-risk according to the AI Act’s Annex II.
  • Is it a safety component?: This “yes/no” field determines whether the use case fulfils a safety function for a product or system whose failure might harm persons or property. It is therefore a flag field that indicates a high-risk level.
  • Application area(s): One or more areas of application of the use case, as listed in Appendix A’s Table 5. Some of these areas are high-risk under the AI Act and therefore need to be clearly identified.
The remaining fields correspond one-to-one with those in the original table. The only change appears in the description of the open issues field, where we have emphasised the need to include foreseeable misuses of the system.
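To make the resulting table layout concrete, the sketch below shows how its fields could be captured in a machine-editable record, anticipating the machine-editable version of use case cards mentioned in the Conclusions. The class and field names are our assumptions, not part of the template.

```python
from dataclasses import dataclass

# Hypothetical machine-editable layout of the use case card table; field
# names mirror the template in Fig. 3 but are otherwise illustrative.

@dataclass
class UseCaseCardTable:
    # Intended purpose (three fields)
    context_of_use: str            # short text, recommended <= 100 words
    scope: str                     # short text, recommended <= 100 words
    sdgs: list[str]                # from the official UN list (Appendix A, Fig. 12)
    # AI Act-specific fields
    type_of_product: str           # one value from Appendix A's Table 4
    is_safety_component: bool      # "yes/no" high-risk flag
    application_areas: list[str]   # one or more values from Appendix A's Table 5
    # Fields carried over from Cockburn's (2001) layout
    primary_actor: str
    stakeholders_and_interests: list[str]
    main_course: list[str]         # numbered steps, covering inputs/outputs
    extensions: list[str]          # alternative/failure sequences
    open_issues: list[str]         # including foreseeable misuses
```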

Filling in use case cards

This section illustrates the process of filling in a use case card through the example of a scene narrator application installed on a smartphone. This AI-based application aims at helping people with visual impairments to obtain information about their environment, namely about surrounding objects, text (e.g. panels, signs, menus) and people (both familiar and unknown persons). The user wears goggles connected to the smartphone, which allow taking a picture of the scene by pressing a button on the right temple. The application then narrates the scene description with a synthetic voice and in natural language, such as:
“You are in an office; there are four persons in front of you, the one on your left is John; there is a table with four chairs and the exit door is at the end of the room on the left hand side.”
This application is inspired by real products in the market, including Microsoft’s Seeing AI app (Microsoft, 2023), Cloudsight’s TapTapSee (Cloudsight, 2023) and Google’s Lookout (Google, 2023). It is a complex application in computational terms, as it combines AI algorithms of different natures: object and person detection, optical character recognition (OCR), face recognition, and text and synthetic voice generation. There are also data use and data privacy issues to be carefully addressed, e.g. regarding the management of captured facial images or the possibility of using extracted scene information for purposes other than assisting visually impaired people, such as targeted marketing.
We propose the use case card presented in Fig. 4. First, we focus on the visual modelling side. The key questions to ask are what the AI system is, which of the use cases within it we want to document, and who the main actors involved are. The AI system can be easily identified as the scene narrator application. This system may have multiple use cases, ranging from classic software functionalities (e.g. installing the app, user registration, user login, managing settings) to the more complex AI-based functionalities related to the scene narration part (i.e. object/person detection, OCR, face recognition, etc.). For the sake of clarity, we decide to include within the system’s boundary only the use cases directly linked to the scene narration functionality. Then, we reflect on a simplified interaction pipeline for the person with visual impairment to get a scene description, which is: opening the app on the smartphone \(\rightarrow\) taking a picture of the scene \(\rightarrow\) the system computes the scene description \(\rightarrow\) the person listens to the audio narration.
Within this pipeline, we realise that the whole AI core is contained in the computation of the scene description. We therefore decide to introduce a describe scene use case as the principal one, which includes all AI-based functionalities (those with a blue background colour). By modelling describe scene as the main use case with “include” dependencies to other AI functionalities, we simplify the documentation process to a single UML table.3 We additionally decide to show some non-AI use cases in the diagram to provide a complete and self-contained overview of the pipeline, namely take scene photo and register familiar person. The register familiar person use case is particularly interesting, as it shows that certain persons (e.g. family, friends, caregivers) might be registered in the platform by the user, and thus be subject to identification through face recognition. The last point to define in the diagram are the actors involved. The main actor is clearly the person with visual impairments, as they are the one triggering the scene narration process. However, the modelling process has also identified other relevant actors, namely the (unknown) surrounding persons that might appear in the scene and the familiar persons that might also be present. Note that surrounding persons are a generalisation of familiar persons, and that the identify people use case “extends” the detect people one.
After the visual modelling exercise, we proceed to complete the table associated with the main use case, describe scene. The context of use field provides an overview of pre-conditions and conditions of normal usage (e.g. the app is already installed on the smartphone, the primary actor wears goggles, s/he has already registered some familiar faces in the system), while scope delineates the specific functionality of the use case. This use case has a strong positive social impact, allowing for better inclusion and social life for the visually impaired, and therefore contributes to two SDGs: good health and well-being and reduced inequalities. The use case is part of a software product and is not considered a safety component, as it is meant to assist but not to fulfil a safety function. Interestingly, it has two application areas. The first one is social assistance, and the second one is remote biometric identification systems, as it includes face recognition to identify familiar people. This is particularly important, as the former is not considered a high-risk application area under the AI Act, while the latter is. Therefore, if the system’s provider prefers to bring the application to the market as a low-risk one, the face recognition functionality should be removed. The following fields are relatively straightforward to document, as they merely describe the main actors and the course of actions within the use case. In our example, the main course field contains as steps the calls to the different AI algorithms. Extensions tackle problems that may arise, e.g. if the taken picture has poor quality, which are simply addressed with the failure protection mechanism of asking the person to retake the shot. Last, but of extreme importance, the open issues field allows the provider to clearly state that the application is conceived for ethical use. It stresses that the system is not intended for use by people who are not visually impaired, clarifies that data privacy is adequately treated (the provider does not keep a copy of taken scene images) and that under no circumstances will the provider engage in any marketing activities with the extracted information.
Through this example, we have shown that use case cards offer a powerful, standardised methodology to document AI use cases. Beyond the goal of documentation, the process of filling in a use case card fosters reflections of the utmost importance about an AI system, such as its risk level, foreseeable misuses and the failure protection mechanisms to put in place. Appendix B provides four additional use case cards involving different types of AI systems with varying levels of complexity, to provide the reader with a variety of illustrative examples.
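As an illustration of what such a record could look like once filled in, the following sketch paraphrases the describe scene card; the values summarise the example above and are not copied verbatim from Fig. 4.

```python
# Illustrative data record for the "describe scene" use case card;
# wording is paraphrased from the running example, not from the actual card.
describe_scene_card = {
    "use_case": "Describe scene",
    "context_of_use": ("App installed on the user's smartphone; the primary actor "
                       "wears connected goggles and has registered familiar faces."),
    "scope": "Narrate surrounding objects, text and people with a synthetic voice.",
    "sdgs": ["Good health and well-being", "Reduced inequalities"],
    "type_of_product": "Other software product/system",
    "is_safety_component": False,  # assists the user but fulfils no safety function
    "application_areas": ["Social assistance",
                          "Remote biometric identification systems"],  # high-risk area
    "primary_actor": "Person with visual impairments",
    "main_course": ["Open app", "Take scene photo", "Detect objects/people",
                    "Recognise text (OCR)", "Identify familiar people",
                    "Generate and narrate description"],
    "extensions": ["Poor-quality picture -> ask the person to retake the shot"],
    "open_issues": ["Not intended for use by people who are not visually impaired",
                    "No copies of scene images kept by the provider",
                    "No marketing use of extracted information"],
}
```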

Co-designing and validating use case cards with key stakeholders

The use case card methodology was developed following a two-phase protocol with key stakeholders, as depicted in Fig. 5. First, we carried out a co-design workshop involving two European Commission (EC) policy experts, three EC scientific officers and an external expert on User eXperience (UX) and UML. The resulting version of use case cards was then evaluated in a second phase through a questionnaire administered to 11 scientists contributing to different EU digital policy initiatives, with varying expertise levels in UML and the AI Act. A unique and essential aspect in the development of use case cards was the involvement of both EU policy makers and technical experts, particularly those with very close and significant expertise in the AI Act. This collaboration ensured that the cards were not only technically sound but also aligned with the legal and regulatory framework of the AI Act. However, it is important to note that, while the primary co-design phase focused on aligning the methodology with the AI Act through the involvement of policy and technical experts, the intended audience for these cards is much broader, including, e.g., AI system providers and users.
In the following, we provide details on the implementation of both phases and present the main results.

Co-design process

Co-design, co-creation or participatory design refers to an approach where stakeholders come together, as equals, to conceptually develop solutions that respond to certain matters of concern (Zamenopoulos and Alexiou, 2018). As such, the co-design method aims to develop a solution “with” the target individuals/groups rather than “for” them. There has been an increasing trend in recent years towards greater inclusion of stakeholders in designing and carrying out research through the adoption of co-design methods (Nesbitt et al., 2022). Given the multidisciplinary nature of our work, involving both policy and technical matters, we decided to take advantage of this methodology in this first design phase.
The co-design phase involved six participants. Two of them were EC policy experts with a legal background and a high level of involvement and proficiency in the AI Act. Three were EC scientific officers with proficiency in AI and medium-to-high knowledge of UML. It is important to note that, although these three experts have a primarily technical profile, they are involved on a daily basis in digital policy issues, including scientific advice related to the AI Act. Finally, we invited an external expert with high expertise in AI and a strong background in UX and UML.
We organised a two-day physical workshop to conduct the co-design of use case cards. Scientific officers alternated between asking questions and taking copious notes throughout the workshop, with all participants’ permission.
The three scientific officers and the external UML/UX expert prepared a three-hour tutorial on UML to kick off the first day. The tutorial started with a presentation of the UML standard (Object Management Group, 2017), with particular emphasis on the use case modelling part. Then, three exemplar AI use cases modelled in classic UML format (cf. Fig. 2) were presented for illustrative purposes: an affective music recommender, a driver monitoring system and a smart-shooting camera system.
After the tutorial, the six participants engaged in a guided discussion covering the following key points:
  • Potential of UML as a standard methodology for AI use case documentation.
  • Relevance, clarity and adequacy of the UML diagram and related table with regard to the AI Act (e.g. missing fields, ease of understanding/implementation).
  • Relevance of the method for the assessment of an AI system’s risk level according to the AI Act.
Results can be summarised as follows. First, participants unanimously agreed on the high overlap between UML’s information elements and those required to document use cases under the AI Act (cf. Table 1). The standard was therefore considered fit for purpose. Participants, however, identified fields that are essential in the context of the AI Act and should be added to the UML table, namely: (i) the type of product to which the AI system belongs; (ii) its application area(s); and (iii) whether the use case is a safety component of a product.
Participants raised important additional points. They mentioned different uses of the methodology, including the creation of a public repository of AI use cases, useful in the context of the registration process mentioned in Article 51 and Annex VIII, part II, of the AI Act. This repository would be a valuable and usable tool to help companies –and more particularly SMEs, with more limited legal resources– identify the risk level of their AI systems: “use case cards would give companies a hook to go through the AI Act”. Authorities would also benefit from such a repository, allowing them to “have a better overview of the landscape of existing AI systems” and “engage with companies to articulate border cases”. Although not an explicit information requirement under the AI Act, given the Act’s human-centric nature it was deemed interesting to include each use case’s link to the Sustainable Development Goals (SDGs), which “would help keep track of AI-for-good applications”.
During the second workshop day, participants proceeded to the design of use case cards according to the findings identified the previous day. They first added the four missing fields (i.e. “type of product”, “application area(s)”, “is it a safety component?” and “SDGs”) to the UML table, and agreed on its final layout (e.g. colours, order/position of the different fields). They then developed the lists of product types (cf. Appendix A’s Table 4) and application areas (cf. Appendix A’s Table 5), carefully considering the AI Act’s Annexes II and III, respectively. Finally, participants concluded with a practical exercise in which they converted the three UML use cases from the tutorial to the new use case card format. They additionally implemented two new use case cards: the scene narrator one (presented in the previous section) and a student proctoring one. The new use case cards can be found in Appendix B. This final exercise allowed us to confirm the ease of use and implementation of the methodology, whose adaptation is “straightforward with respect to traditional UML”, as confirmed by the UML/UX expert.

Questionnaire-based validation study

Once the first solid version of the use case cards was available, we conducted a questionnaire-based study to validate two main aspects. On the one hand, the components referring to the clarity and complexity of the proposed approach, such as its learning curve, its level of detail and granularity, and the importance of the visual component with respect to the table, as well as open questions regarding possibly missing or unnecessary fields. On the other hand, the elements related to the level of contextualisation with respect to the AI Act, risk-level assessment, requirements, etc. A summary of the questions is provided in Table 3. As can be seen, 9 questions were designed to be answered on a 5-point Likert scale, 2 questions allowed for a yes/no answer plus an elaboration if the answer was yes, and 2 questions were designed as completely open questions.
Table 3
Summary of the questionnaire. Qx denotes 5-point Likert-scale questions and OQx stands for open questions
Q1. Level of expertise on the AI Act: \(\bigcirc\) None \(\bigcirc\) Low \(\bigcirc\) Mid \(\bigcirc\) High \(\bigcirc\) Very high
Q2. Level of expertise on UML: \(\bigcirc\) None \(\bigcirc\) Low \(\bigcirc\) Mid \(\bigcirc\) High \(\bigcirc\) Very high
Q3. Difficulty to understand the use cases: \(\bigcirc\) Very difficult \(\bigcirc\) Somewhat difficult \(\bigcirc\) Neutral \(\bigcirc\) Somewhat easy \(\bigcirc\) Very easy
Q4. How would you rate the level of detail provided in the table? \(\bigcirc\) Too little detailed \(\bigcirc\) Little detailed \(\bigcirc\) Adequate \(\bigcirc\) Quite detailed \(\bigcirc\) Too detailed
Q5. How important do you consider the UML diagram with regard to the table for the use case? \(\bigcirc\) Not important \(\bigcirc\) Slightly important \(\bigcirc\) Moderately important \(\bigcirc\) Important \(\bigcirc\) Very important
Q6. How do you assess the learning curve of the use case cards? \(\bigcirc\) Not appropriate \(\bigcirc\) Slightly appropriate \(\bigcirc\) Moderately appropriate \(\bigcirc\) Quite appropriate \(\bigcirc\) Very appropriate
Q7. Is the use case card well contextualised in relation to the AI Act? \(\bigcirc\) Not at all \(\bigcirc\) Very little \(\bigcirc\) Neutral \(\bigcirc\) Somewhat \(\bigcirc\) To a great extent
Q8. Does the use case card provide information to assess the risk level according to the AI Act? \(\bigcirc\) Not at all \(\bigcirc\) Very little \(\bigcirc\) Neutral \(\bigcirc\) Somewhat \(\bigcirc\) To a great extent
Q9. In the context of the AI Act, the use case card is appropriate for: (1) risk-level assessment, (2) requirements, (3) catalogue of usages, (4) other: \(\bigcirc\) Strongly disagree \(\bigcirc\) Somewhat disagree \(\bigcirc\) Not sure \(\bigcirc\) Somewhat agree \(\bigcirc\) Strongly agree
OQ1. Is there any important field that you miss in the table? \(\bigcirc\) Yes \(\bigcirc\) No; if yes, please indicate which one
OQ2. Is there any field that you would remove? \(\bigcirc\) Yes \(\bigcirc\) No; if yes, please indicate which one
OQ3. Please specify other potential uses:
OQ4. Please insert here any additional comment you may have:
The online survey included an introduction with a description of the project, the main goals and the procedure. A brief introduction to the main components of use cases modelled with UML was then provided, followed by a short description of the proposed structure for the use case cards. After some demographic questions, the participants were provided with three exemplar use case cards. The first one corresponds to the scene narrator system previously presented in Sect. 3.3 (Fig. 4). The remaining two correspond to the driver attention monitoring system and the student proctoring tool presented in Appendix B (Figs. 15 and 16, respectively).
We involved 11 participants (5 female, 5 male, 1 preferred not to say), 7 of whom had a technical background (computer scientists/engineers), with the rest having varied profiles including 1 legal expert, 1 social scientist and 1 mathematician. All of them had experience in trustworthy AI, science for policy and the AI Act, as well as varying degrees of knowledge of UML. More specifically, their knowledge of the AI Act was self-assessed between “low” and “very high”, with mean \(M1=3.27\) (question 1, Fig. 6-left), whereas their knowledge of UML was self-assessed between “none” and “high”, with mean \(M2=2.36\) (question 2, Fig. 6-right). Since use case cards are intended to be used in the context of the AI Act, it is coherent to validate them with participants with some knowledge of the AI Act. However, since knowledge of UML is in principle not strictly necessary, the validation should also incorporate participants with little or no knowledge of UML.
Figure 7 shows the histograms of answers to the questions related to the intrinsic features of the method. The difficulty to understand the three exemplar use case cards was assessed as “somewhat easy” (\(M3=4.09\)), the level of detail as “adequate” (\(M4=3.00\)), the importance of the UML diagram (the canvas) between “moderately important” and “important” (\(M5=3.45\)), and the learning curve at the midpoint between “moderately appropriate” and “quite appropriate” (\(M6=3.55\)). Regarding the question on missing fields (OQ1), 6 participants answered “no” and 5 “yes”. The suggestions provided by those who answered “yes” can be seen in Fig. 8. Most of them can be easily integrated into the “Open issues” field of the table. Other suggestions, such as “more explicit contextualisation with the AI Act” or “other relevant EU policies”, could be considered in future versions. As for the question on possibly dispensable fields (OQ2), \(73\%\) of the participants answered “no” and \(27\%\) “yes”. As depicted in Fig. 9, there were three concerns: one referring to the type of product, another focusing on the Sustainable Development Goals (SDGs), and one comment on the UML diagram. First, it is important to note that the type of product has to be considered together with the specific area; otherwise, we cannot obtain a detailed classification. On the other hand, we believe that asking about the SDGs can have positive effects on AI system providers, as a way for them to consider whether or not their systems contribute to sustainable development. Finally, the importance of the UML diagram was positively assessed by most of the participants in question 5.
Concerning the alignment of use case cards with the AI Act, the feedback from the participants is also very positive. For example, regarding the level of contextualisation with the AI Act (question 7, Fig. 10-left), the mean answer is between “somewhat” and “to a great extent”, with \(M7=4.18\). Regarding its utility for assessing the risk level (question 8, Fig. 10-right), the answers range between “very little” and “to a great extent”, with a mean value very close to “somewhat” (\(M8=3.82\)). The general feedback from question 9 (Fig. 11) is also mostly positive, tending towards agreement on the appropriateness of use case cards for different AI Act-specific purposes.
From the participants’ answers to open question 3, we highlight the following suggestions for other potential uses:
  • “Documentation and training”.
  • “As a standard to show the use of AI systems to citizens”.
  • “Compare similar AI systems”.
  • “Create a database of sample use cases”.
  • “For conformity assessment”.
  • “Elaborating on possible mitigation measures after risk assessment”.
  • “To help non-experts to understand how a product works”.
Some of these answers echo our goal of proposing a methodology for documenting use cases for AI systems that is easy to understand by a non-expert audience. Other answers also point in the direction of a possible standard that could help with documentation needs, risk mitigation or conformity assessment.
However, there are also some issues raised by participants in the last open question. In almost all cases, the feedback obtained refers in one way or another to limited expertise in UML for documenting use cases. For example, some participants did not clearly understand the difference between the “AI system” and the “use cases”, including some confusion about the types of dependencies between the use cases. This issue is highly correlated with the lack of previous knowledge of UML. Difficulties in learning and using UML are well-known issues in the research and industry communities (Siau & Loo, 2006)4. However, the benefits of UML have been empirically validated in multiple studies (Chaudron et al., 2012). While we recognise the potential initial difficulties of a wider audience in interpreting the UML canvas, we do not expect a major impact for AI providers, as UML is a de facto industry standard for modelling software systems. Moreover, as most of the participants emphasised, the table is the main element of the proposed approach, and its clarity has been validated regardless of prior knowledge of UML.

Conclusions

In this work we present use case cards, a standardised methodology for the documentation of AI use cases. It is grounded in four strong pillars: (1) the UML use case modelling standard; (2) the recently proposed European AI Act; (3) a co-design process with high-profile stakeholders, including European policy and scientific experts with proficiency in AI, UML and the AI Act; and (4) a validation with 11 experts combining technical knowledge of AI, social sciences, human rights and/or a legal background, and having strong experience in EU digital policies.
Unlike other widely used methodologies for AI documentation, such as Model Cards (Mitchell et al., 2019), Method Cards (Adkins et al., 2022a) or System Cards (Meta, 2023), use case cards focus on describing the intended purpose and operational use of an AI system rather than the technical aspects of an –in most cases, generic– AI model. This allows framing and contextualising the use case in a highly visual, complete and efficient manner. Use case cards have also proven to be a useful tool for both policy makers and providers in assessing the risk level of an AI system, which is key to determining the legal obligations to which it must be subject.
It is nevertheless important to emphasise that use case cards are not meant to be a final and exhaustive documentation methodology for compliance with any future legal requirement. First, because the AI Act is still under negotiation and therefore subject to possible modifications on its road towards adoption. Second, because the objective of this work is the documentation of use cases, which is just a small piece of the technical documentation required to demonstrate full conformity with the legal text. While use case cards effectively frame and contextualise the operational and intended use of AI systems, they are primarily conceived as a preliminary tool for risk-level assessment under the AI Act. The specific documentation requirements will depend on the level of risk. For example, for high-risk systems, detailed documentation regarding the corresponding requirements (e.g. risk management system, data and data governance, technical documentation) will be necessary.
Use case cards have the potential to serve as a standardised methodology for documenting use cases in the context of the European AI Act, as stated by participants in the co-design and validation exercises. In the future, we will conduct a second validation phase with AI system providers and users, to ensure use case cards are a practical, clear and useful tool for a diverse range of stakeholders. We also plan to develop a web-based prototype of a use case registry, integrating a machine-editable version of use case cards and allowing for the automated analysis of related statistics, such as the number of use cases per application area and per product type, and the most covered SDGs.
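As a minimal sketch of this automated analysis, assuming cards are stored as plain records like the illustrative describe scene example in Sect. 3.3, the envisaged statistics reduce to simple aggregations:

```python
from collections import Counter

# Hypothetical aggregation over a collection of use case card records;
# the field names follow the earlier illustrative sketches.
def registry_stats(cards: list[dict]) -> dict:
    """Count use cases per application area, per product type and per SDG."""
    return {
        "per_application_area": Counter(area for card in cards
                                        for area in card["application_areas"]),
        "per_product_type": Counter(card["type_of_product"] for card in cards),
        "most_covered_sdgs": Counter(sdg for card in cards for sdg in card["sdgs"]),
    }
```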

Acknowledgements

The authors would like to thank all the stakeholders who participated in the co-design and validation phases of the use case cards.

Declarations

Conflict of interest

The authors have no competing interests to declare that are relevant to the content of this article.

Ethical approval

The methodology followed in the user study was subject to ethical and data protection procedures defined in the context of the HUMAINT project, Joint Research Centre. All participants both in the co-design phase and the questionnaire-based study were given an informed consent form with details regarding the purpose of the study, procedures and confidentiality treatment issues. They all voluntarily agreed to participate in the study.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices

Appendix A: Lists of SDGs, products and application areas

This appendix lists the Sustainable Development Goals (SDGs, Fig. 12), the types of products (Table 4) and the application areas (Table 5) to be used when filling in use case cards, as described in the “Use case cards” section.
Table 4
List of possible types of products for use case cards. Those marked with \(\bullet\) might be subject to other European Union harmonisation legislation and, as such, be considered high-risk according to the AI Act’s Annex II. Based on the General Approach (Council of the EU, 2022)
Type of product:
\(\bullet\) Machinery
\(\bullet\) Toy
\(\bullet\) Recreational craft or personal watercraft
\(\bullet\) Lift
\(\bullet\) Equipment and protective systems for use in potentially explosive atmospheres
\(\bullet\) Radio equipment
\(\bullet\) Pressure equipment
\(\bullet\) Cableway installation
\(\bullet\) Personal protective equipment
\(\bullet\) Appliances burning gaseous fuels
\(\bullet\) Medical device
\(\bullet\) In vitro diagnostic medical device
\(\bullet\) Civil aviation
\(\bullet\) 2- or 3-wheel vehicle or quadricycle
\(\bullet\) Agricultural and forestry vehicle
\(\bullet\) Marine equipment
\(\bullet\) Interoperability of the rail system
\(\bullet\) Motor vehicles and their trailers
Other hardware product/system
Other software product/system
Table 5
List of application areas for use case cards. Subareas marked with \(\bullet\) are high-risk under the AI Act’s Annex III. Based on the General Approach (Council of the EU, 2022)
Type of application area:
Biometrics
  \(\bullet\) Remote biometric identification systems
Critical infrastructure
  \(\bullet\) AI systems used as safety components in the management and operation of critical digital infrastructure, road traffic and the supply of water, gas, heating and electricity
Education and vocational training
  \(\bullet\) AI systems used to determine access, admission or to assign natural persons to educational and vocational training institutions or programmes
  \(\bullet\) AI systems intended to be used to evaluate learning outcomes
Employment, workers management and access to self-employment
  \(\bullet\) AI systems used for recruitment or selection of natural persons, notably to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates
  \(\bullet\) AI systems used to make decisions on promotion and termination of work-related relationships, to allocate tasks or to monitor and evaluate performance based on a person’s behaviour, personal traits or characteristics
Access to essential private services, public services and benefits
  \(\bullet\) AI systems used by public authorities to evaluate the eligibility of natural persons for essential public assistance benefits and services, and to grant, reduce, revoke or reclaim such benefits and services
  \(\bullet\) AI systems used to evaluate the creditworthiness of natural persons or establish their credit score
  \(\bullet\) AI systems used to dispatch, or to establish priority in the dispatching of, emergency first response services, including by firefighters and medical aid
  \(\bullet\) AI systems for risk assessment and pricing in the case of life and health insurance
Law enforcement
  \(\bullet\) AI systems used by law enforcement to assess the risk of a natural person for offending or reoffending, or the risk for a natural person to become a potential victim of criminal offences
  \(\bullet\) AI systems used by law enforcement as polygraphs or to detect the emotional state of a natural person
  \(\bullet\) AI systems used by law enforcement to evaluate the reliability of evidence in the course of investigation or prosecution of criminal offences
  \(\bullet\) AI systems used by law enforcement to predict the (re)occurrence of a criminal offence based on profiling of natural persons, or to assess personality traits and characteristics or past criminal behaviour
  \(\bullet\) AI systems used by law enforcement to profile natural persons in the course of detection, investigation or prosecution of criminal offences
Migration, asylum and border control management
  \(\bullet\) AI systems used by public authorities as polygraphs or to detect the emotional state of a natural person
  \(\bullet\) AI systems used by public authorities to assess a risk (security risk, risk of irregular immigration, health risk) posed by a person who enters or has entered into the territory of a Member State
  \(\bullet\) AI systems to assist public authorities to examine applications for asylum, visa and residence permits and associated complaints
Administration of justice and democratic processes
  \(\bullet\) AI systems used by a judicial authority to interpret facts or the law and to apply the law to a concrete set of facts
Entertainment and leisure
Marketing and retail
Culture, art and heritage
Clinical use in medicine and healthcare
Finances and banking
Social assistance
Video-surveillance for security
Transportation and mobility
Tourism, hospitality and restaurants
Industry and logistics
Politics
Other

Appendix B: Use case card examples

This appendix presents four additional examples of use case cards. All of them were developed with stakeholders during the co-design phase (Figs. 13 to 16), and two were additionally used in the questionnaire-based study (Figs. 15 and 16).
Smart camera. In this example, the AI-based system is a smart camera that shoots a picture only when all the people posing in front of it are smiling. Several products on the market offer this feature and served as inspiration (Canon, 2022; Nikon, 2022). The use case card of the smart shooting use case is shown in Fig. 13. This application is, in principle, simple and has a low-risk profile. However, it might lead to potential misuses that deserve documentation. For instance, a similar system was recently deployed in a working environment so that workers could only enter the front door or print documents when smiling at a camera. Management argued that it was intended to foster a positive working environment, but some workers felt their emotions were being manipulated (Business Insider, 2021).
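The cited products do not disclose their implementations; purely as an illustrative sketch of the documented intended purpose, the smile-gated shooting behaviour could be approximated with off-the-shelf detectors such as OpenCV's bundled Haar cascades (all thresholds below are arbitrary assumptions, not product parameters):

```python
# Illustrative sketch only: capture a photo when every detected face smiles.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

def everyone_smiling(frame) -> bool:
    """True only if at least one face is detected and all faces are smiling."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return False
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]  # look for a smile inside each face region
        smiles = smile_cascade.detectMultiScale(roi, scaleFactor=1.7,
                                                minNeighbors=20)
        if len(smiles) == 0:
            return False
    return True

cap = cv2.VideoCapture(0)                  # default webcam
ok, frame = cap.read()
if ok and everyone_smiling(frame):
    cv2.imwrite("group_photo.jpg", frame)  # "shoot" only when all smile
cap.release()
```

The sketch makes the scope of the use case concrete: the smile signal only gates the shutter, and repurposing it (e.g. to condition access to a building, as in the misuse above) would fall outside the documented intended purpose.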
Affective music recommender. Figure 14 shows the use case card of a music recommender system that proposes songs to the user based on personality, mood and playlist history. This use case is inspired by Amini et al. (2019). Several studies have demonstrated that music playlists can be used to infer a user's emotions, personality traits and vulnerabilities (Deshmukh & Kale, 2018); conversely, certain music pieces can induce behaviours and manipulate listeners' emotions (Gómez-Cañón et al., 2021). The use case card allows framing the ethical use of the system by stating that its sole purpose is to provide the most appropriate music recommendations, and in no case to manipulate the listener's emotions or behaviour.
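Amini et al. (2019) do not publish an implementation; the toy sketch below, with entirely hypothetical valence/arousal features and values, only illustrates the distinction the card draws, namely matching recommendations to the listener's current mood rather than steering it:

```python
# Toy sketch (hypothetical features): rank songs by closeness of their
# valence/arousal profile to the listener's current mood, never pushing it.
import math

def distance(a, b):
    return math.dist(a, b)  # Euclidean distance in (valence, arousal) space

user_mood = (0.7, 0.4)      # e.g. inferred from playlist history
catalogue = {
    "Song A": (0.80, 0.30),
    "Song B": (0.10, 0.90),
    "Song C": (0.65, 0.45),
}

# Recommend songs matching the current mood, closest first.
ranked = sorted(catalogue, key=lambda s: distance(catalogue[s], user_mood))
print(ranked)  # ['Song C', 'Song A', 'Song B']
```

A mood-inducement strategy would instead rank songs by distance to a target mood different from the listener's current one, which is precisely the behaviour the use case card rules out.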
Driver attention monitoring. This AI system records a driver's face through a car's in-cabin camera and monitors facial behaviour to detect potential drowsiness and distraction. The monitor attention use case is in charge of detecting such situations and sending alerts in the form of beep tones and light symbols on the car's dashboard (Fig. 15). Driver attention monitoring systems are nowadays commonly available as market products (Subaru, 2022; Post, 2022). The corresponding use case card states that the system is part of a safety component of the vehicle, which positions it as a high-risk system. Further, it highlights that the system is conceived to alert the driver, but in no case to allow the vehicle to take full control of the car in an autonomous manner.
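The cited commercial systems do not document their internals; a common proxy in the drowsiness-detection literature is PERCLOS (percentage of eye closure over a time window), sketched below with a hypothetical per-frame eye-openness estimator and assumed thresholds:

```python
# Illustrative PERCLOS-style sketch (all thresholds are assumptions).
from collections import deque

CLOSED_THRESHOLD = 0.2   # below this openness value, the eye counts as closed
WINDOW = 90              # ~3 s of frames at 30 fps
PERCLOS_ALERT = 0.4      # alert if eyes closed for >40% of the window

recent = deque(maxlen=WINDOW)

def update(openness: float) -> bool:
    """Feed one per-frame eye-openness value in [0, 1]; True fires an alert."""
    recent.append(openness < CLOSED_THRESHOLD)
    if len(recent) < WINDOW:
        return False
    return sum(recent) / len(recent) > PERCLOS_ALERT

# Simulate sustained eye closure: the alert fires once the window fills.
alerts = [update(0.1) for _ in range(WINDOW)]
print(alerts[-1])  # True -> beep tone and dashboard symbol
```

Consistent with the use case card, the only output is an alert to the driver; there is no control path from the monitoring logic to the vehicle's actuators.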
Student proctoring. This AI system detects potential cheating by students during exams. It is inspired by the literature (Baldassarri, Hupont, Abadía, & Cerezo, 2015; Roa'a et al., 2022) and market products (Meazure Learning, 2023; Respondus, 2023). The use case card presented in Fig. 16 documents its main use case, detect cheating. It is a complex one, as it includes AI computational tasks of different kinds: video analysis for the detection of third persons in the room and of relevant objects (e.g. books, phones); detection of impersonation through voice and face identification; and detection of suspicious behaviours (e.g. talking, facial/gaze movements). Alerts are triggered to instructors for review and action. This system's application area is high-risk and, as such, open issues such as ensuring non-discriminatory access and appropriate data governance must be carefully documented.
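Without presuming how the cited products work internally, the alert-generation step of such a system can be sketched as a simple aggregation of independent detector outputs, with the decision left to the instructor (all names below are hypothetical):

```python
# Hypothetical aggregation sketch: map raw detector flags to human-readable
# alerts for a human instructor, who reviews and decides on any action.
from dataclasses import dataclass

@dataclass
class FrameSignals:
    extra_person: bool       # third person detected in the room
    banned_object: bool      # e.g. book or phone in view
    identity_mismatch: bool  # face/voice does not match the enrolled student
    suspicious_motion: bool  # talking, repeated off-screen gaze, etc.

def alerts_for(signals: FrameSignals) -> list[str]:
    """Translate detector flags into alerts queued for instructor review."""
    mapping = {
        "extra_person": "Another person detected in the room",
        "banned_object": "Prohibited object in view",
        "identity_mismatch": "Possible impersonation",
        "suspicious_motion": "Suspicious behaviour detected",
    }
    return [msg for field, msg in mapping.items() if getattr(signals, field)]

print(alerts_for(FrameSignals(False, True, False, True)))
# ['Prohibited object in view', 'Suspicious behaviour detected']
```

Keeping a human reviewer between detector flags and any consequence for the student mirrors the "alerts for review and action" flow documented in the card.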
Footnotes
1
"An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment" (OECD, 2023b).
 
2
TensorFlow machine learning framework. Available at: https://www.tensorflow.org/
 
3
Note that several use case tables can be linked to the same UML diagram, depending on how the system is modelled, how many components it has, and the level of granularity we want for documentation.
 
4
We refer the reader to the second paragraph of Sect. 3.2, where we attempt to clarify this common misunderstanding between system and use case in the UML methodology.
 
References
Adkins, D., Alsallakh, B., Cheema, A., Kokhlikyan, N., McReynolds, E., Mishra, P., & Zvyagina, P. (2022a). Method cards for prescriptive machine-learning transparency. In IEEE/ACM 1st International Conference on AI Engineering–Software Engineering for AI (CAIN) (pp. 90–100).
Adkins, D., Alsallakh, B., Cheema, A., Kokhlikyan, N., McReynolds, E., Mishra, P., & Zvyagina, P. (2022b). Prescriptive and descriptive approaches to machine-learning transparency. In CHI Conference on Human Factors in Computing Systems Extended Abstracts (pp. 1–9).
Amini, R., Willemsen, M. C., & Graus, M. P. (2019). Affective music recommender system (MRS): Investigating the effectiveness and user satisfaction of different mood inducement strategies.
Arnold, M., Bellamy, R. K., Hind, M., Houde, S., Mehta, S., Mojsilović, A., et al. (2019). AI FactSheets: Increasing trust in AI services through supplier's declarations of conformity. IBM Journal of Research and Development, 63(4/5), 6–1.
Baldassarri, S., Hupont, I., Abadía, D., & Cerezo, E. (2015). Affective-aware tutoring platform for interactive digital television. Multimedia Tools and Applications, 74(9), 3183–3206.
Chaudron, M., Heijstek, W., & Nugroho, A. (2012). How effective is UML modeling? Software & Systems Modeling, 11, 571–580.
Chmielinski, K. S., Newman, S., Taylor, M., Joseph, J., Thomas, K., Yurkofsky, J., & Qiu, Y. C. (2022). The dataset nutrition label (2nd gen): Leveraging context to mitigate harms in artificial intelligence. arXiv preprint arXiv:2201.03954.
Cockburn, A. (2001). Writing effective use cases. Pearson Education India.
Deshmukh, P., & Kale, G. (2018). A survey of music recommendation system. International Journal of Scientific Research in Computer Science, Engineering and Information Technology (IJSRCSEIT), 3(3), 1721–1729.
Estevez Almenzar, M., Fernández Llorca, D., Gómez, E., & Martinez Plumed, F. (2022). Glossary of human-centric artificial intelligence. EUR 3113 EN, Publications Office of the European Union, Luxembourg, JRC129614. https://doi.org/10.2760/860665
Fantechi, A., Gnesi, S., Lami, G., & Maccari, A. (2003). Applications of linguistic techniques for use case analysis. Requirements Engineering, 8(3), 161–170.
Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2021). Datasheets for datasets. Communications of the ACM, 64(12), 86–92.
Gómez-Cañón, J. S., Cano, E., Eerola, T., Herrera, P., Hu, X., Yang, Y. H., & Gómez, E. (2021). Music emotion recognition: Toward new, robust standards in personalized and context-sensitive applications. IEEE Signal Processing Magazine, 38(6), 106–114.
Gupta, A., Anpalagan, A., Guan, L., & Khwaja, A. S. (2021). Deep learning for object detection and scene perception in self-driving cars: Survey, challenges, and open issues. Array, 10, 100057.
Gutierrez, C. I., Aguirre, A., Uuk, R., Boine, C. C., & Franklin, M. (2023). A proposal for a definition of general purpose artificial intelligence systems. Digital Society, 2(3), 36.
Hupont, I., & Gomez, E. (2022). Documenting use cases in the affective computing domain using unified modeling language. In 10th International Conference on Affective Computing and Intelligent Interaction (ACII), Nara, Japan.
Hupont, I., Micheli, M., Delipetrev, B., Gómez, E., & Soler Garrido, J. (2023). Documenting high-risk AI: A European regulatory perspective. Computer, 56(5), 18–27.
Hupont, I., Tolan, S., Gunes, H., & Gómez, E. (2022). The landscape of facial processing applications in the context of the European AI Act and the development of trustworthy systems. Scientific Reports, 12, 10688.
Koç, H., Erdoğan, A. M., Barjakly, Y., & Peker, S. (2021). UML diagrams in software engineering research: A systematic literature review. Multidisciplinary Digital Publishing Institute Proceedings, 74(1), 13.
Laato, S., Tiainen, M., Najmul Islam, A., & Mäntymäki, M. (2022). How to explain AI systems to end users: A systematic literature review and research agenda. Internet Research, 32(7), 1–31.
Louradour, S., & Madzou, L. (2021). A policy framework for responsible limits on facial recognition, use case: Law enforcement investigations. World Economic Forum.
Madaio, M. A., Stark, L., Wortman Vaughan, J., & Wallach, H. (2020). Co-designing checklists to understand organizational challenges and opportunities around fairness in AI. In CHI Conference on Human Factors in Computing Systems (pp. 1–14).
Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., & Gebru, T. (2019). Model cards for model reporting. In FAT* '19: Proceedings of the Conference on Fairness, Accountability, and Transparency (pp. 220–229).
Nesbitt, K., Beleigoli, A., Du, H., Tirimacco, R., & Clark, R. A. (2022). User experience (UX) design as a co-design methodology: Lessons learned during the development of a web-based portal for cardiac rehabilitation. Oxford University Press.
Panigutti, C., Ronan, H., et al. (2023). The role of explainable AI in the context of the AI Act. In 6th ACM Conference on Fairness, Accountability and Transparency (FAccT).
Pushkarna, M., Zaldivar, A., & Kjartansson, O. (2022). Data cards: Purposeful and transparent dataset documentation for responsible AI. In FAccT '22: 2022 ACM Conference on Fairness, Accountability, and Transparency.
Roa'a, M., Aljazaery, I. A., ALRikabi, H. T. S., & Alaidi, A. H. M. (2022). Automated cheating detection based on video surveillance in the examination classes. International Journal of Interactive Mobile Technologies (iJIM), 16(08), 125.
Sánchez, F. L., Hupont, I., Tabik, S., & Herrera, F. (2020). Revisiting crowd behaviour analysis through deep learning: Taxonomy, anomaly detection, crowd emotions, datasets, opportunities and prospects. Information Fusion, 64, 318–335.
Wahle, J. P., Ruas, T., Mohammad, S. M., Meuschke, N., & Gipp, B. (2023). AI usage cards: Responsibly reporting AI-generated content. arXiv preprint arXiv:2303.03886.
Zamenopoulos, T., & Alexiou, K. (2018). Co-design as collaborative research. Bristol University/AHRC Connected Communities Programme.
Zhou, J., Liu, B., Hong, J. N. A. Y., Lee, K. C., & Wen, M. (2023). Leveraging large language models for enhanced product descriptions in eCommerce. arXiv preprint arXiv:2310.18357.