1 Introduction
2 Organisations underappreciate workplace risks of AI
3 AI can optimise workplaces, but also burden and harm workers
4 Gaps and challenges in WHS practices to identify and manage AI risks
5 Human dignity and autonomy in the AI-using workplace
6 AI ethics frameworks
| Human condition | Worker safety | Oversight |
|---|---|---|
| Human, social and environmental wellbeing: Throughout their lifecycle, AI systems should benefit individuals, society and the environment | Privacy protection and security: Throughout their lifecycle, AI systems should respect and uphold privacy rights and data protection, and ensure the security of data | Transparency and explainability: There should be transparency and responsible disclosure to ensure people know when they are being significantly impacted by an AI system and can find out when an AI system is engaging with them |
| Human-centred values: Throughout their lifecycle, AI systems should respect human rights, diversity, and the autonomy of individuals | Reliability and safety: Throughout their lifecycle, AI systems should reliably operate in accordance with their intended purpose | Contestability: When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or output of the AI system |
| Fairness: Throughout their lifecycle, AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups | | Accountability: Those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled |
7 The AI Canvas
8 Conceptual integration of AI adoption and WHS viewpoints
- How might the combination of AI ethics principles and AI Canvas stages affect equality, contribution, openness, and responsibility in a workplace using AI; and
- What measures might be missing but ought to be available, as their absence could undermine the responsible, competent governance of human-AI relations in a workplace?
The AI WHS Principles group the eight Australian AI Ethics Principles under three headings: Human condition (human, social and environmental wellbeing; human-centred values; fairness), Worker safety (privacy protection and security; reliability and safety) and Oversight (transparency and explainability; contestability; accountability).

| Main stage of development | AI Canvas | AI WHS Principles: Human condition | AI WHS Principles: Worker safety | AI WHS Principles: Oversight | Examples* | SafeWork characteristics of work and hazards/risks |
|---|---|---|---|---|---|---|
| Ideation | Prediction: Identify the key uncertainty that you would like to resolve | • Using AI when an alternative solution may be more appropriate or humane. [5,12] • The system displacing rather than augmenting human decisions. [3] • Augmenting or displacing human decisions with differential impact on workers who are directly or indirectly affected. [7,9,13] • The resolution of uncertainty affecting ethical, moral or social principles. [9,11,14] | • Overconfidence in or overreliance on the AI system, resulting in loss of or diminished due diligence. [3,7] | • Inadequate or no specification and/or communication of the purpose of AI use or of an identified AI solution. [2,7,9,15,16] | Predicting a worker's physical or mental exhaustion levels for monitoring purposes without instituting strategies to prevent exhaustion in the future (worker safety) | Psychological: work demands |
| Ideation | Judgement: Determine the payoffs to being right versus being wrong. Consider both false positives and false negatives | • (Insufficient consideration given to) unintended consequences of false positives and false negatives. [2,4,11,12] • AI being used out of scope. [3,4,7] • AI undermining company core values and societal expectations. [5,14] • AI system undermining human capabilities. [5] • Trading off personal flourishing (intrinsic value) in favour of organisational gain (instrumental good). [14] | • Technical failure, human error, financial failure, security breach, data loss, injury, industrial accident/disaster. [1,7,16] • Impact on other processes or essential services affecting workflow or working conditions. [1,13] | • Insufficient or ineffective transparency, contestability and accountability at the design stage and throughout the development process. [12,16] | False negatives or false positives disadvantage or victimise a worker, causing stress, overwork, ergonomic risks, anxiety, boredom, fatigue and burnout, potentially building barriers between people and facilitating harassment or bullying (human condition) | Psychological: work demands |
| Ideation | Action: What are the actions that can be chosen? | • Inequitable or burdensome treatment of workers. [1,10] • Gaming (reward hacking) of the AI system undermining workplace relations. [4,16] • Workers attributing greater intelligence or empathy to the AI system than is appropriate. [3] • Context stripping from communication between employees. [3] • Worker manipulation or exploitation. [5,7] • Undue reliance on AI decisions. [3,7] | • Adversely affecting worker or general rights (to a safe workplace/physical integrity, pay at the right rate/EA, adherence to National Employment Standards, privacy). [1,7] • Unnecessary harm, avoidable death or disabling injury/ergonomics. [1,7,8,16] • Physical and psychosocial hazards. [3,16] | • Inadequate or closed chain of accountability, reporting and governance structure for AI ethics within the organisation, with limited or no scope for review. [7,10,14] • (Lack of process) for triggering human oversight or checks and balances, so that algorithmic decisions cannot be challenged, contested or improved. [3,9] • AI shifting responsibility outside existing managerial or company protocols and channels of internal accountability (via out- or sub-contracting). [13] | A workflow management system disproportionately, repeatedly or persistently assigns some workers to challenging tasks that others with principally identical roles can thus avoid (human condition) | Cognitive: complexity and duration |
| Development | Outcome: Choose the measure of performance that you want to use to judge whether you are achieving your outcomes | • Chosen outcome measure not aligning with healthy/collegial workplace dynamics. [1,7] • Outcome measure resulting in the worker-AI interface adversely affecting the status of a worker or workers in the workplace. [3] | • Performance measures differentially and/or adversely affecting work tasks and processes. [2,6,10] | • Workers (not) able to access and/or modify factors driving the outcomes of decisions. [2,3,9,16] | Efficiency improvements have differential effects across the workforce, improving conditions for some but not others, or creating or promoting competitive behaviours that undermine collaboration or collegial relations (human condition, worker safety) | Psychological: organisational justice |
| Development | Training: What data do you need on past inputs, actions and outcomes to train your AI to generate better predictions? | • Training data not representing the target domain in the workplace. [7,15] • Acquisition, collection and analysis of data revealing (confidential) information out of scope of the project. [7] • Data not being fit for purpose. [5,8,11,16] | • Cyber security vulnerability. [1,11] • (In)sufficient consideration given to interconnectivity/interoperability of AI systems. [2,9] | • Inadequate data logs (inputs/outputs of the AI) or data narratives (mapping origins and lineage of data), adversely affecting the ability to conduct data audits or routine monitoring and evaluation. [7,9,10,12] • (Rapid AI introduction resulting in) inadequate testing of AI in a production environment and/or for impact on different (target) populations. [2,4] | Training data for a new system of leave and sick-leave projections include only more recent workplace recruits with shorter tenure, for whom better contextual data are available (human condition) | Psychological: organisational justice |
| Development | Input: What data do you need to generate predictions once you have an AI algorithm trained? | • Discontinuity of service. [1,13] • Worker unable or unwilling to provide or permit data to be used as input to the AI. [9,15] | • Impact on the physical workplace (layout, design, environmental conditions: temperature, humidity). [10,15] • (In)secure data storage and cyber security vulnerability. [1,2,7,10,16] • Worker competences and skills (not) meeting AI requirements. [13] • Boundary creep: data collection (not) ceasing outside the workplace. [8,15] | • Insufficient worker understanding of safety culture and safe behaviours applied to data and data processes within AI. [8,13] • Partial disclosure or audit of data uses (e.g., due to commercial considerations or proprietary knowledge). [14,15] | A workforce planning tool omits timely correction for seasonal factors, trends or shocks, leading to a shortage of staff or produce at key times (human condition) | Cognitive: complexity and duration |
| Application | Feedback: How can you use the outcomes to improve the algorithm? | | • Assessment processes requiring review due to a new approach or tool. [9] • Identifiable personal data retained longer than necessary for the purpose for which it was collected and/or processed. [10] | • Inadequate integration of AI operational management into the routine maintenance that ensures the AI continues to work as initially specified. [3,4,8,16] • No offline systems or processes in place to test and review the veracity of AI predictions/decisions. [9] | A new HR recruitment process using AI achieves a more gender-balanced intake of new staff. Do the data input or algorithm require review to maintain this outcome? (worker safety) | Cognitive and psychological: information-processing load, complexity and duration, organisational justice |
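The Judgement stage above asks developers to weigh the payoffs of being right against the costs of false positives and false negatives. A minimal sketch of how that trade-off could be made explicit before deployment; the `expected_cost` helper and all cost figures are hypothetical, invented for illustration rather than drawn from the frameworks cited above:

```python
# Hypothetical illustration of the Judgement stage: making the relative
# costs of false positives and false negatives explicit. All figures are
# invented; real costs would come from a workplace risk assessment.

def expected_cost(fp_rate, fn_rate, cost_fp, cost_fn, prevalence):
    """Expected per-case cost of an AI prediction system.

    fp_rate    -- probability of flagging a worker who is actually fine
    fn_rate    -- probability of missing a worker who is actually at risk
    cost_fp    -- harm of a false alarm (e.g. unwarranted scrutiny, stress)
    cost_fn    -- harm of a missed case (e.g. injury, burnout)
    prevalence -- base rate of the condition being predicted
    """
    return (1 - prevalence) * fp_rate * cost_fp + prevalence * fn_rate * cost_fn

# Compare two candidate fatigue-monitoring models with opposite error profiles.
model_a = expected_cost(fp_rate=0.10, fn_rate=0.02, cost_fp=1.0, cost_fn=50.0, prevalence=0.05)
model_b = expected_cost(fp_rate=0.02, fn_rate=0.10, cost_fp=1.0, cost_fn=50.0, prevalence=0.05)

# The model with fewer false alarms is not automatically safer: when missed
# cases are far more harmful, the higher-false-positive model can still win.
print(f"model A expected cost: {model_a:.3f}")
print(f"model B expected cost: {model_b:.3f}")
```

The point of the sketch is only that the weighing is a design decision, not a technical by-product: whoever sets `cost_fp` and `cost_fn` is deciding whose harms count, which is exactly what the Human condition column asks organisations to surface.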
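The Feedback example above asks whether the data input or algorithm require review to maintain a gender-balanced intake. A minimal sketch of the kind of offline monitoring check the Oversight column calls for, assuming hypothetical intake records with a `gender` field; the `needs_review` helper, the tolerance band and the sample data are all illustrative assumptions, not part of the cited frameworks:

```python
# Hypothetical offline check for the Feedback stage: flag an AI-assisted
# recruitment pipeline for human review when the intake balance drifts
# beyond a tolerance band. Records and threshold are invented for illustration.

from collections import Counter

def needs_review(intakes, group_field="gender", tolerance=0.10):
    """Return True when any group's share of the intake deviates from
    parity by more than `tolerance`, signalling that the data inputs or
    algorithm may need review to maintain the balanced outcome."""
    counts = Counter(record[group_field] for record in intakes)
    total = sum(counts.values())
    parity = 1 / len(counts)
    return any(abs(n / total - parity) > tolerance for n in counts.values())

# Balanced quarter: an 11-to-9 intake stays within the 10-point band.
balanced = [{"gender": "f"}] * 11 + [{"gender": "m"}] * 9
# Drifting quarter: a 15-to-5 intake breaches it and triggers review.
drifting = [{"gender": "f"}] * 15 + [{"gender": "m"}] * 5

print(needs_review(balanced))  # False
print(needs_review(drifting))  # True
```

Running such a check offline, on logged outcomes rather than inside the production system, matches the table's point that veracity review should not depend on the AI system auditing itself.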