Excerpt
What should be our response to AI Agency trapping us in the data-driven web of the AI-powered machine? Among the wide-ranging responses of AI communities to the crisis of AI Agency, the first is to warn us about the existential risk posed by Super AI, the dystopia of Black Mirror, and the crisis of automated decision-making. This response makes us aware of a techno-centric future in which our identities are being linked to facial recognition, and of deep concerns about data paralysing institutions and industries, thereby creating a culture of ‘data anxiety’. The second response is to articulate the implications of the opaqueness and lack of transparency of autonomous AI systems, raising concerns about the manipulation of decision-making. This response also alerts us to the impact of AI Agency on the politics of governance. The third response is to question the very nature of the intelligence of the artificial, raising questions about sentience and our understanding of the data-driven world. This response also warns us of the danger of growing accustomed to blind faith in the machine, the trappings of a human–robot co-existence society, and the elimination of human intervention in autonomous decision-making. Further, it alerts us to empty slogans of transparency and compliance and to “ethics washing” facades. The fourth response is to counter the images of a dystopic future, asking us to give attention to the positive impacts and potentials of AI systems for societal benefit in domains such as human health, transportation, service robots, health-care, education, public safety, security and entertainment. The fifth response is to initiate a conversation on public accountability frameworks, including issues of governance, and the cultivation of a culture of algorithmic accountability arising from concerns of opaqueness, transparency and responsibility.
Whilst recognising the need to cultivate the trust and reliability of AI systems and tools, it argues for the alignment of AI Agency with the social, cultural, legal and moral values of societies, guided by ethical frameworks. To get a glimpse of the varied voices and responses to the trappings of AI Agency, we take note of recent AI debates in forums such as the World Economic Forum (2019), the STOA Study (2019), the AI and the Future of Humanity Exhibition (2020), and The Royal Society (2018), as well as voices of the AI research community, including those of the authors of this volume. …