01.06.2014
Introduction to special issue on computational methods for enforcing privacy and fairness in the knowledge society
Published in: Artificial Intelligence and Law | Issue 2/2014
Excerpt
We live in times of unprecedented opportunities for sensing, storing, and analyzing micro-data on human activities at extreme detail and resolution, at a societal scale. Wireless networks and mobile devices record the traces of our movements. Search engines record the logs of our queries for finding information on the web. Automated payment systems record the tracks of our purchases. Social networking services record our connections to friends, colleagues, and collaborators. Ultimately, these big data of human activity are at the heart of the very idea of a knowledge society: a society where decisions—small or big, by business or policy makers—can be taken on the basis of reliable knowledge, distilled from the ubiquitous digital traces generated as a side effect of our living. Increasingly sophisticated intelligent systems support knowledge discovery and deployment from human activity data, enabling the extraction and the (often automatic) use of models, patterns, profiles, and rules of human behavior. This paradigm shift towards the knowledge society comes, however, with critical risks for human rights:
- Privacy violation: during knowledge extraction, the risk is one of unintentional or deliberate intrusion into the personal data of the people whose data are being collected, analyzed, and mined.
- Discrimination: during knowledge deployment, the risk is the unfair use of the discovered knowledge to make discriminatory decisions about the people who are classified or profiled.
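The discrimination risk can be made concrete with a small sketch: one common way to quantify unfairness in deployed decisions is to compare favorable-outcome rates between a protected group and the rest (the disparate impact ratio). The data, function names, and the 0.8 ("four-fifths") threshold below are illustrative assumptions, not taken from this special issue.

```python
# Illustrative sketch (hypothetical data): measuring group discrimination
# in automated decisions via the disparate impact ratio.

def positive_rate(decisions):
    """Fraction of favorable (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def disparate_impact(protected, unprotected):
    """Ratio of favorable-outcome rates, protected group vs. the rest.
    Values well below 1.0 suggest the protected group is disadvantaged;
    0.8 is a threshold commonly cited from US employment-law practice."""
    return positive_rate(protected) / positive_rate(unprotected)

# Hypothetical loan decisions (1 = approved, 0 = denied) by group.
group_a = [1, 0, 0, 1, 0, 0, 0, 1]   # protected group: 3/8 approved
group_b = [1, 1, 0, 1, 1, 0, 1, 1]   # unprotected group: 6/8 approved

ratio = disparate_impact(group_a, group_b)
print(round(ratio, 2))  # 0.5 — well below the illustrative 0.8 threshold
```

A measure like this only detects disparities in outcomes; deciding whether a disparity constitutes unlawful or unfair discrimination is exactly the kind of question the contributions in this issue address.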