Introduction
This paper explores the sustainability impact of technology from social, economic, and environmental angles, in the context of life engineering. Using the methodology of the critical book review, the paper examines the consequences of algorithms, artificial intelligence (AI), social media, and quantitative metrics on human life and their implications across different dimensions. After each book review, the authors, who are Computer Science researchers and graduates, add their own thoughts and impressions, illustrating the impact the material has had on them.
Life engineering
Life engineering [1] is the intentional and systematic practice of designing and shaping one’s life to achieve specific goals, enhance well-being, and optimize personal growth. It involves understanding oneself, setting clear goals, and implementing deliberate actions to create a life aligned with values and aspirations. By cultivating habits and behaviors that support desired outcomes, leveraging scientific research and practical wisdom, and adapting to challenges, individuals can lead more intentional and fulfilling lives. While the effectiveness of life engineering may vary, engaging in this process empowers individuals to take an active role in shaping their lives and finding greater purpose and satisfaction.
In the context of technology, life engineering addresses the design of technology with a strong emphasis on meeting the needs and well-being of users, a concern of growing importance in an increasingly AI-driven world. It focuses on developing solutions that serve human needs by integrating technology into various aspects of people’s lives [1]. This is done largely by applying user-centered design principles and by introducing cutting-edge advancements in AI, while keeping in mind the social, cultural, and ethical implications this may have for humankind. Thus, life engineering strives to create technology that enhances human experiences and promotes a harmonious coexistence between humans and machines.
The term “life engineering” was first introduced by the Swiss scholar Hubert Osterle [1], referring to a discipline that utilizes Information Technology (IT) to improve individuals’ quality of life. Although there has been tremendous advancement in the IT field, the question remains the same: has the quality of life improved? In the era of ubiquitous computing, where electronic devices are abundant, how can we responsibly handle their disposal to avoid environmental issues? It is essential to consider sustainability and ethics when envisioning any new developments.
Life engineering in algorithms: a review of the book “Weapons of Math Destruction” by Cathy O’Neil
In her book “Weapons of Math Destruction” [2], Cathy O’Neil describes the hidden dangers of algorithmic decision-making. This provocative book exposes how algorithms can perpetuate discrimination and bias, and undermine fairness and accountability. With society increasingly relying on algorithms in various domains, understanding the harm caused by these Weapons of Math Destruction (WMDs) is essential, as O’Neil reveals through captivating examples the profound societal implications of flawed algorithmic systems. As discussed by Osterle et al. [1], recent advancements have made information technologies central to both business and personal life, with IT now deeply integrated into daily routines and societal functions, significantly impacting how individuals interact with technology and manage their activities.
The book’s central argument concerns the negative impact of algorithms and data-driven decision-making on society, and its strength lies in its ability to shed light on the hidden dangers of algorithmic decision-making. Cathy O’Neil effectively presents compelling arguments and supports them with rigorous research and real-life examples. Specifically, the book addresses the issue of college ranking systems, as discussed by Kui [3], which heavily rely on biased standardized tests such as the Scholastic Aptitude Test (SAT) and the American College Test (ACT) [4]. The preference these systems show for wealthier students creates unfair advantages and places immense pressure on students, leading to stress, inequality, and a devaluation of the love of learning. Additionally, the book delves into online advertising algorithms that collect personal data and create personalized ads. This raises concerns about privacy and manipulation, especially in political campaigns targeting vulnerable audiences. Evans [5, 6] examines similar issues within the online advertising industry, highlighting the economic and privacy implications. The importance of critical thinking in resisting the influence of these algorithms is emphasized.
The author also examines the mathematical models used in parole and sentencing decisions; Skeem et al. [7] likewise highlight these models’ lack of transparency and potential for bias. Such models reinforce existing biases and disproportionately affect marginalized communities. The book emphasizes the need for ethical considerations and human judgment in the criminal justice system. Furthermore, the book explores how automated hiring systems, discussed by Sánchez-Monedero et al. [8], perpetuate biases and limit job opportunities, while flawed performance evaluation algorithms contribute to high-pressure work environments. Transparency and human involvement are essential for ensuring fairness and accountability in these processes. Lastly, the book addresses the negative consequences of flawed credit assessment algorithms, which harm financial opportunities and well-being, and highlights how micro-targeting algorithms in civic life, as examined by Witzleb et al. [9] and Kreiss [10], threaten democratic integrity.
However, the book’s heavy focus on the negative aspects may overshadow potential benefits and positive applications. Nevertheless, by advocating for ethical considerations and human oversight, the book makes a significant contribution in raising awareness of ethical concerns in algorithmic decision-making, and it encourages critical evaluation of fairness, transparency, and accountability.
Social
The use of WMDs has significant societal implications. It exacerbates social inequality, widens the gap between privileged and disadvantaged groups, and unfairly allocates resources, limiting opportunities for certain communities. These algorithms also reinforce power imbalances, concentrating decision-making authority in the hands of those who control them. Additionally, they threaten privacy and autonomy through the collection and use of personal data, undermine transparency, accountability, and the ability to challenge algorithmic decisions, and diminish trust in institutions and democratic processes. To mitigate the negative impact of WMDs, solutions include strengthening accountability and algorithmic transparency and auditing the consequences of these systems. Implementing such measures reduces harm and promotes fairness, ethical decision-making, and trust in algorithmic systems.
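As an illustration of what such an audit could look like in practice, the short sketch below computes a basic demographic parity check, comparing the rate of favorable decisions across groups. This is our own illustrative example rather than a procedure from O’Neil’s book; the decisions and group labels are hypothetical.

```python
# Illustrative fairness audit (not from O'Neil's book): compare the rate of
# favorable outcomes across groups (demographic parity). Data is hypothetical.

from collections import defaultdict

def selection_rates(decisions, groups):
    """Return the share of positive decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

if __name__ == "__main__":
    decisions = [1, 0, 1, 1, 0, 0, 1, 0]                 # hypothetical loan approvals
    groups = ["A", "A", "A", "B", "B", "B", "A", "B"]    # hypothetical group labels
    print(selection_rates(decisions, groups))            # {'A': 0.75, 'B': 0.25}
    print(demographic_parity_gap(decisions, groups))     # 0.5 -> large disparity
```

A large gap does not by itself prove discrimination, but it flags a decision system that deserves the kind of human scrutiny the book calls for.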
Economic
Algorithmic systems have both positive and negative effects on the economy. On the positive side, algorithms can improve efficiency by automating tasks, reducing costs, and increasing productivity. They can also facilitate personalized recommendations, enhancing customer experiences and driving sales. However, algorithms can also contribute to economic inequalities by favoring certain businesses or individuals, limiting competition, and consolidating market power. Additionally, algorithmic decision-making may lead to unintended consequences, such as biased outcomes or job displacement, which can negatively impact certain sectors of the economy and worsen income disparities. It is essential to carefully consider the ethical and societal implications of algorithmic systems to ensure a fair and inclusive economic landscape.
Environmental
Algorithms can help optimize resource allocation and aid in sustainable planning. However, the energy consumption associated with algorithmic processing and the generation of electronic waste are negative consequences. Mitigating these concerns requires prioritizing energy efficiency, responsible data management, and the adoption of sustainable practices in algorithm development and decision-making processes.
Conclusion
As Computer Science scholars, we found that Cathy O’Neil’s book “Weapons of Math Destruction” had a profound impact on us, challenging our assumptions about algorithmic decision-making. The book’s exploration of how algorithms shape our lives, often without our awareness, was both eye-opening and unsettling. What struck us most was the book’s emphasis on the hidden biases and inequalities perpetuated by algorithmic systems, alerting us to the far-reaching implications of seemingly neutral algorithms. The concept of WMDs itself provoked thought about the power and potential harm of unchecked algorithms, raising critical questions about ethics, human oversight, and regulation. We now approach technology and data-driven processes with a more critical mindset, questioning the fairness and potential harms they may perpetuate.
Unveiling the hidden consequences: a journey through the “Atlas of AI” by Kate Crawford
As artificial intelligence (AI) gains prominence in the modern world, with its unpredictable nature and far-reaching consequences, a haunting question lingers: what is really going on behind the shiny appearance of AI, and how was it all put together in such a complex way? In her enlightening book “Atlas of AI”, acclaimed scholar Kate Crawford [11] answers this question by exploring the artistry of AI, viewing it as a collection of disparate elements, similar to the sections of an atlas, encountering well-visited and lesser-known landscapes of computation and exposing a world unseen by many.
From a life engineering perspective, Crawford’s Atlas of AI [11] offers a comprehensive examination of the aspects and dimensions of AI, viewing it as more than a technical domain. The book opens by disrupting conventional narratives and debunking prevalent myths surrounding artificial intelligence, particularly the misconception that these non-human systems operate similarly to human minds. This perspective assumes that, with sufficient training and resources, human-like intelligence can be achieved from scratch [12], without addressing the fundamental ways in which humans are embodied. The author argues that AI is much more than databases, algorithms, and machine learning models, aligning with other prominent authors such as Cathy O’Neil [2] and Timnit Gebru [13], who have also emphasized the multidimensional nature of AI beyond algorithms and deep learning. She rejects the notion that AI is purely a technical domain by highlighting its dependency on natural resources, fuel, human labor, histories, classification, and power dynamics.
By examining the significance of life engineering and shedding light on the thought-provoking concerns raised in Crawford’s profound exploration, one can see how AI is deeply interconnected with the social, economic, and environmental aspects of our world. Life engineering prompts us to understand the relationships between AI and these fundamental dimensions of human existence.
Social
In today’s interconnected world, society has become increasingly reliant on artificial intelligence. The book addresses issues of labor exploitation, human-AI collaboration, and privacy concerns [14]. Crawford’s research sheds light on a troubling reality: instead of asking whether robots will replace humans, the more important question is how humans are increasingly treated as robots.
While bringing our attention to the surveillance technologies used in AI to monitor and control humans [15], the book raises questions about fairness, autonomy, and ethical considerations, as well as the impact of classification and the underlying semantics that influence our lives and societal structures [16]. It exposes the disconcerting realities of racism, bias, and other social concerns that emerge from the discriminatory outcomes of AI systems based on mere algorithms. However, it cannot be ignored that today’s world is heavily dependent on the benefits of AI systems.
Thus, it is crucial to engage in the field of life engineering, where we strive to design and shape AI systems that prioritize fairness and human well-being, ensuring that technological advancements serve the greater good.
Economic
Life engineering calls for careful consideration of the economic consequences of AI, as highlighted in the book “Atlas of AI”. The book emphasizes the unequal distribution of benefits and resources resulting from AI’s growth. The power dynamics and profit motives that shape AI systems lead to the concentration of wealth in the hands of a few. Moreover, AI operates as a structure of power, shaped by the influence of states; military research funding, policing priorities, and economic interests have had a profound influence on the trajectory of the field. By illuminating these hidden dimensions, the goal is to ensure that AI is guided by more transparent and ethical principles that can serve the broader interest of society.
Environmental
With data centers being among the world’s largest consumers of electricity [17], the technology is hungry for electric power, especially for the continuous training of AI algorithms, causing significant impacts that echo throughout our planet. Moreover, the impact of the mineral extraction needed to power contemporary computation must be addressed. While the exact value of these extracted minerals remains elusive, the substantial consequences can already be observed: the toll on clean streams, breathable air, and the health of local communities cannot be quantified in monetary terms. The book highlights the need for responsible practices throughout the life cycle of technology, including ethical sourcing, reducing e-waste, and minimizing environmental impact.
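To give a rough sense of the scale involved, the sketch below performs a back-of-envelope estimate of the energy and carbon footprint of a single training run. This is our own illustration rather than a calculation from the book, and every parameter (number of GPUs, power draw, duration, data-center overhead, grid carbon intensity) is a hypothetical placeholder.

```python
# Back-of-envelope estimate of the energy and CO2 footprint of one training run.
# All parameters below are hypothetical placeholders used for illustration only.

def training_footprint(num_gpus, gpu_power_kw, hours, pue, grid_kg_co2_per_kwh):
    """Return (energy in kWh, emissions in kg CO2) for one training run."""
    it_energy_kwh = num_gpus * gpu_power_kw * hours        # energy drawn by the accelerators
    facility_energy_kwh = it_energy_kwh * pue               # scale by data-center overhead (PUE)
    emissions_kg = facility_energy_kwh * grid_kg_co2_per_kwh
    return facility_energy_kwh, emissions_kg

if __name__ == "__main__":
    # Hypothetical run: 512 GPUs at 0.4 kW each for two weeks, PUE 1.2,
    # grid intensity 0.4 kg CO2 per kWh.
    energy, co2 = training_footprint(512, 0.4, 14 * 24, 1.2, 0.4)
    print(f"{energy:,.0f} kWh, {co2 / 1000:,.1f} tonnes CO2")
```

Even with these modest, made-up figures the result runs to tens of thousands of kilowatt-hours and tens of tonnes of CO2 for a single run, which is why the book's attention to energy and material supply chains matters.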
Conclusion
Throughout the journey into the pages of “Atlas of AI”, we embark on a thought-provoking exploration of the interlaced landscape of artificial intelligence. This journey not only highlights the issues but also calls for action. One needs to ask hard questions about the way AI is produced and adopted: whose interests does it serve? Who bears the greatest risks of harm? And where should the use of AI be constrained? Machines have an enormous appetite for data and energy, but how and what they are fed has a huge impact on how they will interpret the world.
Reflecting on the irreversible damage that has already been done, it becomes clear that we cannot rewind the systems that have propelled AI’s exponential growth. However, we have the power to pause, to take a moment and question the trajectory of this rapid advancement. Do we truly need such an overwhelming amount of computation in our world? Should we not explore alternative systems and solutions that are not as voracious in their appetite for power, energy, and data?
It is time to shift our focus towards prioritizing the well-being and benefits of human beings by embracing the principles of life engineering. We should carefully construct systems that are sustainable, ethical, and focused on the betterment of society. Perhaps the solution to these problems lies in cultivating awareness of the direction in which the world is heading and recognizing that maybe we do not actually need as much as we think we do.
By exploring the concepts of life engineering, we can chart a path towards a more balanced future. We need to strive for a harmonious coexistence, where technology serves as a tool to enhance our lives while respecting the limits of our resources and energy and promoting the prosperity of all.
Life engineering in social media and search engines: a review of the book “The Age of Surveillance Capitalism” by Shoshana Zuboff
Social media platforms have transformed how people connect and socialize [18]. They enable individuals to maintain relationships, share experiences, and express themselves. Similarly, search engines provide instant access to information. Liaw et al. [19] describe how the primary use of the Internet is retrieving information through search engines. For instance, without customized search engines like Google, we would still rely on physical maps and library visits to find books, as well as the Yellow Pages for company and person searches.
These technologies gather user information to deliver more personalized services. Cleff et al. [20] discuss how this information is often obtained through user consent buried in lengthy terms and conditions that users may be unaware of. However, some individuals willingly share their data in exchange for free services and convenience. Landwehr et al. [21] discuss how companies providing these free services make a profit through customized advertising and behavior modification techniques, which Zuboff also explains in her book [22].
Shoshana Zuboff aims to raise awareness about the growing power and influence of tech companies, which the author refers to as “surveillance capitalists”, and the implications this has for individual privacy, autonomy, and democracy. The primary merit of her work lies in her extensive research and meticulous analysis, requiring a dedicated 9‑year effort to complete.
The book is divided into four parts, each providing unique insights into the surveillance capitalism phenomenon. In the first part, Zuboff delves into the foundations of surveillance capitalism, explaining how companies like Google have disregarded users’ privacy boundaries and extracted information from users’ data for their own profit, information that was later used to influence users’ behavior. She provides examples of how the absence of regulation has allowed these invasive methods to flourish, entrenching surveillance capitalism.
In the second part of the book, Zuboff illustrates how surveillance capitalism has moved from the virtual world to the real world. She points out how Google Street View [23] operates much like a web crawler [24], except that it crawls through our physical world, monitoring people and gathering further information about them. The book’s third part delves into the social and political implications of surveillance capitalism. She draws a harsh comparison between surveillance capitalism and totalitarianism, explaining that while totalitarianism relies on violence and fear to maintain power, surveillance capitalism uses behavior modification and data collection to achieve the same end.
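For readers unfamiliar with the analogy, the sketch below shows, in simplified form, what a conventional web crawler does: starting from a seed page, it follows links and collects what it finds. It is our own minimal illustration (it omits politeness rules such as robots.txt), not code discussed by Zuboff, and the seed URL is arbitrary.

```python
# Minimal illustrative web crawler: fetch pages breadth-first from a seed URL,
# extract links, and visit them. Simplified for illustration only.

from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed_url, max_pages=10):
    """Visit up to max_pages pages reachable from seed_url and return them."""
    seen, queue, visited = {seed_url}, deque([seed_url]), []
    while queue and len(visited) < max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", errors="ignore")
        except OSError:
            continue                      # skip pages that fail to load
        visited.append(url)
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)  # resolve relative links
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return visited

if __name__ == "__main__":
    for page in crawl("https://example.com"):
        print(page)
```

Zuboff's point is that Street View applies the same sweep-and-index logic to streets, houses, and passers-by rather than to hyperlinked documents.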
Finally, in the fourth part of the book, Zuboff offers a critique of surveillance capitalism and explores possible ways to resist its power. She emphasizes the importance of individuals taking control of their personal data and the need for laws to regulate data manipulation.
From a life engineering perspective, the book raises concerns and presents arguments about the dangers posed by social media and search engines when we rely on them for real-life decisions. These concerns have social, economic, and environmental dimensions.
Social
One of the arguments Zuboff presents on the social dimension concerns the consequences these technologies have for our trust in friends’ recommendations. The lack of trust in personal connections may diminish the richness of our social interactions and human experiences. Conversely, placing trust in information from social media platforms can lead to the dissemination of misinformation and fake news, as discussed by Di Domenico et al. [25].
Another argument concerns the lack of privacy: constant surveillance by these service providers can lead to lower participation in social media, as discussed by Van der Schyff et al. [26]. Since the gathered data is not publicly shared, users are hesitant to express their true thoughts.
Economic
From an economic standpoint, social media and search engines have tapped into the potential of predictive knowledge, creating new markets. By analyzing extensive data, companies can make highly accurate predictions about human behavior and preferences. These predictions are then sold to third-party companies, often through auction systems, as described by Varian [27], where the highest bidder gains access to this valuable information. However, this raises concerns for small companies trying to sell their products. They may face challenges in competing with larger corporations that have greater resources to bid on and utilize predictive knowledge. The economic advantage enjoyed by bigger players can potentially hinder the growth and success of small businesses in the market.
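To illustrate the kind of mechanism at work, the sketch below implements a sealed-bid second-price auction, a mechanism commonly associated with the online advertising markets Varian analyzes; real ad exchanges are far more elaborate, and the bidders and bid values here are hypothetical.

```python
# Illustrative sealed-bid second-price auction: the highest bidder wins access
# to the advertising opportunity but pays the second-highest bid.
# Bidder names and bid values are hypothetical.

def second_price_auction(bids):
    """Return (winner, price): highest bidder wins, pays the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda item: item[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

if __name__ == "__main__":
    bids = {"big_corp": 2.50, "small_shop": 0.80, "mid_agency": 1.40}  # bids per impression
    winner, price = second_price_auction(bids)
    print(winner, price)  # big_corp wins and pays 1.40, the second-highest bid
```

The toy example also illustrates the concern raised above: the bidder with the deepest pockets wins the audience, while smaller participants are priced out.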
Environmental
From an environmental perspective, social media and search engine technologies consume substantial amounts of energy to deliver information to users. Dunia et al. [28] discuss the need for social media applications to be designed in a way that requires less energy consumption. Furthermore, the use of these technologies across multiple devices contributes to a significant generation of electronic waste [29].
Conclusion
Overall, “The Age of Surveillance Capitalism” provides an inspiring analysis of the rise of surveillance capitalism and its impact on society. Zuboff’s writing is easily understandable, as she provides numerous examples to support her arguments, making it a relevant and accurate read in the present digital era. While the book has limitations, it is a must-read for anyone interested in the future of our digital society.
Having presented the impacts of social media and search engines across social, economic, and environmental dimensions, it is crucial to raise awareness among individuals about the presence and implications of data in these technologies. Practicing life engineering principles becomes essential for enhancing the quality of life. However, one cannot ignore the positive impact of these technologies, for example, instant access to information and connectivity. Instead of allowing technology to dictate our values and behavior, adopting an ethical mindset encourages us to prioritize the things that matter most to us. This might involve digital minimalism and the search for alternative approaches, which must take into account the energy-consumption bottleneck of digital computation and may point towards analog computation methods.
Conclusions
This paper has reviewed three books from the perspective of life engineering and sustainability: “Weapons of Math Destruction” by Cathy O’Neil, “Atlas of AI” by Kate Crawford, and “The Age of Surveillance Capitalism” by Shoshana Zuboff. The three books discuss a range of topics that are intertwined, yet distinct. The first centers on the use of algorithms, the second on the development of and reliance on AI, and the third on search engines and data surveillance. In our critical review of the three books, we took the lens of life engineering and focused on the three dimensions of sustainability: social, economic, and environmental.
From a social perspective, our main finding was that algorithms, AI, and search engines can have a profoundly negative impact on human lives. Most crucially, de-humanizing decision-making by delegating it to machines leads to potentially grave infringements on human rights, reinforces discrimination, and invades people’s privacy through devious means.
Secondly, from an economic perspective, although these appear to be among the most lucrative fields of work at present, they are inequitable, leaving many people disenfranchised. Allowing credit decisions to be made by algorithms, or trading in people’s information, can lead to unfair competition and irreparable damage to the social fabric of our society.
Thirdly, one of the most overlooked negative consequences of algorithms, AI, and search engines is their insatiable appetite for energy and hardware. The hype around their development has led people to look away from the immense consumption that training and maintaining these systems requires. This is a paradox, since the same people who advocate for climate action sometimes plead in favor of AI and other technologies with a significant environmental footprint.
Some of our recommendations include the development of more humane technologies, as well as including ethical technology development in Computer Science curricula, in order to stimulate scholars and future professionals to consider the possible risks of technology and do their best to mitigate them. As such, practicing life engineering principles becomes essential for enhancing the quality of life.
Nevertheless, one cannot ignore the positive impact of these technologies, for example, instant access to information and connectivity. But instead of allowing technology to dictate our values and behavior, adopting an ethical mindset encourages us to prioritize the things that matter most to us. This might involve digital minimalism and the search for alternative approaches, which must take into account the energy-consumption bottleneck of digital computation and may point towards analog computation methods. Effectively tackling this issue will require a collective effort from both individuals and society at large. This entails raising awareness about the negative consequences of current digital technologies and advocating for legislative reforms that safeguard individual privacy rights.