Reviews
Shortlisted for the Women’s Prize for Non-Fiction
“Code Dependent is the intimate investigation of AI that we’ve been waiting for, and it arrives not a moment too soon. Murgia travels the world to bring us intimate portraits of every aspect of the human condition―inner life, family, work, class, race, geography, gender, community, politics―as each is unmade and remade by today’s global AI juggernaut. Most critically, Murgia doesn’t just ‘tell.’ She ‘shows’ us in moving detail that AI is nothing more than a spectrum of possibilities selected and shaped by the economic and political powers that bring it to life. Her work brilliantly reveals the quiet daily violence and flesh and blood consequences of today’s dominant AI regime designed and deployed by surveillance capitalism. Ultimately, the steady drumbeat of her stories opens our eyes to what could have been and what might yet become if we learn to join forces to reclaim our digital century for people and planet.”―Shoshana Zuboff, author of The Age of Surveillance Capitalism, Harvard Business School Professor Emeritus
“Brilliant storytelling. Books about AI often put the tech centre stage, but Murgia makes you, the human, the hero and sadly often the victim in this fascinating collection of stories about the impact of code on our future.”―Marcus du Sautoy, author of The Creativity Code
“With its compelling narrative, Code Dependent is a testament to the power of storytelling in unraveling the complexities of AI. Murgia’s profound insights and meticulous research offer a rare and invaluable perspective on the intersection of technology and society.”―Azeem Azhar, Founder, Exponential View
“Code Dependent provides a much needed corrective to the trendy breathless Silicon Valley insider AI history. Eschewing charismatic founders and sentient machines, it focuses instead on the world outside the tech bubble―the world AI’s boosters claim to be improving. By tending to the concrete stories of AI’s subjects, Code Dependent raises critical questions about AI and the business models behind it, doing so from the perspective of those laboring for and judged by costly, centralized AI models developed and deployed by employers, governments, and corporations.”―Meredith Whittaker, president of Signal, co-founder of the AI Now Institute
“Exposes the hidden consequences of our existing AI technologies.”―The Times
“A penetrating look at how we’re allowing artificial intelligence to infiltrate all parts of society, from policing, welfare, justice and health, to the point where whole lives are being altered – often ruined – by systems that hardly any of us understand.”―The Daily Telegraph
“The power of this book lies in the rich stories it tells of individuals … Drawing on interviews from around the globe, this highly readable and deeply important book exposes AI’s sordid underbelly.”―The Guardian
“Given the topic’s ubiquity, it is refreshing when a new perspective comes along. And Code Dependent is just that, making it a must-read for those struggling to reckon with the AI Revolution.”―New Scientist
Code Dependent: Living in the Shadow of AI by Madhumita Murgia examines the pervasive influence of artificial intelligence on modern society. This timely exploration delves into how AI is reshaping our daily lives, from the workplace to personal relationships. Murgia offers a balanced perspective, highlighting both the transformative potential and the ethical concerns surrounding AI technology. The book stands out in the genre for its accessible approach to complex technical concepts and its focus on real-world implications for individuals and communities.
Code Dependent is primarily aimed at readers interested in understanding the impact of AI on society, regardless of their technical background. The book is accessible to a general audience, making complex AI concepts understandable for those without specialized knowledge. Secondary audiences include policymakers, business leaders, and students in technology-related fields who seek a broader perspective on AI’s societal implications. Murgia’s clear writing style and use of real-world examples make the book’s ideas approachable for readers from various backgrounds.
Is artificial intelligence quietly taking control of your life? This question lies at the heart of Madhumita Murgia’s Code Dependent: Living in the Shadow of AI. In a world where algorithms increasingly shape our choices, Murgia’s book serves as a wake-up call to the subtle yet profound ways AI is transforming society.
Imagine waking up to a smartphone that not only knows your schedule but predicts your mood and adjusts your day accordingly. As you commute, self-driving cars navigate through traffic optimized by AI. At work, AI-powered tools assist or even replace traditional human roles. This isn’t science fiction – it’s the reality Murgia explores in her book.
Code Dependent takes readers on a journey through the AI-infused landscape of modern life. Murgia examines how AI technologies are revolutionizing industries, from healthcare and finance to entertainment and education. She doesn’t shy away from the potential benefits, highlighting how AI can solve complex problems and enhance human capabilities.
However, Murgia also shines a light on the darker aspects of our increasing reliance on AI. She raises important questions about privacy, job displacement, and the ethical implications of delegating decision-making to machines. Through compelling case studies and interviews with experts, she illustrates the real-world consequences of living in an AI-driven world.
The book delves into the power dynamics at play in the development and deployment of AI systems. Murgia explores how the concentration of AI capabilities in the hands of a few tech giants and governments could reshape social structures and exacerbate existing inequalities.
Readers of Code Dependent will gain a comprehensive understanding of AI’s current capabilities and future potential. They’ll learn to recognize the hidden influence of AI in their daily lives and develop a critical perspective on the technology’s broader societal impacts. Murgia equips her audience with the knowledge to navigate an increasingly AI-dependent world, encouraging informed engagement with these powerful tools.
By the end of Code Dependent, readers will be better prepared to face the challenges and opportunities presented by the AI revolution. Murgia’s insights prompt us to consider our role in shaping a future where humans and AI coexist, emphasizing the importance of maintaining human agency in a world increasingly governed by algorithms.
The central thesis of Code Dependent is that artificial intelligence has become an invisible yet powerful force shaping modern society, and we must actively engage with its development to ensure it serves human interests. Murgia argues that AI is not just a tool but a transformative technology that’s redefining fundamental aspects of human life and society.
To illustrate this point, consider the analogy of AI as a silent puppeteer. Just as a skilled puppeteer can make marionettes appear to move of their own accord, AI systems quietly influence our choices, behaviors, and even our perception of reality. From the content we see on social media to the products we buy and the jobs we’re offered, AI algorithms are pulling the strings behind the scenes, often without our full awareness or consent.
Code Dependent makes a significant contribution to the public discourse on AI by bridging the gap between technical expertise and general understanding. Murgia’s journalistic background allows her to present complex AI concepts in an accessible manner, making the book valuable for both tech enthusiasts and general readers.
The book has garnered attention for its balanced approach to AI, acknowledging its potential benefits while critically examining its risks. Murgia’s exploration of AI’s impact on employment, privacy, and social structures has sparked discussions among policymakers and industry leaders about the need for responsible AI development.
While some critics argue that the book may overstate the current capabilities of AI, others praise Murgia’s foresight in addressing potential future scenarios. The book’s release has coincided with increased public interest in AI ethics and regulation, contributing to ongoing debates about the role of technology in society.
We find Code Dependent: Living in the Shadow of AI to be a compelling and timely exploration of artificial intelligence’s impact on modern society. Murgia’s journalistic approach brings a refreshing clarity to complex AI concepts, making them accessible to a wide audience without sacrificing depth or nuance. The book’s balanced perspective on AI’s potential benefits and risks provides readers with a solid foundation for forming their own informed opinions on this crucial topic.
We particularly appreciate Murgia’s global perspective on AI development and policy. By examining AI initiatives and their implications across different countries and cultures, the book offers a more comprehensive understanding of the technology’s role in shaping international relations and global power dynamics. This broader view sets Code Dependent apart from many other AI-focused books that tend to concentrate primarily on developments in Silicon Valley or Western tech hubs.
The book’s thorough examination of AI ethics is another standout feature. Murgia goes beyond surface-level discussions, diving into complex issues such as algorithmic bias, privacy concerns, and the philosophical questions raised by AI decision-making. By presenting real-world case studies that illustrate these ethical dilemmas, she makes abstract concepts concrete and relatable, equipping readers with the tools to critically evaluate the moral implications of AI technologies they encounter in their daily lives.
We also commend Murgia’s insightful analysis of AI’s impact on employment. Her nuanced exploration of job displacement, creation, and transformation provides valuable insights for individuals and organizations navigating the changing landscape of work in the age of AI. The practical advice offered on adapting to these changes adds to the book’s overall utility for readers.
However, we note that the book occasionally falls short in exploring potential solutions to the challenges it identifies. While Murgia excels at raising important questions about AI governance, ethics, and societal impact, readers might benefit from more concrete proposals for addressing these issues. Additionally, those with a more technical background in AI might find some explanations oversimplified, though this approach is understandable given the book’s target audience.
We also observe that the book sometimes leans towards emphasizing worst-case scenarios in its effort to highlight potential risks. While it’s important to consider negative outcomes, a more balanced presentation of probable scenarios alongside more extreme possibilities could provide readers with a more nuanced understanding of AI’s likely trajectory.
Our Recommendation
Despite these minor shortcomings, we strongly recommend Code Dependent: Living in the Shadow of AI to anyone seeking a comprehensive, accessible introduction to the societal implications of artificial intelligence. The book’s clarity, balanced perspective, and global scope make it an excellent starting point for general readers looking to understand how AI is reshaping our world.
We believe Code Dependent is particularly valuable for policymakers, business leaders, and students in non-technical fields who need to understand AI’s broader impacts. However, even those with a background in technology will likely find value in Murgia’s thoughtful exploration of AI’s ethical and societal dimensions. Overall, this book serves as an important contribution to public discourse on AI, promoting the kind of informed engagement necessary for shaping a future where AI technologies serve the best interests of humanity.
The pervasive influence of AI in daily life is a central theme of Code Dependent. Murgia explores how AI algorithms have become deeply integrated into our routines, often operating invisibly in the background. From personalized content recommendations to predictive text in our emails, AI shapes our digital experiences in ways we may not fully realize. This ubiquity raises questions about the extent to which our choices and behaviors are being subtly guided by machine learning systems.
The potential for AI to exacerbate existing societal inequalities is another crucial topic addressed in the book. Murgia examines how AI systems, trained on historical data, can perpetuate and even amplify biases related to race, gender, and socioeconomic status. She discusses the implications of this in various domains, such as hiring practices, loan approvals, and criminal justice, highlighting the need for careful consideration of AI’s societal impact.
The changing nature of work in the age of AI is extensively explored in Code Dependent. Murgia delves into how AI is transforming industries and job roles, potentially displacing certain types of work while creating new opportunities in others. She considers the broader economic implications of widespread AI adoption and the challenges it poses for workforce adaptation and education systems.
Privacy concerns in an AI-driven world form another key element of the book’s message. Murgia examines the vast amounts of personal data required to train and operate AI systems, and the potential risks associated with this data collection. She raises important questions about data ownership, consent, and the balance between technological advancement and individual privacy rights.
The ethical considerations surrounding AI decision-making are thoroughly discussed in Code Dependent. Murgia explores the complexities of delegating important decisions to AI systems, particularly in high-stakes areas like healthcare and autonomous vehicles. She considers the challenges of ensuring transparency, accountability, and human oversight in AI-driven processes.
The geopolitical implications of AI development are also a significant focus of the book. Murgia examines the race between nations to achieve AI supremacy and its potential to reshape global power dynamics. She discusses the concentration of AI capabilities among a few tech giants and the implications this has for national security, economic competitiveness, and technological sovereignty.
The need for AI literacy and public engagement is emphasized throughout Code Dependent. Murgia argues that as AI becomes increasingly influential in society, it’s crucial for the general public to have a basic understanding of how these systems work and their potential impacts. She explores the challenges of making complex AI concepts accessible to non-experts and the importance of fostering informed public discourse on AI development and regulation.
AI in Healthcare: Murgia describes a case study of an AI system used in a major hospital to predict patient outcomes and recommend treatment plans. While the system showed promise in improving efficiency and accuracy, it also raised concerns when it was found to recommend different treatments for patients with similar conditions based on their insurance status, highlighting the potential for AI to perpetuate systemic biases in healthcare.
Algorithmic Bias in Hiring: The book discusses a well-known example of an AI recruitment tool developed by a major tech company that showed bias against female applicants. The system, trained on historical hiring data, learned to penalize resumes that included words like “women’s” or the names of all-women colleges, illustrating how AI can inadvertently perpetuate gender discrimination in the workplace.
AI and Content Moderation: Murgia examines the use of AI in social media content moderation, focusing on a case where an AI system struggled to distinguish between hate speech and discussions about hate speech. This led to the temporary banning of activists and researchers who were actually working to combat online hate, demonstrating the complexities of using AI for nuanced language tasks.
Predictive Policing: The book explores the implementation of a predictive policing AI system in a major U.S. city. While the system aimed to improve resource allocation and crime prevention, it disproportionately targeted low-income and minority neighborhoods, raising concerns about racial profiling and the reinforcement of existing biases in law enforcement.
AI in Financial Services: Murgia discusses a case where an AI-powered credit scoring system denied loans to qualified applicants from certain zip codes. Upon investigation, it was revealed that the AI had learned to associate specific geographic areas with higher risk, effectively redlining entire communities and perpetuating historical patterns of financial exclusion.
Autonomous Vehicles and Ethics: The book presents the classic “trolley problem” in the context of self-driving cars. Murgia describes a hypothetical scenario where an autonomous vehicle must choose between endangering its passengers or pedestrians, illustrating the complex ethical decisions that must be programmed into AI systems.
AI in Education: Murgia examines an AI-powered adaptive learning platform implemented in several schools. While the system showed promise in personalizing education, it also raised concerns about data privacy and the potential for over-reliance on standardized metrics, potentially narrowing the definition of educational success.
AI and Job Displacement: The book discusses the case of a large retail company that implemented AI-powered inventory management and customer service systems. While this increased efficiency, it also led to significant job losses among warehouse workers and call center employees, highlighting the disruptive potential of AI in the labor market.
AI in Journalism: Murgia explores the use of AI in news generation, focusing on a media organization that employed an AI system to write financial reports. While the system could produce basic articles quickly, it struggled with nuance and context, raising questions about the future role of human journalists and the potential for AI to spread misinformation if not properly supervised.
AI and Climate Change: The book examines how a major tech company used AI to significantly reduce energy consumption in its data centers. This case study illustrates the potential for AI to contribute to environmental sustainability, while also raising questions about the ecological impact of the increasing computational power required for advanced AI systems.
AI literacy is crucial for informed decision-making
Murgia emphasizes the importance of AI literacy for all citizens, not just tech professionals. She argues that understanding the basics of how AI systems work is essential for making informed decisions in an AI-driven world. To apply this insight, individuals can start by taking online courses on AI fundamentals, attending local tech meetups, or reading reputable AI news sources. Organizations can implement AI awareness training programs for employees, covering topics like machine learning basics, data privacy, and ethical considerations. Schools can introduce an age-appropriate AI curriculum, teaching students to critically evaluate AI-driven information and understand the technology’s potential and limitations.
Ethical AI development requires diverse perspectives
The book highlights how AI systems often reflect the biases of their creators, underscoring the need for diverse teams in AI development. To put this insight into practice, tech companies should prioritize diversity in hiring for AI roles, actively recruiting individuals from varied backgrounds, including underrepresented groups in tech. They can establish cross-functional teams that include ethicists, sociologists, and legal experts alongside engineers and data scientists. Organizations should also implement regular bias audits of their AI systems, using tools and frameworks designed to detect unfair outcomes across different demographic groups.
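The book itself does not prescribe a particular auditing tool, but a minimal Python sketch of one check such audits typically include (comparing selection rates across demographic groups against the informal "four-fifths rule") might look like the following; the group labels, sample outcomes, and 0.8 threshold are illustrative assumptions rather than figures from Code Dependent.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the fraction of positive outcomes per demographic group.

    `decisions` is a list of (group_label, was_selected) pairs.
    """
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_report(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the informal 'four-fifths rule')."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: {"rate": round(r, 3), "flagged": r < threshold * best}
            for g, r in rates.items()}

# Illustrative model outputs for two hypothetical groups, not data from the book.
sample = ([("group_a", True)] * 48 + [("group_a", False)] * 52
          + [("group_b", True)] * 30 + [("group_b", False)] * 70)
print(disparate_impact_report(sample))
```

A real audit would go further (statistical significance, intersectional groups, error-rate parity), but even this level of routine measurement makes unfair outcomes visible rather than anecdotal.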
Data privacy is a critical concern in AI development
Murgia explores the tension between the data hunger of AI systems and individual privacy rights. To address this, companies developing AI should adopt “privacy by design” principles, incorporating data protection measures from the earliest stages of product development. This could include implementing robust data anonymization techniques, using federated learning to keep personal data on users’ devices, and providing clear, user-friendly privacy controls. Individuals can protect their privacy by regularly reviewing and adjusting their data sharing settings on digital platforms, using privacy-enhancing tools like VPNs and encrypted messaging apps, and being selective about which AI-powered services they use.
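As a concrete illustration of the "privacy by design" idea described above (a general sketch, not a technique taken from the book), the Python snippet below pseudonymizes a direct identifier and coarsens a quasi-identifier before a record is used for training; the field names, salt handling, and environment variable are illustrative assumptions. Hashing and bucketing alone do not guarantee anonymity, but they show the "collect and keep less" posture the paragraph above describes.

```python
import hashlib
import os

# Assumed secret salt supplied via an environment variable for this sketch.
SALT = os.environ.get("PSEUDONYM_SALT", "replace-with-a-secret-salt")

def pseudonymize(user_id: str) -> str:
    """Replace a direct identifier with a salted, one-way hash."""
    return hashlib.sha256((SALT + user_id).encode("utf-8")).hexdigest()[:16]

def generalize_age(age: int) -> str:
    """Coarsen an exact age into a ten-year band to reduce re-identification risk."""
    low = (age // 10) * 10
    return f"{low}-{low + 9}"

def minimize_record(record: dict) -> dict:
    """Keep only the fields a model needs, in a privacy-reduced form."""
    return {
        "user": pseudonymize(record["user_id"]),
        "age_band": generalize_age(record["age"]),
        "clicks": record["clicks"],  # behavioral feature used for training
    }

raw = {"user_id": "alice@example.com", "age": 34, "clicks": 17,
       "home_address": "12 Example Street"}  # dropped entirely below
print(minimize_record(raw))
```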
AI’s impact on employment requires proactive adaptation
The book discusses AI’s potential to disrupt traditional job roles, necessitating a shift in workforce skills. To apply this insight, individuals should focus on developing skills that complement AI rather than compete with it. This includes honing creativity, emotional intelligence, and complex problem-solving abilities. They can also pursue continuous learning opportunities in emerging tech fields. Employers should invest in reskilling and upskilling programs for their workforce, partnering with educational institutions to develop curricula that address future skill needs. Governments can support this transition by funding adult education programs focused on AI and digital skills, and by updating labor policies to address the changing nature of work in the AI era.
Transparency in AI decision-making is essential
Murgia stresses the importance of understanding how AI systems arrive at their decisions, especially in high-stakes scenarios. To implement this, companies using AI for critical decisions should adopt explainable AI (XAI) techniques that make the decision-making process more transparent. This could involve using interpretable machine learning models or developing AI systems that can provide clear reasoning for their outputs. In regulated industries like finance or healthcare, organizations should work with policymakers to establish standards for AI transparency and accountability. For consumers, this means actively seeking out information about how AI systems are used in services they rely on and advocating for greater transparency from companies.
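Explainable AI covers many methods; one simple version of "clear reasoning for an output" is to use an inherently interpretable model and report each feature's contribution to a given decision. The Python sketch below does this for an invented loan-approval example with a logistic regression; the features, training data, and applicant values are assumptions made up for illustration, not material from the book.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented training data: [income_k, debt_ratio, years_employed] -> approved?
X = np.array([[55, 0.30, 4], [32, 0.55, 1], [80, 0.20, 9], [28, 0.65, 0],
              [60, 0.35, 5], [35, 0.50, 2], [90, 0.15, 12], [25, 0.70, 1]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])
features = ["income_k", "debt_ratio", "years_employed"]

model = LogisticRegression(max_iter=1000).fit(X, y)

def explain(applicant):
    """Report each feature's additive contribution to the decision's log-odds."""
    contributions = model.coef_[0] * applicant
    score = contributions.sum() + model.intercept_[0]
    prob = 1 / (1 + np.exp(-score))
    print(f"approval probability: {prob:.2f}")
    for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
        print(f"  {name:>15}: {c:+.2f} to the log-odds")

explain(np.array([40, 0.45, 3]))
```

Because the model is linear, the explanation is exact rather than an approximation; for more complex models, post-hoc attribution methods attempt to produce a similar per-feature breakdown.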
AI can exacerbate societal inequalities if not carefully managed
The book warns that AI systems can amplify existing biases and deepen societal divides. To mitigate this, organizations deploying AI should conduct thorough fairness assessments of their systems, testing for disparate outcomes across different demographic groups. They should also establish clear guidelines for addressing identified biases, including potentially foregoing the use of AI in sensitive decision-making processes where fairness cannot be ensured. Policymakers can play a role by developing AI fairness standards and requiring regular audits of AI systems used in public services. Advocacy groups and researchers can contribute by developing and promoting fairness-aware AI techniques and pushing for greater algorithmic accountability.
The geopolitical implications of AI require international cooperation
Murgia discusses how AI development is becoming a key factor in global power dynamics. To address this, nations should prioritize international collaboration on AI research and development, fostering open exchange of ideas while protecting core national interests. This could involve establishing multinational AI research centers, creating shared ethical guidelines for AI development, and negotiating AI arms control agreements. Businesses operating globally should be mindful of varying AI regulations across countries and work towards developing AI systems that can be ethically deployed worldwide. International organizations can facilitate dialogue between nations on AI governance and work towards creating a global framework for responsible AI development.
AI’s environmental impact needs careful consideration
While AI can contribute to solving environmental challenges, the book also notes its potential negative environmental impacts. To apply this insight, tech companies should prioritize energy efficiency in AI system design, using techniques like model compression and efficient hardware. They can also invest in renewable energy sources for their data centers. Researchers should focus on developing ‘green AI’ techniques that minimize computational resources without sacrificing performance. Consumers can contribute by being mindful of the energy consumption of AI-powered devices and services they use, opting for more energy-efficient options when available. Policymakers can incentivize the development of environmentally friendly AI through targeted funding and regulations.
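"Green AI" and model compression span many techniques; one of the simplest, post-training weight quantization, stores parameters as 8-bit integers instead of 32-bit floats, cutting memory roughly fourfold at a modest accuracy cost, which can in turn reduce the resources needed to serve a model. The NumPy sketch below illustrates the idea on a toy weight matrix; it is an illustrative toy example, not a method described in Code Dependent.

```python
import numpy as np

def quantize_int8(weights):
    """Map float32 weights onto int8 using a single symmetric scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for use at inference time."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)  # a toy weight matrix

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(f"memory: {w.nbytes} bytes -> {q.nbytes} bytes")
print(f"mean absolute error after compression: {np.abs(w - w_hat).mean():.4f}")
```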
Public engagement is key to shaping AI’s societal impact
Murgia emphasizes the importance of public input in guiding AI development and policy. To put this into practice, tech companies should establish meaningful channels for public feedback on their AI products, going beyond perfunctory user surveys to engage in substantive dialogue with diverse stakeholder groups. Governments can organize citizen assemblies on AI policy, ensuring a broad cross-section of society has a voice in shaping AI regulations. Educational institutions can play a role by hosting public lectures and workshops on AI, making complex topics accessible to non-experts. Individuals can engage by participating in public consultations on AI policy, joining local tech ethics groups, or contributing to open-source AI projects that prioritize social good.
Accessible explanation of complex AI concepts
Murgia excels in breaking down intricate AI technologies and their implications into language that a general audience can understand. She skillfully avoids technical jargon while still conveying the essence of how AI systems function and impact society. This accessibility is crucial given the book’s aim to increase AI literacy among the general public. By using relatable examples and clear analogies, Murgia helps readers grasp concepts like machine learning algorithms, neural networks, and data mining without overwhelming them with technical details.
Balanced perspective on AI’s societal impact
Code Dependent stands out for its nuanced approach to discussing AI’s influence on society. Murgia avoids the pitfalls of either uncritical techno-optimism or alarmist fear-mongering. Instead, she presents a balanced view that acknowledges both the transformative potential of AI and the legitimate concerns surrounding its widespread adoption. This even-handed treatment allows readers to form their own informed opinions about the role of AI in society, rather than being swayed by extreme viewpoints.
Comprehensive coverage of AI’s ethical implications
The book provides an in-depth exploration of the ethical challenges posed by AI systems. Murgia goes beyond surface-level discussions of AI ethics, diving into complex issues such as algorithmic bias, privacy concerns, and the philosophical questions raised by AI decision-making. She presents real-world case studies that illustrate these ethical dilemmas, making the abstract concepts concrete and relatable. This thorough examination of AI ethics equips readers with the framework to critically evaluate the moral implications of AI technologies they encounter in their daily lives.
Insightful analysis of AI’s impact on employment
Murgia offers a nuanced and well-researched perspective on how AI is reshaping the job market. She goes beyond simplistic narratives of mass unemployment or effortless automation, instead painting a complex picture of job displacement, creation, and transformation. The book provides valuable insights into which skills are likely to remain in demand in an AI-driven economy and offers practical advice for individuals and organizations adapting to these changes. This analysis is particularly valuable for readers looking to navigate their careers in the age of AI.
Strong journalistic approach to AI storytelling
Drawing on her background as a technology journalist, Murgia employs compelling storytelling techniques to bring AI concepts to life. She interweaves personal anecdotes, expert interviews, and vivid scenarios to illustrate the real-world implications of AI technologies. This narrative approach keeps readers engaged while also providing concrete examples that reinforce the book’s key points. The journalistic style also lends credibility to the book’s claims, as Murgia bases her arguments on extensive research and first-hand accounts from AI developers, policymakers, and individuals affected by AI systems.
Global perspective on AI development and policy
Code Dependent stands out for its international scope, examining AI developments and their implications across different countries and cultures. Murgia doesn’t limit her analysis to Silicon Valley or Western tech hubs but also explores AI initiatives in China, India, and other parts of the world. This global perspective provides readers with a more comprehensive understanding of the geopolitical dynamics shaping AI development and the varying approaches to AI governance across different societies. It’s particularly valuable for readers looking to understand AI’s role in shaping global power dynamics and international relations.
Limited technical depth for advanced readers
While the book’s accessibility is a strength for general readers, those with a more technical background in AI might find the explanations overly simplified. Murgia sometimes glosses over the finer points of AI algorithms and system architectures in favor of maintaining broad appeal. This approach, while understandable given the book’s target audience, may leave more technically inclined readers wanting a deeper dive into the underlying mechanics of AI systems.
Lack of quantitative data to support some claims
At times, Murgia relies heavily on anecdotal evidence and expert opinions to support her arguments about AI’s societal impact. While these perspectives are valuable, the book could benefit from more quantitative data and rigorous studies to back up its claims. This is particularly noticeable in discussions about AI’s economic impact, where hard numbers on job displacement and creation could provide a more concrete foundation for the book’s arguments.
Insufficient exploration of potential AI solutions
While Code Dependent excels at identifying the challenges posed by AI, it sometimes falls short in exploring potential solutions in depth. Murgia raises important questions about AI governance, ethics, and societal impact, but the book could offer more concrete proposals for addressing these issues. Readers might come away with a clear understanding of the problems but feel less equipped with actionable strategies for solving them.
Occasional overemphasis on worst-case scenarios
In its effort to highlight the potential risks of AI, the book sometimes leans towards emphasizing worst-case scenarios. While it’s important to consider potential negative outcomes, this approach might occasionally overshadow the more likely, moderate impacts of AI adoption. A more balanced presentation of probable scenarios alongside the more extreme possibilities could provide readers with a more nuanced understanding of AI’s future trajectory.
Overestimation of current AI capabilities
One potential blind spot in Code Dependent is the risk of overestimating the current capabilities of AI systems. While Murgia is generally careful to distinguish between present realities and future possibilities, readers might come away with an inflated sense of what AI can currently achieve. This could lead to misconceptions about the immediacy of certain AI-driven changes or unrealistic expectations about AI’s problem-solving abilities. Stuart Russell’s Human Compatible: Artificial Intelligence and the Problem of Control offers a more tempered view of current AI capabilities while still exploring their long-term implications.
Underexploration of AI’s potential in scientific research
While the book covers many applications of AI, it gives relatively little attention to its transformative potential in scientific research and discovery. Readers might miss out on understanding how AI is accelerating progress in fields like drug discovery, materials science, and climate modeling. This blind spot could lead to an incomplete picture of AI’s societal impact, focusing more on consumer and business applications while underappreciating its role in advancing human knowledge. For a deeper exploration of AI’s impact on scientific research, readers might turn to Kai-Fu Lee’s AI 2041: Ten Visions for Our Future, which includes scenarios illustrating AI’s role in scientific breakthroughs.
Limited discussion of AI in non-Western contexts
Despite its global perspective, Code Dependent might not fully capture the nuances of AI development and adoption in non-Western contexts. The book’s examples and case studies lean heavily towards North American and European experiences, potentially overlooking unique challenges and opportunities in developing economies. This blind spot could lead readers to apply Western-centric assumptions about AI’s impact to contexts where they might not be applicable. For a more diverse perspective on global AI development, Kate Crawford’s Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence offers insights into the geopolitical and cultural dimensions of AI across different regions.
Insufficient attention to long-term existential risks
While Murgia addresses many near-term and medium-term challenges posed by AI, the book gives less attention to potential long-term existential risks associated with advanced AI systems. Readers might come away without a full appreciation of the debates surrounding artificial general intelligence (AGI) and the potential for an intelligence explosion. This blind spot could lead to an underestimation of the importance of long-term AI safety research and governance. For a deeper exploration of these long-term considerations, Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies provides a comprehensive analysis of the potential risks and challenges associated with highly advanced AI systems.
The Age of AI: And Our Human Future by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher
The Age of AI offers a more high-level, strategic view of AI’s impact on global affairs compared to Murgia’s Code Dependent. While Murgia focuses on the everyday implications of AI for individuals and society, Kissinger, Schmidt, and Huttenlocher examine AI through the lens of geopolitics and long-term historical trends. Their book provides deeper insights into how AI might reshape international relations and governance structures, an area that Code Dependent touches on but doesn’t explore as thoroughly. However, The Age of AI lacks the accessible, on-the-ground examples that make Murgia’s work so relatable to general readers.
AI Superpowers: China, Silicon Valley, and the New World Order by Kai-Fu Lee
Lee’s AI Superpowers offers a more focused examination of the AI race between China and the United States, providing detailed insights into the unique characteristics of China’s AI ecosystem. While Murgia takes a global view in Code Dependent, her coverage of China’s AI developments is less extensive than Lee’s. AI Superpowers excels in its analysis of how cultural and political differences shape AI development in these two countries, offering a perspective that complements Murgia’s broader global analysis. However, Lee’s book is less comprehensive in its coverage of AI’s societal impacts beyond the economic and geopolitical spheres, an area where Code Dependent shines.
Human Compatible: Artificial Intelligence and the Problem of Control by Stuart Russell
Russell’s Human Compatible delves deeper into the technical challenges of ensuring AI systems remain aligned with human values, a topic that Murgia addresses but doesn’t explore as thoroughly in Code Dependent. Russell’s book offers a more rigorous examination of the long-term risks associated with advanced AI systems, including the challenges of specifying correct objectives for AI. While this provides a valuable complement to Murgia’s work, Russell’s book is less accessible to non-technical readers and doesn’t offer as comprehensive a view of AI’s current societal impacts as Code Dependent.
Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil
O’Neil’s book focuses more specifically on the harmful impacts of algorithmic decision-making systems, particularly in areas like criminal justice, education, and finance. While Murgia covers similar ground in Code Dependent, O’Neil’s work provides a deeper dive into the mathematical underpinnings of these systems and their potential to exacerbate societal inequalities. Weapons of Math Destruction offers a more critical perspective on the use of AI and big data in decision-making processes, whereas Murgia’s approach in Code Dependent is more balanced, exploring both the potential benefits and risks of AI technologies.
The Alignment Problem: Machine Learning and Human Values by Brian Christian
Christian’s The Alignment Problem offers a more in-depth exploration of the challenges involved in ensuring AI systems behave in accordance with human values and intentions. While Murgia touches on these issues in Code Dependent, Christian’s book provides a more comprehensive examination of the technical and philosophical aspects of AI alignment. The Alignment Problem delves deeper into the history of AI development and the various approaches to solving the alignment challenge, offering insights that complement Murgia’s more broad-based analysis of AI’s societal impacts.
Develop AI literacy
Enhance data privacy awareness
Develop AI-complementary skills
Engage in ethical AI practices
Prepare for AI-driven workplace changes
Contribute to shaping AI’s societal impact
Implement ethical AI development practices
Businesses should prioritize ethical considerations in their AI development processes to mitigate risks and build trust with customers and stakeholders. This involves establishing clear ethical guidelines, conducting regular bias audits of AI systems, and ensuring transparency in AI decision-making processes. By doing so, companies can avoid potential reputational damage and legal issues while positioning themselves as responsible leaders in the AI space.
However, implementing ethical AI practices can be challenging due to the fast-paced nature of technological development and the pressure to remain competitive. Many organizations may lack the expertise to effectively identify and address ethical issues in AI systems. Additionally, there may be resistance from teams focused on rapid development and deployment, who might view ethical considerations as a hindrance to innovation and efficiency.
To overcome these challenges, businesses should invest in building interdisciplinary AI ethics teams that include not only technical experts but also ethicists, legal professionals, and domain experts from relevant fields. Regular training programs on AI ethics should be mandatory for all employees involved in AI development. Companies can also partner with academic institutions or AI ethics organizations to stay updated on best practices and emerging ethical frameworks. Implementing a stage-gate process that includes ethical review at key development milestones can help ensure that ethical considerations are integrated throughout the AI lifecycle without significantly impeding progress.
Develop a comprehensive AI literacy program for employees
As AI becomes increasingly prevalent in various business functions, it’s crucial for companies to ensure that their workforce has a basic understanding of AI concepts and their implications. Implementing a company-wide AI literacy program can help employees better understand AI-driven tools, make informed decisions about AI implementation, and contribute to discussions about AI strategy. This knowledge can lead to more effective use of AI technologies and foster innovation across the organization.
However, developing and implementing an AI literacy program for a diverse workforce can be challenging. Employees may have varying levels of technical background and interest in technology. There might also be resistance from some staff members who fear that increased AI adoption could threaten their job security. Additionally, keeping the program up-to-date with rapidly evolving AI technologies can be resource-intensive.
To address these challenges, businesses should create a tiered AI literacy program that caters to different levels of expertise and job roles. Basic courses can focus on general AI concepts and their business applications, while more advanced modules can be offered for those in technical roles. Partnering with online learning platforms or AI education providers can help in developing and maintaining up-to-date course content. To address job security concerns, the program should emphasize how AI can augment human capabilities rather than replace them, and include modules on identifying opportunities for human-AI collaboration. Regular town halls or Q&A sessions with AI experts can help address employees’ concerns and maintain engagement with the program.
Establish an AI governance framework
Businesses need to establish a robust AI governance framework to ensure responsible development, deployment, and use of AI technologies. This framework should define clear policies and procedures for AI project approval, risk assessment, data management, and ongoing monitoring of AI systems. A well-structured governance framework can help companies navigate the complex ethical, legal, and operational challenges associated with AI adoption while maximizing its benefits.
Implementing an AI governance framework can be challenging due to the cross-functional nature of AI projects and the need for buy-in from various stakeholders. There may be resistance from teams that view governance as bureaucratic overhead that slows down innovation. Additionally, the dynamic nature of AI technology and evolving regulatory landscape can make it difficult to create a framework that remains relevant over time.
To overcome these obstacles, businesses should adopt an agile approach to AI governance. Start by creating a cross-functional AI governance committee that includes representatives from technology, legal, ethics, and relevant business units. This committee should develop a flexible framework that can be adapted as the company’s AI maturity grows and as regulations evolve. Implement a phased approach, starting with high-risk AI applications and gradually expanding to cover all AI initiatives. Regularly review and update the governance framework based on lessons learned and emerging best practices. To address concerns about slowing innovation, emphasize how good governance can actually accelerate AI adoption by building trust and mitigating risks. Provide clear guidelines and decision-making tools to help teams navigate the governance process efficiently.
Invest in AI-complementary skills development
As AI takes over more routine and predictable tasks, businesses need to focus on developing their workforce’s AI-complementary skills. These include creativity, emotional intelligence, complex problem-solving, and strategic thinking – areas where humans still outperform AI. By investing in these skills, companies can create a workforce that effectively collaborates with AI systems, driving innovation and maintaining a competitive edge in an AI-driven economy.
Implementing a large-scale skills development program can be challenging due to time and resource constraints. Employees may struggle to balance skill development with their regular job responsibilities. There might also be uncertainty about which specific skills will be most valuable in the future, given the rapid pace of technological change. Additionally, measuring the return on investment for soft skills development can be difficult, potentially making it hard to justify the expenditure to stakeholders.
To address these challenges, businesses should integrate AI-complementary skills development into their existing training and development programs. Use AI-powered learning platforms to provide personalized, on-demand learning experiences that employees can access at their convenience. Implement a skills mapping exercise to identify critical AI-complementary skills for different roles and create targeted development paths. Encourage learning through real-world applications by incorporating skill development into actual work projects. To measure impact, use a combination of traditional metrics (like employee retention and promotion rates) and AI-driven analytics to track skill application and its correlation with business outcomes. Consider partnering with educational institutions or online learning platforms to access cutting-edge content and reduce the burden of curriculum development.
Develop a strategy for responsible AI-driven automation
Businesses need to develop a thoughtful strategy for implementing AI-driven automation that balances efficiency gains with social responsibility. This involves carefully assessing which tasks and processes are suitable for automation, considering the impact on employees, and developing plans for reskilling and redeploying affected workers. A well-executed automation strategy can significantly enhance productivity while maintaining workforce morale and corporate social responsibility.
Implementing responsible AI-driven automation can be challenging due to the potential for job displacement and the associated negative publicity. There may be resistance from employees and labor unions concerned about job security. Additionally, accurately predicting the long-term impacts of automation on workforce needs and business models can be difficult, making it challenging to develop effective redeployment and reskilling strategies.
To overcome these obstacles, businesses should adopt a transparent and inclusive approach to automation planning. Engage employees early in the process, seeking their input on areas where automation could be most beneficial and how their roles could evolve. Develop a comprehensive communication strategy to explain the rationale for automation and the company’s commitment to supporting affected employees. Create a dedicated fund for reskilling and redeployment initiatives, demonstrating a tangible commitment to workforce development. Partner with local educational institutions and government agencies to develop broader support systems for workforce transition. Implement automation in phases, allowing time to learn from each stage and adjust strategies accordingly. Consider implementing a ‘human-in-the-loop’ approach where AI augments human capabilities rather than fully replacing human roles, easing the transition and maintaining a balance between automation and human expertise.
Establish robust data governance and privacy practices
Given the data-intensive nature of AI systems, businesses need to establish robust data governance and privacy practices. This involves implementing clear policies for data collection, storage, use, and sharing, ensuring compliance with data protection regulations, and maintaining transparency with customers about how their data is being used. Strong data governance not only helps in building trust with customers but also ensures the quality and reliability of data used in AI systems.
Implementing comprehensive data governance can be challenging due to the complexity of modern data ecosystems and the volume of data generated. There may be resistance from teams accustomed to having free access to data, viewing governance as a hindrance to agility. Ensuring compliance with varying data protection regulations across different jurisdictions can also be complex. Additionally, legacy systems and fragmented data storage can make it difficult to implement consistent governance practices across the organization.
To address these challenges, businesses should start by conducting a thorough data audit to understand their current data landscape. Develop a clear data governance framework that aligns with business objectives and regulatory requirements. Implement data cataloging and metadata management tools to improve data visibility and traceability. Invest in employee training to foster a culture of data responsibility across the organization. Consider appointing a Chief Data Officer to oversee data governance initiatives and ensure they receive appropriate priority. Implement privacy-enhancing technologies like data anonymization and encryption to protect sensitive information. Regularly conduct privacy impact assessments for AI projects to identify and mitigate potential risks. Develop clear communication channels with customers about data usage, providing them with easy-to-use privacy controls. By demonstrating a strong commitment to data privacy and governance, businesses can turn this into a competitive advantage, differentiating themselves in an increasingly data-conscious market.
AI-driven personalization and privacy concerns
As AI systems become more sophisticated, the tension between personalized experiences and privacy protection will intensify. Murgia’s insights on data privacy and AI ethics will become increasingly relevant. Businesses and policymakers will grapple with striking the right balance. Consumers may demand more transparency and control over their data. This could lead to new regulations and innovative privacy-preserving AI technologies.
AI in healthcare and ethical decision-making
The integration of AI in healthcare will accelerate, bringing Murgia’s discussions on AI ethics to the forefront. AI systems will assist in diagnosis, treatment planning, and drug discovery. This will raise complex ethical questions about AI’s role in life-or-death decisions. Healthcare providers and policymakers will need to develop robust frameworks for AI governance in medical settings. The public discourse on AI ethics in healthcare will likely intensify.
AI and the future of work
The impact of AI on employment, as explored in Code Dependent, will continue to be a critical issue. We’ll likely see increased automation in various sectors. This will drive demand for reskilling and lifelong learning programs. The nature of work itself may evolve, with humans focusing more on tasks that require creativity and emotional intelligence. Policymakers may need to consider new approaches to social safety nets and education systems to address these changes.
AI governance and global cooperation
Murgia’s examination of AI’s geopolitical implications will become increasingly relevant. As AI capabilities advance, there will be growing pressure for international cooperation on AI governance. We may see the emergence of global AI ethics standards or treaties. Tensions between national interests and the need for global cooperation in AI development could intensify. This could lead to new forms of international relations centered around AI capabilities and regulations.
AI literacy and public engagement
The need for widespread AI literacy, emphasized in Code Dependent, will become more urgent. Educational systems may incorporate AI education at earlier stages. Public engagement in AI policy discussions will likely increase. We may see the rise of citizen-led initiatives focused on responsible AI development. Media coverage of AI issues will probably become more nuanced and technically informed as public understanding grows.
Code Dependent: Living in the Shadow of AI is poised to have a significant long-term influence on public understanding and discourse around artificial intelligence. By presenting complex AI concepts in an accessible manner, Murgia’s work has the potential to elevate the level of public debate on AI policy and ethics. This increased AI literacy among general readers could lead to more informed citizen participation in shaping AI governance frameworks and corporate policies.
The book’s balanced approach to discussing AI’s impacts may contribute to a more nuanced public perception of AI technologies, moving beyond simplistic narratives of either uncritical techno-optimism or alarmist fear-mongering. This could foster a more productive dialogue between AI developers, policymakers, and the general public, potentially leading to more thoughtful and effective AI regulations and development practices.
Murgia’s emphasis on the importance of diversity in AI development teams and the need for interdisciplinary approaches to AI ethics could influence hiring and research practices in the tech industry. Companies and research institutions might place greater emphasis on incorporating diverse perspectives and ethical considerations into their AI development processes, potentially leading to more inclusive and socially responsible AI systems.
The book’s exploration of AI’s impact on employment and the changing nature of work could shape educational and workforce development policies. Policymakers and educational institutions might use insights from Code Dependent to inform curriculum development and retraining programs, better preparing workers for an AI-driven economy.
Murgia’s discussion of the geopolitical implications of AI development could contribute to a growing awareness of AI as a matter of national and international policy. This could lead to increased diplomatic efforts around AI governance and potentially shape international agreements or standards for responsible AI development and deployment.
The accessibility of Code Dependent may inspire more journalists and writers to tackle complex technological topics for general audiences. This could lead to an expansion of high-quality, accessible technology journalism, further enhancing public understanding of emerging technologies and their societal impacts.
The Age of AI: And Our Human Future by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher: This book offers a strategic view of AI’s impact on global affairs, complementing Murgia’s focus on everyday implications. It provides deeper insights into how AI might reshape international relations and governance structures, offering a valuable macro-level perspective to readers of Code Dependent.
AI Superpowers: China, Silicon Valley, and the New World Order by Kai-Fu Lee: Lee’s book provides a focused examination of the AI race between China and the United States, offering detailed insights into China’s AI ecosystem. It complements Murgia’s global perspective by providing an in-depth look at the two leading nations in AI development, helping readers understand the geopolitical dynamics shaping the future of AI.
Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy by Cathy O’Neil: O’Neil’s work focuses specifically on the harmful impacts of algorithmic decision-making systems, particularly in areas like criminal justice, education, and finance. It offers a more critical perspective on the use of AI and big data in decision-making processes, providing readers with a deeper understanding of the potential negative consequences of AI deployment.
Human Compatible: Artificial Intelligence and the Problem of Control by Stuart Russell: Russell’s book delves deeper into the technical challenges of ensuring AI systems remain aligned with human values. It offers a more rigorous examination of the long-term risks associated with advanced AI systems, providing readers of Code Dependent with a more technical perspective on AI safety and ethics.
The Alignment Problem: Machine Learning and Human Values by Brian Christian: This book offers a comprehensive examination of the technical and philosophical aspects of AI alignment. It provides a deeper dive into the history of AI development and various approaches to solving the alignment challenge, complementing Murgia’s broader analysis of AI’s societal impacts with a focused look at a critical challenge in AI development.
Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence by Kate Crawford: Crawford’s book examines the hidden costs of AI systems, including their environmental impact and the labor conditions involved in their creation. It offers a critical perspective on the AI industry that complements Murgia’s work, helping readers understand the broader implications of AI development beyond its immediate applications.
The Ethical Algorithm: The Science of Socially Aware Algorithm Design by Michael Kearns and Aaron Roth: This book focuses on the technical aspects of designing ethical AI systems, offering insights into how we can build fairness, privacy, and other ethical considerations directly into algorithms. It provides a more technical complement to Murgia’s broader societal analysis, giving readers a glimpse into the cutting-edge research aimed at addressing some of the ethical challenges raised in Code Dependent.
You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place by Janelle Shane: Shane’s book offers a humorous and accessible look at the current capabilities and limitations of AI systems. It provides a lighthearted counterpoint to some of the more serious discussions in Murgia’s work, helping readers understand the often quirky and unpredictable nature of current AI technologies.
Websites and Online Platforms
AI Ethics Lab: This platform offers a wealth of resources on AI ethics, including research papers, case studies, and educational materials. It’s an excellent source for staying updated on the latest developments in AI ethics and governance. https://aiethicslab.com/
OpenAI: OpenAI’s website provides cutting-edge research on AI development and its societal implications. Their blog and publications offer insights into the latest AI capabilities and the challenges of creating beneficial AI. https://openai.com/
AI Now Institute: This research center focuses on the social implications of artificial intelligence. Their reports and publications provide in-depth analysis of AI’s impact on various sectors of society. https://ainowinstitute.org/
Conferences
AI for Good Global Summit: This annual conference brings together AI innovators with humanitarian and development experts to discuss how AI can contribute to solving global challenges. https://aiforgood.itu.int/
NeurIPS (Conference on Neural Information Processing Systems): One of the most prestigious machine learning conferences, NeurIPS features cutting-edge research in AI and often includes sessions on AI ethics and societal impact. https://nips.cc/
ACM FAccT (Conference on Fairness, Accountability, and Transparency): Formerly known as FAT*, this conference focuses on the ethical implications of AI and machine learning, addressing issues of bias, fairness, and transparency in algorithmic systems. https://facctconference.org/
Professional Organizations
IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems: This initiative provides resources and guidelines for ethically aligned design of AI systems. https://ethicsinaction.ieee.org/
Association for Computing Machinery (ACM): The ACM has a special interest group on Artificial Intelligence (SIGAI) that provides resources and organizes events related to AI development and ethics. https://sigai.acm.org/
Podcasts
AI in Business: This podcast features interviews with AI practitioners and thought leaders, discussing the practical applications and implications of AI in various industries. https://www.aitechnology.com/podcast/
The TWIML AI Podcast: Formerly known as “This Week in Machine Learning & AI,” this podcast covers a wide range of AI topics, including technical developments and societal impacts. https://twimlai.com/podcast/
Ethics in AI: Hosted by Oxford University, this podcast series explores the ethical challenges posed by AI technologies, featuring discussions with leading experts in the field. https://podcasts.ox.ac.uk/series/ethics-ai
Courses
AI for Everyone: This Coursera course, offered by deeplearning.ai, provides a non-technical introduction to AI concepts and their societal implications. https://www.coursera.org/learn/ai-for-everyone
Ethics and Law in Data and Analytics: This edX course, offered by Microsoft, covers ethical and legal issues in AI and data analytics, including privacy, bias, and transparency. https://www.edx.org/course/ethics-and-law-in-data-and-analytics
Documentaries and Films
AlphaGo: This documentary follows the historic match between AI system AlphaGo and world champion Go player Lee Sedol, offering insights into the development of advanced AI systems.
The Social Dilemma: While not exclusively about AI, this documentary explores how AI-driven social media algorithms influence human behavior and society, touching on many themes relevant to Code Dependent.
iHuman: This documentary examines the promises and perils of AI through interviews with leading experts in the field, offering a balanced look at AI’s potential impact on society.