Article
September 10, 2024

How the U.S. Can Take a Cue from the E.U. in Addressing Data Privacy Concerns & Risks Associated with AI

Discover how the U.S. can learn from the E.U. to tackle privacy risks in AI systems, balancing innovation with essential privacy concerns for a safer future.

The rise of artificial intelligence (AI) has affected nearly every industry, yet the United States still lacks comprehensive federal regulations on how companies handle personal data for AI development and use.

While Congress has contemplated updating U.S. commercial data privacy laws for over a decade (notable attempts include the Obama administration's 2012 Consumer Privacy Bill of Rights and the renewed push that followed the 2018 Cambridge Analytica scandal), those efforts have so far proven futile. The recent rapid advancements in AI should reignite discussions about privacy protections, especially as major tech companies revise their user agreements to explicitly allow personal data collection for AI training.

As U.S. policymakers consider new privacy laws and regulations to give individuals control over their personal data and protect the public from AI-enhanced surveillance, examining the EU's approach could provide key insights, particularly in promoting regulatory consistency for businesses and reaffirming shared transatlantic values on digital and privacy rights.

AI & Data Privacy Concerns

The privacy challenges posed by AI revolve around several key issues:

  • Data hunger: AI's machine-learning algorithms require vast amounts of personal data, raising significant privacy questions about data sources, storage methods, access protocols, and usage guidelines. Meanwhile, the current patchwork of state-level data protection laws often falls short in addressing these concerns.
  • Advanced analysis capabilities: AI's power to process data and draw complex conclusions intensifies privacy risks. It can potentially infer sensitive personal information (location, preferences, habits), increase the risk of unauthorized data spread, facilitate identity theft, and enable unwarranted surveillance.

Beyond revealing personal details without consent, AI-driven privacy violations can lead to tangible economic, security, and reputational damage. These risks include:

  • Enhanced scams: AI-powered targeted phishing attacks, impersonation using synthetic media, personalized deceptive messaging.
  • Discriminatory pricing: Companies using AI to set different prices based on predicted consumer traits, like General Motors' past practice of selling customer driving data to brokers, affecting insurance premiums.
  • Algorithmic bias: Errors or biases in training data leading to large-scale unfair treatment.
  • Expanded surveillance: AI enhancing existing surveillance methods and enabling new capabilities, such as biometric identification and predictive social media analytics, with the potential for disproportionate impact on historically over-policed communities based on factors such as income, race, and religion.

EU Approach

In contrast to the U.S., the European Union has implemented broad data governance laws and taken policy action to address privacy risks associated with AI. The following EU regulations aim to mitigate AI-related privacy risks.

  • General Data Protection Regulation (GDPR): Grants individuals the right not to be subject to decisions based solely on automated processing, including profiling (Article 22), and requires companies to be transparent about the purposes and legal bases for processing personal data (Article 13) and to conduct data protection impact assessments for high-risk processing (Article 35). (See the sketch after this list for how such an opt-out might be honored in practice.)
  • Digital Services Act (DSA): Bans targeted advertising to minors under 18 and prohibits targeted ads based on sensitive personal characteristics (such as political affiliation and religion). While this doesn’t prevent companies from using personal information to develop AI, it mitigates risk by requiring them to have a legitimate reason to do so and to comply with other regulatory requirements.
  • Artificial Intelligence Act (AI Act): Signed into law in July 2024, the AI Act classifies algorithmic systems based on their level of risk, banning “unacceptable” risk systems (those deemed the highest risk), including predictive policing and emotion recognition in employment or education settings. It also restricts police from using real-time biometric systems for identifying individuals in public spaces. Systems that are deemed “high” but not “unacceptable” risk may be allowed with strict oversight mechanisms in place.
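
To make the GDPR safeguard above more concrete, here is a minimal, hypothetical Python sketch of how a company might honor an Article 22-style opt-out by routing affected users to human review instead of an automated model. Every name in it (UserPrivacyPreferences, score_credit_application, run_model) is an illustrative assumption, not a term from the regulation or any real library.

```python
from dataclasses import dataclass

@dataclass
class UserPrivacyPreferences:
    """Hypothetical record of a user's data-processing choices."""
    opted_out_of_automated_decisions: bool

def run_model(application: dict) -> str:
    # Placeholder for a real ML inference call.
    return "approved" if application.get("income", 0) > 50_000 else "denied"

def score_credit_application(application: dict, prefs: UserPrivacyPreferences) -> dict:
    """Route the application to a human reviewer when the user has
    exercised an Article 22-style opt-out; otherwise run the model."""
    if prefs.opted_out_of_automated_decisions:
        # Honor the opt-out: fall back to manual review instead of the model.
        return {"decision": "pending", "route": "human_review"}
    return {"decision": run_model(application), "route": "automated"}

# Example: an opted-out user is never scored by the model.
print(score_credit_application(
    {"income": 80_000},
    UserPrivacyPreferences(opted_out_of_automated_decisions=True),
))
```

The design point is simply that the opt-out check sits in front of the model call, so no automated decision is ever produced for a user who has exercised the right.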

U.S. Approach

While the U.S. currently lacks comprehensive federal legislation governing data privacy and AI, there are several voluntary guiding principles to fall back on, such as the NIST AI Risk Management Framework and the Blueprint for an AI Bill of Rights. Both initiatives aim to promote responsible AI development and use, but they approach it from different angles: the NIST framework is more technical and process-oriented, while the Blueprint focuses on high-level principles and individual rights.

  • NIST AI Risk Management Framework: Released in January 2023, it provides a structured approach for organizations to address AI risks across the full AI lifecycle, from design through deployment and monitoring. It emphasizes governance, transparency, and accountability, and is organized around four core functions: Govern, Map, Measure, and Manage (see the illustrative sketch after this list).
  • Blueprint for an AI Bill of Rights: Developed by the White House Office of Science and Technology Policy and released in October 2022, it outlines five key principles for the development and use of AI systems: Safe and Effective Systems, Algorithmic Discrimination Protections, Data Privacy, Notice and Explanation, and Human Alternatives, Consideration, and Fallback.
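
Because the NIST framework's four functions are meant to be applied continuously across the AI lifecycle, a purely illustrative sketch (not anything NIST prescribes) can help show what tracking them might look like in practice. The log structure, the sample activities, and the unaddressed_functions helper below are all assumptions for demonstration.

```python
# Illustrative only: a lightweight record of AI-risk activities organized
# under the NIST AI RMF's four core functions. The sample activities are
# hypothetical, not quoted from the framework itself.
ai_rmf_log: dict[str, list[str]] = {
    "Govern": ["Assign accountability for each deployed model"],
    "Map": ["Document each model's intended use and affected groups"],
    "Measure": ["Track accuracy and bias metrics across demographics"],
    "Manage": [],  # Nothing recorded yet for this function.
}

def unaddressed_functions(log: dict[str, list[str]]) -> list[str]:
    """Return the core functions that have no recorded activity yet."""
    return [function for function, activities in log.items() if not activities]

print(unaddressed_functions(ai_rmf_log))  # ['Manage']
```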

What’s Next

U.S. lawmakers must lead in evolving regulations, encouraging public discourse, and anticipating future AI challenges to navigate an increasingly complex privacy landscape. That means updating existing laws to address AI-specific challenges and implementing strict rules on how AI systems process personal data; fostering discussions on balancing public safety and individual privacy with a range of stakeholders, including the public, the tech industry, and law enforcement; and pursuing proactive policymaking that anticipates future AI developments and establishes preemptive measures. Concord will be monitoring developments at the federal level and will keep you informed with the latest updates.