
CCI’s Market Study - Global Overview of Legal And Regulatory Frameworks Concerning AI

  • Tech Reg Forum
  • Oct 23, 2025
  • 9 min read

We summarise below portions of interest from the Competition Commission of India’s (“CCI”) recent market study on ‘Artificial Intelligence and Competition’, published on October 6, 2025. Global regulatory approaches to AI are evolving rapidly, with countries adopting a mix of existing laws and new frameworks to address emerging risks. The U.S. relies on sector-specific laws and agency actions, while the EU leads with comprehensive regulations like the AI Act and GDPR. The UK and Canada have adopted pro-innovation, risk-based strategies, and China focuses on state-led AI growth alongside strict data laws. Countries like Australia and Japan emphasise ethics and transparency. In India, recent reforms like the Competition (Amendment) Act, the Digital Competition Bill, and the DPDPA aim to address AI-related competition and data concerns, supported by evolving policy guidance and institutional strengthening.

 

  1. United States of America

 

●      Pre-existing laws suitable for AI Regulation:

  • the Electronic Communications Privacy Act (ECPA) of 1986: protects electronic communications from unauthorised government surveillance.

  • Children’s Online Privacy Protection Act (COPPA) of 1998: regulates online data collection from children under the age of 13.

  • Federal Trade Commission Act (1914, amended multiple times): consumer protection law; grants the FTC authority to combat deceptive and unfair trade practices.

  • Civil Rights Act of 1964: Anti-discrimination law; prohibits discrimination in hiring and employment.

  • Fair Credit Reporting Act (FCRA) of 1970: regulates credit assessments and automated decision-making in lending.

 

→ Federal Level

  • 2016 - published Report titled ‘Artificial Intelligence, Automation, and the Economy’[1], which called for balanced AI development and prioritised benefits for all citizens in the context of AI advancements.

  • 2019 - American AI initiative launched to maintain the US’s global AI leadership by promoting AI R&D, increasing access to federal data and resources and ensuring the safety and ethics of AI systems.

  • 2019-2020 - National Institute of Standards and Technology (NIST) developed AI-related standards, guidelines, and best practices and also released ‘Four Principles for Explainable AI’[2].

  • July 2021 - Executive Order on Promoting Competition in the American Economy (“Executive Order”) issued by the White House; encouraged the Department of Justice (“DOJ”) and the Federal Trade Commission (“FTC”) to enforce antitrust laws to address challenges posed by the rise of dominant digital platforms owing to the acquisition of nascent competitors; established the White House Competition Council within the President’s Executive Office.

  • 2023 - NIST released ‘The AI Risk Management Framework’ (AI RMF)[3], providing voluntary best practices for AI risk management.

  • The FTC in the past has relied on its broad mandate under the FTC Act to pursue deceptive AI-driven practices, such as opaque algorithms in targeted advertising or undisclosed automated decision-making in e-commerce.[4][5]

 

→ State Level

 

  • California’s AI Transparency Bill (2022) mandates businesses to disclose when AI interacts with consumers.[6] 

  • Illinois’ AI Video Interview Act (2020) regulates AI-based hiring tools to prevent discrimination.[7]

 

Lawsuits Pertaining to AI Use

 

  1. March 2019 - Facebook charged with improper use of AI by using its advertising platform to enable and perpetuate housing discrimination.

    • Allegation - Facebook's ad targeting tools allowed advertisers to exclude specific demographics from viewing housing ads, thus violating anti-discrimination laws.

      Beyond explicit targeting, the complainant alleged that Facebook's ML algorithms inherently favoured certain demographics over others.

    • Outcome: a civil penalty of USD 115,000 was imposed. Meta agreed to overhaul its ad targeting technology; implemented a "variance reduction system" to ensure equitable distribution of housing ads across different demographic groups; removed "special audience" tools that could potentially enable advertisers to exclude protected classes; and agreed to regular third-party audits to ensure adherence to the settlement terms.

 

  2. 2020 - Clearview AI sued by the American Civil Liberties Union (ACLU) for its facial recognition technology, which allegedly scraped personal data from the web without consent.

    • Allegation: Clearview was building a secretive tracking and surveillance tool using biometric identifiers, in violation of privacy laws requiring that companies collecting, capturing, or obtaining an Illinois resident’s biometric identifier, such as a fingerprint, faceprint, or iris scan, first notify that individual and obtain their written consent.

    • Outcome: Authorities ruled that Clearview AI’s actions violated privacy laws; Clearview AI settled the matter in 2022. Settlement prohibited the sale of its facial recognition database to private individuals and businesses.[8]

 

2. European Union

 

  • Broad Legal Framework - General Data Protection Regulation (GDPR); covers how digital systems collect, process, and store personal data.

  • 2018 - “Artificial Intelligence for Europe” published, setting out Europe’s coordinated effort to regulate AI.[9]

  • 2018 - establishment of the High-Level Expert Group on AI (HLEG)

  • 2019 - The Ethics Guidelines for Trustworthy AI[10] published, laying down the following key requirements for AI systems:

i) Human agency and oversight

ii) Technical robustness and safety

iii) Privacy and data governance

iv) Transparency

v) Diversity, non-discrimination, and fairness

vi) Societal and environmental well-being

vii) Accountability

  • Europe funds programmes such as Horizon 2020[11] (since succeeded by Horizon Europe), allocating billions of euros to support responsible AI development and ensuring that European AI technologies remain competitive while adhering to ethical standards.

  • The European Commission (“EC”), under the Digital Services Act (“DSA”), inter alia requires very large online platforms and search engines whose services could be used to create and/or disseminate generative AI content to assess and mitigate specific risks linked to AI.

  • 2024 - EU AI Act[12] published; aims to establish a comprehensive legal framework for AI across the EU. Companies that fail to comply with the AI Act could face fines of up to €35 million or 7% of global annual revenue, whichever is higher.

  • Digital Markets Act, 2022 (DMA) - EU’s regulatory framework that addresses the growing power of digital gatekeepers.

 

  3. United Kingdom

 

  • 2019 - AI Sector Deal released as part of the UK’s Industrial Strategy; focused on investment in AI research and development, improving AI-related skills, and building infrastructure to support AI-driven industries.

  • 2018 - Centre for Data Ethics and Innovation (CDEI) established; now known as the Responsible Technology Adoption Unit (RTA).[13] [14]

  • The CDEI provides guidance on AI governance, while the Information Commissioner’s Office (ICO) oversees AI-related data protection and privacy issues, ensuring compliance with post-Brexit data laws.

 

Post-Brexit


  • 2021 - ‘National AI Strategy’[15] launched; focussed on sustained growth, international collaboration, and a pro-innovation regulatory approach.

  • Digital Markets, Competition and Consumers Act 2024 (DMCC Act): will apply only to large technology firms with substantial and entrenched market power in a particular digital activity. If certain conditions are met, these firms can be designated with Strategic Market Status (SMS) in relation to a particular digital activity. If the CMA designates a firm with SMS, it will have two key tools: Conduct Requirements and Pro-Competition Interventions.[16]

  • 2023 - CMA published the report ‘AI Foundation Models: Initial review’[17]; focussed on competition and barriers to entry in the development of foundation models, the impact foundation models may have on competition in other markets and consumer protection.

 

  4. China

 

  • Tech giants Baidu, Alibaba, and Tencent played an active role in AI advancements, particularly in fields like facial recognition, autonomous systems, and natural language processing.

  • 2017 - Next Generation Artificial Intelligence Development Plan published; focussed on strengthening AI research and infrastructure, fostering AI talent and workforce development through education and training programmes.

  • 2021 - Personal Information Protection Law (PIPL), a data privacy law, came into force.[18]

 

  5. Australia

 

Pre-existing Laws

 

  • Privacy Act 1988 - regulates how AI-driven systems handle personal data.[19]

  • Australian Consumer Law (ACL) of 2010 - provides oversight in cases where AI-powered products and services impact consumer rights.[20]

  • Cybercrime Act 2001 - addresses situations where AI is used to create malware or launch large-scale cyberattacks.[21]

  • 2019 - Ethical AI Framework introduced; established core principles for AI governance, including transparency, accountability, and prevention of harm.

  • 2020 - AI Roadmap[22] outlined strategic investments in AI-driven industries such as healthcare, agriculture, and cybersecurity, emphasising AI’s role in economic growth.

  • 2021 - AI Action Plan; outlined measures to promote AI while managing risks.[23] 

  • 2025 - Australian Competition and Consumer Commission (ACCC) published its 10th and final report on the Digital Platform Services Inquiry; reinforces the need for regulatory reform to address digital platform-related competition and consumer harms.

 

  6. Japan

 

  • Act on the Protection of Personal Information (APPI) - regulates AI-driven data processing and ensures compliance with privacy safeguards.[24]

  • The “Artificial Intelligence Technology Strategy” in March 2017[25], the “Social Principles of Human-Centric AI” in March 2019[26] and the “AI Strategy 2022” in April 2022[27], promoted R&D and initiatives for the social implementation of AI technology.

  • The Smartphone Act - prohibits designated providers from using data acquired from business users to unfairly develop competing services, ensuring algorithmic neutrality.

  • Platform Transparency Act (2020): mandates disclosure of search ranking algorithms, terms of service updates and algorithmic neutrality.

 

  7. France

 

  • 2024 - The French Competition Authority conducted an inquiry into the competitive functioning of the generative AI sector, including a public consultation and stakeholder consultations, and issued its opinion in June 2024[28], recommending:

-       Make the regulatory framework applicable to the sector more effective

-       Use the full extent of competition law tools

-       Increase access to computing power

-       Take account of the economic value of data

-       Ensure greater transparency on investments by digital giants

 

  8. Canada

 

  • Artificial Intelligence and Data Act (AIDA)[29] - to regulate the responsible design, development, and deployment of AI systems, ensuring AI systems are safe and non-discriminatory. Under the AIDA, businesses will be held responsible for the AI activities under their control.

 

5.2 OECD Publication

 

●      Titled "Artificial Intelligence, Data and Competition" and published in May 2024.[30] 

●      The competition risks identified are:

-       Vertical integration of AI value chain by firms.

-       Proprietary access to highest quality data and computing power.

-       Barriers to switching across ecosystems and bundling practices in deployment phase.

●      Tools available to competition authorities:

-       Monitoring and Advocacy

-       Market Studies and Investigations

-       Merger Control

-       Enforcement and remedies

-       Cooperation and regulation

 

India: Regulatory Measures to Address Competition Issues in Technology-Driven Marketplaces

 

→ The Competition Act, 2002

 

  • The Competition Act is inherently sector-agnostic, designed to preserve and promote competition across the entire economy, including traditional and emerging digital markets. It applies uniformly to all enterprises, irrespective of the industry or technology used.

  • In the recent past, the CCI ordered a number of investigations against big tech firms operating in digital markets for alleged abuse of dominance, unfair restrictions, exclusive tie-ups, deep discounting and suspected anti-competitive arrangements with preferred sellers on their marketplaces.[31] [32] [33]

  • Additional laws such as the Information Technology (IT) Act, 2000[34] provide a broad legal foundation for handling cybersecurity, data protection, and digital transactions.

 

Regulatory Response to Emerging Competition Challenges in the Digital and AI Era

 

  1. The Competition Law Review Committee (CLRC) and The Competition (Amendment) Act, 2023[35]

    • Constituted in 2018 to review the Competition Act, 2002 in the context of the digital economy and international best practices.

    • Primary Objectives - suggest measures to strengthen enforcement, foster a robust competition regime in the digital era, and recommend mechanisms for speedier adjudication, institutional strengthening, and market studies.

    • Led to the Competition (Amendment) Act, 2023 (the Amendment Act).

 

  2. Committee on Digital Competition Law (CDCL) and Digital Competition Bill, 2024[36]

 

  • Constituted in 2023 to evaluate the need for a separate ex-ante regulatory framework for digital markets in India. Its report was released in March 2024.

  • Introduced the following major recommendations:

-   Introduction of ex-ante legislation to proactively monitor large digital enterprises

- Designation of such large enterprises as Systemically Significant Digital Enterprises (SSDEs)

- Stipulation of obligations on SSDEs, applicable to each Core Digital Service, to be specified through regulations.

-  Strengthening the capacity of the CCI’s Digital Market and Data Unit

 

  3. The Digital Personal Data Protection Act (DPDPA), 2023

 

  • The DPDPA proposes rules for data collection, storage, and processing, establishing a legal framework that may also be relevant for AI systems reliant on large-scale data analytics and machine learning.

 

  4. Ministry of Electronics and Information Technology (MeitY) Report on AI Governance Guidelines Development for Public Consultation[37]

    • In January 2025, MeitY released the report for public consultation.

- The report outlines a principle-based framework to ensure the ethical, safe, and inclusive deployment of AI technologies in India, drawing from global standards such as the OECD AI Principles and the work of India’s NITI Aayog.





[30] OECD (2024), “Artificial intelligence, data and competition”, OECD Artificial Intelligence Papers, No. 18, OECD Publishing, Paris. https://doi.org/10.1787/e7e88884-en.
