Abstract
The rapid development of AI technologies has propelled various countries to expand their research and development capacities in this domain as part of "the AI arms race." At the same time, the widespread use of AI highlights the need for regulatory intervention. Despite the difficulty of the regulatory task and the uncertainty surrounding AI's impacts, several countries have started "the race to AI regulation" and have come up with unique and innovative approaches to governing this technology. The spectrum of regulatory proposals ranges from hard laws and the prohibition of certain systems to industry self-regulation based on AI ethics. The most detailed hard law on AI is currently undergoing public discussion in the EU, and regulation of recommendation algorithms is already implemented in China. Meanwhile, the governance of this technology elsewhere is mostly conducted through soft-law mechanisms, including governmental strategies and frameworks alongside private and non-governmental guidelines and codes of conduct, often realized as ethics-based industry self-regulation. This divergence fuels the ongoing debate about which of the two approaches better promotes consumer welfare. While strict regulatory requirements may better protect society against the risks of AI technologies, they also tend to slow the pace of innovation. It remains unclear to policymakers and researchers which approach (strict command-and-control or ethical industry self-regulation) maximizes consumer welfare, and under what conditions. The conceptual difficulty in addressing this dichotomy partly stems from the lack of a common framework that incorporates both sides of the argument. In response to this gap in the literature, this paper develops a model to address the following interrelated questions: (1) What are the advantages and disadvantages of the two regulatory approaches? (2) What institutional factors influence the outcomes of the two approaches?
(3) How should governments optimally balance the tradeoff between AI innovation and consumer protection? To empirically ground our conception of different levels of regulatory stringency, we first examine the regulatory proposals of the EU, the UK, the US, Russia, and China. Our document analysis shows that a more stringent approach to AI regulation is taken by China, the EU, and potentially the US (if the Algorithmic Accountability Act is adopted), whereas a more relaxed approach is taken in Russia and the UK. The level of regulatory stringency each government proposes depends on how much it prioritizes stimulating AI innovation in the private sector. Having established the trade-offs from the policy documents, we zero in on the regulation of AI systems developed by the private sector for commercial purposes. Unlike those developed by the state for national security or military purposes, the former
present a more challenging case for regulators, who have no direct control over the innovation, exploitation, and usage of such AI systems. We also set aside systems that are outright prohibited, since the issue they present is one of legal enforcement rather than economic trade-offs. Thus, our primary interest lies in the grey area – the types of commercial exploitation that are within legal boundaries yet may be considered unethical once revealed to consumers. Examples of such exploitation include the case of Cambridge Analytica, the use of large language models to generate clickbait fake news, deepfake technologies used to generate pornography, and the boosting of Amazon's own products on its website. However, the logic behind regulating consumer-facing AI systems is intricate, not least because decisions regarding innovation, consumer protection, and frequency of usage are decentralized among various stakeholders who pursue their own objectives. Understanding the optimality of various regulatory approaches therefore calls for a systematic framework capable of analyzing the strategic interaction between stakeholder groups. As such, we answer the proposed research questions by constructing a game-theoretic model that examines the complex incentive dynamics between innovation and consumer protection. It is important to acknowledge that our model is not designed to explain the differences in regulatory approaches chosen by different countries, because those choices could be irrational, shaped by path dependence, or a product of the political regimes in power. Instead, our model contributes to the normative discussion on the optimal approach to regulating AI and clarifies current academic and policy debates by showing how the optimal regulatory stringency is conditional on the institutional environment.
The regulatory stringency chosen by the government is modeled as the probability that the exploitative practices of local AI companies are revealed to consumers. This modeling choice is motivated by a unique challenge facing AI regulators. One key aspect of regulating AI is the difficulty of interpreting the workings of black-box systems – in particular, what kinds of data are collected and what types of algorithms companies use to extract valuable information. This fundamentally differs from industrial sectors whose social costs of production, such as environmental pollution, are relatively easy to monitor and detect. In that sense, it is important that our model incorporates the possibility of revealing information to consumers, which in turn affects consumer behavior and, ultimately, consumer welfare. After all, for unethical but lawful exploitation, it is consumers' knowledge of such practices, rather than top-down prohibition, that acts as a disciplining device. Based on our game-theoretic analysis, we develop an economic theory of how the welfare-maximizing level of regulatory stringency for AI depends on various institutional
parameters. Under high foreign competition, domestic innovation plays a relatively small role in serving consumers; what benefits consumers most is not being misled into underusing the highly competitive foreign AI systems. As a result, the prioritization of consumer protection should motivate a government to choose a high level of regulatory stringency under high foreign competition. Meanwhile, under low foreign competition (for instance, due to strong protectionist policies), the domestic AI industry can effortlessly win local consumers away from foreign competitors. This means domestic firms derive high marginal benefits, in terms of market share, from improving their algorithms. As a result, the robustness of domestic firms' innovation incentives should motivate a government to also choose a high level of regulatory stringency under low foreign competition. Interestingly, under intermediate foreign competition, the government faces a delicate trade-off between consumer protection and innovation. Overly stringent regulation stifles the innovation incentives of the domestic AI industry, whereas minimal regulation subjects consumers to excessive exploitation. To maximize actual consumer welfare, the government may strategically lower its regulatory stringency and turn a blind eye on some occasions. Across all institutional environments, however, minimal regulation is never compatible with maximizing actual consumer welfare. As such, the objectives of such regulatory design may be rationalized by the prioritization of innovation, of domestic producer surplus, or of the perceived welfare of consumers. In the last case, the government is primarily concerned with the image that the regulatory intervention produces, without worrying too much about the actual protection of consumers – essentially using a loosely designed regulation as a PR tool.
This suggests that further empirical studies should pay close attention to cases where governments are proposing very loosely defined regulations for AI.
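The non-monotonic result above can be illustrated with a deliberately simplified numerical sketch. The functional forms and constants below are our own assumptions chosen only to reproduce the abstract's qualitative pattern; they are not the paper's actual model specification. Stringency `p` is the probability that exploitation is revealed, and `f` measures foreign competitiveness.

```python
# Hedged toy model (assumed functional forms, not the paper's specification):
# p = probability a domestic firm's exploitative practices are revealed,
# f = foreign competitiveness. Constants 0.6, 0.3, and 0.4 are arbitrary
# illustrative choices.

def innovates(p: float, f: float) -> bool:
    """The domestic firm innovates iff the market-share return (higher when
    foreign competition f is low), dampened by stringency p, covers a
    fixed cost."""
    return (1.0 - f) * (1.0 - 0.6 * p) >= 0.3

def welfare(p: float, f: float) -> float:
    """Actual consumer welfare: informed use of foreign systems, plus the
    benefit of domestic innovation, minus harm from unrevealed exploitation
    of domestic users."""
    informed_use = f * p
    innovation_gain = (1.0 - f) if innovates(p, f) else 0.0
    exploitation_harm = 0.4 * (1.0 - p) * (1.0 - f)
    return informed_use + innovation_gain - exploitation_harm

def optimal_stringency(f: float) -> float:
    """Grid-search the welfare-maximizing stringency for a given f."""
    grid = [i / 100 for i in range(101)]
    return max(grid, key=lambda p: welfare(p, f))

for f in (0.1, 0.5, 0.9):
    print(f"f={f}: optimal p = {optimal_stringency(f):.2f}")
```

Under these assumptions the grid search reproduces the abstract's pattern: high optimal stringency at both low and high foreign competition, an interior optimum at intermediate competition, and zero stringency (`p = 0`) never welfare-maximizing.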
| Original language | English |
|---|---|
| Publication status | Published - 24 May 2023 |
| Externally published | Yes |
| Event | Atlanta Conference on Science and Innovation Policy 2023, Atlanta, United States |
| Duration | 24 May 2023 → 26 May 2023 |
Conference
| Conference | Atlanta Conference on Science and Innovation Policy 2023 |
|---|---|
| Abbreviated title | ATLC 2023 |
| Country/Territory | United States |
| City | Atlanta |
| Period | 24/05/23 → 26/05/23 |
UN SDGs
This output contributes to the following UN Sustainable Development Goals (SDGs)
- SDG 9 Industry, Innovation, and Infrastructure
- SDG 12 Responsible Consumption and Production
Fingerprint
Dive into the research topics of 'Balancing the Tradeoff between Regulation and Innovation for Artificial Intelligence: An Analysis of Top-down Command and Control and Bottom-up Self-Regulatory Approaches'.