(TNS) — Artificial intelligence is already affecting our lives in many positive ways: automating tasks, helping diagnose medical problems, and serving as a voice-controlled virtual assistant for many people. Still, there is a very real risk of misuse and unintended consequences, as we saw recently in Maryland, where AI was allegedly used to fabricate a recording as an act of revenge against an employer, in what is believed to be the first criminal case of its kind. As a result, governments at home and around the world have grappled with the question of how best to regulate the technology.
Last year, President Joe Biden issued an executive order on AI that established new standards for safety and security. The EO called on Congress to pass data privacy regulations, noting that AI increases the incentives for developers to collect and misuse personal data. There appears to be new momentum on this front, with Sen. Maria Cantwell, D-Wash., and Rep. Cathy McMorris Rodgers, R-Wash., recently announcing a bipartisan privacy bill called the American Privacy Rights Act.
Another characteristic of AI that has led to calls for regulation is its potential to amplify bias and discrimination. There are several well-known instances of bias in algorithms tasked with making consequential decisions, such as predicting the severity of a patient's disease. Independent bias audits have been proposed as one potential solution. Other proposals seek to protect consumers, patients, students, and workers in different ways.
Although the EO on AI directed relevant government agencies to take action, Congress has not enacted any significant new laws to regulate AI. To fill this gap, numerous bills have been introduced in state legislatures. At least 40 states, along with Puerto Rico, the Virgin Islands and Washington, D.C., introduced AI bills in the 2024 legislative session, according to the National Conference of State Legislatures. The bills are wide-ranging and cannot be fully summarized here, but they fall into several important categories: legislation addressing criminal uses of AI, such as creating child pornography or synthetic voice and image likenesses for fraudulent purposes (including "deepfakes" and other deceptive uses intended to influence elections); bills establishing disclosure requirements when content is generated or decisions, such as hiring, are made using AI; bills restricting how automated decision-making tools can be used; and bills providing protection from discrimination by AI. This last category includes bills that would reiterate existing rights (removing the ambiguity that arises when discriminatory decisions are made by algorithms rather than by humans), require impact assessments, or create standards for independent bias audits. State legislatures have focused particular attention on crime, employment, education, health, and insurance.
The lack of federal AI regulation has raised concerns that we are headed toward a patchwork legal system with weak enforcement. There is also the added risk of a race to the bottom as states try to lure companies with promises of a looser regulatory environment. There are arguments, too, for avoiding heavy-handed regulation. British Prime Minister Rishi Sunak, for example, has argued: "The UK's answer is not to rush to regulate … we believe in innovation … and in any case, how can we write laws that make sense for something we don't fully understand yet?"
While Sunak's premise that AI is not well understood seems flawed, the potential for regulation to stifle innovation needs to be taken seriously; it is a point often made by technology industry spokespeople. The trade-off between protecting individual rights and stifling innovation was debated in the European Union before it settled in favor of comprehensive artificial intelligence legislation. It remains an open question, however, whether the costs of regulatory compliance are actually high enough to significantly undermine innovation.
A recent national survey I conducted of 885 American executives sheds light on this question. I asked respondents about their perceptions of the costs of compliance and their support for specific AI regulation proposals. The group included individuals who are directly involved in decisions about the adoption and implementation of AI within their companies and who are therefore likely to be knowledgeable about compliance costs.
I asked whether respondents support (1) regulations requiring disclosure of AI use and data collection policies, (2) bias regulations requiring third-party audits, and (3) regulations requiring explanations for automated decisions. Support for regulation was surprisingly high: more than 70% of respondents strongly or somewhat supported each type of regulation. This was true even though a majority felt that complying with the regulations would impose moderate or significant resource challenges. For this group, the benefits of regulation clearly outweigh the costs of compliance.
The bills being debated in the states serve as a guidepost for what national requirements should include: disclosure of the use of AI, protections from algorithmic bias and discrimination, and oversight to ensure the safe and fair use of automated decision-making tools. This should be combined with strengthening existing laws to cover new phenomena such as algorithmic price manipulation. The proposed data privacy bill is a welcome first step. In addition to data privacy protections, it includes sections on algorithms that address civil rights and some forms of discrimination. Setting national standards would simplify compliance relative to a patchwork of state laws. Given the current political climate and timeline, however, it is uncertain where the bill will go. There is good reason to hope it succeeds and becomes a springboard to comprehensive national regulation of AI. The technology is developing too quickly to defer sensible regulation.
© 2024 Baltimore Sun. Distributed by Tribune Content Agency, LLC.