The United States intends to strengthen legislation and law enforcement to regulate AI development

 ChatGPT, a chatbot developed by OpenAI, has risen rapidly, attracting a large number of fans and, at the same time, many competitors, including Google's Bard chatbot. The picture shows a seminar on ChatGPT organized by a public education media service (file photo).

  ◎ reporter Liu Xia

  In a recent report, the US news outlet The Hill pointed out that the explosive growth of generative artificial intelligence (AI) such as ChatGPT, and the resulting problems such as the spread of false information, bias, and shifts in the labor market, have raised concerns within the US federal government. Lawmakers and regulators hope to take concrete action to address these concerns.

  AI raises two key problems

  The Hill reported that the development of AI may give rise to two key problems.

  On the one hand, the recent rise of generative AI tools has brought problems related to the spread of false information. On the other hand, AI-powered automated systems may lead to discrimination.

  The British magazine New Scientist also pointed out that ChatGPT and other chatbots often make factual errors, cite completely fictional events or articles, and have even fabricated a sexual harassment scandal that falsely accused a real person. The use of ChatGPT has also led to data privacy scandals involving the disclosure of confidential company data. In addition, AI-generated images, audio and even video could fuel large-scale "deepfake" disinformation, as illustrated by fake AI-composited photos of former US President Trump being arrested and of Pope Francis wearing a fashionable white puffer jacket.

  Calls for stronger law enforcement are rising

  The US Federal Trade Commission, the Civil Rights Division of the Department of Justice, the Consumer Financial Protection Bureau and the Equal Employment Opportunity Commission (EEOC) recently issued a joint statement saying that, given the growing use of AI in services ranging from housing to health care, they are committed to stepping up enforcement of existing laws, including by addressing potential discrimination by AI and formulating new rules governing its development.

  Charlotte Burrows, Chair of the EEOC, said that the use of advanced technologies, including AI, must comply with federal law.

  Admittedly, the agencies' joint statement focuses mainly on automated systems that use AI, rather than on generative AI like ChatGPT.

  Even so, The Hill noted in its report that as the popularity of ChatGPT and other chatbots soars, Google and other companies keep launching competing products. In addition, entrepreneurs including Tesla and Twitter CEO Elon Musk are rushing into the field, which could bring noise and turmoil and underscores the urgent need for policymakers to take action.

  To address the risks posed by AI, the US Congress will weigh new laws and regulations, while agencies should also step up enforcement of existing ones. Even where existing laws apply, they may be difficult to enforce because of the way AI systems work: algorithmic hiring tools, for example, may discriminate, yet workers may find it hard to know whether they have been systematically discriminated against. Agencies therefore need to work out not only how to apply the law, but also how to handle the challenges that arise during enforcement and how these AI systems will affect the real world.

  Kristen Clarke, the Assistant Attorney General in charge of civil rights, also stressed that AI could aggravate many kinds of discrimination in today's society, problems that call for deeper research and scrutiny by policymakers and others.

  Safety management is imperative

  Earlier this month, US senators announced a proposal to establish a framework for AI regulation aimed at improving transparency and accountability. Legislators are also prepared to press the industry on the risks associated with the rise of AI technology.

  The Senate Intelligence Committee recently wrote to the CEOs of ChatGPT developer OpenAI, Meta Platforms, Google, Anthropic and Microsoft, asking how they address security risks when developing large AI models.

  The letter stated that with AI being used increasingly across most fields and large language models likely to be steadily integrated into a range of existing systems, from health care to finance, it is urgent to emphasize security in the use of AI.

  In response to questions raised by some senators about the use of AI, a spokesperson for the US Consumer Technology Association said that the association has been "contributing to the formulation of AI policies, standards and frameworks" together with its members.

  Craig Albright, vice president for US government relations at BSA | The Software Alliance, said that Congress could require companies to develop risk management plans, conduct risk assessments for high-risk uses of AI, and define what counts as a high-risk case, while companies would need to carry out impact assessments and design assessments to ensure they are doing the right thing.