The U.S. is trying to curb artificial intelligence

Expert magazine covers the debate over artificial intelligence in the U.S. Congress.

U.S. lawmakers are puzzling over what restrictions could be imposed to rein in the uncontrolled growth of artificial intelligence (AI). But months after ChatGPT, one of the best-known AI chatbots, captured the attention of the public and of Washington, no consensus has been reached, according to Expert magazine.

Some of the congressional committee members’ proposals focus on limiting AI that could endanger people’s lives or livelihoods, such as systems deployed in the medical and financial sectors. Other proposals would set rules to ensure that artificial intelligence is not used to discriminate against people or violate their civil rights. The dispute also touches on whether the government should regulate the developers of artificial intelligence or the companies that use it to interact with consumers.

It is unclear which viewpoint will ultimately prevail, but some in the business community, including IBM, one of America’s largest IT companies, and the nonprofit U.S. Chamber of Commerce, favor regulating only critical areas such as medical diagnostics, calling it a risk-based approach. Jordan Crenshaw of the Chamber of Commerce’s Technology Engagement Center argues that AI used to make decisions about people’s health and finances deserves far more scrutiny than, for example, AI that recommends video advertising.

The growing popularity of so-called generative artificial intelligence, which uses data to create new content, has raised concerns that the rapidly evolving technology could encourage cheating on exams, fuel misinformation and ultimately lead to a new generation of scams. ChatGPT, for example, can produce texts that are difficult to distinguish from those written by humans.

Against this backdrop, the leaders of OpenAI, its backer Microsoft, and Alphabet met with President Joe Biden, but no key decisions have yet been made, which is why Congress is drafting rules for a virtually new industry. Jack Clark, co-founder of the prominent AI startup Anthropic (whose CEO also attended the White House meeting), said: “In general, House and Senate staffers have woken up, and everyone is being asked to deal with this. People want to get ahead of artificial intelligence in part because they feel they’re not ahead of social media.” Adam Kovacevich, head of the tech trade group Chamber of Progress, said the top priority for major tech companies is to head off a hasty, heavy-handed government reaction to fast-moving innovation.

Lawmakers such as Democratic Senator Chuck Schumer are determined to tackle artificial intelligence, but Congress is deeply polarized, a presidential election looms next year, and members are tied up with other pressing matters, such as raising the national debt ceiling. Nevertheless, Schumer has put forward a plan under which independent experts would test new AI technologies before their release. He is also urging the industry to increase transparency and provide the government with the data it needs to head off potential dangers.

Under the most widely discussed risk-based approach in this controversy, AI used in medicine would be scrutinized by the Food and Drug Administration (FDA), while artificial intelligence in entertainment would go unregulated. The European Union has moved forward with similar rules.

Opposing this approach is Democratic Senator Michael Bennet, who introduced a bill calling for a government task force on artificial intelligence. He believes a risk-based approach alone will not solve the emerging problems and instead favors a “values-based approach” that prioritizes privacy, civil liberties and rights. A Bennet aide added that rules built solely around risk may be too narrow and fail to address dangers such as the use of artificial intelligence to recommend racist videos.

OpenAI employees have also contemplated broader oversight of the field in which they themselves work. In an April talk at Stanford University, researcher Cullen O’Keefe proposed creating an agency that would require companies to obtain licenses before training powerful artificial intelligence models or operating the data centers that support them. The agency, O’Keefe suggested, could be called the Office for AI Safety and Infrastructure Security. Asked about the proposal, OpenAI CTO Mira Murati said a credible oversight body could hold AI developers accountable to safety standards. But more important, she said, is agreement on which standards and which risks government and businesses are trying to mitigate.

The last major regulator created in the U.S. was the Consumer Financial Protection Bureau, established after the 2007-2008 financial crisis. Some Republicans, though, may resist any regulation of artificial intelligence. An anonymous party source told Reuters: “We have to be careful that proposals to regulate artificial intelligence do not become a mechanism for government micromanagement of computer code, including search engines and algorithms.”
