AI Risks: White House Orders Tech Giants to Protect the Public


On Thursday, tech executives were called to the White House and instructed to safeguard the public from the risks posed by artificial intelligence (AI).

The Warning

It was claimed that Sundar Pichai of Google, Satya Nadella of Microsoft, and Sam Altman of OpenAI had a “moral” obligation to protect the public.

The public’s fascination with recently released AI products, such as ChatGPT and Bard, is growing.

The White House made it clear that further industry regulation was a possibility.

These products give ordinary people the chance to work with so-called “generative AI,” which can quickly summarise data from many sources, debug computer code, and create convincingly human-sounding presentations and even poetry, among other things.

Because they provide a concrete example of the possible benefits and drawbacks of the new technology, their release has generated fresh discussion about the place of AI in society.

Order for Action

When they convened at the White House on Thursday, technology leaders were cautioned that the administration was open to new laws and regulations covering artificial intelligence, and that it was up to businesses to “ensure the safety and security of their products”.

Sam Altman, the CEO of OpenAI, the company behind ChatGPT, told reporters that executives were “surprisingly on the same page on what needs to happen” in terms of legislation.

Following the meeting, US Vice President Kamala Harris said in a statement that although new technology has the potential to improve lives, it could also endanger safety, privacy, and civil rights.

According to her, the private sector has “an ethical, moral, and legal responsibility to ensure the safety and security of their products”.

The National Science Foundation will invest $140 million (£111 million) in seven new AI research centres, according to a statement from the White House.

Both lawmakers and tech executives have been calling for stronger regulation of the rapidly expanding field of AI development.

The “godfather” of AI, Geoffrey Hinton, resigned from his position at Google earlier this week, saying he now regretted his work.

Elon Musk and Apple co-founder Steve Wozniak called for a pause in the technology’s development in a letter published in March.

And on Wednesday, Lina Khan, the head of the Federal Trade Commission (FTC), set out her views on the need to regulate AI.

The Concerns

There are worries that chatbots like ChatGPT and Bard can be inaccurate and contribute to the spread of misinformation, as well as concern that AI could rapidly replace people’s jobs.

Additionally, there are worries that generative AI could breach copyright rules, that AI capable of copying voices could make fraud worse, and that AI-generated videos could be used to spread fake news.

The Other Side of the Coin

Bill Gates and other proponents, meanwhile, have pushed back on calls for an AI “pause,” arguing that such a step would not “solve the challenges” that lie ahead.

According to Mr. Gates, it would be better to concentrate on how best to use AI advances.

Others worry that over-regulation could hand a strategic advantage to Chinese tech firms.
