AI Crackdown? US, China Mull Security Reviews On AI Tools Like ChatGPT

The Biden administration has begun investigating whether artificial-intelligence systems such as ChatGPT should be subject to safeguards, amid rising worries that the technology could be used to discriminate or propagate harmful material.

The Commerce Department issued a formal public request for feedback on Tuesday on what it called accountability measures, including whether potentially harmful new AI models should go through a certification procedure before they are distributed.

“It is amazing to see what these tools can do even in their relative infancy,” said Alan Davidson, who leads the National Telecommunications and Information Administration, the Commerce Department agency that put out the request for comment. “We know that we need to put some guardrails in place to make sure that they are being used responsibly.”

Davidson said the comments, which will be accepted over the next 60 days, will be used to help shape guidance for US policymakers on how to approach AI. He added that his agency’s legal role is to advise the president on technology policy rather than to write or enforce rules.

This comes as China aims to compel generative AI services to undergo a security audit before they are permitted to function.

The Cyberspace Administration of China stated in draft guidelines issued for public comment that service providers must verify that content is accurate, respects intellectual property, and neither discriminates nor jeopardizes security. AI operators must also clearly label AI-generated content, according to a statement posted on the internet watchdog’s website.

The guidelines basically translate existing data and content laws — from the protection of personal information to the banning of statements deemed undesirable by the Chinese Communist Party — to the booming field of AI.

China will most likely prohibit foreign AI services, such as those from OpenAI or Google, as it did with American search and social media offerings, but it is expected to avoid tightening the leash on its domestic firms for the time being, for fear of stifling a nascent arena that requires room for innovation.

Alibaba Group Holding, SenseTime Group, and Baidu are all vying to establish the ultimate next-generation AI platform for the world’s largest internet market. That reflects a growing wave of development globally, with Alphabet Inc.’s Google and Microsoft Corp. among the many tech companies investigating generative AI since OpenAI’s ChatGPT sparked the industry in November.

Alibaba announced on Tuesday that it intends to incorporate generative AI into its Slack-like work app and Amazon Echo-like smart speakers before expanding the technology to its other services. SenseTime had earlier unveiled its large AI model SenseNova along with a user-facing chatbot named SenseChat.

This comes on the heels of Baidu’s Ernie bot, which was launched for limited testing about a month ago.

AI pushback

Concerns have been raised by industry and government officials about a range of potential AI dangers, including the use of the technology to commit crimes or spread falsehoods.

“There are very active conversations ongoing about the explosive good and bad that AI could do,” said Sen. Richard Blumenthal in an interview. “This, for Congress, is the ultimate challenge—highly complex and technical, very significant stakes and tremendous urgency.”

Sen. Michael Bennet raised concerns about children’s safety when he wrote to several AI companies last month, asking about public tests in which chatbots gave troubling advice to users posing as young people.

Elon Musk, among other tech heavyweights, recently called for a six-month moratorium on the development of systems more sophisticated than GPT-4, the version of OpenAI’s chatbot released about a month ago. They warned that a race between OpenAI and rivals such as Google was unfolding without appropriate risk management and planning.

OpenAI welcomes regulation, noting that “powerful AI systems should be subject to rigorous safety evaluations.”

“Regulation is needed to ensure that such practices are adopted, and we actively engage with governments on the best form such regulation could take,” the tech firm said in a blog post.

In March, Italy became one of the first countries to ban ChatGPT, after the nation’s data protection agency alleged that the chatbot violated privacy laws through the “unlawful collection of personal data.”

Apple also recently blocked an update to the email app BlueMail that would have added a feature powered by a customized version of OpenAI’s GPT-3 large language model, according to the co-founder of Blix, the app’s developer.

The email app’s new feature uses GPT-3 to help automate the writing of emails based on the content of previous emails and calendar events. But the smartphone maker rejected the update, citing concerns that the app lacked content filtering and could generate material unsuitable for young users.

In the public-comment document posted Tuesday, the federal technology advisory agency asked whether measures such as “quality assurance certifications” should be required to build public trust in AI systems.

The document asked whether additional rules or regulations should be enacted, but it did not detail potential risks or endorse any specific precautions.


Information for this briefing was found via The Wall Street Journal, Bloomberg, and the sources mentioned. The author has no securities or affiliations related to the organizations discussed. Not a recommendation to buy or sell. Always do additional research and consult a professional before purchasing a security. The author holds no licenses.
