
Microsoft urges lawmakers, companies to ‘step up’ with AI guardrails

Microsoft President Brad Smith is the latest tech industry heavyweight to call for better risk management and regulation for artificial intelligence.

Brad Smith, the president of Big Tech firm Microsoft, has called on governments to “move faster” and on corporations to “step up” amid a massive acceleration in artificial intelligence development.

Speaking at a May 25 panel in front of United States lawmakers in Washington, D.C., Smith made the call as he proposed regulations that could mitigate the potential risks of AI, according to a report from The New York Times.

Microsoft has called for companies to implement “safety brakes” for AI systems that control critical infrastructure and for the development of a broader legal and regulatory framework for AI, among other measures.

Smith is yet another industry heavyweight to raise the alarm over the rapid development of AI technology. The breakneck pace of advancements in AI has already given rise to a number of harmful developments, including threats to privacy, job losses through automation, and extremely convincing “deepfake” videos that routinely spread scams and disinformation across social media.

The Microsoft executive said that governments shouldn’t bear the full brunt of action and that companies also need to work to mitigate the risks of unfettered AI development.

Smith’s comments come even though Microsoft has been working on AI as well, reportedly developing a series of new specialized chips that would help power OpenAI’s viral chatbot, ChatGPT.

Still, Smith argued that Microsoft wasn’t trying to palm off responsibility, as it pledged to carry out its own AI-related safeguarding regardless of whether the government compelled it, stating:

“There is not an iota of abdication of responsibility.”

On May 16, OpenAI co-founder and CEO Sam Altman testified before Congress, where he advocated for the establishment of a federal oversight agency that would grant licenses to AI companies.

Notably, Smith endorsed Altman’s idea of handing out licenses to developers, saying that “high risk” AI services and development should only be undertaken in licensed AI data centers.


Ever since ChatGPT first launched in November 2022, there have been widespread calls for more stringent oversight of AI, with some organizations even suggesting that development of the technology be brought to a temporary standstill.

On March 22, the Future of Life Institute published an open letter calling for industry leaders to “pause” development of AI. It was signed by a number of major tech industry leaders, including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak. At the time of publication, the letter had attracted more than 31,000 signatures.
