OpenAI launches grant program for democratic AI governance project
The company behind ChatGPT announced it would award 10 grants of $100,000 to teams worldwide to develop a democratic process for determining AI rules.
OpenAI, the developer of the artificial intelligence chatbot ChatGPT, has launched an initiative to bring more democratic input to AI development.
In the official announcement on May 25, the company said it is preparing to award 10 grants worth $100,000 each to fund experiments in setting up a “proof-of-concept” democratic process for determining rules for AI systems to follow.
According to OpenAI, the rules should be “within the bounds defined by the law” and should benefit humanity.
According to the announcement, “This grant represents a step to establish democratic processes for overseeing AGI and, ultimately, superintelligence.”
The company said the experiments will serve as the basis for a more “global” and “ambitious” project in the future. It also noted that conclusions from the experiments would not be binding but would be used to explore important questions surrounding AI governance.
The grant program is funded by OpenAI’s nonprofit arm, which said the results of the experiments will be free and accessible to the public.
This comes as governments worldwide seek to implement regulations on general-purpose generative AI. OpenAI CEO Sam Altman recently met with regulators in Europe to argue for rules that are not overly restrictive and do not hinder innovation.
A week prior, Altman testified before the United States Congress with a similar message.
In the new grant program announcement, OpenAI echoed the sentiment that laws should be tailored to the technology and that AI needs “more intricate and adaptive guidelines for its conduct.”
It gave example questions, such as “How should disputed views be represented in AI outputs?” and said that no single individual, company or country should dictate such decisions.
OpenAI previously warned that a superhuman form of AI could arise within a decade if development is not handled cautiously, adding that developers “have to get it right.”