
OpenAI, Microsoft, Google and Anthropic Unveil Forum to Control Machine Learning Models
CIOTechOutlook Team | Wednesday, 26 July 2023, 11:32 IST

Frontier models are extremely powerful foundation models with the potential to be destructive enough to seriously endanger public safety. The generative AI algorithms that power chatbots like ChatGPT rapidly process vast amounts of data to produce responses in the form of text, poetry, and art.
Although there are many applications for these models, authorities such as the European Union and business titans like Sam Altman, CEO of OpenAI, have stated that proper safeguards are required to address the risk posed by AI. The industry group Frontier Model Forum will collaborate with academics, businesses, and governments to develop best practices for deploying frontier AI models and promote AI safety research.
The forum will not lobby governments, however, an OpenAI representative said.
"Companies creating AI technology have a responsibility to ensure that it is safe, secure and remains under human control," Microsoft president Brad Smith said in a statement.
In the coming months, the forum will establish an advisory board, arrange funding through a working group, and set up an executive board to oversee its operations.