Financial Gains Lead to Oversight Evasion, Say Insiders


The Gist

  • AI giants evade oversight. Leading AI companies avoid effective oversight due to strong financial incentives, according to former and current employees.
  • Weak AI accountability. The lack of sufficient accountability and regulatory structures poses serious risks, including potential human extinction.
  • Experts call for AI oversight. Experts call for increased guidance and oversight from the scientific community, policymakers and the public to mitigate these risks.

Leading artificial intelligence companies avoid effective oversight because of strong financial incentives and operate without sufficient accountability from government or industry standards, former and current employees said in a letter published today.

In other words, they get away with a lot — and that’s not great news for a technology that comes with risks including human extinction.

“We are hopeful that these risks can be adequately mitigated with sufficient guidance from the scientific community, policymakers, and the public,” the group wrote in the letter titled, “A Right to Warn about Advanced Artificial Intelligence.” “However, AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this.”

The letter was signed by seven former OpenAI employees, four current OpenAI employees, one former Google DeepMind employee and one current Google DeepMind employee. It was also endorsed by AI pioneers Yoshua Bengio, Geoffrey Hinton and Stuart Russell.

AI Poses Serious Risks

While the group believes in the potential of AI technology to deliver unprecedented benefits to humanity, it says risks include:

  • Further entrenchment of existing inequalities
  • Manipulation and misinformation
  • Loss of control of autonomous AI systems potentially resulting in human extinction

“AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm,” the group wrote. “However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”

The list of employees who shared their names (others were listed anonymously) includes: Jacob Hilton, formerly OpenAI; Daniel Kokotajlo, formerly OpenAI; Ramana Kumar, formerly Google DeepMind; Neel Nanda, currently Google DeepMind, formerly Anthropic; William Saunders, formerly OpenAI; Carroll Wainwright, formerly OpenAI; and Daniel Ziegler, formerly OpenAI.

This isn’t the first time Hilton has spoken publicly about his former company, and he was vocal on X today as well.

Kokotajlo, who worked on OpenAI’s governance team, quit last month and was vocal about it in a public forum as well. He said he quit OpenAI “due to losing confidence that it would behave responsibly around the time of AGI,” or artificial general intelligence. Saunders, also on the governance team, departed along with Kokotajlo.

Wainwright’s time at OpenAI dates back at least to the debut of ChatGPT. Ziegler, according to his LinkedIn profile, was with OpenAI from 2018 to 2021.


