Tech Giants Agree to Self-Police AI in Framework That Has No Teeth

Leading artificial intelligence companies have voluntarily agreed to guardrails meant to manage the risks posed by the emerging technology, part of a bid by the White House to get the industry to regulate itself in the absence of legislation setting limits on the development of the new tools.

The seven companies — OpenAI, Google, Meta, Amazon, Microsoft, Inflection and Anthropic — pledged to permit independent security testing of their AI systems before they’re released to the public and to share information with the government about the safety of the technology, among other vows meant to make the field more transparent amid an arms race to capitalize on tools that allow users to create videos, photos and text with ease, the White House announced on Friday. Absent is any kind of reporting regime or timeline that could legally bind the firms to their commitments.

The development comes as actors and writers are striking in a historic dual work stoppage that has essentially brought production in Hollywood to a halt and threatens to further destabilize an industry reckoning with technology that workers believe undermines their labor. Among the core issues they’re looking to address in new deals with studios and streamers are safeguards around the use of AI. On July 13, Duncan Crabtree-Ireland, SAG-AFTRA’s chief negotiator and national executive director, said that the Alliance of Motion Picture and Television Producers remains “steadfast in their commitment to devalue the work of our members” by utilizing the technology. He condemned an offer to pay background performers for one day of work in exchange for the rights to their digital likeness “for the rest of eternity with no compensation.” (The AMPTP has disputed SAG-AFTRA’s characterization of its AI offer.)

At Comic-Con on Friday, Haunted Mansion director Justin Simien said of AI, “It’s a world in which you feel like the artists can get squeezed out.”

The pledges mark early measures from the White House as it crafts an executive order and pursues legislation surrounding the development of AI. Governments across the world are scrambling to regulate the technology as leading firms continue to devour troves of data and literary and artistic works without permission to train large language models like OpenAI’s ChatGPT. After lawmakers largely failed to oversee the data privacy issues that came with the rise of Big Tech and social media, there’s pressure to address the novel issues presented by tools that threaten to displace workers across several industries. The companies are trying to shape legislation, and perhaps avoid a legal framework altogether, by coming to terms with the White House while racing to feed as much data into their AI systems as they can before the practice becomes regulated. After years of wrangling by lawmakers, there’s still no federal data privacy law limiting the reach of firms like Meta, Alphabet and Amazon, which have entrenched themselves as dominant players across numerous markets and now threaten to exercise near-total control over AI.

Under the commitments, the companies agreed to:

  • Internal and external security testing of their AI systems before their release

  • Sharing information across the industry and with governments, civil society, and academia on managing AI risks

  • Investing in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights

  • Facilitating third-party discovery and reporting of vulnerabilities in their AI systems

  • Developing robust technical mechanisms to ensure that users know when content is AI-generated, such as a watermarking system (a toy sketch of one such scheme appears after this list)

  • Publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use

  • Prioritizing research on the societal risks that AI systems can pose, including on avoiding harmful bias and discrimination, and protecting privacy

  • Developing and deploying advanced AI systems to help address society’s greatest challenges
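
The watermarking pledge is the most concrete technical item on the list, though the agreement names no scheme or standard. For illustration only, below is a minimal Python sketch of one approach researchers have proposed for statistically watermarking generated text, a so-called “green list” watermark: a hash of the previous token deterministically marks part of the vocabulary green, the generator leans toward green tokens, and a detector that knows the hashing scheme can test whether a text contains significantly more green tokens than chance would allow. Everything here (the toy vocabulary, the function names, the bias parameter) is hypothetical and reflects no signatory’s actual system.

```python
import hashlib
import math
import random

# Toy vocabulary standing in for a real model's token set.
VOCAB = [f"tok{i}" for i in range(1000)]

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    """Deterministically mark a fraction of the vocabulary 'green',
    seeded by a hash of the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def generate_watermarked(length: int, bias: float = 0.9) -> list:
    """Toy 'model': sample tokens uniformly, but with probability
    `bias` restrict each choice to the current green list."""
    rng = random.Random(0)
    tokens, prev = [], "<s>"
    for _ in range(length):
        pool = sorted(green_list(prev)) if rng.random() < bias else VOCAB
        prev = rng.choice(pool)
        tokens.append(prev)
    return tokens

def detect(tokens: list) -> float:
    """Detector: count green tokens and return a one-proportion
    z-score against the 50% expected by chance. Large positive
    values indicate the text is very likely watermarked."""
    hits, prev = 0, "<s>"
    for tok in tokens:
        if tok in green_list(prev):
            hits += 1
        prev = tok
    n = len(tokens)
    return (hits - 0.5 * n) / math.sqrt(n * 0.25)

rng = random.Random(1)
watermarked = generate_watermarked(200)
ordinary = [rng.choice(VOCAB) for _ in range(200)]
print(f"watermarked z-score: {detect(watermarked):5.1f}")  # large, roughly 12
print(f"ordinary    z-score: {detect(ordinary):5.1f}")     # near 0
```

In this sketch the watermarked sample produces a large z-score while ordinary text hovers near zero; that statistical gap is what would let a platform flag machine-generated content without visibly altering it, though real systems must also survive paraphrasing and editing.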

Some of the commitments are in the companies’ interests and represent steps they likely would’ve taken anyway, like “investing in cybersecurity and insider threat safeguards.” Others are meant to reassure lawmakers that they can maintain oversight of the technology, like “facilitating third-party discovery” and “sharing information across the industry and with governments.” Additionally, each company can interpret the language differently to serve its own interests. For example, the firms pledged to report inappropriate areas of use, but they’re unlikely to consider that scraping data across the web without permission falls under that umbrella. According to AI experts, there are no firm commitments on how to make the AI models more transparent, nor any clarity on how to meaningfully assess societal impacts or build a reporting system for them.

“History would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations,” Jim Steyer, CEO of advocacy group Common Sense Media, said in a statement to The Hollywood Reporter. He noted the “complete mis-management of social media governance” and the prioritization of “profits to such an extent that they will not hold themselves accountable.”

An official for the Federal Trade Commission, speaking on the condition of anonymity, said that firms that fail to follow through could be penalized under the FTC Act, which bars unfair methods of competition and deceptive business practices. But without a reporting regime or timeline for the agency to point to, enforcement could prove difficult.

The safeguards also appear geared more toward national security concerns than toward those of creators whose work has been pilfered to train the new tools. Of the eight points agreed upon, four concern risks posed by AI in the context of cybersecurity.

“While the commitments in the new voluntary agreement make sense within the terms of the tech industry,” says Kate Crawford, a professor at USC and author of Atlas of AI, “they fail to contend with the profound and far-reaching impacts on the cultural sector, and how the history of creativity has been used to train generative AI systems.”

Courts are currently grappling with the legality of AI companies training their systems on vast quantities of art, literary works, personal information and news articles — the majority of which is available for free online. OpenAI is facing at least four class actions over the practice. The cases will likely turn on novel questions of copyright law.

Ryan Clarkson, the lead lawyer in a proposed class action against OpenAI on behalf of millions of Internet users whose data was scraped, says that “Big Tech’s promise to police themselves isn’t enough.” He stresses, “Life-saving drugs also have the potential to do good, and yet we don’t release them worldwide until after the risks are dealt with, not before. We may not be able to put the AI genie back in the bottle, but we can send the experiment back to the lab. The White House settling for much less only underscores the need for judicial intervention.”

In a statement, Google said it is “proud to join with other leading AI companies to jointly commit to advancing responsible practices in the development of artificial intelligence.” Microsoft and OpenAI similarly said in statements they support collaboration with the government.

Simien underscored at Comic-Con, “AI doesn’t work without other human people making art that it basically steals and blends into other things. You get the feeling that if the guys at the top could make these things without these pesky artists, that they would.”
