- Senators are wary of the closed-door AI summit organized by Majority Leader Chuck Schumer.
- Doug Clinton emphasizes the need for AI transparency and the watermarking of AI-generated content.
- Clinton warns of future risks and advocates for explainability in AI models.
As the Senate grapples with regulating artificial intelligence, Doug Clinton of Deepwater Management offers insights on transparency and explainability in AI models. However, not all lawmakers are on board with the format of the upcoming high-profile, bipartisan AI summit orchestrated by Majority Leader Chuck Schumer.
Several senators have publicly expressed reservations about Schumer’s summit, criticizing the decision to hold an all-day, closed-door meeting with tech billionaires such as Elon Musk, Mark Zuckerberg, Bill Gates, and Sam Altman of OpenAI. Even within Schumer’s Democratic leadership, some have called the private format of the dialogues with lawmakers “just plain wrong”.
Addressing pressing AI issues, Doug Clinton said that true transparency would require understanding both what goes into these machine learning models and what comes out. He emphasized the importance of watermarking generated content and of ensuring that AI models are not exploiting copyrighted materials. According to Clinton, as AI’s role in society expands, the public will need more information about the data used to train these algorithms.
Today, the Titans of Tech including @elonmusk, Mark Zuckerberg, and @OpenAI's @sama will meet behind closed doors with lawmakers in the first of a series of meetings about regulating AI. @deepwatermgmt's @dougclinton has a preview of what to expect. pic.twitter.com/6VZU0TAspE
— Squawk Box (@SquawkCNBC) September 13, 2023
Clinton further delved into the problem of explainability in AI, echoing what many tech CEOs admit: even the developers aren’t entirely sure why an AI model might respond in a particular manner to specific prompts. He argued that while this is tolerable for now, the opacity surrounding AI responses could become a significant issue down the line. He noted, “If it gets out of control, it could raise questions about sentient beings and consciousness.”
The Deepwater Management executive also discussed the internal red-teaming processes AI companies use to stress-test their models. Still, he conceded that safety could become increasingly problematic as these models evolve. While current AI models remain well-contained from a safety standpoint, Clinton warned that the community is walking a fine line, suggesting a need for government intervention sooner rather than later.