- cross-posted to:
- technology@lemmit.online
Insider report details clash over one board member’s criticism in an academic paper.
Kyle Orland - 12/5/2023, 9:31 PM
I…don’t think that’s what the referenced paper was saying. First, Toner didn’t co-author the paper in her capacity as an OpenAI board member, but as a CSET director. Second, the paper wasn’t intended to prescribe behavior for private-sector tech companies, but rather to investigate “[how policymakers can] credibly reveal and assess intentions in the field of artificial intelligence” by exploring “costly signals…as a policy lever.”
The full quote:
Anthropic is being used here as an example of “private sector signaling,” which could theoretically manifest in countless ways. Nothing in the text indicates that OpenAI should have behaved in exactly the same way; rather, the example is held up as a successful contrast to OpenAI’s allegedly failed use of the GPT-4 system card as a signal of its commitment to safety.
Honestly, the paper seems really interesting to an AI layman like me, and it explores a critically important subject: empowering policymakers to make informed determinations about regulating a technology that almost everyone except the subject-matter experts themselves will *not* fully understand.