Discussion about this post

PauseAI

Very true, we cannot rely on vague definitions alone. That's why our proposal has clear red lines: it bans training of AI models that 1) are larger than 10^12 parameters, 2) use more than 10^25 FLOPs of training compute, or 3) are expected to exceed a score of 86% on the MMLU benchmark.
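
To make the criteria concrete, here is a minimal sketch of how the three red lines could be checked mechanically. The thresholds are the ones from our proposal above; the `TrainingRun` type and its field names are hypothetical illustrations, not part of the proposal itself:

```python
# A sketch only: thresholds are from the proposal; TrainingRun and its
# field names are hypothetical, chosen just to illustrate the checks.
from dataclasses import dataclass

PARAM_LIMIT = 10**12   # red line 1: max model parameters
FLOP_LIMIT = 10**25    # red line 2: max training compute (FLOPs)
MMLU_LIMIT = 0.86      # red line 3: max expected MMLU score

@dataclass
class TrainingRun:
    parameters: int        # planned model size
    training_flops: float  # planned training compute
    expected_mmlu: float   # forecast benchmark score, 0.0 to 1.0

def violated_red_lines(run: TrainingRun) -> list[str]:
    """Return the list of red lines a proposed run would cross."""
    violations = []
    if run.parameters > PARAM_LIMIT:
        violations.append("parameters > 10^12")
    if run.training_flops > FLOP_LIMIT:
        violations.append("training FLOPs > 10^25")
    if run.expected_mmlu > MMLU_LIMIT:
        violations.append("expected MMLU > 86%")
    return violations

# Example: a frontier-scale run trips all three lines.
run = TrainingRun(parameters=2 * 10**12,
                  training_flops=5 * 10**25,
                  expected_mmlu=0.90)
print(violated_red_lines(run))
```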

These lines might be overly strict (maybe ASI will require 100x this scale; we do not know yet), but they could also be too permissive (maybe algorithmic progress will make ASI trainable on a desktop computer within a couple of years).

Finding the perfect red lines is a valuable endeavor, but inherently difficult. Setting them too permissively could lead to human extinction, so we should err on the side of caution.

Harold Godsoe

Great article, thank you for all your writing. Working in Tokyo, I have what might be a useful counterexample to this take: *"This is a painfully common dynamic in corporate law. For example, multinationals routinely practice tax avoidance right up to the legal line of tax evasion. They can prove with audited books that what they’re doing is legal, even if it clearly undermines the spirit of the law."*

Experience adjacent to the Japan National Tax Authority has taught me that it is far more likely than Western tax authorities to sanction companies that walk right up to the line with "tax mitigation". It does this through a combination of ambiguous lines (rather than bright lines) and unilateral decisions about when a violation of the spirit of the law counts as an actual violation of the law, regardless of the law's text.

The downside of this approach would be the potential for corruption and a lack of stability. But this is entirely mitigated by (1) the NTA's professionalism and (2) the NTA's general willingness to publish Q&A explainers (with questions posed by corporate counsel) on its approaches to tax - none of which are binding on NTA decisions if they contain loopholes exploited in bad faith, but all of which are respected and taken seriously for the effort put into them.

A hypothetical AI Authority that took its idea of authority seriously could act in similar ways: compassionate to the concerns of AI companies in understanding the rules, but ruthless in upholding the spirit of an AGI prohibition.

