Originally posted to SecondBest.ca; Zvi responds here.

I. Oversight of AGI labs is prudent

  1. It is in the U.S. national interest to closely monitor frontier model capabilities.
  2. You can be ambivalent about the usefulness of most forms of AI regulation and still favor oversight of the frontier labs.
  3. As a temporary measure, using compute thresholds to pick out the AGI labs for safety-testing and disclosures is as light-touch and well-targeted as it gets.
  4. The dogma that we should only regulate technologies based on “use” or “risk” may sound more market-friendly, but often results in a far broader regulatory scope than technology-specific approaches (see: the EU AI Act).
  5. Training compute is an imperfect but robust proxy for model capability, and has the immense virtue of simplicity.
  6. The use of the Defense Production Act to require disclosures from frontier labs is appropriate given the unique affordances available to the Department of Defense, and the bona fide national security risks associated with sufficiently advanced forms of AI.
  7. You can question the nearness of AGI / superintelligence / other “dual use” capabilities and still see the invocation of the DPA as prudent for the option value it provides under conditions of fundamental uncertainty.
  8. Requiring safety testing and disclosures for the outputs of $100 million-plus training runs is neither an example of regulatory capture nor a meaningful barrier to entry relative to the cost of compute (a rough cost sketch follows just below).
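
For a rough sense of scale, here is a back-of-the-envelope sketch of what a $100 million budget buys in training compute. The GPU price, throughput, and utilization figures are illustrative assumptions, not official numbers; the point is only that such a budget lands in the neighborhood of the 10^26-operation reporting threshold set under the 2023 Executive Order.

```python
# Back-of-the-envelope: what does a $100M training run buy in FLOPs?
# All hardware and price figures below are illustrative assumptions.

BUDGET_USD = 100e6       # the $100M training-run threshold discussed above
GPU_HOUR_USD = 2.0       # assumed blended cloud price per H100-hour
PEAK_FLOPS = 1.0e15      # assumed peak bf16 throughput per GPU (FLOP/s)
UTILIZATION = 0.40       # assumed model FLOPs utilization (MFU)

gpu_hours = BUDGET_USD / GPU_HOUR_USD
total_flops = gpu_hours * 3600 * PEAK_FLOPS * UTILIZATION

print(f"GPU-hours:   {gpu_hours:.2e}")      # ~5e7
print(f"Total FLOPs: {total_flops:.2e}")    # ~7e25 with these assumptions
print(f"Above 1e26 reporting threshold: {total_flops >= 1e26}")
```

Under these assumptions the answer is "not quite," which is the point: the dollar threshold and the compute threshold pick out roughly the same handful of frontier runs.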

II. Most proposed “AI regulations” are ill-conceived or premature

  1. There is a substantial premium on discretion and autonomy in government policymaking whenever events are fast moving and uncertain, as with AI.
  2. It is unwise to craft comprehensive statutory regulation at a technological inflection point, as the basic ontology of what is being regulated is in flux.
  3. The optimal policy response to AI likely combines targeted regulation with comprehensive deregulation across most sectors.
  4. Regulations codify rules, standards and processes fit for a particular mode of production and industry structure, and are liable to obsolesce in periods of rapid technological change.
  5. The benefits of deregulation come less from static efficiency gains than from the greater capacity of markets and governments to adapt to innovation.
  6. The main regulatory barriers to the commercial adoption of AI are within legacy laws and regulations, mostly not prospective AI-specific laws.
  7. The shorter the timeline to AGI, the sooner policymakers and organizations should switch focus to “bracing for impact.”
  8. The most robust forms of AI governance will involve the infrastructure and hardware layers.
  9. Existing laws and regulations are calibrated with the expectation of imperfect enforcement.
  10. To the extent AI greatly reduces monitoring and enforcement costs, the de facto stringency of all existing laws and regulations will greatly increase absent a broader liberalization.
  11. States should focus on public sector modernization and regulatory sandboxes and avoid creating an incompatible patchwork of AI safety regulations.

III. AI progress is accelerating, not plateauing

  1. The last 12 months of AI progress were the slowest they’ll be for the foreseeable future.
  2. Scaling LLMs still has a long way to go, but will not result in superintelligence on its own, as minimizing cross-entropy loss over human-generated data converges to human-level intelligence (see the sketch after this list).
  3. Exceeding human-level reasoning will require training methods beyond next-token prediction, such as reinforcement learning and self-play, that (once working) will reap immediate benefits from scale.
  4. RL-based threat models have been discounted prematurely.
  5. Future AI breakthroughs could be fairly discontinuous, particularly with respect to agents.
  6. AGI may cause a speed-up in R&D and quickly go superhuman, but is unlikely to “foom” into a god-like ASI given compute bottlenecks and the irreducibility of high-dimensional vector spaces, i.e. Ray Kurzweil is underrated.
  7. Recursive self-improvement and meta-learning may nonetheless give rise to dangerously powerful AI systems within the bounds of existing hardware.
  8. Slow take-offs eventually become hard.
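
A minimal sketch of the claim in item 2, using the standard cross-entropy decomposition (notation is mine): the pretraining objective is bounded below by the entropy of the human-generated data distribution, and is minimized by matching that distribution, not surpassing it.

```latex
% H(p_data) is the entropy of the human data distribution; D_KL >= 0 always.
\mathbb{E}_{x \sim p_{\text{data}}}\left[-\log q_\theta(x)\right]
  = H(p_{\text{data}}) + D_{\mathrm{KL}}\left(p_{\text{data}} \,\|\, q_\theta\right)
  \ge H(p_{\text{data}})
% The loss bottoms out when q_theta = p_data, i.e. when the model imitates
% the human-generated distribution; hence it "converges to human-level."
```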

IV. Open source is mostly a red herring

  1. The delta between proprietary AI models and open-source models will grow over time, even as smaller, open models become much more capable.
  2. Within the next two years, frontier models will cross capability thresholds that even many open source advocates will agree are dangerous to open source ex ante.
  3. No major open source AI model has been dangerous to date, while the benefits from open sourcing models like Llama 3 and AlphaFold are immense.
  4. True “open source” means open sourcing training data and code, not just model weights, which is essential for avoiding the spread of models with Sleeper Agents or contaminated data.
  5. The most dangerous AI models will be expensive to train and only feasible for large companies, at least initially, suggesting our focus should be on monitoring frontier capabilities.
  6. The open vs. closed source debate is mainly a debate about Meta, not deeper philosophical ideals.
  7. It is not in Meta’s shareholders’ interest to unleash an unfriendly AI into the world.
  8. Companies governed by nonprofit boards and CEOs who don’t take compensation face lower-powered incentives to guard against AI x-risk than your typical publicly traded company.
  9. Lower-tier AI risks, like from the proliferation of deepfakes, are collective action problems that will be primarily mitigated through defensive technologies and institutional adaptation.
  10. Restrictions on open source risk undermining adaptation by incidentally restricting the diffusion of defensive forms of AI.
  11. Trying to restrict access to capabilities that are widely available and / or cheap to train from scratch is pointless in a free society, and likely to do more harm than good.
  12. Nonetheless, releasing an exotic animal into the wild is a felony.

V. Accelerate vs. decelerate is a false dichotomy

  1. Decisions made in the next decade are more highly levered to shape the future of humanity than at any point in human history.
  2. You can love technology and be an “accelerationist” across virtually every domain — housing, transportation, healthcare, space commercialization, etc. — and still be concerned about future AI risks.
  3. “Accelerate vs. decelerate” imagines technology as a linear process when technological innovation is more like a search down branching paths.
  4. If the AI transition is a civilizational bottleneck (a “Great Filter”), survival likely depends more on which paths we are going down than at what speed, except insofar as speed collapses our window to shift paths.
  5. Building an AGI carries singular risks that merit being treated as a scientific endeavor, pursued with seriousness and trepidation.
  6. Tribal mood affiliations undermine epistemic rationality.
  7. e/acc and EA are two sides of the same rationalist coin: EA is rooted in Christian humanism; e/acc in Nietzschean atheism.
  8. The de facto lobby for “accelerationism” in Washington, D.C., vastly outstrips the lobby for AI safety.
  9. It genuinely isn’t obvious whether Trump or Biden is better for AI x-risk.
  10. EAs have more relationships on the Democratic side, but can work in either administration and are a tiny contingent all things considered.
  11. Libertarians, e/accs, and Christian conservatives — whatever their faults — have a far more realistic conception of AI and government than your average progressive.
  12. The more one thinks AI goes badly by default, the more one should favor a second Trump term precisely because he is so much higher variance.
  13. Steve Bannon believes the singularity is near and a serious existential risk; Janet Haven thinks AI is Web3 all over again.

VI. The AI wave is inevitable, superintelligence isn’t

  1. Building a unified superintelligence is an ideological goal, not a fait accompli.
  2. The race to build a superintelligence is driven by two or three U.S. companies with significant degrees of freedom over near-term developments, as distinguished from the inevitability of the AI transition more generally.
  3. Creating a superintelligence is inherently dangerous and destabilizing, independent of the hardness of alignment.
  4. We can use advanced AI to accelerate science, cure diseases, solve fusion, etc., without ever building a unified superintelligence.
  5. Creating an ASI is a direct threat to the sovereign.
  6. AGI labs led by childless Buddhists with alt accounts are probably more risk tolerant than is optimal.
  7. Sam Altman and Sam Bankman-Fried are more the same than different.
  8. High-functioning psychopaths demonstrate anti-social behaviors in their youth but learn to compensate in adulthood, becoming adept social manipulators with grandiose visions and a drive to “win” at all costs.
  9. Corporate malfeasance is mostly driven by bad incentives and “techniques of neutralization” — convenient excuses for overriding normative constraints, such as “If I didn’t, someone else would.”

VII. Technological transitions cause regime changes

  1. Even under best case scenarios, an intelligence explosion is likely to induce state collapse / regime change and other severe collective action problems that will be hard to adapt to in real time.
  2. Government bureaucracies are themselves highly exposed to disruption by AI, and will need “firmware-level” reforms to adapt and keep up, i.e. reforms to civil service, procurement, administrative procedure, and agency structure.
  3. Congress will need to have a degree of legislative productivity not seen since FDR.
  4. Inhibiting the diffusion of AI in the public sector through additional layers of process and oversight (such as through Biden’s OMB directive) tangibly raises the risk of systemic government failure.
  5. The rapid diffusion of AI agents with approximately human-level reasoning and planning abilities is likely sufficient to destabilize most existing U.S. institutions.
  6. Prior technological transitions in the reference class (agricultural revolution, printing press, industrialization) all featured regime changes to varying degrees.
  7. Seemingly minor technological developments can affect large-scale social dynamics in equilibrium (see: social media and the Arab Spring, or the Stirrup Thesis).

VIII. Institutional regime changes are packaged deals

  1. Governments and markets are both kinds of spontaneous orders, making the 19th and 20th century conception of liberal democratic capitalism a technologically-contingent equilibrium.
  2. Technological transitions are packaged deals, e.g. free markets and the industrial revolution went hand-in-hand with the rise of “big government” (see Tyler Cowen on The Paradox of Libertarianism).
  3. The AI-native institutions created in the wake of an intelligence explosion are unlikely to have much continuity with liberal democracy as we now know it.
  4. In steady state, maximally democratized AI could paradoxically hasten the rise of an AI Leviathan by generating irreversible negative externalities that spur demand for ubiquitous surveillance and social control.
  5. Periods of rapid technological change tend to shuffle existing public choice / political economy constraints, making politics more chaotic and less predictable.
  6. Periods of rapid technological change tend to disrupt global power balances and make hot wars more likely.
  7. Periods of rapid technological change tend to be accompanied by utopian political and religious movements that usually end badly.
  8. Explosive growth scenarios imply massive property rights violations.
  9. A significant increase in productivity growth will exacerbate Baumol’s Cost Disease and drive mass adoption of AI policing, teachers, nurses, etc. (a toy two-sector sketch follows this list).
  10. Technological unemployment is only possible in the limit where market capitalism collapses, say into a forager-style gift economy.
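
A toy two-sector illustration of the Baumol point in item 9; the growth rates are assumptions chosen for illustration. If wages track the AI-augmented sector while productivity in un-automated services stands still, the relative cost of those services balloons, which is the pressure that drives adoption of AI policing, teachers, and nurses.

```python
# Toy two-sector model of Baumol's cost disease (all rates are assumptions).
# Sector A is AI-augmented; sector B (nursing, teaching, policing) is not.
# With a shared labor market, wages follow sector A's productivity.

wage = 1.0                # common wage, normalized
productivity_a = 1.0      # output per worker-hour, AI-augmented sector
productivity_b = 1.0      # output per worker-hour, un-automated services

for year in range(10):
    productivity_a *= 1.30    # assume 30%/yr AI-driven productivity growth
    wage *= 1.30              # competitive labor market: wages keep pace

unit_cost_a = wage / productivity_a   # stays at ~1.0
unit_cost_b = wage / productivity_b   # rises ~13.8x over the decade

print(f"Unit cost, sector A: {unit_cost_a:.2f}")
print(f"Unit cost, sector B: {unit_cost_b:.2f}")
```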

IX. Dismissing AGI risks as “sci-fi” is a failure of imagination

  1. If one’s forecast of 2050 doesn’t resemble science fiction, it’s implausible.
  2. There is a massive difference between something sounding “sci-fi” and being physically unrealizable.
  3. Terminator analogies are underrated.
  4. Consciousness evolved because it serves a functional purpose and will be an inevitable feature of certain AI systems.
  5. Human consciousness is scale-dependent and not guaranteed to exist in minds that are vastly larger or less computationally bounded.
  6. Joscha Bach’s Cyber Animism is the best candidate for a post-AI metaphysics.
  7. The creation of artificial minds is more likely to lead to the demotion of humans’ moral status than to the promotion of artificial minds into moral persons.
  8. Thermodynamics may favor futures where our civilization grows and expands, but that doesn’t preclude futures dominated by unconscious replicators.
  9. Finite-time singularities are indicators of a phase transition, not a bona fide singularity (see the illustration after this list).
  10. It is an open question whether the AI phase transition will be more like the printing press or photosynthesis.
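
To unpack item 9: "finite-time singularity" is a statement about a growth model, not about reality. A standard illustration (notation assumed for this sketch): super-exponential growth formally diverges at a finite time t*, but in practice some neglected constraint (compute, energy, capital) binds before t*, so the divergence marks a change of regime rather than an actual infinity.

```latex
% Super-exponential growth with a, epsilon > 0:
\frac{dx}{dt} = a\, x^{1+\epsilon}
\quad\Longrightarrow\quad
x(t) = x_0 \left(1 - t/t^{*}\right)^{-1/\epsilon},
\qquad t^{*} = \frac{1}{\epsilon\, a\, x_0^{\epsilon}}
% x(t) blows up as t -> t*, but only because the model ignores whatever
% constraint binds first; the "singularity" flags a phase transition.
```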

X. Biology is an information technology

  1. The complexity of biology arises from processes resembling gradient descent and diffusion guided by comparatively simple reward signals and hyperparameters.
  2. Full volitional control over biology is achievable, enabling the creation of arbitrary organisms that wouldn’t normally be “evolvable.”
  3. Superintelligent humans with IQs on the order of 1,000 may be possible through genetic engineering.
  4. Indefinite life extension is a tragedy of the anticommons.
  5. There are more ways for a post-human transition to go poorly than to go well.
  6. Natural constraints are often better than man-made ones because there’s no one to hold responsible.
  7. We live in base reality, and in nature there is no such thing as plot armor.