Co-founder and CEO of Anthropic. In this wiki he matters primarily as a voice that consistently examines frontier AI through the lens of safety, control, and social impact.

Context

Amodei represents the part of the current AI debate that asks not whether models are impressive, but what can responsibly be done with their capabilities. That is exactly why his comments on defensive use, controlled access, and security coordination are so revealing.

Project Glasswing makes this posture visible: strong capabilities are not released broadly but directed toward defenders. That is less a marketing move than political and organizational positioning.

Core Ideas

  • Safety before maximal release - not every powerful capability should become generally available immediately
  • Models are infrastructure, not just products - whoever operates them bears responsibility for deployment and access
  • Defensive AI needs coordination - partnerships, access restrictions, and compute policy are part of the system itself

Connections

  • Anthropic - his company and strategic context
  • Project Glasswing - the clearest example of his current safety agenda
  • Nina Schick - outside commentator translating his signals into the broader AI discourse

Sources