The Adolescence of Technology
Futures Thinking · Dario Amodei · January 1, 2026


Tags: ai capability growth, governance lag, futures thinking, power concentration, ai adoption

Summary

Anthropic's CEO frames the current AI moment as a 'technological adolescence' — a rite of passage where the danger isn't AI failing, but AI succeeding before our social, political, and safety systems can absorb what that success produces. The essay maps four risk categories: AI autonomy failures, misuse for mass destruction, concentration of power, and labor disruption. His through-line: doomerism and uncritical optimism are equally dangerous, and as of 2026 we are considerably closer to real risk than we were in 2023.



Second Order

The most important line in this essay isn't about AI itself. It's that 'we are considerably closer to real danger in 2026 than we were in 2023,' written by the CEO of a leading AI lab who continues to deploy more capable systems. That's not a contradiction; it's a signal. The practitioners closest to the frontier are shifting from theoretical risk mapping to operational urgency. For organizations in 'AI adoption mode,' this reframes the stakes: the risk isn't that AI fails to deliver, but that it delivers before the oversight, governance, and human judgment structures are ready to handle what delivery produces.

Third Order

The institutions best positioned to survive technological adolescence won't be the ones with the most capable AI; they'll be the ones that built governance reflexes before they needed them. The third-order consequence of an industry-wide prioritization of deployment over governance is that the frameworks arrive late, get written by people distant from the problem, and end up calibrated to the last crisis rather than the next one. The organizations that treated safety and strategy as the same conversation, not competing priorities, will be the ones that set the terms for everyone else.