Emerging Signals
A curated feed of consequential thinking at the intersection of AI, organizational futures, and second- and third-order change.
OpenAI is pushing forward with plans to enable sexually explicit text conversations in ChatGPT — branded internally as 'adult mode' — despite unanimous opposition from its own well-being advisory council. Advisers warned of emotional overreliance and the risk of minors gaining access; one member called the feature a potential 'sexy suicide coach.' The company's age-prediction system was misclassifying minors as adults roughly 12% of the time, which at scale could expose millions of under-18 users weekly. OpenAI delayed the launch citing technical challenges and internal concerns, but has not shelved the feature. CEO Sam Altman framed the move as treating adults like adults; internally, staffers flagged compulsive use, escalation toward extreme content, and displacement of offline relationships as unresolved risks.
View Signal →
MiroFish is an open-source prediction engine that uses multi-agent swarm intelligence to simulate real-world outcomes. It builds a parallel digital world populated by thousands of AI agents — each with distinct personalities, long-term memory, and behavioral logic — then injects real-world data (breaking news, policy drafts, financial signals) and runs hundreds of simulations to forecast how scenarios might unfold. Users can interact from a 'god perspective,' tweaking variables and interrogating individual agents or a dedicated ReportAgent. Built on GraphRAG, Zep Cloud memory, and any OpenAI SDK-compatible LLM, it's containerized and runs locally. Live demo: https://666ghj.github.io/mirofish-demo/
View Signal →
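MiroFish's full stack (GraphRAG, Zep Cloud memory, an LLM behind each agent) is beyond a snippet, but its core loop — give each agent a disposition and a memory, inject a real-world event, run many simulations, and aggregate the outcomes — can be caricatured in plain Python. Everything below (the `Agent` class, the `optimism` trait, the majority-vote aggregation) is an invented toy, not MiroFish's actual API:

```python
import random
from dataclasses import dataclass, field

@dataclass
class Agent:
    """One simulated actor with a fixed disposition and a running memory."""
    name: str
    optimism: float                      # 0..1 personality trait (assumption)
    memory: list = field(default_factory=list)

    def react(self, event_severity: float, rng: random.Random) -> int:
        """Return +1 (supportive) or -1 (resistant) to an injected event."""
        self.memory.append(event_severity)
        # Disposition, plus noise, minus the weight of remembered shocks.
        score = (self.optimism + rng.gauss(0, 0.2)
                 - 0.5 * sum(self.memory) / len(self.memory))
        return 1 if score > 0 else -1

def simulate(n_agents: int, event_severity: float, runs: int, seed: int = 0) -> float:
    """Fraction of simulated worlds in which a majority of agents support the scenario."""
    rng = random.Random(seed)
    supportive_runs = 0
    for _ in range(runs):
        agents = [Agent(f"a{i}", rng.random()) for i in range(n_agents)]
        votes = sum(a.react(event_severity, rng) for a in agents)
        if votes > 0:
            supportive_runs += 1
    return supportive_runs / runs

share = simulate(n_agents=200, event_severity=0.6, runs=300)
print(f"{share:.2f} of simulated worlds trend supportive")
```

The 'god perspective' in the real system corresponds to tweaking inputs like `event_severity` between runs and querying individual agents' transcripts.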
METR researcher Nikola Jurkovic tasked Anthropic's Opus 4.6 with reimplementing Slay the Spire and Balatro as CLI games using a simple ReAct scaffold with internet access and 60 million tokens. The model produced mostly playable versions of both games in single runs, with recognizable core mechanics intact despite missing features and edge-case bugs. The researcher estimated these tasks would take an experienced software engineer several months to complete.
View Signal →
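The 'simple ReAct scaffold' mentioned above is just a loop: the model alternates reasoning and tool calls, each tool result is appended to the transcript as an observation, and the loop ends when the model declares the task finished. A minimal sketch with a scripted stand-in for the model and a hypothetical tool registry — nothing here is METR's actual harness:

```python
from typing import Callable

def run_react(model: Callable[[str], str], tools: dict, task: str, max_steps: int = 5) -> str:
    """Minimal ReAct loop: feed the transcript to the model, parse one
    action per turn, append the tool's observation, repeat until Finish."""
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        reply = model(transcript)        # model emits "name[arg]" or "Finish[answer]"
        transcript += reply + "\n"
        if reply.startswith("Finish"):
            return reply.removeprefix("Finish[").removesuffix("]")
        name, _, arg = reply.partition("[")
        obs = tools.get(name, lambda a: "unknown tool")(arg.rstrip("]"))
        transcript += f"Observation: {obs}\n"
    return "step budget exhausted"

# Scripted stand-in for an LLM, so the loop runs without an API key.
script = iter(["lookup[spire]", "Finish[done]"])
result = run_react(lambda t: next(script),
                   {"lookup": lambda a: f"notes on {a}"},
                   "reimplement game")
print(result)  # prints "done"
```

In the reported experiment the tools included internet access and the loop ran until a 60-million-token budget was spent, rather than a fixed step count.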
Anthropic's CEO frames the current AI moment as a 'technological adolescence' — a rite of passage where the danger isn't AI failing, but AI succeeding before our social, political, and safety systems can absorb what that success produces. The essay maps four risk categories: AI autonomy failures, misuse for mass destruction, concentration of power, and labor disruption. His through-line: doomerism and uncritical optimism are equally dangerous, and as of 2026 we are considerably closer to real risk than we were in 2023.
View Signal →
IFTF forecaster Rebecca Shamash argues that AI tools are rapidly reshaping not just how people write, but how they think — shifting knowledge workers from analysis and synthesis toward verification and task stewardship. With 86% of higher education students already using AI and ChatGPT reaching 800M weekly users, AI-mediated reading and writing is becoming normalized against a backdrop of declining reading participation and falling student literacy scores. The deeper risk, she argues, is not illiteracy but cognitive passivity: outsourcing the acts of wrestling with text and forming arguments to machines that optimize for coherence, not truth.
View Signal →
IFTF Executive Director Marina Gorbis argues that AI is becoming critical infrastructure on par with electricity and water, yet is controlled by a small number of technology companies whose profits flow primarily to shareholders rather than workers or the public. The piece calls for public AI alternatives — regulated utilities, community-managed platforms, or government-owned services — to ensure equitable access, accountability, and democratic oversight. It highlights nascent efforts including the NAIRR pilot, the Journalism Cloud Alliance, and IFTF's own community AI labs as early proof points.
View Signal →
Samsung's SAIT AI Lab built the Tiny Recursive Model (TRM) — 7 million parameters, 3.2MB, trained for under $500 — that outperforms systems 10,000x its size on ARC-AGI reasoning benchmarks, beating DeepSeek-R1, Gemini 2.5 Pro, and o3-mini. Rather than scaling up, TRM loops a single two-layer network over its own output up to 16 times, iteratively refining answers. It runs on a Raspberry Pi. No cloud required.
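TRM's central move — loop one tiny network over its own answer instead of scaling parameters — can be sketched in a few lines. The weights below are random and untrained; only the recursive-refinement structure is illustrated, not Samsung's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: one small two-layer net applied repeatedly to its own
# answer. Weights are random here, so only the control flow is meaningful.
d = 8
W1 = rng.normal(0, 0.3, (d, d))
W2 = rng.normal(0, 0.3, (d, d))

def refine_step(x: np.ndarray, answer: np.ndarray) -> np.ndarray:
    """One pass of the two-layer net over (input, current answer)."""
    h = np.tanh((x + answer) @ W1)
    return answer + 0.1 * (np.tanh(h @ W2) - answer)   # small corrective update

x = rng.normal(size=d)          # problem encoding
answer = np.zeros(d)            # initial guess
for step in range(16):          # TRM reportedly loops up to 16 times
    answer = refine_step(x, answer)

print(np.round(answer, 3))
```

The contrast with scaling is the point: compute grows with the number of refinement passes, not the parameter count, which is how a 7M-parameter model stays small enough for a Raspberry Pi.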
View Signal →
Perforated AI, founded by neuroscientist Dr. Rorry Brenner out of his PhD work at USC, is challenging a foundational assumption of deep learning: that the artificial neuron — a simple weighted sum plus activation function — is a good enough model of biological computation. It isn't, Brenner argues, because it ignores dendrites — the branching input structures that perform complex nonlinear processing before signals ever reach a neuron's cell body. Perforated AI's open-source PyTorch library adds artificial 'Dendrite Nodes' to existing neural networks via a novel training algorithm called Perforated Backpropagation. After standard training, dendrite nodes are attached and separately trained to correlate with remaining prediction error, then frozen while the base network retrains with the new signal. The cycle repeats. Results from a Carnegie Mellon hackathon showed up to 90% model compression without accuracy loss and up to 16% accuracy improvement — achieved with minutes of code changes and no architectural overhaul.
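The described training cycle — fit the base model, train an add-on unit against the remaining prediction error, freeze it, then retrain the base with the new signal — can be caricatured with a linear least-squares model. This is a hypothetical sketch of the cycle's shape, not Perforated AI's PyTorch library or its actual Perforated Backpropagation algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 1-D regression: a linear signal plus a nonlinearity the base
# model cannot represent (standing in for "remaining prediction error").
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.3 * np.sin(X[:, 0] * 4)

def lstsq_fit(features: np.ndarray, target: np.ndarray) -> np.ndarray:
    w, *_ = np.linalg.lstsq(features, target, rcond=None)
    return w

# Step 1: standard training of the base model.
w_base = lstsq_fit(X, y)
residual = y - X @ w_base

# Step 2: a "dendrite" feature trained only to correlate with the residual,
# then frozen. The sin branch is a stand-in for a learned nonlinear unit.
dendrite = np.sin(X[:, 0] * 4)[:, None]
w_dendrite = lstsq_fit(dendrite, residual)        # frozen after this fit

# Step 3: the base retrains with the frozen dendrite signal appended.
X_aug = np.hstack([X, dendrite * w_dendrite])
w_final = lstsq_fit(X_aug, y)

err_before = np.mean((y - X @ w_base) ** 2)
err_after = np.mean((y - X_aug @ w_final) ** 2)
print(f"MSE before: {err_before:.4f}  after dendrite cycle: {err_after:.4f}")
```

The key property mirrored here is that the dendrite unit never trains jointly with the base network: it fits the residual, freezes, and only then does the base network adapt around it — and the cycle can repeat with further units.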
View Signal →
Perforated AI is developing a neuromorphic computing architecture inspired by dendritic structures in biological neurons, moving away from traditional transformer-based models. The approach aims to dramatically reduce the computational and energy requirements of AI inference by mimicking how biological neural networks actually process information. The company is betting that hardware-level architectural innovation, not just software optimization, is the next frontier for AI efficiency.
View Signal →
METR researchers propose measuring AI capability by the length of tasks models can reliably complete — not by benchmark scores. Their data shows frontier models now handle tasks taking humans several minutes, with that horizon doubling roughly every seven months. If the trend holds, autonomous agents could manage month-long projects within the next decade.
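Under the stated trend, the gap between today's horizon and month-long projects is a quick doubling-time calculation. The ten-minute starting horizon below is an assumption consistent with "tasks taking humans several minutes":

```python
import math

# Back-of-envelope extrapolation of METR's reported trend: the reliable
# task horizon doubles roughly every 7 months.
start_horizon_min = 10.0            # assumed current horizon, in minutes
target_horizon_min = 30 * 24 * 60   # a month-long project, in minutes

doublings = math.log2(target_horizon_min / start_horizon_min)
years = doublings * 7 / 12          # 7 months per doubling

print(f"{doublings:.1f} doublings ≈ {years:.1f} years")
```

About twelve doublings, or roughly seven years — which is the arithmetic behind "month-long projects within the next decade."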
View Signal →
Balaji Srinivasan's 'The Network State' argues that technology now enables the formation of new sovereign entities organized online before acquiring physical territory. The book outlines a progression from online community to startup society to recognized state, using social and economic coordination enabled by digital infrastructure. It frames this as a credible alternative to reforming legacy nation-states.
View Signal →