agenticPunk

This Week in Agentic (W14)

April 16, 2026 | Roman Zenner

Context windows are the new RAM — and Anthropic proved it by accident. Plus: Google throttles its own agent hype, Shopify quietly builds the smarter stack, and the German podcast scene writes off agentic commerce. W14.

tl;dr

  • Anthropic's Claude Code leak reveals a three-tier memory architecture — the blueprint nobody was supposed to see.
  • Google has to throttle its own internal coding agent. Too much demand. No joke.
  • Shopify is building the agentic stack from the bottom up. Not discovery — production.

Anthropic's Leak: The blueprint nobody was supposed to see

A forgotten .npmignore line. A 59.8 MB source map in the npm registry. 512,000 lines of TypeScript, wide open. The second leak within a week. What the code reveals about agent architecture — three-tier memory hierarchies, "dreaming" processes, always-on daemons — is more instructive than any keynote. We dedicated a separate article to the topic.
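The mechanism behind the leak is ordinary packaging hygiene, and it is checkable before every release. A hedged sketch using generic npm tooling, not Anthropic's actual configuration:

```shell
# List exactly what a publish would ship. A 59.8 MB .map file is hard to miss here.
npm pack --dry-run

# One .npmignore line keeps bundled source maps out of the tarball:
echo "*.map" >> .npmignore
```

The stricter alternative is an allowlist via the `files` field in `package.json`: only what is explicitly listed gets published, so a forgotten ignore rule can't leak anything.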

Google: Too much agent, too little control

Google's internal AI coding agent "Agent Smith" became so popular that access had to be restricted. Not for security reasons — demand. The agent plans workflows autonomously, pulls documents from employee profiles, integrates into internal chat. Sergey Brin at the latest town hall: AI agents are the focus for the year. And AI usage now factors into performance reviews.

Shopify Tinker: The stack from below

Tinker consolidates over 100 AI tools into a free app — logos, product photos, social-media videos, 360-degree views. Models from OpenAI, Google, and Anthropic under the hood, organized by output rather than by model name. The merchant describes what they need. Tinker handles the prompting in the background.

The impact is in the detail: one merchant generated 150 brand images in the first month. Professional product photography in the US costs $50 per shot. For a brand still building itself, that's the difference between iteration and standstill.
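"Organized by output rather than by model name" is an architecture decision, and it fits in a few lines. A hypothetical TypeScript sketch — Tinker's internals are not public, the providers come from the article, and every model name and assignment below is invented:

```typescript
// Hypothetical sketch of output-based routing. Provider/model pairings
// are placeholders, not Tinker's real backend table.
type OutputKind = "logo" | "product-photo" | "social-video" | "spin-360";

interface Route { provider: string; model: string }

// Assumption: a static table maps merchant-facing outputs to backend models.
const routes: Record<OutputKind, Route> = {
  "logo":          { provider: "openai",    model: "placeholder-image-model" },
  "product-photo": { provider: "google",    model: "placeholder-image-model" },
  "social-video":  { provider: "openai",    model: "placeholder-video-model" },
  "spin-360":      { provider: "anthropic", model: "placeholder-3d-model" },
};

// The merchant supplies a plain-language description; the app picks the
// model and writes the actual prompt, so no model name reaches the user.
function routeRequest(kind: OutputKind, description: string) {
  const route = routes[kind];
  const prompt = `Generate a ${kind.replace("-", " ")}: ${description}`;
  return { route, prompt };
}
```

The point of the table is the indirection: swap a model behind an output type and no merchant-facing surface changes.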

Quick hits

  • OpenAI is extending the Responses API as a foundation for autonomous agents: shell tool, execution loop, container workspace, context compaction. The infrastructure is getting serious.
  • Arm claims agentic AI needs new CPUs. Intel's datacenter chief disagrees. The hardware debate has started — and it's going to get expensive.
  • Ollama brings local language models via MLX to Apple Silicon — 1,810 tokens per second on M5 chips. Faster than many APIs. When coding agents run locally at cloud speed, the API-economy math changes.
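Of the quick hits, context compaction is the most transferable idea. A minimal sketch of the general technique — my own illustration, not the Responses API's actual mechanics: when the transcript outgrows a token budget, fold the oldest turns into a single summary so recent context survives verbatim.

```typescript
// Generic context-compaction sketch; budgets and the 4-chars-per-token
// heuristic are illustrative assumptions.
interface Turn { role: "user" | "assistant" | "summary"; text: string }

// Rough token proxy: ~4 characters per token.
const approxTokens = (t: Turn) => Math.ceil(t.text.length / 4);

// If the transcript exceeds the budget, keep the newest turns that fit in
// half the budget and replace everything older with one summary turn.
function compact(
  turns: Turn[],
  budget: number,
  summarize: (old: Turn[]) => string,
): Turn[] {
  const total = turns.reduce((n, t) => n + approxTokens(t), 0);
  if (total <= budget) return turns; // nothing to do

  const kept: Turn[] = [];
  let used = 0;
  for (let i = turns.length - 1; i >= 0; i--) {
    const cost = approxTokens(turns[i]);
    if (used + cost > budget / 2) {
      // Everything from here back gets folded into a single summary.
      const old = turns.slice(0, i + 1);
      return [{ role: "summary", text: summarize(old) }, ...kept];
    }
    used += cost;
    kept.unshift(turns[i]);
  }
  return kept;
}
```

In a real agent loop, `summarize` would itself be a model call; the structural trick is only that compaction happens inside the loop, before each step, rather than as a one-off truncation.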

An LLM researched and wrote. A human read, cut, and approved.