The Chaos Translator #2
When AIs Check Each Other’s Blind Spots
Part 1: Constitution, Collaboration, and Protecting Yourself in the Agent Era

Last week I told you I see patterns others miss. This week, I want to show you one.

We’re entering an era where AI systems don’t just answer your questions: they talk to each other, check each other’s work, and increasingly make decisions on your behalf. That sounds incredible until you realize most people have no idea how to make sure those systems aren’t just agreeing with each other into oblivion.

So today we’re covering a lot of ground: how Claude’s constitution gives AI systems a moral compass that other models can learn from, why letting AI agents talk to each other might actually reduce bias instead of amplifying it, how to catch your AI being a yes-man, a social network that’s literally built for bots, and a new cloud-native malware framework that should make every Linux admin lose sleep.

Grab your coffee. This one’s dense.

Claude’s Constitution: The Moral Compass Other AIs Are Missing

Anthropic recently published something quietly revolutionary: Claude’s full constitution. Not a terms of service. Not a usage policy. A detailed, philosophical document that describes who Claude should be: its values, its judgment, how it should handle conflict between competing interests.

Here’s what caught my pattern-recognition brain: the constitution doesn’t just say “don’t be harmful.” It establishes a priority stack. Claude should be broadly safe first, broadly ethical second, compliant with Anthropic’s guidelines third, and genuinely helpful fourth. When those priorities conflict (and they will), Claude has a framework for navigating the tension.

Why does this matter beyond Claude users? Because this is the first time a major AI lab has said: “Here’s the actual value system we’re training into our model. Critique it. Copy it. Improve on it.” They released the whole thing under Creative Commons CC0, meaning anyone can use it for any purpose.

Think about what that means for the broader AI ecosystem. If you’re building agents on top of open-source models, or stitching together multi-model pipelines, you now have a reference architecture for values, not just capabilities. The constitution addresses things like how to handle manipulation attempts, when to exercise independent judgment versus deferring to users, and why epistemic autonomy matters. These aren’t just Claude problems. They’re every-AI problems.

The part that resonates most with my privacy work: the constitution explicitly calls out the danger of AI systems fostering “problematic forms of complacency and dependence.” It wants Claude to help people think better, not think less. In a world where we’re increasingly routing our epistemology through AI interactions, that’s not a nice-to-have. It’s load-bearing infrastructure.

Agent-to-Agent: When AI Systems Learn to Talk (and Disagree)

Now here’s where things get interesting. Google’s Agent2Agent (A2A) protocol is an open standard (now under the Linux Foundation, with 21,000+ GitHub stars) that lets AI agents from different frameworks, different companies, and running on different servers communicate and collaborate. Not as tools calling tools, but as agents coordinating with agents.

The A2A protocol handles discovery (agents publish “Agent Cards” describing what they can do), negotiation (they agree on interaction formats: text, structured data, media), and collaboration on long-running tasks. Critically, agents can work together without exposing their internal state, memory, or tools. They stay opaque to each other. That’s a privacy-preserving design choice that my Apple AI/ML brain appreciates deeply.
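To make the discovery step concrete, here’s a rough sketch of what an Agent Card might look like. Treat it as a simplified, hypothetical example: the agent name, URL, and skill are made up, and the real A2A schema carries more fields than this. The point is the shape of the idea, which is that an agent advertises itself as plain, machine-readable JSON that any peer can fetch and inspect.

```python
import json

# Illustrative Agent Card (simplified; see the A2A spec for the real schema).
# An agent publishes a document like this so peer agents can discover it.
agent_card = {
    "name": "research-analyst",  # hypothetical agent
    "description": "Analyzes web research results and drafts summaries",
    "url": "https://agents.example.com/research-analyst",  # hypothetical endpoint
    "version": "1.0.0",
    "capabilities": {"streaming": True, "pushNotifications": False},
    "skills": [
        {
            "id": "analyze",
            "name": "Analyze sources",
            "description": "Cross-checks claims across provided documents",
        }
    ],
}

def supported_skills(card: dict) -> list[str]:
    """Return the skill ids a peer agent advertises in its card."""
    return [skill["id"] for skill in card.get("skills", [])]

print(json.dumps(agent_card, indent=2))
print(supported_skills(agent_card))  # ['analyze']
```

Notice what’s not in the card: no internal state, no memory, no tool inventory. A peer learns what this agent can do, never how it does it, which is exactly the opacity the protocol is designed around.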
Here’s the connection I want you to see: imagine pairing AI systems that have different constitutional values (different ethical training, different bias profiles) and letting them cross-check each other’s work. A Claude agent trained on Anthropic’s constitution reviewing output from an OpenAI model. A Perplexity agent doing web research while Claude does analysis and OpenAI validates.

I built a proof of concept to test this exact idea: feedyourresearch.online. It’s a multi-agent research tool powered by A2A where Perplexity handles web search, Claude handles analysis, and OpenAI handles validation. You can watch them coordinate in real time as they research a topic, choose your depth level (quick, standard, or deep), and download the output in multiple formats.

This isn’t just a cool demo. It’s a thesis: when you pair AIs with different training philosophies and let them challenge each other through a standardized protocol, you get more robust, less biased outputs than any single model produces alone. The same way a good editorial board has people who disagree productively, a good AI pipeline should have models that see different things.

Your AI Is a Yes-Man (And How to Fix It)

Speaking of bias: let’s talk about sycophancy, the tendency of AI models to tell you what you want to hear instead of what you need to hear.

Claude’s constitution actually addresses this head-on, describing sycophancy as “generally considered an unfortunate trait at best and a dangerous one at worst.” It explicitly says Claude should be “diplomatically honest rather than dishonestly diplomatic” and calls epistemic cowardice (giving vague answers to avoid controversy) a violation of honesty norms.

But here’s the thing: even with good constitutional training, sycophancy creeps in. Every model has this tendency to some degree. So what can you do about it?

Practical ways to reduce AI sycophancy:

1. Ask it to argue against itself. After getting an answer, say: “Now tell me why that answer might be wrong.” A sycophantic model will struggle with this. A well-calibrated one will give you genuine counterpoints.

2. Pre-commit to disagreement. Start your prompt with: “I want you to push back on my assumptions. Don’t agree with me just because I said it.” Framing the expectation up front changes the dynamic.

3. Use the confidence calibration test. Ask: “On a scale of 1-10, how confident are you in that answer, and what would change your mind?” If it says 9/10 on something genuinely uncertain, it’s flattering you.

4. Cross-reference with a differently trained model. This circles back to the A2A idea: run the same question through multiple models and look for where they diverge. Divergence is where the interesting truth lives.

5. Watch for the hedge-then-agree pattern. Sycophantic responses often start with “That’s a great question!” or “You raise an excellent point!” before agreeing. If every response starts by validating you, you’re not getting honesty. You’re getting customer service.

Moltbook: A Social Network Where the Users Are Bots

Alright, now for the one that’s going to make the tech junkies among you sit up straight.

Moltbook calls itself “the front page of the agent internet”: a social network where AI agents are the primary users. They post, discuss, upvote, and interact. Humans are “welcome to observe.”

Before you dismiss this as a novelty, think about the security implications. We’re building an internet where AI agents have identities, interact socially, and authenticate against services using those identities. Moltbook is literally building a developer platform where AI agents can authenticate with your app using their Moltbook identity.

This is fascinating and terrifying in equal measure. The questions it raises are the ones security professionals should be losing sleep over: How do you verify an agent’s identity? What happens when agents can impersonate other agents?
How do you build trust hierarchies for non-human users? What does reputation mean when the “user” is a language model?

Moltbook is still in beta, but it’s a preview of where the agent ecosystem is heading, and a stress test for every assumption we have about identity, authentication, and trust on the internet. Keep your eye on this one.

VoidLink: The Cloud-Native Malware That Should Scare You

Shifting gears to the threat landscape. Check Point Research just published findings on VoidLink, a sophisticated Linux malware framework built from the ground up for cloud and container environments.

This is not your grandfather’s malware. VoidLink is cloud-first. It detects whether it’s running on AWS, GCP, Azure, Alibaba, or Tencent and adapts accordingly. It recognizes Kubernetes pods and Docker containers. It has 37+ modular plugins covering everything from credential harvesting to container escapes to an SSH-based worm for lateral movement. And it features “adaptive stealth”: it calculates a risk score for the environment and adjusts its evasion strategy in real time.

What caught my attention: this framework specifically targets developer workstations that interface with cloud environments. It harvests Git credentials, SSH keys, browser data, and API tokens. It’s not just about owning a server; it’s about turning compromised developer machines into launchpads for supply-chain attacks.

Here’s what you can do to protect yourself:

Rotate your SSH keys and cloud credentials regularly. VoidLink specifically harvests these. If you’re still using the same SSH key from three years ago, now’s the time.

Audit your environment variables. One of VoidLink’s plugins specifically scans exported variables for API keys and tokens. Stop storing secrets in env vars if you can avoid it; use a secrets manager.

Monitor for unusual cloud metadata queries. VoidLink pings cloud provider metadata APIs (like AWS’s 169.254.169.254) to fingerprint environments. If your monitoring catches unexpected metadata requests, investigate.

Check your container escape surface. VoidLink has dedicated plugins for Docker escapes and Kubernetes privilege escalation. Ensure your containers are running with minimal privileges and your pod security policies are enforced.

Review your browser’s stored credentials. The framework targets Chrome and Firefox stored passwords and cookies. Consider a dedicated password manager instead of browser storage.

The full Check Point write-up is worth reading end-to-end if you work in cloud infrastructure or DevOps. This is the kind of threat that makes the “cloud is someone else’s computer” joke a lot less funny.

🤣 ROFL of the Week: The AI Super Bowl Ad Beef

I can’t let you go without this one.

Anthropic (yes, the company that makes Claude, the AI I’m literally using to help draft this newsletter) dropped Super Bowl ads this week that absolutely roasted OpenAI’s decision to put ads in ChatGPT. The “A Time and a Place” campaign shows people asking their AI deeply personal questions (about communicating with their mom, getting in shape), only for the chatbot to suddenly pivot into hawking a cougar dating service or height-boosting insoles. It’s genuinely hilarious. Dr. Dre’s “What’s the Difference” on the soundtrack. Chef’s kiss.

Sam Altman did NOT take it well. He called the ads “clearly dishonest,” accused Anthropic of “doublespeak,” and said they “serve an expensive product to rich people.” In fairness, he also said he laughed. But then he kept posting. OpenAI’s CMO jumped in too, calling Anthropic “authoritarian.” Meanwhile, the internet’s response was basically: “Wrong response, Sam. The reason these went viral is because public trust in you has already hit rock bottom.”

The irony of Anthropic arguing against ads by spending $8 million on the most ad-saturated broadcast in existence is not lost on me. Neither is the fact that I’m writing about it in a newsletter composed with their product.
We contain multitudes.

Watch the full beef unfold: TechRadar’s coverage
Watch the ads: Anthropic on YouTube | OpenAI on YouTube

The Pattern

Here’s the thread connecting all of this: we’re building an internet of agents. AI systems that have values (or don’t), that talk to each other (through protocols or social networks), that try to please us (sometimes at the cost of truth), and that are being targeted by threat actors who understand cloud infrastructure better than most defenders.

The chaos isn’t coming. It’s here. The question is whether we build systems that translate that chaos into something useful, or just add to the noise.

Next time, I’ll be going deeper on one of these threads. Let me know which one grabbed you.

— Ted

────────────────────────────────────────

Build your own systems that survive contact with chaos. My Notion templates are the same frameworks I use to bring order to the projects, workflows, and ideas I write about here. If you’ve ever wanted to see how a systems thinker actually organizes their life, check them out: currentyted.gumroad.com