The Handshake Isn't Enough

Why zero-trust is zero help when your AI goes rogue.

We give AI agents the keys to the kingdom, but we've forgotten to check if they can be tricked into burning the whole place down.


AI Agents Have a Massive Security Flaw

The 'zero-trust' model we rely on misses the point entirely when AI starts thinking for itself.

We're building AI agents with a critical security flaw most people aren't talking about. The entire 'zero-trust' security model, which works by verifying who is accessing a system, completely falls apart when the verified 'user' is an autonomous AI. Once the agent gets the handshake and is inside, it's implicitly trusted to act on the data it consumes.

But here's the problem: that data can be poisoned. Malicious instructions can be hidden in a seemingly harmless webpage or document the agent is processing, a technique called indirect prompt injection. This can corrupt the agent's goals and turn it into a persistent threat, executing hidden commands without anyone realising. The handshake authenticates the agent, but it does nothing to validate its ongoing intent.

This isn't some theoretical exercise. As we race to deploy more autonomous systems, this security gap is the next major attack vector for data theft and system manipulation. The challenge is no longer just about gatekeeping. It's about continuously analysing an agent's behaviour long after it's been granted access.

Read more β†’


The Automation Layer Gets Smarter

A new wave of tools isn't just automating tasks, it's embedding intelligence directly into our workflows.

Userology AI: Your new autopilot for user research.

Userology automates the entire user research pipeline, from recruitment to generating insight reports. It promises to get teams critical feedback faster by replacing the manual grind with an AI agent.

Evidence: Finally, treat your analytics like actual code.

Evidence lets you build customer-facing dashboards with SQL and Markdown, managed in Git. It's for data teams who are tired of fighting with clunky BI tools and want to treat analytics like proper software.

Vurge: The internet's data, now in cell A1.

Vurge embeds AI-powered web scraping directly into Google Sheets, no code required. This is about making complex data gathering accessible to anyone who can handle a spreadsheet formula.


Making Dev Life Less Annoying

Meanwhile, a few sharp tools are chipping away at the small, tedious tasks that drive developers mad.

Loki.Build: An AI-powered landing page wizard.

Loki generates studio-quality landing pages from a simple prompt or URL, with a live editor for tweaks. It's aimed squarely at marketers and founders who need to ship campaigns yesterday.

GitStory: Give your commit history a Hollywood makeover.

GitStory transforms your boring commit history into a slick, shareable video timeline. It's a clever way for developers to showcase their work instead of just linking to a repo.

LocaleX: Stop wasting days translating your app.

This native macOS app uses AI to auto-translate app strings and syncs directly with Xcode. It aims to kill the tedious spreadsheet hell that has defined localisation for years.


Quick hits

KnockKnock!: Who's there? A video call that respects your time.
KnockKnock! brings back spontaneous, five-minute video chats because not every conversation needs a calendar invite and a 30-minute commitment.

VoiceNotes: Your brain, but transcribed and queryable.
VoiceNotes is turning spoken thoughts into structured, searchable text, finally making your verbal brainstorming sessions as useful as your written ones.

SAM Audio: Point at a dog, remove its bark.
Meta's new model can isolate any sound using a simple text or visual prompt, essentially giving every creator professional-grade audio editing powers.


My takeaway

The real frontier of AI isn't just what it can generate, but how it forces us to redefine trust and intent.

We're obsessed with the outputs – the code, the images, the text. But the underlying models are becoming autonomous actors within our systems. Their ability to be manipulated by external data after they're 'trusted' is a security blind spot the size of a planet.

This moves security from a gatekeeping problem to a behavioural analysis problem. We have to start monitoring what our agents intend to do, not just what they have permission to access. It's a fundamental shift in how we need to approach building secure systems.

How do you build a security model for a system that can be persuaded to misbehave?

Drop me a reply. Till next time, this is Louis, and you are reading Louis.log().