// LATEST EXPLAINERS
Research-backed guidance for teams building with LLMs
Six placeholder cards are live now so the hub structure, links, and publishing flow are ready before the final Unit 42 article pool lands.
PROMPT INJECTIONFEATURED
5 min readHow AI Agents Get Hijacked: Unit 42's Prompt Injection Findings, Explained
Unit 42 keeps showing the same pattern: once an attacker can rewrite the model's priorities, your workflow stops behaving like your workflow. This explainer breaks down where that control flips and what a sane defense looks like.
INDIRECT INJECTIONFEATURED
4 min readIndirect Prompt Injection: The Attack Your Scanner Won't Catch
The dangerous instruction often isn't in the chat box. It's buried in the PDF, webpage, ticket, or retrieval result your agent decides to trust.
When Your AI Calls the Wrong Tool: Understanding Unsafe Tool Use
Tool access turns a bad answer into a real-world action. We unpack how weak tool-selection rules let injected instructions trigger the wrong side effect at the wrong time.
RAG Poisoning: Why Your Knowledge Base Is a Security Risk
If your model trusts retrieved context more than it should, poisoned content quietly becomes system behavior. This piece explains the control failures behind that shift.
The EU AI Act Wants Proof. Here's What That Means for Your LLM Stack.
Regulators and enterprise buyers are converging on the same expectation: if your AI system can fail in a security-relevant way, you need evidence, not claims.
AUDIT STRATEGYFEATURED
5 min readAudit vs. Scanner: Why One Finds What the Other Misses
A scanner can surface signals. An audit tells you whether those signals become risk, how serious they are, and what remediation should happen next.