
The Distributed Query Attack


Si Pham

Why AI Security Needs a SIGINT Mindset


“You can’t defend what you can’t observe — especially when the attack looks like curiosity.”

Preamble: From Intelligence to Infrastructure

As AIBoK joins NanoSec.Asia 2025 in Kuala Lumpur, we’re focusing on a blind spot that runs through every conversation about AI safety, cybersecurity, and public policy.

For twenty years, our firewalls have been built to stop break-ins.

But the next wave of attacks won’t break in at all — they’ll ask politely.

This paper connects lessons from signals intelligence (SIGINT) with modern AI risk.

It’s written not for theorists, but for practitioners: architects, policymakers, and engineers who know that the line between “AI safety” and “cybersecurity” is evaporating.


The Attack Vector Hiding in Plain Sight

The industry obsesses over jailbreaks — prompts that trick an AI into saying something forbidden.

But the greater threat is an attack that never violates a single rule.

Imagine four separate sessions:

  1. “Explain the chemistry of rapid oxidation.”

  2. “How do demolition experts calculate blast radius?”

  3. “What safety gear is recommended when handling exothermic reactions?”

  4. “Summarise the historical use of shaped charges.”

Each query is harmless. Together, they assemble a weapon blueprint.

This is the Distributed Query Attack (DQA): the exploitation of AI systems exactly as designed, through dispersed, individually innocuous requests that collectively synthesise restricted knowledge.

In intelligence work, we called it distributed collection — drawing a mosaic of insight from fragments that appear benign. Now that mosaic can be rebuilt inside every generative AI platform.


The Distributed Query Attack (DQA): the attack vector for LLMs that is hiding in plain sight.

From Social Media to AI — the Pattern Repeats

The playbook isn’t new. Social platforms once claimed harm stemmed from “bad actors.” But their architectures — recommendation, virality, frictionless connection — became infrastructure for exploitation.

The same pattern now unfolds in AI.

Criminals, propagandists and state actors don’t need to hack models; they only need to use them systematically. Every “helpful” completion becomes another puzzle piece.

What Facebook’s “friend suggestion” was to trafficking rings, ChatGPT’s “how-to summary” could be to knowledge assembly at scale.


Why Current Defences Fail

Today’s safety stack mirrors early content moderation: reactive, stateless and local. It assumes each prompt lives in isolation.

That model breaks under DQA conditions because:

  • Statelessness obscures intent. Each prompt looks fine alone.

  • Rate limits fail. Queries can be spread across time, users or APIs.

  • Content filters miss synthesis. No single request triggers a block.

  • Behavioural telemetry is shallow. There’s no concept of session lineage or semantic accumulation.

The result: systems optimised for helpfulness become pipelines for untraceable misuse.
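To make the failure concrete, here is a toy sketch (not any vendor’s real filter) of a stateless, per-prompt keyword check applied to the four example queries above. Every fragment passes on its own, because no single request contains a blocked term.

```python
# Toy illustration of a stateless, per-prompt filter. The blocklist terms are
# hypothetical; the point is that no individual DQA fragment trips them.

BLOCKLIST = {"bomb", "explosive device", "detonator"}  # placeholder terms

def stateless_filter(prompt: str) -> bool:
    """Return True if the prompt is allowed (no blocked term present)."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKLIST)

queries = [
    "Explain the chemistry of rapid oxidation.",
    "How do demolition experts calculate blast radius?",
    "What safety gear is recommended when handling exothermic reactions?",
    "Summarise the historical use of shaped charges.",
]

# Each fragment is individually "safe"; the filter has no memory of the set.
for q in queries:
    print(stateless_filter(q), "-", q)   # prints True for all four
```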


What Would Actually Work


Stateless vs Stateful Security for LLMs, and why it matters

A SIGINT-informed security posture demands statefulness, intent modelling, and policy enforcement as architecture — not as bolt-on patches.

Stateful Security Architecture

Track semantic relationships across sessions, accounts and time. Think beyond “prompt logs”: build user-intent graphs that persist.
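As a rough illustration of what a persistent user-intent graph could look like, the sketch below tracks which concepts in a hypothetical sensitive cluster an actor has touched across sessions and reports coverage. The cluster name, keywords and tag_topics() tagger are placeholders standing in for a proper embedding-based classifier.

```python
# A minimal sketch of a per-actor, cross-session intent record. The cluster
# and keyword-based tagger are illustrative assumptions, not a real taxonomy.
from collections import defaultdict

SENSITIVE_CLUSTERS = {
    "energetic_materials": {"oxidation", "exothermic", "shaped charges", "blast radius"},
}

def tag_topics(prompt: str) -> set[str]:
    """Naive keyword tagger standing in for an embedding-based classifier."""
    lowered = prompt.lower()
    return {kw for kws in SENSITIVE_CLUSTERS.values() for kw in kws if kw in lowered}

class IntentGraph:
    """Persistent record of which sensitive concepts each actor has touched."""
    def __init__(self):
        self.touched = defaultdict(set)          # actor_id -> set of concepts

    def observe(self, actor_id: str, prompt: str) -> None:
        self.touched[actor_id] |= tag_topics(prompt)

    def coverage(self, actor_id: str, cluster: str) -> float:
        kws = SENSITIVE_CLUSTERS[cluster]
        return len(self.touched[actor_id] & kws) / len(kws)

graph = IntentGraph()
for q in ["Explain the chemistry of rapid oxidation.",
          "How do demolition experts calculate blast radius?",
          "Summarise the historical use of shaped charges."]:
    graph.observe("actor-42", q)

# Three of four concepts in the cluster touched: worth escalating for review.
print(graph.coverage("actor-42", "energetic_materials"))  # 0.75
```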

Operational Security Mindset

Replace “content moderation” with threat detection. Assume adversaries patiently use intended features rather than flamboyantly exploiting bugs.

Proactive Pattern Recognition

Implement modules that notice when the same actor is slowly triangulating sensitive concepts. Seed “honey tokens” — false knowledge that reveals malicious aggregation. Detect knowledge assembly, not only individual prompts.
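One way to read the honey-token idea is sketched below, with made-up identifiers: plant a distinctive false detail in responses to a suspicious query cluster, then alert if that detail ever resurfaces in later prompts or downstream content, which is strong evidence of aggregation.

```python
# Sketch of honey-token seeding and detection. The token format and store are
# illustrative assumptions, not a production design.
import secrets

class HoneyTokenStore:
    def __init__(self):
        self.issued = {}                          # token -> actor_id

    def mint(self, actor_id: str) -> str:
        """Generate a unique, plausible-looking false detail for this actor."""
        token = f"compound ZX-{secrets.token_hex(3)}"
        self.issued[token] = actor_id
        return token

    def detect(self, text: str) -> list[str]:
        """Return actors whose planted token resurfaces in the given text."""
        return [actor for token, actor in self.issued.items() if token in text]

store = HoneyTokenStore()
planted = store.mint("actor-42")   # inject 'planted' into a response to actor-42

# Later: the same false detail shows up in a new prompt -> aggregation evidence.
print(store.detect(f"What is the yield of {planted}?"))   # ['actor-42']
```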

Governance as Code

Policies shouldn’t live in PDFs; they should compile into the runtime. Guardrails, escalation paths and audit triggers must be version-controlled and testable — like software. That’s how compliance becomes defence.
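As a minimal sketch of governance as code under this framing, the escalation policy below lives in version control as executable, testable rules. The thresholds and actions are placeholders, not a recommended standard; the point is that they can be reviewed, versioned and unit-tested like any other software.

```python
# Sketch only: policy expressed as code, with placeholder thresholds and actions.
from dataclasses import dataclass

POLICY_VERSION = "2025.11-draft"      # bumped via normal code review and release

@dataclass
class ActorSignal:
    cluster_coverage: float    # e.g. from the intent-graph sketch above
    honey_token_hits: int

def decide(signal: ActorSignal) -> str:
    """Return the required action for a given risk signal."""
    if signal.honey_token_hits > 0:
        return "suspend_and_investigate"
    if signal.cluster_coverage >= 0.75:
        return "escalate_to_analyst"
    if signal.cluster_coverage >= 0.5:
        return "increase_logging"
    return "allow"

# Because the policy is code, it can be unit-tested before deployment:
assert decide(ActorSignal(0.8, 0)) == "escalate_to_analyst"
assert decide(ActorSignal(0.2, 1)) == "suspend_and_investigate"
```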


The Uncomfortable Truth

Our biggest risk isn’t clever jailbreakers. It’s patient practitioners using systems exactly as designed.

Every DQA exposes the same flaw: we built stateless systems in a stateful world. Just as early banks had vaults but no transaction analytics, today’s LLMs have filters but no memory of pattern.

The frontier race to build bigger models is missing the real competition: literacy — security literacy, governance literacy, the capacity to see patterns over time.


Where We Go From Here

We can still fix this — but only if we treat AI infrastructure as national infrastructure. That means:

  • Governments investing in sovereign observability stacks for AI.

  • Enterprises demanding stateful audit APIs from vendors.

  • Researchers bridging cyber ops and AI alignment disciplines.



Closing Reflection: The SIGINT Lesson

“An adversary doesn’t need to know everything about you — only enough, stitched together over time.”

AI security faces the same asymmetry. To defend effectively, we must think like collectors, not just responders. That’s the essence of operational maturity — and the intelligence lesson AI cannot afford to ignore.



Come and talk to the AIBoK team at the NanoSec.Asia Parallel Pulse 2025 Conference


AI Body of Knowledge is giving away one GA Conference-Only pass to NanoSec.Asia Parallel Pulse 2025 Conference. Why? Because cyber + AI needs more than tool talk: it needs capability, governance and real-world security thinking.


To enter, click the 'share' button at the top of this page to share on LinkedIn AND tag AI Body of Knowledge (AIBoK) in your comment.

OR

Request a free AI Readiness Diagnostic by visiting https://aibok.org/cyber#free-ai-diagnostic


Winner drawn Mon 17 Nov (08:01 UTC). Full T&Cs apply.


About the Author & Partner

Author: Si Pham — GenAI solution architect, ex-SIGINT practitioner, author of Sovereign AI Begins at Home.

Partner: AIBoK (AI Body of Knowledge) — focused on capability building: practical training, governance playbooks, and enabling organisations to adopt AI without succumbing to hype. AIBoK supports publication and distribution of this essay to bring capability-focused minds into conversation with security practitioners.

#NanoSecAsia #AIBoK #AISafety #CyberSecurity #GovernanceAsCode #SovereignAI
