Most current frameworks focus on (1) and miss (2). An agent that has perfect permission controls but draws from a poisoned or incomplete context window is still dangerous. For operations use cases, context integrity is arguably the harder problem — agents pulling from CRM, email, and ticketing systems simultaneously have large attack surfaces through injected data.
The NIST RFI would benefit from a clearer taxonomy here. Authorization and context integrity require different mitigations.
the drone registration analogy in the RFI is actually quite apt. for agents that can take real-world actions (deploy code, make purchases, send communications), some kind of capability manifest that can be audited before deployment would go a long way. the hard part is that agents are compositional - agent A calling agent B calling a tool creates permission chains that are hard to reason about statically.
[1] https://www.commerce.gov/news/press-releases/2025/06/stateme... [2] https://www.reuters.com/technology/us-ai-safety-institute-di...
Key focus areas: - Novel threats: prompt injection, behavioral hijacking, cascade failures - How existing security frameworks (STRIDE, attack trees) need to adapt - Technical controls and assessment methodologies - Agent registration/tracking (analogous to drone registration)
This is specifically about agentic AI security, not general ML security - one of the first formal government RFIs on autonomous agents.
Comments from practitioners deploying these systems would be valuable.
Deadline: March 9, 2026, 11:59 PM ET Submit: https://www.regulations.gov/commenton/NIST-2025-0035-0001
Priority questions (if limited time): 1(a), 1(d), 2(a), 2(e), 3(a), 3(b), 4(a), 4(b), 4(d)
Full 43-question RFI at link above.
A more recent release:
Announcing the "AI Agent Standards Initiative" for Interoperable and Secure Innovation
https://www.nist.gov/news-events/news/2026/02/announcing-ai-...