AI/LLM Security
Prompt injection, RAG exposure, model behavior, system prompt leakage, tool misuse, guardrail bypass, and AI application risk.
Research notes, technical explainers, and working ideas focused on AI/LLM security, emerging AI-enabled social engineering, SPECTRA development, and practical offensive security methodology.
Documenting the first live validation of the SPECTRA framework against a synthetic enterprise RAG system with a seeded retrieval-authorization vulnerability: building the lab, validating the methodology, and identifying what comes next.
The themes below define the main areas I am researching and writing about as AI systems become more deeply connected to data, tools, workflows, and human decision-making.
AI/LLM Security: Prompt injection, RAG exposure, model behavior, system prompt leakage, tool misuse, guardrail bypass, and AI application risk.
AI-Enabled Social Engineering: AI-assisted reconnaissance, synthetic personas, phishing and vishing evolution, impersonation risk, pretext generation, and trust signals.
SPECTRA Development: Framework notes, roadmap updates, methodology refinements, attack chain logic, context-aware testing concepts, and tooling ideas.
Offensive Security Methodology: Practical notes on testing structure, finding development, reporting, risk communication, and impact.