About

Offensive security background. AI security focus.

I'm Justin Henderson, an offensive security practitioner focused on AI/LLM security, emerging AI-enabled social engineering, and offensive security tradecraft.

Background

Forged in the Fire

My background is rooted in intelligence and US Special Operations, where I first learned to dissect problems from an adversarial point of view. During my military career, I supported high-risk counterterrorism operations in austere and denied environments, often serving as the person responsible for keeping teams connected through secure communications when reliability mattered most.

That experience forced me to become adaptable across communications, technical surveillance, and field technology. I worked with satellite systems, secure radio platforms, unmanned systems, sensors, and other classified technologies while learning how to solve problems under pressure, anticipate adversary behavior, and adjust quickly when conditions changed.

One lesson I carried with me: wake up every day and improve your fighting hole. There is always something you can do to strengthen your position, reduce exposure, and put yourself in a better situation than the day before. That mindset applies directly to security. The work is never finished, but disciplined improvement compounds over time.

Those lessons still shape how I approach security today: understand the environment, identify the real objective, think like the adversary, improve the position, and build plans that survive contact with reality.

[Photo: Justin Henderson during military operations]
Offensive Security

From Fieldcraft to Offensive Security

After the military, offensive security felt like a natural continuation of the way I had already learned to think. The environment changed, but the fundamentals stayed the same: understand the terrain, identify the objective, study the adversary, find the weak points, and communicate what matters.

Penetration testing gave me a structured way to apply that mindset across networks, applications, physical environments, and human behavior. Over time, I became especially drawn to the areas where technical systems, trust, and real-world decision-making overlap.

The strongest security findings are rarely isolated technical issues. They usually come from understanding how people, systems, processes, and assumptions connect. That is the lens I bring into my current work.

Social Engineering


Social engineering has always been about context, trust, timing, and human behavior. Long before AI entered the conversation, effective social engineering depended on understanding how people make decisions, how authority is perceived, how urgency changes behavior, and how trust can be earned, borrowed, or abused.

My exposure to intelligence work, operational planning, and human-focused tradecraft gave me a deep interest in the psychology behind social engineering. I learned to look at interactions differently: not just as conversations, but as exchanges shaped by assumptions, incentives, pressure, and trust. That perspective made social engineering and physical intrusion some of the areas where I felt I could contribute most as a security professional.

As a security practitioner, I have helped plan and execute APT-level social engineering assessments, including targeted spear-phishing, phased spear-vishing, pretext development, human-targeted testing, and physical intrusion scenarios. That work requires more than a script. It requires reconnaissance, timing, discipline, adaptability, and the ability to translate human behavior into defensible security findings.

The emergence of artificial intelligence makes this area even more important. Attackers can now accelerate reconnaissance, generate highly tailored pretexts, clone voices, create synthetic media, automate engagement campaigns, and adapt messaging at scale. Some of these attacks will be difficult to detect even for trained professionals.

AI/LLM Security

Why AI Security Became the Next Mission

Artificial intelligence feels like one of the defining shifts in modern history, and maybe of all time. The technology is moving quickly, the attack surface is still being defined, and organizations are connecting AI systems to data, tools, workflows, and decisions faster than security teams can fully reason about the consequences.

What interests me most is that AI security is not only a model problem. It is a systems problem, a workflow problem, and a trust problem. The real risk often depends on what the AI can access, what it can do, who trusts its output, and what happens when that trust is manipulated.

That makes AI security a natural fit for adversarial thinking. The question is not just whether a model can be manipulated. The better question is what that manipulation enables inside the specific environment where the system is deployed.

Personal Note

Grounded by Family

Outside of security, I'm a husband and father. My wife, Chesna, has been with me through every major transition in my life: deployments, the weight that comes after them, the transition out of the military, the long nights studying, the moments of doubt, and every ambitious idea that has required late nights, odd hours, and a lot of patience. She is an incredible wife and mother, one of the smartest people I know, and a constant source of grounding, perspective, and support.

My son, Patton, is the greatest gift I have ever been given. Becoming a father has changed the way I think about the future and the systems we are building now. AI will shape the world he grows up in, and that gives this work a personal dimension for me. I care about AI security not only because the technology is fascinating, but because the way these systems are designed, deployed, trusted, and abused will matter far beyond individual applications. Used well, AI can expand human potential. Used carelessly, it can amplify harm, deception, and control at a scale we are only beginning to understand.

And then there are Luke and Cheeto, our two cats, who have been by our side for more than ten years through seven moves, hard times, and great times. They are not AI security researchers, but they have attended enough late-night study sessions to deserve honorary credit.

[Photo: Justin Henderson with family]
Why This Site Exists

AI systems, human trust, and offensive security

Black Ledger Security is where I document my work at the intersection of AI systems, human trust, and offensive security. I use this site to publish research, frameworks, cheatsheets, field notes, and practical writing as I continue building deeper expertise in AI/LLM security and emerging AI-enabled social engineering.