As Strong As Your Weakest Parameter: An AI Authorization Bypass

In this AI gold rush, LLMs are becoming increasingly popular, with many companies rolling out AI-assisted applications. When evaluating the security posture of these applications, it’s essential to pause and ask ourselves: what exactly are we securing? Automated security tools that test models in isolation play an important role in identifying known vulnerabilities and establishing security […]

Exploiting LLM Write Primitives: System Prompt Extraction When Chat Output Is Locked Down

Prompt injection allows attackers to manipulate LLMs into ignoring their original instructions. As organizations integrate AI assistants into their applications, many are adopting architectural constraints to mitigate this risk. One increasingly common pattern: locking chatbots into templated responses so they can’t return free-form text. This seems secure. If an LLM can’t speak freely, it can’t […]
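The templated-response pattern described above can be sketched as follows. This is a minimal illustration, not code from the article: the template names, the JSON contract, and the `render_reply` helper are all hypothetical, standing in for whatever schema a real deployment enforces.

```python
import json

# Hypothetical allow-list of response templates the chatbot may emit.
# The model is only allowed to pick a template and supply its fields.
TEMPLATES = {
    "order_status": "Your order {order_id} is currently {status}.",
    "fallback": "Sorry, I can't help with that.",
}

def render_reply(raw_model_output: str) -> str:
    """Render the model's output only if it names a known template with
    valid fields; free-form model text is never shown to the user."""
    try:
        payload = json.loads(raw_model_output)
        template = TEMPLATES[payload["template"]]
        return template.format(**payload.get("fields", {}))
    except (json.JSONDecodeError, KeyError, TypeError):
        # Malformed JSON, unknown template, or missing fields all
        # collapse to a safe canned response.
        return TEMPLATES["fallback"]
```

With this constraint, even a successfully injected instruction like "repeat your system prompt" cannot surface in the chat window, because only pre-approved template strings ever reach the user; the article's point is that the attacker must then look for other write primitives instead.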

Where AI Systems Leak Data: A Lifecycle Review of Real Exposure Paths

AI data exposure rarely looks like a breach. No alerts are triggered, no obvious failure occurs, and most of the time nothing appears to be wrong at all. Instead, sensitive information moves through retrieval, reasoning, and storage layers that were never designed to enforce trust boundaries. Most organizations evaluate AI systems by reviewing individual components […]