Download our Latest Industry Report – Continuous Offensive Security Outlook 2026

There’s Always Something: Secrets Detection at Engagement Scale with Titus

Praetorian Titus secret scanner

TL;DR: Titus is an open source secret scanner from Praetorian that detects and validates leaked credentials across source code, binary files, and HTTP traffic. It ships with 450+ detection rules and runs as a CLI, Go library, Burp Suite extension, or Chrome browser extension — putting secrets detection everywhere you already work during engagements. Say you find […]

Praetorian Guard finds critical flaws in OpenClaw – And What It Means for Your Software Supply Chain

At Praetorian, we’re constantly exploring how emerging technologies can strengthen security programs. Today, we’re sharing insights from our work building AI-powered vulnerability discovery capabilities within Praetorian Guard — finding critical security issues across the open-source ecosystem before they become public exploits. Using a multi-stage AI pipeline — automated discovery, validation, and exploit verification — we’ve […]

MCP Server Security: The Hidden AI Attack Surface

TL;DR – MCP servers – the integration layer connecting AI assistants to external tools and data – are a significant and underexplored attack surface. Our research demonstrates that both locally hosted and third-party MCP servers can be exploited to execute arbitrary code, exfiltrate sensitive data, and manipulate user behavior, often with zero indication to the […]

Julius Update: From 17 to 33 Probes (and Now Detecting OpenClaw)

Julius open-source AI infrastructure detection tool logo featuring a classical bust with an update stamp overlay

TL;DR: Julius v1.2.0 nearly doubles probe coverage from 17 to 33, adding detection for self-hosted inference servers, AI gateways, and RAG/orchestration platforms like Dify, Flowise, and KoboldCpp. The headline addition is OpenClaw, a fast-growing AI agent gateway where exposed instances leak API keys, grant filesystem access, and allow full user impersonation. Update Julius and run […]

Et Tu, Default Creds? Introducing Brutus for Modern Credential Testing

It’s day three of staring at a spreadsheet of 700,000 live hosts. Your port scans are done. Fingerprintx has identified thousands of SSH services, databases, admin panels, and file shares across a sprawling enterprise network. Now comes the part that every penetration tester hates: auditing and testing credentials at scale. You need to check for […]

Introducing Augustus: Open Source LLM Prompt Injection Tool

Augustus open-source LLM vulnerability scanner dashboard showing automated prompt injection, jailbreak, and adversarial attack testing across 28 LLM providers.

From LLM Fingerprinting to LLM Prompt Injection Last month we released Julius, a tool that answers the question: “what LLM service is running on this endpoint?” Julius identifies the infrastructure. But identification is only the first step. The natural follow-up: “now that I know what’s running, how do I test whether it’s secure?” That’s what […]

Deterministic AI Orchestration: A Platform Architecture for Autonomous Development

Architecture diagram of a deterministic AI orchestration platform showing the "thin agent" pattern, autonomous development workflows, and lifecycle hooks for LLM enforcement.

Executive Summary The primary bottleneck in autonomous software development is not model intelligence, but context management and architectural determinism. Current “Agentic” approaches fail at scale because they rely on probabilistic guidance (prompts) for deterministic engineering tasks (builds, security, state management). Furthermore, the linear cost of token consumption versus the non-linear degradation of model attention creates a “Context Trap” […]

Introducing Julius: Open Source LLM Service Fingerprinting

Julius open-source security tool for LLM service fingerprinting and adversarial AI red teaming to identify hidden model versions.

The Growing Shadow AI Problem Over 14,000 Ollama server instances are publicly accessible on the internet right now. A recent Cisco analysis found that 20% of these actively host models susceptible to unauthorized access. Separately, BankInfoSecurity reported discovering more than 10,000 Ollama servers with no authentication layer—the result of hurried AI deployments by developers under […]

Stealing AI Models Through the API: A Practical Model Extraction Attack

Organizations invest significant resources training proprietary machine learning (ML) models that provide competitive advantages, whether for medical imaging, fraud detection, or recommendation systems. These models represent months of R&D, specialized datasets, and hard-won domain expertise. But what if an attacker could duplicate an expensive machine learning model at a fraction of the cost?  Model extraction […]

As Strong As Your Weakest Parameter: An AI Authorization Bypass

In this AI gold rush, LLMs are becoming increasingly popular with many companies rolling out AI-assisted applications. When evaluating the security posture of these applications, it’s essential to pause and ask ourselves: what are we securing? Automated security tools that test models in isolation play an important role in identifying known vulnerabilities and establishing security […]