Over the past few months, Praetorian has worked with McKinsey & Company to help clients solve some of the complex cybersecurity challenges that have emerged as a result of COVID-19. We recently published a series of articles to help CSOs, CISOs, and security teams navigate the difficult environment the pandemic has thrust us into.

The first article, Cybersecurity’s dual mission during the coronavirus crisis, focuses primarily on the role of cybersecurity executives and leaders during the COVID-19 crisis. The second article, Cybersecurity tactics for the coronavirus pandemic, dives deeper into the tactics security teams should be aware of and using to defend their users and data.

This article expands on the series and provides technical examples and recommendations for security teams to consider and implement.

Accelerate patching for critical systems

While COVID-19 has not changed attacker tactics or created new classes of malware, the impact of attacks (especially against public-facing telework systems) has increased. Previously, a DDoS attack against a VPN concentrator may have affected a small percentage of workers with minimal impact to daily operations. Today, that same attack could paralyze an organization for the duration of the attack.

Strict patch rollout policies and schedules help ensure that patches do not break systems. In today's climate, however, the risk of fratricide from a broken patch is far lower than the risk posed by attacker exploitation. As such, patching cycles should be accelerated to near real-time speed.

As a concrete example, both Citrix and Pulse Secure (fairly ubiquitous teleworking technologies) disclosed critical vulnerabilities in January of this year that were, and continue to be, exploited in the wild. Luckily, these vulnerabilities were disclosed and patches released before the bulk of organizations transitioned to remote work arrangements. With the rapid growth in use of these technologies in recent months, the blast radius could have been much worse, and may still be if internal patch cycles are lagging.
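
To make near real-time patching concrete, the minimal Python sketch below flags assets whose reported software version is below a minimum fixed version. The inventory structure and the version numbers shown are illustrative placeholders, not authoritative advisory data; minimum fixed versions should always be taken from the vendor's own bulletins.

```python
# Minimal sketch: flag remote-access appliances running below a known-fixed version.
# Product names, inventory format, and version numbers are hypothetical examples.
from packaging.version import Version

# Hypothetical minimum patched versions keyed by product name.
MINIMUM_FIXED = {
    "pulse-connect-secure": Version("9.1.4"),
    "citrix-adc": Version("13.0.47.24"),
}

# Hypothetical asset inventory export: (hostname, product, reported version).
inventory = [
    ("vpn1.example.com", "pulse-connect-secure", "9.0.3"),
    ("gw1.example.com", "citrix-adc", "13.0.47.24"),
]

for hostname, product, reported in inventory:
    fixed = MINIMUM_FIXED.get(product)
    if fixed and Version(reported) < fixed:
        print(f"PATCH LAG: {hostname} runs {product} {reported}, minimum fixed is {fixed}")
```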

Consistent assessment of an organization's external attack surface is a critical component of securing users and data. Vulnerability scans and penetration tests against externally facing assets should match the pace of patching and be performed on a more aggressive schedule to detect issues before attackers do. While quarterly scanning is generally sufficient during normal operations, organizations may want to accelerate the pace to weekly or faster.
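
As a rough illustration of that cadence, the sketch below baselines externally exposed ports and flags new exposure between runs. The hostnames, port list, and baseline file are assumptions for the example; a real program would use a full scanner (nmap, a commercial platform, or an attack surface management service) and a scheduler such as cron to run weekly or faster.

```python
# Minimal sketch: baseline externally exposed ports and flag new exposure between runs.
import json
import socket
from pathlib import Path

TARGETS = ["vpn.example.com", "mail.example.com"]   # hypothetical external assets
PORTS = [22, 80, 443, 3389, 8080]                   # small illustrative port set
BASELINE = Path("exposure_baseline.json")

def open_ports(host: str) -> list[int]:
    """Return the subset of PORTS that accept a TCP connection."""
    found = []
    for port in PORTS:
        try:
            with socket.create_connection((host, port), timeout=2):
                found.append(port)
        except OSError:
            pass
    return found

current = {host: open_ports(host) for host in TARGETS}
previous = json.loads(BASELINE.read_text()) if BASELINE.exists() else {}

for host, ports in current.items():
    new = set(ports) - set(previous.get(host, []))
    if new:
        print(f"NEW EXPOSURE on {host}: ports {sorted(new)}")

# Persist this run as the next baseline.
BASELINE.write_text(json.dumps(current, indent=2))
```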

Scale up multifactor authentication

As many administrators will lament, this is easier said than done. When organizations seek to scale up multifactor authentication, the primary consideration must be the criticality of the resource being protected. Organizations should start with MFA rollout for the most critical applications and users and ensure that the MFA options are commensurate with the resource.

SMS-based MFA has been shown to be highly exploitable, either through SIM-jacking attacks or by nation-state actors with footholds inside telecommunication networks. Stronger factors such as authenticator apps or hardware tokens (such as YubiKeys) should be used in most situations, and certainly for access that is considered extremely sensitive.

Users should also understand how “push-style” MFA works, where a notification is sent to a mobile device. Attackers will often attempt to trigger this style of MFA in the morning or around lunchtime, when login activity spikes. The goal is to slip an MFA push notification to a user at a time when they might already expect to receive several. Users should fully validate ALL MFA push notifications, checking both the time of the activity and its location, and should understand that push MFA systems are robust; they do not generally “glitch” and send duplicate or erroneous notifications.

Administrators, for their part, should ensure that MFA push infrastructure records the login location rather than the location of the service being logged in to. At one of our clients, all MFA push notifications originated from the same location regardless of the user's location, making push validation difficult or impossible.

IT teams should also keep in mind that many applications and services provide multiple access vectors that may not be immediately obvious. Administrators should ensure that ALL access vectors (including APIs and admin portals) are protected by MFA if a single vector is deemed critical enough to warrant it. As an example, Microsoft Exchange can be configured to use MFA; Exchange Web Services (EWS), however, cannot. EWS is used by many applications and services to provide programmatic access to email and calendars. Without a viable means of protecting this interface with MFA, the only way to fully protect an Exchange instance is to disable the EWS service.
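
A quick way to spot-check a vector like this is to see whether the endpoint still offers password-only authentication. The sketch below is a heuristic, not a definitive MFA audit: the hostname is a placeholder, and it simply inspects the WWW-Authenticate header returned by an unauthenticated request to the EWS URL.

```python
# Minimal sketch: check whether an Exchange Web Services endpoint still offers
# password-only (Basic/NTLM) authentication, which would sidestep MFA applied
# to other access vectors. Hostname is hypothetical; treat the result as a hint.
import requests

EWS_URL = "https://mail.example.com/EWS/Exchange.asmx"  # placeholder endpoint

response = requests.get(EWS_URL, timeout=10)
offered = response.headers.get("WWW-Authenticate", "")

if response.status_code == 401 and ("Basic" in offered or "NTLM" in offered):
    print(f"EWS offers legacy auth ({offered}); MFA can likely be bypassed via this vector.")
else:
    print(f"No obvious legacy auth offered (status {response.status_code}).")
```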

Install compensating controls for facility-based applications migrated to remote access

Organizations may be tempted to simply open internal services to the internet so that employees can access resources directly, expecting that existing authentication and protective mechanisms are sufficient for their risk profile. While this can be a viable option, organizations must be aware that the internet is constantly being scanned: even if your organization is not a target, the software you are running may be.

The connected-device search engine Shodan (and others like it) scans and catalogues internet-facing services on a continuous basis, and attackers use such services to learn quickly when new or vulnerable systems come online. A service that was reasonably safe behind the company firewall presents an entirely different risk profile once exposed to the internet.

In our testing activities, we frequently find administrative portals, file shares, and vulnerable services exposed to the internet; in some cases, these findings lead directly to full internal network compromise. Any service that is moved to or exposed on the internet should be given additional protection through a VPN or an application gateway (such as Cloudflare Access), or at minimum be configured for added monitoring. Such services should also undergo a penetration test and frequent vulnerability scanning.
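
Defenders can use the same search engines attackers do. The sketch below queries the Shodan API for what an organization's own external addresses already expose; the API key and IP addresses are placeholders (the IPs shown are from the TEST-NET documentation range), and you would substitute your registered address space.

```python
# Minimal sketch: ask Shodan what our own external IPs expose, before attackers do.
import shodan

api = shodan.Shodan("YOUR_API_KEY")            # hypothetical API key
OWNED_IPS = ["203.0.113.10", "203.0.113.25"]   # placeholder addresses

for ip in OWNED_IPS:
    try:
        host = api.host(ip)
    except shodan.APIError as exc:
        print(f"{ip}: no data or lookup failed ({exc})")
        continue
    # Each entry in 'data' is a banner for one exposed service.
    for service in host.get("data", []):
        port = service.get("port")
        product = service.get("product", "unknown")
        print(f"{ip}:{port} -> {product}")
```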

Another example is the loss of domain services for domain-joined hosts operating off the corporate network. In situations where VPN or other remote access is not available, domain policies may prevent common services such as updates and patches from operating while off the corporate network. In such cases, IT teams will have to get creative to ensure that their users remain secured without the benefit of internal domain services and policies.

Seize the opportunity to transition from perimeter-based VPN networking to an identity-aware “zero trust” proxy

Each internal enterprise application should have both network protection and web-application authentication. With perimeter-based VPN protection, all employees must connect to the VPN to reach applications, but everyone on the VPN then has network connectivity to every application. Granular access can be granted by issuing web login credentials only to specific users, but this leaves applications exposed to authentication bypass attacks that defeat the web authentication layer; each year, self-managed deployments of many top-name applications publish CVEs for exactly such vulnerabilities. Solutions like Akamai EAA, Cloudflare Access, and Google Cloud's Identity-Aware Proxy (IAP) solve this by shifting granular authorization down to the networking layer, and they can optionally offload web logins to an identity provider such as Okta. Praetorian tests both the customer's configuration and the underlying platform security of zero trust solutions, giving customers the confidence to transition rapidly.
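
One configuration detail worth highlighting: the application behind an identity-aware proxy should verify the proxy's signed assertion rather than blindly trusting inbound traffic. The sketch below, written against Cloudflare Access's published model, validates the JWT the proxy attaches to each request using PyJWT. The team domain, audience tag, and header name are assumptions for illustration; confirm them against your own Access configuration.

```python
# Minimal sketch: verify the JWT an identity-aware proxy (here, Cloudflare Access)
# attaches to requests before the application trusts them.
import jwt  # PyJWT

TEAM_DOMAIN = "https://example.cloudflareaccess.com"   # hypothetical team domain
CERTS_URL = f"{TEAM_DOMAIN}/cdn-cgi/access/certs"
AUDIENCE = "your-application-audience-tag"             # from the Access application config

jwks_client = jwt.PyJWKClient(CERTS_URL)

def verify_access_token(token: str) -> dict:
    """Return verified claims, or raise if the assertion is invalid."""
    signing_key = jwks_client.get_signing_key_from_jwt(token)
    return jwt.decode(
        token,
        signing_key.key,
        algorithms=["RS256"],
        audience=AUDIENCE,
        issuer=TEAM_DOMAIN,
    )

# In a web handler, the token typically arrives in the Cf-Access-Jwt-Assertion header:
# claims = verify_access_token(request.headers["Cf-Access-Jwt-Assertion"])
```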

Account for shadow IT

Our assessments frequently identify findings on systems that administrators did not even know existed. COVID-19 is likely to expose even more shadow IT as services are disrupted by remote working. While these anecdotal events will surface shadow IT, all of us would prefer to identify such assets before zero hour.

The best path to accounting for shadow IT is to have a strong asset accountability system. The NIST Cybersecurity Framework, which we use to benchmark our clients’ cybersecurity risk posture, has several outcomes within the Identify function aligned to help address the risk of unaccounted for assets. 

Accounting for assets requires a detailed understanding of communication and data flows, both logical and physical. Additionally, assets must be tracked in a system of record that serves as the source of truth about their current status. Finally, technical validation must be performed through network scans, rogue asset detection mechanisms, or even physical inventories. For externally facing assets, organizations should scan their entire owned IP space to baseline what is exposed. Internally, given the prevalence of private IP space and DHCP, this method may be less effective but is still a worthy goal: security teams should maintain an accurate inventory of addressable space and scan those networks regularly.
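
The core of that technical validation is a reconciliation step: compare what scans actually find against what the system of record says should exist. The sketch below shows the idea with two illustrative sets; in practice the inputs would come from a scanner export and a CMDB or inventory API.

```python
# Minimal sketch: reconcile scan results against the asset system of record.
discovered = {"203.0.113.10", "203.0.113.25", "203.0.113.77"}   # hosts seen on the wire
inventoried = {"203.0.113.10", "203.0.113.25", "203.0.113.40"}  # hosts in the system of record

unaccounted = discovered - inventoried   # live assets nobody recorded (shadow IT candidates)
missing = inventoried - discovered       # recorded assets that did not respond (stale records?)

print("Possible shadow IT:", sorted(unaccounted))
print("Stale or offline inventory entries:", sorted(missing))
```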

As the adage goes, you cannot defend what you cannot see. Organizations without strong confidence in their deployed assets will inevitably find themselves responding to events affecting assets they did not even know about, perhaps with degraded capabilities. Security teams must make it easy to onboard security controls so that users are less likely to turn to shadow IT to get their work done. Similarly, the security team should work with IT administrators and the help desk to validate a streamlined but secure provisioning process for both client devices and devices running business workloads.

Anecdotally, we have seen IT teams “MacGyver” Microsoft Teams into an RDP server for management, expose RDP and SMB directly to the internet to facilitate connectivity, and we have seen individuals running their own remote access or VPN software from inside the corporate boundary.

Speed up device virtualization

As the article linked above mentions, device virtualization may be a quicker path to enabling secure remote work than deploying VPNs or other technologies. The spectrum of device virtualization for this purpose runs from simple unmanaged VMs running on a user's host to fully managed on-prem VDI solutions. Selecting a point on that spectrum comes down to cost, risk tolerance, and implementation time.

Instructing users on how to run a simple “clean” or company-provided VM on their home computer is extremely low-cost using tools like VirtualBox and open source operating systems such as Ubuntu. The benefits are ease of implementation, cost (free in most cases), and the ability to separate work activities from personal activities on a personal computer. The drawbacks are the lack of management of both the VM and the user's underlying computer, and the technical understanding required of users.
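
To lower the technical bar for users, IT can distribute a prebuilt appliance image along with a tiny helper that wraps the standard VBoxManage commands to import and start it. The OVA filename and VM name below are placeholders; the appliance itself (for example, a hardened Ubuntu image) would be built and published by IT.

```python
# Minimal sketch: import a company-provided OVA into VirtualBox and start it.
import subprocess

OVA_PATH = "ubuntu-work-vm.ova"   # hypothetical company-built appliance image
VM_NAME = "work-vm"               # name assigned inside the OVA

def run(cmd: list[str]) -> None:
    """Echo and execute a command, raising on failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Import the appliance, then start it with the normal GUI front end.
run(["VBoxManage", "import", OVA_PATH])
run(["VBoxManage", "startvm", VM_NAME, "--type", "gui"])
```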

Moving up the spectrum, cloud-based VDI solutions offer a “quick start” option for managed virtualization. These services can be connected via VPNs, direct interconnects, or VPC peering with cloud providers to facilitate access to internal company resources. This approach abstracts the implementation away from users but comes with increased cost and complexity. That said, it offloads infrastructure, licensing, patching, and configuration of the VDI environment to the cloud provider.

At the far end of the spectrum are on-prem VDI solutions such as Citrix. While these provide the most control, their complexity, cost, and implementation resource requirements often make them prohibitive for many organizations. Their greatest advantage is that many dedicated VDI solutions support client host checking to validate that connecting clients meet certain criteria, such as patch levels.

For all of these solutions, the considerations from the previous sections must still be applied, lest they create a false sense of security. Despite the movement of resources off-premises, the security of the network and data is still only as strong as the weakest link. Rapid patching (as applicable) and MFA for access to these resources remain must-haves.

Identify and monitor high-risk user groups

The notion of identifying high-risk user groups is not new to anyone; applying it, however, can be tricky. Security professionals tend to zero in on administrators, developers, and others with elevated access within an organization's environment. Where we tend to miss the mark is with personnel who are not high-risk from a technical perspective but are high-risk from a data or positional perspective. HR personnel may not have elevated network access, but they nonetheless have access to significant amounts of employee data. Executives similarly may not hold high permissions within the network, yet they can make company-impacting decisions with a phone call or an email.

High-risk users must be categorized and protected with adequate controls. Although deploying controls across the entire enterprise is ideal, successful implementation at 100% coverage is difficult even in normal times; under the ever-present cloud of the COVID-19 pandemic, it is even less likely. As such, IT and security teams should focus controls on the identified high-risk users and services.

Focusing on high-risk users and services ensures proper alignment of resources and can also help reduce costs. That does not mean all other users should be ignored; however, the impact of cybersecurity events on high-risk users is generally more severe and more quickly evident. The dwell time an attacker needs to escalate from a lower-risk user to a high-risk user is longer, which gives detection mechanisms more time to be effective.

Support secure remote-working tools

Left on their own, employees will find a way to get work done. In a remote-centric world, this means using collaboration and teleworking tools that may not be sanctioned by the organization. Many ubiquitous tools offer strong security mechanisms and functions, BUT those mechanisms are either unavailable in the free tiers of the service or not enabled by default.

IT teams should enable telework by selecting platforms and developing secure configurations for them, covering encryption settings, password requirements, file retention times, and account expiration settings (among others). Even the most secure platform may not stay secure when installed by the user and never configured further.
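
One lightweight way to keep those configurations honest is to periodically audit exported platform settings against the agreed baseline. The setting names and values below are hypothetical; real keys depend on the platform's admin export or API.

```python
# Minimal sketch: compare a collaboration platform's exported settings against
# the organization's secure baseline and report drift. All keys are illustrative.
baseline = {
    "require_meeting_password": True,
    "allow_anonymous_join": False,
    "file_retention_days": 90,
    "guest_account_expiry_days": 30,
}

exported_settings = {   # e.g., pulled from the vendor admin console or API
    "require_meeting_password": True,
    "allow_anonymous_join": True,
    "file_retention_days": 365,
    "guest_account_expiry_days": 30,
}

for key, expected in baseline.items():
    actual = exported_settings.get(key)
    if actual != expected:
        print(f"DRIFT: {key} is {actual!r}, baseline requires {expected!r}")
```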

Such configurations should be enforced by policy and follow normal change control procedures for modifications. People will follow the path of least resistance; providing a secure configuration of highly useful tools is a sure way to ensure employees use the approved tools instead of seeking other outlets to get work done.

Teleworking platforms to consider include (but are not limited to) VPN platforms, teleconferencing software, remote access and remote help desk platforms (TeamViewer, LogMeIn, Bomgar, etc.), and file-sharing platforms (Box, Dropbox, Google Drive, etc.).

Test and adjust IR and BC/DR capabilities

As our article with McKinsey notes, tabletop exercises are the best way to tease out issues with IR, BC, or DR plans. Tabletop exercises can be simple walkthroughs or can be expanded to include defined scenarios (and even simulated attacker artifacts) for greater impact.

Things the security and business teams should look for in tabletop exercises with regard to COVID-19 are:

  • Processes that require physical access to equipment such as forensics, recovery from offline (tape) backups, or usage of hard copy emergency credentials (break glass accounts). 
  • Processes that require devices to be connected to the network such as quarantine procedures that rely on access layer devices (switches, Wi-Fi) or boundary defenses (proxies, IDS/IPS, logging). 
  • Processes that rely on third parties being on site or sharing information with third parties. Frequently, incident response vendors are brought on site; security teams should have updated processes to support security contractors remotely. Additionally, processes for sharing information with law enforcement or other regulatory entities should be tested to ensure applicability in remote-work situations. 
  • If IR, BC, or DR processes include a “war room” component, ensure that adequate collaboration tools are available to support this remotely.

Expand monitoring

While high-risk resources should be prioritized, monitoring should be expanded if resources allow. Perhaps the easiest way to achieve this without additional infrastructure is agent-based monitoring. EDR platforms excel at monitoring devices regardless of the network they are connected to or where the device is located. For organizations concerned that on-premises controls have been rendered ineffective by the move to remote work, this path may provide a quick (although perhaps costly) solution.

Organizations should NOT take this opportunity to expand employee monitoring for productivity purposes. If the security organization is enlisted to help with this sort of effort, the negative impacts to security monitoring and associated resources may outweigh the benefits. 

To fully monitor for security threats, organizations should investigate whether their VPN infrastructure has adequate capacity to disable split tunneling on client endpoints. Most threats to end users come from the internet; if split tunneling is enabled, security teams lose visibility into traffic to and from the most common attack vector. The largest drawback to disabling split tunneling is the added bandwidth load on VPN infrastructure.

Unique considerations

Every enterprise is unique, and the areas outlined above represent common themes seen throughout Praetorian's engagements. Technical debt, in-flight projects, or the simple lack of certain capabilities may make navigating the post-pandemic world even more difficult. Organizations with unique or complicated situations may benefit from direct recommendations from our engineers. Praetorian was built to solve complicated cybersecurity problems and is prepared to help you solve yours.

Praetorian and McKinsey have entered into a strategic alliance to help clients solve complex cybersecurity challenges and secure innovation. As a part of this alliance, McKinsey is a minority investor in Praetorian.