Learn how to enable safer selling in Salesforce with Microsoft Cloud App Security — Part 4: Using Sentinel to enhance alerting.
Hi,
In this blog I want to share with you my experience in making Salesforce a little more secure with Microsoft Security tools. Since I’ve covered a lot of topics on this journey, this blog has become so long that I’ve split it into several parts.
- In part 1, we will set up a lab environment in which we can test everything else.
- In part 2, we’ll look primarily at the capabilities of Microsoft Cloud App Security and how they can benefit Salesforce.
- Part 3 is about combining the use cases developed in Part 2 into meaningful scenarios, e.g., to enable the use of BYOD.
- In part 4, Sentinel has to come to the rescue for a use case that I could not handle with MCAS.
The blog is aimed at IT admins with a solid basic knowledge of M365 / Azure services and can serve, for example, as a guide for rebuilding the setup yourself. I try to link sources wherever possible, and instead of step-by-step instructions I focus on the why and the stumbling blocks.
Limitations of MCAS
While MCAS is very strong in the area of session controls and the connection of 3rd party tools via API, I ran into limitations when trying to build policies for alerts.
- MCAS is not very flexible in the definition of queries
- It is not possible to compare results of queries with older results.
Since these disadvantages are not present in Sentinel (and Microsoft Defender), and since I caught myself thinking in several use cases, “Oh, if only I had the data in Sentinel”, looking for an integration was at some point the obvious next step. And if you’re new to Sentinel, this can be a nice occasion to start…
Use Case: Detect new Admins
The triggering use case for me was the monitoring of admin roles in Salesforce, as I could see in MCAS who was currently a member of an administrative role, but could not alert if something changed here.
At that time I knew
- that this data can be retrieved via the API of MCAS.
- how such an alert can be implemented in Sentinel.
- how I can write data from APIs to custom tables of Log Analytics using LogicApps.
My high-level design then looked like this:
General benefits of the integration
The implementation of a SIEM is usually not something you do on short notice for a single use case, but it is generally a good idea to look into the topic, and if you do, Sentinel is a good choice ;-)
The obvious advantages are the management of alerts and the correlation with other alerts. Additionally, the discovery logs provided by MCAS can be very helpful in Sentinel. Here is a good blog about the integration: Playbook for Azure Sentinel & MCAS integration | by Priscila Viana | Medium
Why not use Microsoft Defender for this?
Most of this can also be done in the Microsoft 365 Defender portal, but Sentinel is used here because it is much more flexible in terms of connectivity and lets you create and query your own tables.
But hopefully in the future all logs will also be included in Microsoft Defender. This would make this blog obsolete ;-)
Solution components
I think various use cases can be implemented with the procedure described here, so I will try to describe the individual components in enough detail that you can adapt them to your own scenario.
Logic App
Creating a Logic App can be done directly under Playbooks in Sentinel. I really love Logic Apps — you can do so much with them!
Here’s the overview of a simple playbook to fetch data from an API and write it to a Log Analytics Workspace:
- Defining the recurrence rate
- Pulling the MCAS API Token from an Azure Key Vault
- Defining and assigning the needed variables for the filter.
- Retrieving the API
- Parsing the JSON from the API
- Writing each entry to the Log Analytics Workspace.
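The six steps above can be compressed into a plain Python sketch. The fetch is stubbed with sample data; in the real playbook the HTTP action calls the MCAS API, and the final step writes to a Log Analytics table instead of a list. All names and values here are illustrative, not the real playbook definition.

```python
import json

def fetch_entities_stub():
    """Stand-in for the HTTP action + Parse JSON steps (sample data only)."""
    return [
        {"displayName": "Jane Admin", "email": "jane@example.com"},
        {"displayName": "John Admin", "email": "john@example.com"},
    ]

def run_playbook(sink):
    """Mirror of the playbook's flow: fetch, then write each entry."""
    for entry in fetch_entities_stub():   # the for-each loop
        sink.append(json.dumps(entry))    # "Send Data" to the workspace

rows = []
run_playbook(rows)
```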
Analytics rule in Sentinel
With the data in Log Analytics it is easy to create an alert with this logic: If a user appears in the group that was not there during the last query, create an alarm!
Step 1: Recurrence
In my use case, everything runs on a 10-minute cycle. This period is used in both the Logic App and the analytics rule. You can choose any other interval, as long as you don’t go below the run times. However, I don’t think you need to go below 10 minutes here, as the biggest delay comes from the API coupling between MCAS and Salesforce.
Step 2: MCAS API Access
MCAS has its own API, so it is not integrated into MS Graph.
Issuing the Authorization Token
To work with the MCAS API, so-called API Authorization Tokens are required. These can be issued and revoked via the GUI. Issuing API tokens is very well described in the MCAS documentation and there is even a video describing how to create the tokens and connect to the API using CURL — great resource!
In short: New tokens can be created in the Security extensions area — only a name for the token is required.
This token should be protected carefully, as it grants access to all data in MCAS. For a first prototype as described here, it can be stored in a variable, but it is highly advisable to store the token in an Azure Key Vault. See the next chapter for this topic.
Another problem occurs if you’re (hopefully) using Privileged Identity Management, because the privileges of the token are delegated permissions of the creating account. This includes the expiration of eligible role assignments. A possible workaround is to use a service account with a permanently assigned Security Operator role to create the token.
Using Azure Key Vault
As described above, it is very important to protect the API token. In my opinion, using Azure Key Vault is the best way to do this.
The following steps are needed to achieve this:
- Enable the System Assigned Managed Identity of the Logic App
- Create an Access Policy in Key Vault with the secret permissions GET and LIST
- Create the secret with the MCAS API Token in the Key Vault. I’ve chosen to include the string “TOKEN “ as a prefix in the secret to simplify the use in the HTTP step.
- Create a step in the Logic App from type Azure Key Vault Get Secret in the flavor Managed Service Identity and choose your Key Vault and Secret.
- Obfuscate the output from the Get Secret step to prevent curious admins from reading the token in the output of the step. Note: this will also obfuscate the input of the HTTP step (learn more)
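In the Logic App’s code view, the secure-outputs toggle from the last bullet corresponds roughly to the following fragment on the Get Secret action. The action name and elided inputs are placeholders, not values from my playbook:

```json
{
  "Get_secret": {
    "type": "ApiConnection",
    "runtimeConfiguration": {
      "secureData": {
        "properties": [ "outputs" ]
      }
    }
  }
}
```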
Step 3: Variables and IDs
For our use case we need to filter by the ID of the Salesforce app and the ID of the System Administrator group. These IDs are different in every environment, and you can find them by pulling the entities endpoint without a filter:
- Create your Logic App
- Create your API Token and define the variable for it
- Create the HTTP action as shown above without the filter
- Run the app and open the raw output of the HTTP action
- There you can search for a known admin and collect the IDs
- This output can also be used in the PARSE JSON step
A shortcut for this can be to have a look at the URLs in the MCAS GUI. In my experience the group ID always works, while the app ID only works if you have just one instance of Salesforce…
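The ID hunt in the steps above can be sketched like this. The field names (`saasId`, `userGroups`) and values are illustrative guesses at the response shape, not the official schema, so verify them against your own raw output:

```python
import json

# Hypothetical raw output from an unfiltered entities call (trimmed).
raw = json.loads("""
{
  "data": [
    {
      "displayName": "Jane Admin",
      "email": "jane@example.com",
      "saasId": 11114,
      "userGroups": [{"id": "5f1a2b", "name": "System Administrator"}]
    }
  ]
}
""")

# Search for a known admin and read the IDs off their record.
known = next(e for e in raw["data"] if e["email"] == "jane@example.com")
app_id = known["saasId"]
group_id = known["userGroups"][0]["id"]
```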
Step 4: Pulling the MCAS APIs
Using the MCAS APIs it is possible to read virtually all data from MCAS for further processing. The different endpoints are described very well here:
Investigate activities using the API | Microsoft Docs
Here is a sample call against the Entity Endpoint:
- Since we want to include a query in the body, we use the POST method (instead of GET).
- The URIs for the endpoints can be seen in the MCAS portal (shown while creating the token and on the About page).
- In the header we specify the authorization token and the formatting of the body.
- In the body we can then pass our specific request to the API. In this case, I filter on data from the Salesforce app and the System Administrator group.
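The request the HTTP action sends can be sketched in plain Python. The tenant URL, app ID, and group ID are placeholders you would replace with your own values:

```python
import json

# Placeholder tenant URL; yours is shown in the MCAS portal.
MCAS_ENTITIES_URL = "https://mytenant.portal.cloudappsecurity.com/api/v1/entities/"

def build_entities_request(token: str, app_id: int, group_id: str):
    """Return the headers and JSON body for the POST to the entities endpoint."""
    headers = {
        "Authorization": f"Token {token}",   # the API token from Key Vault
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "filters": {
            "app": {"eq": app_id},           # numeric app ID (Salesforce)
            "userGroups": {"eq": group_id},  # System Administrator group ID
        }
    })
    return headers, body
```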
Query / Filter definition
I have chosen a very simple filter to get the desired data for my use case. However, the filter options offer a number of operators and also allow much more complex queries.
{
  "filters": {
    "app": {
      "eq": @{variables('AppID')}
    },
    "userGroups": {
      "eq": "@{variables('GroupID')}"
    }
  }
}
Dealing with limits
For this use case, the topic of limits should not matter, but it is good to keep this topic in mind for further use cases.
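If a future use case does return more records than one call delivers, the MCAS list endpoints support paging via skip/limit parameters in the body (the docs mention a cap of 100 records per call for most endpoints). A generic paging helper could look like this; treat the parameter names as an assumption to verify against your endpoint:

```python
def fetch_all(fetch_page, page_size=100):
    """Page through a list endpoint until a short page signals the end.

    fetch_page(skip, limit) is a caller-supplied function that performs
    the actual POST and returns the list of records for that window.
    """
    skip = 0
    while True:
        page = fetch_page(skip, page_size)
        yield from page
        if len(page) < page_size:
            break
        skip += page_size
```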
Step 5: JSON Parsing
The next important step in the runbook is to parse the output of the HTTP action. This allows us to process the data much more easily.
Unfortunately, there is no documentation from Microsoft on the schema of the entity endpoint. But fortunately, the Parse JSON step gives us the possibility to derive the schema from an example — we can take it directly from the output of the HTTP step, see Step 3: Variables and IDs.
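What the Parse JSON step buys us can be seen in a small Python equivalent. The field names `displayName` and `email` match what the later KQL query projects; the rest of the shape is a guess, not the official schema:

```python
import json

# Trimmed, illustrative example of the entities endpoint's output.
sample = json.loads("""
{
  "data": [
    {"displayName": "Jane Admin", "email": "jane@example.com"},
    {"displayName": "John Admin", "email": "john@example.com"}
  ],
  "hasNext": false
}
""")

# Once parsed, each record's fields are directly addressable.
admins = [(e["displayName"], e["email"]) for e in sample["data"]]
```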
Step 6: Log Analytics Ingestion
The final step in our Logic App is to ingest the data in a table in our Log Analytics Workspace. To achieve this we need:
- A loop to iterate through the parsed data
- A connection to the Log Analytics Workspace — see this blog
- A name for the new table — this is up to your creativity ;-)
At my first attempt I had big problems with this playbook, because the data from the API was supposedly not compatible with the for-each loop. But the highly respected Sami Lamppu showed me the trick that made my playbook work: there seems to be a bug in Logic Apps that requires manual configuration of the field. (see here)
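The Log Analytics connector handles authentication for you, but for completeness, here is roughly what it does under the hood: the HTTP Data Collector API authorizes requests with a SharedKey signature, following Microsoft's documented scheme. A sketch, not needed for the playbook itself:

```python
import base64
import hashlib
import hmac

def build_signature(workspace_id: str, shared_key: str,
                    body: bytes, date: str) -> str:
    """Build the SharedKey authorization header for the
    Log Analytics HTTP Data Collector API."""
    string_to_hash = (
        f"POST\n{len(body)}\napplication/json\n"
        f"x-ms-date:{date}\n/api/logs"
    )
    decoded_key = base64.b64decode(shared_key)
    signature = base64.b64encode(
        hmac.new(decoded_key, string_to_hash.encode("utf-8"),
                 digestmod=hashlib.sha256).digest()
    ).decode()
    return f"SharedKey {workspace_id}:{signature}"
```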
Analytics rule in Sentinel
After our Logic App has run for the first time, the table we defined appears after a while, with the suffix _CL, in the Custom Logs section. With each run of the Logic App, a new record with a timestamp is now created for each member of the System Administrator group in Salesforce.
To determine whether the content of the group has changed, my approach is to create two virtual tables via Kusto Query Language (Learn KQL) and compare them. One table contains the current record (within the last 10 minutes) and the other table contains the previous one (period between 10 and 20 minutes).
Virtual tables are created in KQL as follows:
let X = view () { … query … };
For the comparison, the join operator with kind leftanti can be used, because it returns all records from the left side that don’t match any record on the right side. It is used like this:
X | join kind=leftanti Y on Key
In concrete terms, our query then looks like this:
let SF_Admins_now = view () {
MCAS_Entity_Log_SFAdmins_CL
| where TimeGenerated > ago(10m)
| project displayName_s, email_s, TimeGenerated
};
let SF_Admins_then = view () {
MCAS_Entity_Log_SFAdmins_CL
| where TimeGenerated between (ago(20m)..ago(10m))
| project displayName_s, email_s, TimeGenerated
};
SF_Admins_now | join kind=leftanti SF_Admins_then on email_s
| distinct displayName_s, email_s
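The logic of the leftanti join can be mirrored in plain Python with a set difference: admins present in the current snapshot but absent from the previous one are the new admins. The addresses below are illustrative:

```python
# Current snapshot (last 10 minutes) vs. previous snapshot (10-20 minutes ago).
now = {"jane@example.com", "john@example.com", "new@example.com"}
then = {"jane@example.com", "john@example.com"}

# Left anti-join: keep what is in "now" but not in "then".
new_admins = now - then
```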
From the query to the alert
With this query it is now really easy to create a custom alert rule. By selecting Create Azure Sentinel alert you get to a wizard that guides you through the creation process (tutorial). The following points should be noted:
- A descriptive name and a good description will help the unfortunate person who is to process the alert later.
- Tactics here means the MITRE ATT&CK® framework and also helps to classify the alert — I have chosen privilege escalation here.
- In order for Sentinel to be able to combine multiple suspicious activities and alerts into incidents, making life much easier for the SOC team, entity mapping should be performed.
The following mapping is suitable for this use case:
Account — FullName — displayName_s
- The scheduling must match the recurrence of the Logic App and the time periods in the query to provide meaningful results. I decided to use 10 minutes.
Open topics
Some topics are still open from my point of view to complete the solution. In order not to blow up the scope of this blog, I decided to leave only a few links to these topics here and maybe write a separate blog post about them later.
Monitoring the Logic App
As with all services, it is a good idea to monitor Logic Apps to make sure they are still running and not tampered with.
Monitor logic apps by using Azure Monitor logs — Azure Logic Apps | Microsoft Docs
Monitoring Logic Apps with Log Analytics — Alessandro Moura — Blog
Securing the Logic App
In addition to securely storing the API token in an Azure Key Vault, several other measures should be implemented to prevent, for example, the Logic App from being modified.
Protect Logic Apps with Azure AD OAuth — Part 1 Management Access | GoToGuy Blog
Summary
My goal with this blog was to show
- how to deal with limitations of MCAS.
- how to use Logic Apps in a meaningful and secure way.
- how easy it is to create custom alerts in Sentinel.
I don’t have a concrete plan for the next part yet, but I believe that it will come soon, because some topics are still open...