Importance of Having a Single Flow Trigger
Introduction
Salesforce sells Flows as a no-code solution that should answer all your needs. You can have as many Flows as you want, there is no limit! But is that true?
You will find out the answer when you try to update more than the usual number of records. When working in Flow, we switch to thinking about single records, but we need to keep in mind that the flow will also run for every record in a given transaction.
So what would happen if we wanted to update 1000 records? What about 10000? Are you sure everything will run fine?
Problem
Some time ago I was working on a rather small Salesforce implementation that already had some record-triggered Flows. The idea was that each business area had its own flow, so on paper the logic was separated and the solution was clean.
What was the problem?
I tried to do a mass update in Apex; I wanted to update around 1000 records. Unexpectedly, I was greeted with a system limit exception: Apex CPU time limit exceeded.
In this post, I will show you how to resolve this issue.
Test Scenario
I have created two simple flows to demonstrate the issue. I’m also using the “Apex Log Analyzer” VS Code extension to visualize execution.
Now I will update 1113 accounts. I know there is little chance your transaction will contain that many records; I'm doing it to demonstrate the issue.
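For reference, here is roughly what the update looked like as anonymous Apex. This is a sketch, not the exact script from the test; the Description field and its value are just placeholders:

List<Account> accounts = [SELECT Id, Description FROM Account LIMIT 1113];
for (Account acc : accounts) {
    acc.Description = 'Mass update test';
}
// A single DML statement; the platform still splits these records into
// chunks of 200 when handing them to triggers and record-triggered flows.
update accounts;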
We can see that for those 2 flows, it took almost 16 seconds to finish the transaction. Why, when there is basically no logic being executed?
There are 2 problems:
- bulkification
- interview initialization
Diving deeper
Let’s focus on the issues outlined above.
First, why is bulkification a problem?
Normally, we want to bulkify as much as we can, but in this case it works a bit against us. Because I used 1113 records in the test, the update got split into six chunks: five of 200 records and one of 113. How does that look in the flow? The system starts flow interviews in chunks of up to 200 records, and each record gets its own interview. When a flow reaches a bulkification point, it waits for all interviews in the chunk to arrive at that point and then runs the operation for all records in the chunk at once. In the test scenario, there are 2 bulkification points: a decision element in one of the test flows and an update element in both.
And now let's get back to why this is a problem. We have 2 flows and 6 chunks, so because of bulkification we run 2 · 6 = 12 update operations instead of a single one!
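If you want to see the chunking yourself, a minimal debug trigger makes it visible (my own illustration, not part of the original test):

trigger AccountChunkDebug on Account (before update) {
    // With 1113 updated records this fires six times:
    // Trigger.new.size() = 200, 200, 200, 200, 200, 113.
    System.debug('Chunk size: ' + Trigger.new.size());
}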
Secondly, what is interview initialization?
As I mentioned before, each flow uses some time to initialize. First the flow itself is initialized, then an interview is created for each record. Each initialization is VERY QUICK on its own, but the time adds up as the number of records and flows grows. The screenshot below shows the initialization time of one interview for one of the records:
For each record, we can observe that about 2 ms is spent just starting the interview. As mentioned before, this quickly adds up: with 1113 records and 2 flows, 2 ms · 1113 · 2 already comes to roughly 4.5 seconds before any actual logic runs.
Summary of problems
You can see where I'm going with this: now imagine how it will look when you have not 2 but, let's say, 5, 6, or 10 flows. You might not even be able to run 200 records, a single chunk, through your automations!
Solution
There are a few things that we can do to avoid this problem.
1. Single Flow Trigger per Object
Let's merge our flows into a single one:
And we can look at the new timeline:
We can clearly see in the picture that there is an improvement. Going back to my earlier concern: how do I manage separate business areas and teams? As per Salesforce guidelines, we should have only one Flow trigger per Salesforce object. But there is a way to keep the logic separated: you can use subflows to run area-specific logic!
2. Move to Apex
Nothing shocking here. Good old code is faster and gives developers more control. I have recreated the Flow logic as TWO separate triggers (please don't do that in your project).
Code:
trigger AccountTrigger1 on Account (before update) {
    switch on Trigger.operationType {
        when BEFORE_UPDATE {
            AccountTriggerHandler.beforeUpdate1(Trigger.oldMap, Trigger.newMap);
        }
    }
}

trigger AccountTrigger2 on Account (before update) {
    switch on Trigger.operationType {
        when BEFORE_UPDATE {
            AccountTriggerHandler.beforeUpdate2(Trigger.oldMap, Trigger.newMap);
        }
    }
}

public with sharing class AccountTriggerHandler {
    // Mirrors the first flow: set a fixed annual revenue on every account.
    public static void beforeUpdate1(Map<Id, Account> oldMap, Map<Id, Account> newMap) {
        for (Account account : newMap.values()) {
            account.AnnualRevenue = 500000;
        }
    }

    // Mirrors the second flow: set the employee count when revenue is above the threshold.
    public static void beforeUpdate2(Map<Id, Account> oldMap, Map<Id, Account> newMap) {
        for (Account account : newMap.values()) {
            if (account.AnnualRevenue > 500) {
                account.NumberOfEmployees = 10;
            }
        }
    }
}
Results:
In this situation, we can also see an improvement. What's worth noting is that I again used an anti-pattern here: two triggers on a single object. After combining the logic into a single trigger, we can expect it to run even faster.
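For completeness, here is a sketch of what the merged version could look like. This is my own combination of the two handlers above, not code from the original test:

trigger AccountTrigger on Account (before update) {
    switch on Trigger.operationType {
        when BEFORE_UPDATE {
            AccountTriggerHandler.beforeUpdate(Trigger.oldMap, Trigger.newMap);
        }
    }
}

public with sharing class AccountTriggerHandler {
    public static void beforeUpdate(Map<Id, Account> oldMap, Map<Id, Account> newMap) {
        for (Account account : newMap.values()) {
            // Logic from the first handler.
            account.AnnualRevenue = 500000;
            // Logic from the second handler. Note that once AnnualRevenue has
            // been set to 500000 this condition is always true; the demo logic
            // is kept as-is to mirror the original flows.
            if (account.AnnualRevenue > 500) {
                account.NumberOfEmployees = 10;
            }
        }
    }
}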
Conclusion
As you can see, there are some things to consider before implementing trigger logic with Flow. In my mind, we should follow a simple rule:
- If we are working on a simple project with a limited number of teams, we can use a single flow per object and split the logic using subflows.
- If multiple teams work on our project and there is a lot of business logic, we should pick one of the Apex trigger frameworks and keep our logic only in Apex triggers. This gives us more control over what is happening and limits the number of places where we can expect logic to live (see the sketch after this list).
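As a rough illustration of the single-entry-point idea behind those frameworks, here is a minimal hand-rolled sketch. The names (TriggerDispatcher, ITriggerHandler, AccountHandler) are hypothetical, not any specific framework's API:

// One trigger per object; it only delegates to a handler.
trigger AccountTrigger on Account (before update) {
    TriggerDispatcher.run(new AccountHandler());
}

// Shared contract for all handlers; add more events as needed.
public interface ITriggerHandler {
    void beforeUpdate(Map<Id, SObject> oldMap, Map<Id, SObject> newMap);
}

// Routes trigger events to the right handler method, so the routing
// logic lives in exactly one place.
public class TriggerDispatcher {
    public static void run(ITriggerHandler handler) {
        switch on Trigger.operationType {
            when BEFORE_UPDATE {
                handler.beforeUpdate(Trigger.oldMap, Trigger.newMap);
            }
        }
    }
}

// Object-specific logic lives only in its handler.
public class AccountHandler implements ITriggerHandler {
    public void beforeUpdate(Map<Id, SObject> oldMap, Map<Id, SObject> newMap) {
        // Business logic for Account updates goes here.
    }
}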
I hope that you are already familiar with our previous posts on Flows. Here you can see that Flow is not an ideal solution, but it can work under a few conditions:
- Make sure you run only one flow trigger per object.
- For complicated logic, consider moving to Apex.