Adam Osiecki
Freelance Salesforce Developer
March 5, 2025
Bulkification in Flows

Introduction

Flows are code too. Yeah, I know, this might sound like a weird statement, especially as the opening line of a blog post. But the truth is, behind those shiny arrows and rectangles, there’s an Apex engine running under the hood, like a Santa’s elf in a toy factory.

If writing code requires careful attention to structure, unit tests, and bulkification, then why do we pay so little attention to these same principles when building Flows? In this article, I want to address one of these critical topics – bulkification in Flows. (Yeah, Rome wasn’t built in a day, so we can’t tackle all problems in just one small blog post.)

Let me make a personal statement first: I think that using Record-Triggered Flows for complex automation is often a bad idea (Piotr Gajek explains why in detail here). Even Salesforce itself discourages their use in most scenarios (see official guidance).

In fact, I’d argue that Salesforce developers fall into two categories:

  1. Those who already know that Flows are not well-suited for record-triggered automation
  2. Those who will soon find out.

That said, this article isn’t about debating whether Record-Triggered Flows are a good choice – Piotr has already covered that. Instead, I want to focus on what is perhaps the most critical aspect of Flows from a software engineering perspective: performance.

Salesforce has put considerable effort into optimizing Record-Triggered Flows, making bulkification a key mechanism for improving their efficiency. While writing Apex is generally the best approach for automation, Flows – when used wisely – can also be a practical solution, especially for teams that need to deliver business value quickly without writing code.

This article explains how bulkification works in Flows and how we can use it to improve performance.

If you’ve ever struggled with Flow performance issues, exceeded governor limits, or needed to optimize an existing Flow without rewriting it in Apex, this guide is for you. Let’s get started.

What Actually Is Bulkification?

First, let’s cover the basics: what do we mean by "bulkification"? It is the process of designing automation to handle multiple records efficiently instead of processing them one by one.

This concept is especially important in Salesforce development, as certain operations, such as SOQL queries, DML operations (inserts, updates, deletions), and email sends are subject to transaction limits

Well-structured, bulkified automation helps avoid performance issues and governor limit errors. On the other hand, inefficient automation can hit these limits much faster than you might expect. We’ll go through both bad and good examples later.
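The classic Apex illustration of this idea (a generic sketch, not code from this article’s examples; `accountsToProcess` is a placeholder list):

```apex
// ❌ Not bulkified: one DML statement per record.
// With 200 records this would consume 200 DML statements
// against the 150-statement transaction limit.
for (Account acc : accountsToProcess) {
    acc.Description = 'Processed';
    update acc;
}

// ✅ Bulkified: modify the records in memory first,
// then execute a single DML statement for the whole list.
for (Account acc : accountsToProcess) {
    acc.Description = 'Processed';
}
update accountsToProcess;
```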

How Bulkification Works In Flows

Let’s start by looking at the official documentation explaining how bulkification works for Flows:

If you’re working with Flows, you don’t even have to think about bulkification. Flow interviews bulkify actions for you automatically.

Nothing could be further from the truth! While Salesforce’s built-in bulkification mechanism is quite clever and works well in many cases, this statement is misleading, especially for users who don’t work with code on a daily basis.
First, we’ll go over the scenarios where standard bulkification actually works. Then, we’ll look at a few examples that prove why the documentation’s claim isn’t entirely accurate.

Several elements in Flow are bulkified by default, such as:

  • "Get Records" (SOQL queries)
  • "Create Records," "Update Records," and "Delete Records" (DML operations)
  • "Send Email" and "Email Alert" (email actions)

You can find the full list of bulkifiable elements in the official documentation.

Even when a Flow is triggered by multiple records in a transaction, each record is handled one by one in a separate so-called Flow Interview. Here’s what happens under the hood:

  1. Each Flow interview runs individually, processing a single record at a time.
  2. When a Flow reaches a bulkifiable element, it pauses and waits for all other Flow interviews in the transaction to reach the same point or complete their execution.
  3. Once all interviews have arrived at a bulkifiable element (or the end of the Flow), the actions are executed together as a batch.
  4. The process repeats, with Flow Interviews continuing one by one until they reach the next bulkifiable element or finish execution.

We’ll go through a detailed example of this in the next section.

One immediate takeaway is that there is no automatic, behind-the-scenes bulkification if the Flow is triggered by a single record. Or, in simpler terms: if you’re only creating, updating, or deleting a single record, bulkification doesn’t happen at all.

How To Leverage It For DML And SOQL Operations In Flows

Before we go into detailed examples, let’s first address one of the most common mistakes made by creators of record-triggered Flows: the assumption that "my Flow will always operate on just one record." Some of you reading this might be thinking the same thing right now.

To be fair, this assumption is often true within the specific business context where the Flow was created. When I review Flow implementations, I frequently hear responses like:

  • "But my component never creates more than one Task at a time!"
  • "There’s no business case where we update multiple Opportunities at once."
  • "The business never required this logic to work on multiple records, so adding bulkification would be overengineering."

However, just because the initial business case only considers a single record doesn’t mean your Flow will always be triggered by an operation on a single record. The developer (or admin) must always account for the possibility of bulk operations, whether intended or not. (There are some techniques to limit the chance of this, like Flow entry conditions, but they are beyond the scope of this article.)

Here are three common scenarios where a Flow might be triggered in bulk, regardless of its creator’s original intent:

  1. Your Flow doesn’t operate in isolation. Other triggers, Workflows, Flows, or Process Builders may execute alongside it, potentially causing multiple records to be processed at once. Even if no such automations exist today, you have no control over what might be added in the future.

  2. Mass operations may trigger your Flow. Large-scale processes such as batch jobs running in the system could update records in bulk, for example, to maintain data consistency and integrity.

  3. One-time bulk inserts or updates. This includes scenarios like data migrations or emergency data fixes, where multiple records are inserted or updated simultaneously.

Antipattern 1 – Multiple Updates Of The Same Record In One Flow

Let’s analyze how bulkification works with an example and discuss the first bulkification antipattern.
Assume that we have a requirement:

When a new Account is created, two new Tasks related to that Account should be created.

If we ignore proper bulkification practices (or, even worse, believe Salesforce when it says we don’t have to think about bulkification), the record-triggered Flow implementing this requirement might look like this:

Antipattern 1 - screenshot presenting Flow

The full Flow metadata can be found on Gist

To debug the Flow execution, set the Workflow debug level to "Finer". This ensures you get detailed log output from the Flow execution.

setup of debug log levels for debugging the Flow

Now, let’s create five Accounts to test how the Flow handles bulkification:

List<Account> accounts = new List<Account>();
for(Integer i = 0; i < 5; i++) {
    Account acc = new Account(
        Name = 'Test 00' + i
    );
    accounts.add(acc);
}

insert accounts;

Note: The code samples and Flows used in this article were created and tested on a clean Salesforce instance. They might not work in your org if you have required fields or Validation Rules preventing record creation. If you want to test these examples, make sure you use a clean sandbox or scratch org.

Analyzing The Debug Logs

Let’s take a look at the debug logs to understand how the Flow processes the records. The interesting part starts with the following line, which indicates the Flow execution begins:

19:10:20.4 (108086207)|CODE_UNIT_STARTED|[EXTERNAL]|Flow:Account

Flow Creates Multiple Interviews Instead Of Processing In Bulk

The first thing that happens is that Salesforce creates five separate Flow Interviews—one for each record inserted in the script:

14:19:22.353 (353280891)|FLOW_CREATE_INTERVIEW_BEGIN|00D0Y0000035hbF|300J7000000XdcR|301J7000000btJC
14:19:22.353 (353364754)|FLOW_CREATE_INTERVIEW_END|21482c132838add82d2919cb3d8e19547afda37-2355|Antipattern 1
14:19:22.353 (353369050)|FLOW_CREATE_INTERVIEW_END|21492c132838add82d2919cb3d8e19547afda37-2354|Antipattern 1
14:19:22.353 (353372701)|FLOW_CREATE_INTERVIEW_END|
...
14:19:22.354 (354125583)|FLOW_START_INTERVIEWS_BEGIN|5

Then, Salesforce starts processing the first interview:

14:19:22.354 (354158428)|FLOW_START_INTERVIEW_BEGIN|21482c132838add82d2919cb3d8e19547afda37-2355|Antipattern 1

Each Flow Interview processes only one record at a time, meaning it does not see the other four records inserted in the same transaction:

14:19:22.354 (355641613)|FLOW_VALUE_ASSIGNMENT|21482c132838add82d2919cb3d8e19547afda37-2355|$Record|{Id=001J700000I3hqiIAB, Name=Test 000, CreatedDate=2025-02-27 14:19:23, .... }

Then, something a bit surprising (at least to me) happens:

The Flow enters the first element, defers execution, and then the interview ends:

14:19:22.354 (355778334)|FLOW_ELEMENT_BEGIN|21482c132838add82d2919cb3d8e19547afda37-2355|FlowAssignment|Some_logic_for_setting_values_on_first_task
14:19:22.354 (355830566)|FLOW_ELEMENT_DEFERRED|FlowAssignment|Some_logic_for_setting_values_on_first_task
14:19:22.354 (355874086)|FLOW_ELEMENT_LIMIT_USAGE|1 ms CPU time, total 46 out of 15000
14:19:22.354 (355890355)|FLOW_ELEMENT_END|21482c132838add82d2919cb3d8e19547afda37-2355|FlowAssignment|Some_logic_for_setting_values_on_first_task
14:19:22.354 (355908607)|FLOW_START_INTERVIEW_END|21482c132838add82d2919cb3d8e19547afda37-2355|Antipattern 1

This happens for each of the five records before any real processing begins.

Bulkified Actions Are Processed Together

After all individual Flow Interviews defer execution, Salesforce processes the first bulkified action in one go:

14:19:22.354 (360353284)|FLOW_BULK_ELEMENT_BEGIN|FlowAssignment|Some_logic_for_setting_values_on_first_task

Interestingly, this means Assignment actions are also bulkifiable. What is surprising is that if you follow the Flow execution carefully, you will notice that basically all Flow steps are bulkifiable and executed together, not only the operations described in the documentation:

14:19:22.354 (360531152)|FLOW_ASSIGNMENT_DETAIL|21482c132838add82d2919cb3d8e19547afda37-2355|Task_Record_To_Insert.AccountId|ASSIGN|001J700000I3hqiIAB
14:19:22.354 (360560494)|FLOW_ASSIGNMENT_DETAIL|21482c132838add82d2919cb3d8e19547afda37-2355|Task_Record_To_Insert.Subject|ASSIGN|"First Task"
14:19:22.354 (360599183)|FLOW_VALUE_ASSIGNMENT|21482c132838add82d2919cb3d8e19547afda37-2355|Task_Record_To_Insert|{AccountId=001J700000I3hqiIAB, Subject="First Task"}
14:19:22.354 (360987920)|FLOW_BULK_ELEMENT_LIMIT_USAGE|1 ms CPU time, total 51 out of 15000
.......
14:19:22.354 (361016410)|FLOW_BULK_ELEMENT_END|FlowAssignment|Some_logic_for_setting_values_on_first_task|0|0

This happens for all five records at once. The same bulkification mechanism applies when the first Create Records action runs:

14:19:22.354 (361254482)|FLOW_BULK_ELEMENT_BEGIN|FlowRecordCreate|Create_First_Task
14:19:22.354 (361450599)|FLOW_BULK_ELEMENT_DETAIL|FlowRecordCreate|Create_First_Task|1
(..... 4 more entries like this one)
14:19:22.354 (411997975)|FLOW_VALUE_ASSIGNMENT|21482c132838add82d2919cb3d8e19547afda37-2355|Task_Record_To_Insert|{AccountId=001J700000I3hqiIAB, Subject="First Task", Id=00TJ700000KVGjfMAH}
14:19:22.354 (412017488)|FLOW_VALUE_ASSIGNMENT|21482c132838add82d2919cb3d8e19547afda37-2355|Create_First_Task|true
(..... 4 more entries like this one)

Checking the limits, we see that only one DML operation was executed, confirming the Flow successfully bulkified the operation:

14:19:22.354 (412502654)|FLOW_BULK_ELEMENT_LIMIT_USAGE|1 DML statements, total 2 out of 150
14:19:22.354 (412515820)|FLOW_BULK_ELEMENT_LIMIT_USAGE|5 DML rows, total 10 out of 10000
14:19:22.354 (412523446)|FLOW_BULK_ELEMENT_LIMIT_USAGE|27 ms CPU time, total 78 out of 15000
14:19:22.354 (412593926)|FLOW_BULK_ELEMENT_END|FlowRecordCreate|Create_First_Task|5|51

Why This Flow Is Still Inefficient

🚨 While the Flow correctly bulkifies each individual action, it does not combine multiple DML operations into one. The same sequence repeats for the second Task creation, meaning two separate DML operations are executed.

🚨 In our example, this means that all "Create First Task" actions from all interviews will be bulkified and executed together; the same happens with "Create Second Task", but the two groups will never be bulkified and executed as one.

🚨 Salesforce only bulkifies operations within the same Flow element. It does not merge multiple DML operations across different elements. That’s why it’s our responsibility as Flow designers and developers to reduce the number of DML actions in a Flow.

How to Fix This Antipattern

Instead of having two separate Create Records actions, we should:

  1. Use a collection variable to store all records that need to be inserted.
  2. Insert them in one bulk operation at the end of the Flow.

This small adjustment makes the Flow significantly more efficient:

fixed Flow design - merging DML operations

First, we combine the two assignment actions into one. We set all required fields and then add both records to a collection variable (Tasks_To_Insert).

Now, instead of two separate DML operations, we use one Create Records action with the collection variable:

optimized Flow - using a collection for bulk insert

The full Flow metadata can be found on Gist
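For comparison, here is the same before/after pattern expressed as an Apex sketch (the Subject values mirror the example; the rest is illustrative):

```apex
// ✅ Fixed pattern: accumulate both Tasks per Account in one
// collection, then perform a single insert at the end.
List<Task> tasksToInsert = new List<Task>();
for (Account acc : Trigger.new) {
    tasksToInsert.add(new Task(AccountId = acc.Id, Subject = 'First Task'));
    tasksToInsert.add(new Task(AccountId = acc.Id, Subject = 'Second Task'));
}
// One DML statement, regardless of how many Accounts triggered the logic
insert tasksToInsert;
```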

Key takeaways:
🚨 Always combine bulkifiable actions (DML operations, SOQL queries, email sends) into a single action whenever possible.

🚨 Salesforce only bulkifies operations within the same Flow element, which means that if you have two separate DML operations in your flow, they will not be executed together.

🚨 If your Flow performs multiple operations on the same object, review the logic and consolidate them into one step. This will improve performance, reduce governor limit usage, and ensure proper bulkification.

Antipattern 2 – DML And SOQL Operations In Loop

Let’s take things a step further and see how Flow bulkification handles a more complex scenario.

The Requirement

We have the following business requirement:

When an Account’s CleanStatus is updated to "Inactive", find all Tasks related to the Account’s Contacts and delete them.

A naive implementation of this logic might look like this:

  1. Get all Contacts related to the Account
  2. Loop through each Contact, and for every Contact, retrieve all related Tasks.
  3. Loop through the Tasks and delete them one by one.

Here’s what this looks like in Flow:

antipattern 2 Flow

The full Flow metadata can be found on Gist

This implementation introduces a nested loop problem:

  • The first loop iterates through Contacts.
  • The second loop iterates through Tasks for each Contact.

If you’re an Apex developer, you’re probably cringing right now. SOQL inside a loop? And then a DML operation inside a second-level loop? This is a huge red flag in Apex development. Let’s break it down and see how bulkification works in this case, and why it usually doesn’t.
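Translated into Apex, the naive design corresponds to this sketch (assuming `accountIds` holds the triggering Account Ids):

```apex
// ❌ SOQL in a loop, DML in a nested loop
for (Contact c : [SELECT Id FROM Contact WHERE AccountId IN :accountIds]) {
    // one query per Contact: the query count grows with data volume
    for (Task t : [SELECT Id FROM Task WHERE WhoId = :c.Id]) {
        delete t; // one DML statement per Task
    }
}
```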

Creating Test Data

To test this scenario, we’ll create:

  • 3 Accounts, each with
  • 1 Contact, each with
  • 2 related Tasks

Here’s the minimum Apex code to set up this data:

/* Inserting 3 Accounts*/
List<Account> accounts = new List<Account>();
for(Integer i = 0; i < 3; i++) {
    Account acc = new Account(
        Name = 'Test 00' + i
    );
    accounts.add(acc);
}
insert accounts;

/* Inserting Contact for each Account*/
List<Contact> contacts = new List<Contact>();
for(Account acc : accounts ) {
    Contact c = new Contact(
        AccountId = acc.Id,
        LastName = 'Flow Test'
    );
    contacts.add(c);
}
insert contacts;

/* Inserting two tasks for each Contact */
List<Task> tasks = new List<Task>();
for(Contact c : contacts) {
    for(Integer i = 0; i < 2; i++) {
        Task t = new Task(
            WhoId = c.Id
        );
        tasks.add(t);
    }
}
insert tasks;

/* Updating Accounts to trigger the flow*/
for(Account a : accounts) {
    a.CleanStatus = 'Inactive';
}
update accounts;

Test data for Antipattern 2 example

Analyzing The Debug Logs

Let’s break down how Flow bulkification works in this scenario.

The First Query (Fetching Contacts) Works As Expected

The first SOQL query (Get_all_contact_related_to_account) is bulkified, as expected:

  • 3 Flow interviews reach the query
  • Only one SOQL query is executed
  • 3 Contacts are fetched
14:07:03.282 (287315887)|FLOW_BULK_ELEMENT_BEGIN|FlowRecordLookup|Get_all_contact_related_to_account
...
14:07:03.282 (294459064)|FLOW_BULK_ELEMENT_LIMIT_USAGE|1 SOQL queries, total 1 out of 100
14:07:03.282 (294470811)|FLOW_BULK_ELEMENT_LIMIT_USAGE|3 SOQL query rows, total 3 out of 50000
...
14:07:03.282 (294521564)|FLOW_BULK_ELEMENT_END|FlowRecordLookup|Get_all_contact_related_to_account|3|7

Next, all 3 interviews reach the loop (Loop_through_the_contacts) and are deferred:

18:40:49.186 (198839932)|FLOW_ELEMENT_BEGIN|19493b28361b0a0041122ca44b4b195523bd945-3eed|FlowLoop|Loop_through_the_contacts
18:40:49.186 (198859347)|FLOW_ELEMENT_DEFERRED|FlowLoop|Loop_through_the_contacts
18:40:49.186 (198887718)|FLOW_ELEMENT_END|19493b28361b0a0041122ca44b4b195523bd945-3eed|FlowLoop|Loop_through_the_contacts

Then, Flow assigns the current Contact in each iteration:

18:40:49.186 (199102295)|FLOW_VALUE_ASSIGNMENT|19473b28361b0a0041122ca44b4b195523bd945-3eeb|Loop_through_the_contacts|{Id=003J700000GRFHdIAP}

Second SOQL Query Is Still Bulkified

The first and only action in the loop is a SOQL operation: Get_all_Tasks_related_to_Contacts. At this point, a vigilant reader shouldn’t be surprised that this action is deferred and will be executed in bulk, so for all 3 interviews we again see log lines like this:

18:40:49.186 (199655777)|FLOW_ELEMENT_BEGIN|19473b28361b0a0041122ca44b4b195523bd945-3eeb|FlowRecordLookup|Get_all_Tasks_related_to_Contacts
18:40:49.186 (199693303)|FLOW_ELEMENT_DEFERRED|FlowRecordLookup|Get_all_Tasks_related_to_Contacts
18:40:49.186 (199736236)|FLOW_ELEMENT_END|19473b28361b0a0041122ca44b4b195523bd945-3eeb|FlowRecordLookup|Get_all_Tasks_related_to_Contacts

And after that, the bulk action kicks in:

18:40:49.186 (199916151)|FLOW_BULK_ELEMENT_BEGIN|FlowRecordLookup|Get_all_Tasks_related_to_Contacts
18:40:49.186 (208043240)|FLOW_VALUE_ASSIGNMENT|19483b28361b0a0041122ca44b4b195523bd945-3eec|Get_all_Tasks_related_to_Contacts|[{Id=00TJ700000KVPDXMA5}, {Id=00TJ700000KVPDYMA5}]
18:40:49.186 (208067719)|FLOW_BULK_ELEMENT_DETAIL|FlowRecordLookup|Get_all_Tasks_related_to_Contacts|2
18:40:49.186 (208093934)|FLOW_VALUE_ASSIGNMENT|19473b28361b0a0041122ca44b4b195523bd945-3eeb|Get_all_Tasks_related_to_Contacts|[{Id=00TJ700000KVPDVMA5}, {Id=00TJ700000KVPDWMA5}]
18:40:49.186 (208100381)|FLOW_BULK_ELEMENT_DETAIL|FlowRecordLookup|Get_all_Tasks_related_to_Contacts|2
18:40:49.186 (208120882)|FLOW_VALUE_ASSIGNMENT|19493b28361b0a0041122ca44b4b195523bd945-3eed|Get_all_Tasks_related_to_Contacts|[{Id=00TJ700000KVPDZMA5}, {Id=00TJ700000KVPDaMAP}]
18:40:49.186 (208126290)|FLOW_BULK_ELEMENT_DETAIL|FlowRecordLookup|Get_all_Tasks_related_to_Contacts|2
18:40:49.186 (208251470)|FLOW_BULK_ELEMENT_LIMIT_USAGE|1 SOQL queries, total 2 out of 100
18:40:49.186 (208258965)|FLOW_BULK_ELEMENT_LIMIT_USAGE|6 SOQL query rows, total 9 out of 50000
18:40:49.186 (208264951)|FLOW_BULK_ELEMENT_LIMIT_USAGE|6 ms CPU time, total 98 out of 15000
18:40:49.186 (208303503)|FLOW_BULK_ELEMENT_END|FlowRecordLookup|Get_all_Tasks_related_to_Contacts|6|8

At this point, things still look good:

  • One SOQL Query for all 3 Flow interviews
  • 6 Task records retrieved

The Actual Problem: DML Execution In The Inner Loop

Now we arrive at the real issue.

In the next loop (Loop_through_the_tasks), something unexpected happens. This time, each Flow interview has two records to process, but Flow doesn’t wait for all iterations to complete before executing DML. Instead, after processing just the first record from each interview, the DML operation runs immediately. Then the loop moves to the second record, and another DML execution happens.


18:40:49.186 (209523914)|FLOW_BULK_ELEMENT_BEGIN|FlowRecordDelete|Delete_Task
18:40:49.186 (209774035)|FLOW_BULK_ELEMENT_DETAIL|FlowRecordDelete|Delete_Task|1
18:40:49.186 (209855148)|FLOW_BULK_ELEMENT_DETAIL|FlowRecordDelete|Delete_Task|1
18:40:49.186 (209890031)|FLOW_BULK_ELEMENT_DETAIL|FlowRecordDelete|Delete_Task|1
18:40:49.186 (355391131)|FLOW_VALUE_ASSIGNMENT|19473b28361b0a0041122ca44b4b195523bd945-3eeb|Delete_Task|true
18:40:49.186 (355421122)|FLOW_VALUE_ASSIGNMENT|19483b28361b0a0041122ca44b4b195523bd945-3eec|Delete_Task|true
18:40:49.186 (355436808)|FLOW_VALUE_ASSIGNMENT|19493b28361b0a0041122ca44b4b195523bd945-3eed|Delete_Task|true
18:40:49.186 (355600292)|FLOW_BULK_ELEMENT_LIMIT_USAGE|1 DML statements, total 5 out of 150
18:40:49.186 (355609767)|FLOW_BULK_ELEMENT_LIMIT_USAGE|3 DML rows, total 18 out of 10000
18:40:49.186 (355615740)|FLOW_BULK_ELEMENT_LIMIT_USAGE|48 ms CPU time, total 147 out of 15000
18:40:49.186 (355683081)|FLOW_BULK_ELEMENT_END|FlowRecordDelete|Delete_Task|3|145

At this point, Flow has only deleted three Tasks—one per interview. But instead of waiting for the entire loop to finish, it immediately proceeds to the next iteration, triggering another separate delete operation for the second set of Tasks.

Loop Bulkification

This means that bulkification in Flow loops works differently than most people expect. It does not accumulate all records and then execute a single bulk DML operation after all loop iterations are complete. Instead, it executes a bulk action per iteration of the loop: it groups the first set of records together, then executes another DML for the second set, and so on.

Why Is It So Bad?

For our test case, where each interview only processes one Contact, this behavior didn’t seem too bad. We had only one SOQL query for Get_all_Tasks_related_to_Contacts, which is fine. But imagine an Account with multiple Contacts. Suddenly, the number of SOQL queries grows linearly (one new query for each Contact), and the number of DML statements multiplies: for 2 Contacts we have 4 DML operations, for 3 it would be 6, and so on (assuming that each Contact has 2 Tasks).

What is even worse, the Contact with the most Tasks dictates the total number of separate DML operations. If one Contact has 10 Tasks, while others only have 2 or 3, Flow will still perform 10 separate bulk DML operations, even though the other interviews don’t need them.

This is where Flows silently become inefficient. Everything looks fine in small-scale tests, but in real world scenarios, they hit governor limits much faster than expected.

Loop Bulkification Scheme

How To Fix This Antipattern

At this point, it should be clear that SOQL and DML operations should never be placed inside loops, and if possible, nested loops should be avoided entirely. But rather than just recognizing what’s wrong, let’s focus on how easy it is to fix this issue. We’ll start by bulkifying the Get_all_Tasks_related_to_Contacts element:

  1. Iterate through all Contacts and store their Ids in a new collection variable: ContactIds.
  2. After the iteration is finished, use the collected Contact Ids to retrieve all related Tasks in a single SOQL query.

antipattern 2 - fix for first loop

Fixing first part of the Flow

antipattern 2 - Flow query filter

Filter in the bulkified query

Once we have all Tasks assigned to a variable, we simply move the DML delete operation outside the loop:

antipattern 2 Flow fixed
The full Flow metadata can be found on Gist
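The fixed Flow maps onto this Apex sketch (assuming `relatedContacts` holds the Contacts fetched in the first query):

```apex
// ✅ Collect the Contact Ids in the loop...
Set<Id> contactIds = new Set<Id>();
for (Contact c : relatedContacts) {
    contactIds.add(c.Id);
}
// ...then one query and one delete for all of them, outside any loop
List<Task> tasksToDelete = [SELECT Id FROM Task WHERE WhoId IN :contactIds];
delete tasksToDelete;
```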

Remove the loop

If you really don’t like the Loop element, here’s some good news: in our example (looping only to collect a set of record Ids), we can replace it with a Transform element. It’s just syntactic sugar, so it doesn’t bring any noticeable performance benefit, but it may look cleaner to you; which one to use is up to you.

This is what creating the Transform element looks like:
antipattern 2 - fix remove loop

And here you see the full flow with loop replaced by transform:
antipattern 2 - fix remove loop

The full Flow metadata can be found on Gist

Key Takeaways

🚨 Never use SOQL or DML operations inside loops; this leads to multiple redundant queries and excessive DML statements. Use collections to accumulate Ids and perform bulk queries and DML outside the loop instead of inside iterations.

🚨 Bulkification in Flow works differently than in Apex, so understanding how Flow defers and groups operations is crucial.

Antipattern 3 – Multiple Paths With Separate Updates

Sometimes, a single Flow contains multiple independent logic paths. For example, different paths may handle different record types, allowing teams to work independently without interfering with each other.

On a business level, this separation makes sense. However, on a technical level, it’s not always optimal:

antipattern 3 Flow

In this hypothetical example, all branches ultimately perform the same action—updating the parent Account. Because of the reasons described here, the ideal approach is to consolidate updates into a single DML operation instead of executing four separate actions.

If this Flow processes multiple records in bulk, these independent updates will never be bulkified, leading to inefficient execution and potential governor limit issues.

How To Fix This Antipattern

Instead of updating the parent Account directly in each branch:

  1. Create a new collection variable (ParentAccount).
  2. Set all necessary field values and assign the updated records to this collection.

antipattern 3 Flow fixing

Then, instead of executing multiple Update Records actions, create a single bulk update action:

antipattern 3 Flow fixed

The full Flow metadata can be found on Gist
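In Apex terms, the refactoring looks roughly like this (the field assignments are made up for illustration):

```apex
// ✅ Each branch only prepares an in-memory record;
// a single update runs after the branching.
List<Account> parentAccountsToUpdate = new List<Account>();
for (Opportunity opp : Trigger.new) {
    Account parent = new Account(Id = opp.AccountId);
    if (opp.Type == 'Customer - Direct') {
        parent.Description = 'Direct customer';
    } else {
        parent.Description = 'Other customer';
    }
    parentAccountsToUpdate.add(parent);
}
update parentAccountsToUpdate; // one DML statement for all branches
```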

Creating Test Data

Here’s the minimal code to create four Accounts, each with one Opportunity of a different Type:

/* Inserting 4 Accounts */
List<Account> accounts = new List<Account>();
for(Integer i = 0; i < 4; i++) {
    Account acc = new Account(
        Name = 'Test 00' + i
    );
    accounts.add(acc);
}
insert accounts;

/* Inserting an Opportunity for each Account */
List<Opportunity> opps = new List<Opportunity>();

Opportunity o1 = new Opportunity(
    AccountId = accounts[0].Id,
    Type =  'Existing Customer - Downgrade',
    Name = 'test',
    StageName = 'Prospecting',
    CloseDate = Date.today()
);
opps.add(o1);

Opportunity o2 = new Opportunity(
    AccountId = accounts[1].Id,
    Type =  'Customer - Direct',
    Name = 'test',
    StageName = 'Prospecting',
    CloseDate = Date.today()
);
opps.add(o2);

Opportunity o3 = new Opportunity(
    AccountId = accounts[2].Id,
    Type =  'Customer - Channel',
    Name = 'test',
    StageName = 'Prospecting',
    CloseDate = Date.today()
);
opps.add(o3);

Opportunity o4 = new Opportunity(
    AccountId = accounts[3].Id,
    Type =  'Channel Partner / Reseller',
    Name = 'test',
    StageName = 'Prospecting',
    CloseDate = Date.today()
);
opps.add(o4);

insert opps;

After running this code, you can verify that only one Update Records action is executed, no matter how many Opportunities this Flow processes.

Key Takeaways

🚨 If possible, there should be no Create Records, Update Records, or Delete Records actions in separate logic branches.

🚨 The ideal approach is to collect all operations and execute them together at the end of the Flow.

🚨 Bonus points if there is a Get Records action common to all logic branches. In that case, it is advisable to move that action before the branching so that the retrieved collection of records can be used by any logic branch.

Antipattern 4 – Subflows Executing Bulkifiable Actions

As Flows grow in complexity, a common practice is to move portions of logic into subflows. While this can enhance readability, it may introduce challenges in bulkification, especially when subflows perform DML operations directly. In the context of Antipattern 3, consider the following scenario:

antipattern 4 Flow
The full Flow metadata can be found on Gist

Here, the main Flow has been refactored to delegate all logic to a subflow. While this cleans up the main Flow, it sacrifices control over when DML operations are executed:

antipattern 4 subflow

How To Fix This Antipattern

To regain control over bulkifiable actions, avoid performing DML operations directly within subflows. Instead, have subflows return the records to be processed, collect them in the main Flow, and execute the DML operations there:

  1. Make the ParentAccount variable in the subflow available as an output:

antipattern 4 subflow fixing

  2. Remove the Update Records action from the subflow:

antipattern 4 subflow fixed

  3. In the main Flow, assign the subflow’s output variable to a new variable, ParentAccount.
  4. Add an Update Records action at the end of the main Flow, as demonstrated in Antipattern 3:

antipattern 4 Flow fixed

The full Flow metadata can be found on Gist

Key Takeaways

Centralize DML Operations: Perform DML actions in the main Flow to maintain control over bulkification.

Design Subflows for Reusability: Structure subflows to handle specific tasks and return data to the main Flow rather than executing DML operations themselves.

Antipattern 5 – Updating The Triggering Record In An After-Save Flow

A common antipattern is performing field updates on the same record within after-save Flows. The key distinction between making such changes in before-save vs after-save Flows is that updates in before-save Flows do not require additional database operations. Each database operation consumes valuable time and can trigger multiple other automations, making it crucial to minimize their occurrence. If you encounter a Flow like this:

antipattern 5 Flow

The full Flow metadata can be found on Gist

In 99% of cases, this Flow should be converted to a before-save Flow. Before-save Flows are optimized for efficiency, allowing field updates on the triggering record without incurring additional DML operations. This approach not only enhances performance but also reduces the risk of triggering unintended automations.

How To Fix This Antipattern

In this straightforward scenario, resolving the issue involves changing the Flow’s optimization setting. By setting "Optimize the Flow for:" to "Fast Field Updates," you configure the Flow to run before the record is saved (before-save). This adjustment ensures that field updates occur without extra database operations. In more complex real-life cases, this might require moving only certain parts of the Flow to before-save Flows.

antipattern 5 Flow fixing

Key Takeaways

🚨 Always avoid updating the same record’s fields in after-save Flows. Such updates are acceptable only in very specific edge cases; generally, this logic belongs in before-save Flows unless you fully understand the implications, for example when working around deeper system issues.

How To Write Apex Invocable Actions Properly

Invocable Actions are a powerful tool that allows Flows to leverage the full capabilities of Apex. At first glance, creating one seems straightforward, but the devil is in the details. Getting it right requires understanding bulkification best practices. I won’t focus on the fundamentals of writing Invocable Actions, as there are already great resources available. Instead, I’ll analyze bulkification challenges and how to design Invocable Actions that scale efficiently.

Understanding Input and Output Parameters

A common misconception about Invocable Actions is how input and output parameters work.

Invocable Actions always receive a list, where each element represents the input from a single Flow Interview – typically the record that triggered the Flow.

At first, this can be confusing. Let’s say we’re working on Antipattern 2, and we need to pass a collection of Tasks related to an Account into Apex. You might assume the method should accept a List of Tasks, like this:

public class MyInvocableClass {
    @InvocableMethod(label='My invocable method' description='Presenting how to use list as method parameter.')
    // ❌❌❌
    public static List<Task> myInvocableMethod(List<Task> tasks) {

        // some logic here

        return tasks;
    }
}

The problem? The code above expects each Flow Interview to provide only one Task. However, if one of the interviews passes a collection variable instead of a single record, the execution will fail at runtime. ✅ The correct approach is to use a nested list (List<List>):

public class MyInvocableClass {
    @InvocableMethod(label='My invocable method' description='Presenting how to use list as method parameter.')
    // ✅✅✅
    public static List<List<Task>> myInvocableMethod(List<List<Task>> tasks) {

        // some logic here

        return tasks;
    }
}

This may look counterintuitive at first, but the key is to remember that an Invocable Action, like every other element in a Flow, can be bulkified. When it is, Flow performs a single execution for all bulkified Invocable Actions. The logic of an Invocable Action should therefore always start by iterating through the elements of the outer list, each of which represents one Flow Interview.

This leads us to the next common mistake.

Working Only On The First Element

public class MyInvocableClass {
    @InvocableMethod(label='My invocable method' description='Presenting how to be bad at bulkification.')

    public static List<List<Task>> myInvocableMethod(List<List<Task>> tasks) {
        // ❌❌❌
        List<Task> myTasks = tasks[0];

        // some logic here
        return tasks;
    }
}

If you see this in code, you know immediately that the action is not ready for bulkification. If multiple records trigger the Flow at once, only the first one is processed and the remaining records are silently ignored. The correct approach is to iterate through the list:

public class MyInvocableClass {
    @InvocableMethod(label='My invocable method' description='Presenting how to be good at bulkification.')

    public static List<List<Task>> myInvocableMethod(List<List<Task>> tasks) {
        // ✅✅✅
        for(List<Task> listOfTasks : tasks) {
            // some logic here
        }
        return tasks;
    }
}

Remember that the number of returned elements must always match the number of input elements, in the same order. That's why it's fine to return the input list itself, as I did in these examples.
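To make the one-output-per-interview contract explicit, you can also build a fresh output list instead of echoing the input. A sketch (the `processTasks` helper is hypothetical and stands in for your business logic):

```apex
public class MyInvocableClass {
    @InvocableMethod(label='My invocable method')
    public static List<List<Task>> myInvocableMethod(List<List<Task>> tasks) {
        // One output element per Flow Interview, in the same order as the input
        List<List<Task>> results = new List<List<Task>>();
        for (List<Task> interviewTasks : tasks) {
            // processTasks is a hypothetical helper containing the actual logic
            results.add(processTasks(interviewTasks));
        }
        return results;
    }

    private static List<Task> processTasks(List<Task> interviewTasks) {
        // some logic here
        return interviewTasks;
    }
}
```

Building the output explicitly makes it harder to accidentally break the input-to-output mapping when the logic grows.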

Performing DML Operations In Invocable Actions

Going back to what we said at the beginning of this article – that Flows are code too – we can make a similar statement:

"Invocable Actions are subflows too!"

So, just like Flows and Subflows, Invocable Actions should not execute SOQL or DML inside loops. Instead, let Flow control when and how DML operations are executed.

Assume we have a complex requirement to update a Task in an Invocable Action, and this requirement cannot be implemented directly in Flow. Each Flow Interview passes only one Task to update.
Let's analyze the worst way to write an Invocable Action that updates Tasks:

public class UpdateTasksClass {
    @InvocableMethod(label='My invocable method' description='Presenting how not to do DML in invocable actions')

    public static List<Task> myInvocableMethod(List<Task> tasks) {

        for(Task t : tasks) {
            t.Subject = TaskSubjectHandler.SomeClassDoingMysteriousAndImportantWork(t);
            //❌❌❌ Major mistake - DML in loop
            //❌❌❌ Also, we should let Flow decide when to do the update
            update t;
        }

        return tasks;
    }
}

Why is this a problem?

  • Flow loses control over DML execution.
  • Each Apex execution runs its own DML operation, making bulkification impossible.
  • Multiple DML operations happen when only one is needed.

The right approach is to let Flow handle the DML:

public class UpdateTasksClass {
    @InvocableMethod(label='My invocable method' description='Presenting how to let Flow handle DML')

    public static List<Task> myInvocableMethod(List<Task> tasks) {

        for(Task t : tasks) {
            t.Subject = TaskSubjectHandler.SomeClassDoingMysteriousAndImportantWork(t);
        }

        return tasks;
    }
}

As you can see, there are no DML operations in this class. The Invocable Action modifies the records but does not perform the update itself.
In this example, we assume that Flow receives the modified records and executes a single bulk update operation at the end.

Key Takeaways

  • Invocable Actions follow the same bulkification rules as Flows and Subflows.
  • Remember that Invocable actions are always operating on collections of records where each element represents input from one Flow Interview.
  • Avoid SOQL queries and DML operations inside Invocable Actions, especially inside loops; let Flow control when they are executed.
  • Return modified records to Flow instead of updating them directly—let Flow handle the bulk update.

How To Bulkify The Flow Called Via Code

Salesforce allows us to start Autolaunched Flows directly from Apex using the Flow.Interview class. Let's use the subflow presented earlier for Antipattern 4 as an example:

antipattern 4 subflow

Calling A Flow Interview For A Single Record

//Create Test Account
Account acc = new Account(
    Name = 'Test 001'
);
insert acc;

//Create Test Opportunity
Opportunity o1 = new Opportunity(
    AccountId = acc.Id,
    Type =  'Existing Customer - Downgrade',
    Name = 'test',
    StageName = 'Prospecting',
    CloseDate = Date.today()
);
insert o1;

//Input parameters for Flow
Map<String, Object> inputs = new Map<String, Object>();
inputs.put('OpportunityRecord', o1);

//Create and start Flow Interview
Flow.Interview.Antipattern_4_Subflow_A myFlow = new Flow.Interview.Antipattern_4_Subflow_A(inputs);
myFlow.start();

In this case, we are working with a single Flow Interview. This means that:

  • We never need a list of input parameters—there is always one input set per interview.
  • Bulkification will not occur, just as it wouldn't if the Flow were triggered by a single record.

What Happens When We Call Multiple Flow Interviews In Apex?

What if we want to execute multiple Flow Interviews in Apex? Will they bulkify the same way as Flow Interviews triggered natively within a Flow?

Let’s test it:

//Create Test Account
Account acc = new Account(
    Name = 'Test 001'
);
insert acc;

//Create Test Opportunities
Opportunity o1 = new Opportunity(
    AccountId = acc.Id,
    Type =  'Existing Customer - Downgrade',
    Name = 'test',
    StageName = 'Prospecting',
    CloseDate = Date.today()
);

//Create Test Opportunity
Opportunity o2 = new Opportunity(
    AccountId = acc.Id,
    Type =  'Existing Customer - Downgrade',
    Name = 'test',
    StageName = 'Prospecting',
    CloseDate = Date.today()
);

//Create Test Opportunity
Opportunity o3 = new Opportunity(
   AccountId = acc.Id,
   Type =  'Existing Customer - Downgrade',
   Name = 'test',
   StageName = 'Prospecting',
   CloseDate = Date.today()
);
insert new List<Opportunity>{o1, o2, o3};

//Input parameters for each Flow
Map<String, Object> inputs1 = new Map<String, Object>();
inputs1.put('OpportunityRecord', o1);

Map<String, Object> inputs2 = new Map<String, Object>();
inputs2.put('OpportunityRecord', o2);

Map<String, Object> inputs3 = new Map<String, Object>();
inputs3.put('OpportunityRecord', o3);

//Create and start Flow Interview
Flow.Interview.Antipattern_4_Subflow_A myFlow1 = new Flow.Interview.Antipattern_4_Subflow_A(inputs1);
Flow.Interview.Antipattern_4_Subflow_A myFlow2 = new Flow.Interview.Antipattern_4_Subflow_A(inputs2);
Flow.Interview.Antipattern_4_Subflow_A myFlow3 = new Flow.Interview.Antipattern_4_Subflow_A(inputs3);
myFlow1.start();
myFlow2.start();
myFlow3.start();

Debug Log Analysis: Why Bulkification Fails

A quick look at the debug logs reveals the issue:

Each Flow Interview starts only after the previous one has fully completed.
This means that Flow bulkification does not work when Flow Interviews are triggered via Apex.
Unlike Flows triggered by multiple records in bulk, calling multiple Flow Interviews in Apex executes them sequentially, preventing bulk execution.

How to Handle This Properly

If, for any reason, you need to call Flow Interviews in bulk via Apex, make sure they follow the best practices outlined in the Antipattern 4 section:

  • Return records to be updated instead of performing DML inside the Flow.
  • Process all returned records in bulk after all interviews complete.

This way, even though the Flow Interviews execute separately, the DML operations can still be bulkified.
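Putting this together, here is a hedged Apex sketch. It assumes the subflow performs no DML itself and instead exposes the modified record through an output variable – called `opportunityToUpdate` here, a hypothetical name – which we read with the standard `Flow.Interview.getVariableValue` method:

```apex
// Assumes o1, o2, o3 are the Opportunities inserted earlier
List<Opportunity> toUpdate = new List<Opportunity>();
for (Opportunity opp : new List<Opportunity>{o1, o2, o3}) {
    Flow.Interview.Antipattern_4_Subflow_A interview =
        new Flow.Interview.Antipattern_4_Subflow_A(
            new Map<String, Object>{'OpportunityRecord' => opp});
    interview.start();
    // Read the subflow's output variable after the interview completes
    // ('opportunityToUpdate' is a hypothetical variable name)
    toUpdate.add((Opportunity) interview.getVariableValue('opportunityToUpdate'));
}
// One bulk DML operation for all interviews instead of one per interview
update toUpdate;
```

The interviews still run sequentially, but the expensive part – the database operation – happens exactly once.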

Key Takeaways
Flow Interviews started in Apex will never be bulkified automatically—each runs in isolation.

  • If calling multiple Flow Interviews, ensure they do not perform DML inside the Flow itself.
  • Collect the results from all Flow Interviews and execute DML operations in Apex or a final bulk update step in Flow.

How To Detect Antipatterns Automatically

A great open-source CLI tool that helps detect some of the Flow antipatterns discussed in this article is lightning-flow-scanner-sfdx. At the time of publishing this article, the tool supports 19 different rules. Among them, three are particularly relevant to Flow bulkification:

  • DML Statement In A Loop
  • SOQL Query In A Loop
  • Same Record Field Updates

By leveraging these rules, we can automatically identify Antipatterns 2 and 5.

Example 1: Detecting DML And SOQL Queries In A Loop

Let’s run the scanner on Antipattern_2 to detect DML operations and SOQL queries inside loops.

sfdx flow:scan -p path/to/your/flows/Antipattern_2.flow-meta.xml

Expected output:

┌──────────────────────────┬───────────────┬────────────────────────────────────┬──────────┐
│ Rule                     │ Type          │ Name                               │ Severity │
├──────────────────────────┼───────────────┼────────────────────────────────────┼──────────┤
│ DML Statement In A Loop  │ recordDeletes │ Delete_Task                        │ error    │
│ DML Statement In A Loop  │ recordDeletes │ Delete_Task                        │ error    │
│ Missing Flow Description │ description   │ undefined                          │ error    │
│ Missing Fault Path       │ recordLookups │ Get_all_contact_related_to_account │ error    │
│ Missing Fault Path       │ recordLookups │ Get_all_Tasks_related_to_Contacts  │ error    │
│ Missing Fault Path       │ recordDeletes │ Delete_Task                        │ error    │
│ Missing Null Handler     │ recordLookups │ Get_all_contact_related_to_account │ error    │
│ Missing Null Handler     │ recordLookups │ Get_all_Tasks_related_to_Contacts  │ error    │
│ SOQL Query In A Loop     │ recordLookups │ Get_all_Tasks_related_to_Contacts  │ error    │
└──────────────────────────┴───────────────┴────────────────────────────────────┴──────────┘

We can see that it successfully identified our problems with DML Statement In A Loop and SOQL Query In A Loop.

Example 2: Detecting Same Record Field Updates

Let’s try it on the Antipattern 5 example to detect same-record field updates inside an after-save Flow.

sfdx flow:scan -p path/to/your/flows/Antipattern_5.flow-meta.xml

Expected output:

┌───────────────────────────┬───────────────┬───────────────────┬──────────┐
│ Rule                      │ Type          │ Name              │ Severity │
├───────────────────────────┼───────────────┼───────────────────┼──────────┤
│ Missing Flow Description  │ description   │ undefined         │ error    │
│ Missing Fault Path        │ recordLookups │ Get_Lead_Queue    │ error    │
│ Missing Null Handler      │ recordLookups │ Get_Lead_Queue    │ error    │
│ Inactive Flow             │ status        │ "Draft"           │ error    │
│ Same Record Field Updates │ recordUpdates │ Update_Lead_Owner │ warning  │
└───────────────────────────┴───────────────┴───────────────────┴──────────┘

Since it is a CLI tool that is also available as a VS Code extension, it is a perfect fit both for daily developer work and as a quality gate in your CI/CD pipeline.
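As an illustration, a quality-gate step could look like the following sketch (GitHub Actions syntax assumed; it reuses the same `flow:scan` invocation shown above and presumes `sfdx` is already available on the runner):

```yaml
# Hedged sketch of a CI quality gate - adapt to your pipeline tooling
- name: Scan Flows
  run: |
    echo y | sfdx plugins:install lightning-flow-scanner
    sfdx flow:scan -p path/to/your/flows/Antipattern_2.flow-meta.xml
```

Any rule violation reported by the scanner then surfaces directly in the pipeline run, before the Flow reaches review.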

Key Takeaways
Lightning Flow Scanner SFDX is a powerful, free tool that helps detect Flow antipatterns. It can automatically identify DML in Loops, SOQL in Loops, and Same Record Field Updates. You can use it manually or integrate it into CI/CD pipelines for better quality control. If you’re working with Flows, you should definitely try it!

Summary

Thank you for taking the time to read this article! I hope it was valuable, whether you learned something new or reinforced existing knowledge. Here’s a short summary of the key takeaways discussed:

  • Bulkification is the process of designing automation to handle multiple records efficiently instead of processing them one by one
  • Always design with bulkification in mind; do not rely on Salesforce's assurance that you don't need to think about bulkification
  • Even if a Flow works fine in small-scale tests, it may break governor limits in large operations. Design for bulk operations from the start
  • To ensure that your Flows are properly bulkified, always:
    • Avoid multiple updates on the same record within a single Flow.
    • Do not use SOQL or DML operations inside loops.
    • Minimize DML operations in separate logic branches; execute them in bulk at the end whenever possible.
    • Design subflows for reusability; return records to be processed instead of performing DML within the subflow.
  • Do not update the triggering record's fields in an after-save Flow; in most cases this should be done in a before-save Flow
  • Remember to properly bulkify Invocable Actions – they operate on collections of records, where each element represents a single Flow Interview.
  • Flow Interviews started in Apex are not automatically bulkified by standard Flow mechanisms.

If you have any questions feel free to ask in the comment section below. 🙂

Was it helpful? Check out our other posts here.

