Remove Duplicates From Arrays in Power Automate — 3 Approaches
Power Automate still does not have a built-in "Remove Duplicates" action. Here are three reliable approaches, from simplest to most flexible.
Approach 1: Union with Itself
The simplest one-liner. The union() function returns distinct values:
@union(variables('myArray'), variables('myArray'))
This works perfectly for arrays of simple values (strings, numbers). It compares values directly and removes duplicates.
Example:
Input: ["apple", "banana", "apple", "cherry", "banana"]
Expression: @union(variables('myArray'), variables('myArray'))
Output: ["apple", "banana", "cherry"]
Use a Compose action with this expression, and you have your deduplicated array in one step.
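For intuition, union()'s behavior here can be modeled in a few lines of Python — this is a sketch of the semantics (keep the first occurrence of each value, in order), not something that runs inside the flow:

```python
def union_dedupe(arr):
    """Models Power Automate's union(arr, arr): keeps the first
    occurrence of each value, preserving the original order."""
    return list(dict.fromkeys(arr))

fruits = ["apple", "banana", "apple", "cherry", "banana"]
print(union_dedupe(fruits))  # ['apple', 'banana', 'cherry']
```

Like union(), this treats values as atomic, so it works for strings and numbers but not for deduplicating objects by a nested key.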
Approach 2: Select + Union for Object Arrays
When you have an array of objects and want to deduplicate by a specific property, chain three actions:

1. Select — extract the key field into a simple array by setting the map to:
   @item()?['email']
2. Compose — deduplicate the keys with union:
   @union(body('Select'), body('Select'))
3. Apply to each — loop over the unique keys from the Compose step. Note that no single Filter array condition can keep only the first occurrence of each key in one pass (a condition comparing a key's indexOf to itself always returns true). Instead, inside the loop, add a Filter array action on the original array with the condition:
   @equals(item()?['email'], items('Apply_to_each'))
   then grab the first matching object with:
   @first(body('Filter_array'))
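The three actions above amount to the following logic; this Python sketch mirrors them step by step (the field name "email" is just the example key from the flow, not anything required):

```python
def dedupe_by_key(records, key):
    # Step 1 (Select): extract the key field into a simple array.
    keys = [r.get(key) for r in records]
    # Step 2 (Compose + union): deduplicate the keys, keeping order.
    unique_keys = list(dict.fromkeys(keys))
    # Step 3 (Apply to each + Filter array + first): for each unique
    # key, keep the first record that carries it.
    return [next(r for r in records if r.get(key) == k) for k in unique_keys]

people = [
    {"email": "a@x.com", "name": "Ann"},
    {"email": "b@x.com", "name": "Bob"},
    {"email": "a@x.com", "name": "Ann (dup)"},
]
print(dedupe_by_key(people, "email"))
```

Running it keeps Ann's first record and drops the duplicate, which is exactly what the Filter array inside the loop does in the flow.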
Approach 3: XPath for Complex Deduplication
For more complex scenarios, convert the array to XML and filter out duplicate nodes with an XPath predicate:
1. Compose — convert the array to XML:
   @xml(json(concat('{"root":{"item":', string(variables('myArray')), '}}')))
2. Compose — apply XPath, keeping only items with no identical preceding sibling:
   @xpath(xml(outputs('Compose_XML')), '//item[not(. = preceding-sibling::item)]')
This is more verbose, but XPath predicates can express position- and sibling-aware rules that union() cannot. Note that xpath() returns XML nodes, so you may need a Select step to map them back to plain values.
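To see what the XPath predicate is doing, here is a Python emulation of it. Note this is an illustration of the node-filtering logic only: Python's ElementTree XPath subset does not support the preceding-sibling axis, so the predicate is applied in plain Python instead:

```python
import xml.etree.ElementTree as ET

def xpath_style_dedupe(xml_text):
    """Emulates //item[not(. = preceding-sibling::item)]: keep each
    <item> whose text has not already appeared in an earlier sibling."""
    root = ET.fromstring(xml_text)
    seen, kept = set(), []
    for item in root.findall("item"):
        if item.text not in seen:
            seen.add(item.text)
            kept.append(item.text)
    return kept

doc = "<root><item>apple</item><item>banana</item><item>apple</item></root>"
print(xpath_style_dedupe(doc))  # ['apple', 'banana']
```

Each `<item>` node survives only if no earlier sibling holds the same value — the same condition the XPath predicate expresses declaratively.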
Which Should You Use?
| Scenario | Approach |
|---|---|
| Simple string/number arrays | union() — one expression |
| Object arrays, deduplicate by key | Select + Union + Filter |
| Complex nested deduplication | XPath |
| Performance critical (1000+ items) | union() or move to a child flow with chunking |
Performance Note
All three approaches handle arrays of a few hundred items without trouble. Above roughly 5,000 items, you may hit expression evaluation limits. In that case, consider:
- Processing in batches using a child flow
- Moving deduplication logic to a SQL query or Dataverse view before pulling into the flow
- Using a custom connector with an Azure Function for heavy data processing
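As a sketch of the batching idea, here is the shape of deduplicating chunk by chunk while carrying a running set of seen keys across batches — the same state a parent flow would pass between child-flow calls. The batch size of 500 and the "email" key are illustrative assumptions, not requirements:

```python
def dedupe_in_batches(records, key, batch_size=500):
    """Deduplicate records by `key`, processing them in fixed-size
    chunks and carrying the set of already-seen keys across batches."""
    seen, result = set(), []
    for start in range(0, len(records), batch_size):
        batch = records[start:start + batch_size]
        for r in batch:
            k = r.get(key)
            if k not in seen:
                seen.add(k)
                result.append(r)
    return result

# 2000 rows cycling through 7 distinct addresses collapse to 7 records.
rows = [{"email": f"user{i % 7}@x.com"} for i in range(2000)]
print(len(dedupe_in_batches(rows, "email")))  # 7
```

The key point is that the seen-keys state must survive across batches; deduplicating each chunk in isolation would leave duplicates that span chunk boundaries.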
Key Takeaway
Start with union(array, array) — it solves 80% of deduplication needs in a single expression. Only reach for more complex approaches when you are working with object arrays or need key-based deduplication.