Remove duplicate rows
Part of the Cleaning Recipes Guide · Last updated: 2026-03-31
Deduplicate rows using one or more key columns.
Best for import files where Email, SKU, or Customer ID must be unique.
This page is intentionally detailed so you can understand not only which recipe to choose, but also how to prepare your CSV, what to expect during the apply flow, and what to verify after the change runs. That makes it easier to use saved recipes confidently on recurring imports instead of cleaning values by hand.
If you are comparing similar actions, start with the recipe preview below, then work through the screenshots and verification checklists further down the page. Those sections are designed to mirror the real UI you will see in Online CSV Editor.
The recipe keeps one record per key and drops each duplicate copy.
First-time walkthrough for beginners
If this is your first time using Remove duplicate rows, follow these steps in order. The screenshots below come from the real product flow so you can compare your screen with the guide as you go.
Open a file and find one example you want to fix
Start by loading your CSV or a sample file into the editor. Before opening the recipe tools, look for one real example that should change, such as "two rows share alice@example.com". That gives you something concrete to compare after the recipe runs.
- Check whether the issue appears in one column or across several columns.
- If the file is large, note a few rows you can revisit after applying the recipe.
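The product surfaces duplicates in the UI, but it can help to see what "rows sharing a key" means in data terms. This sketch counts repeated values in an assumed key column named `Email` over hypothetical sample data, using only the Python standard library:

```python
import csv
from collections import Counter
from io import StringIO

# Hypothetical sample standing in for your import file.
sample = """Email,Name
alice@example.com,Alice A.
bob@example.com,Bob B.
alice@example.com,Alice Alt.
"""

rows = list(csv.DictReader(StringIO(sample)))

# Count how often each value appears in the key column.
counts = Counter(row["Email"] for row in rows)
duplicates = {key: n for key, n in counts.items() if n > 1}

print(duplicates)  # {'alice@example.com': 2}
```

Any key that appears in `duplicates` is a row the recipe would remove a copy of, which makes it a good candidate for your "one visible example".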

Open Recipes and start a new recipe draft
Click the Recipes button in the toolbar. Beginners can choose New recipe or Start from example, then save a reusable recipe after they confirm the action works the way they expect.
- Use Start from example if you want to learn the recipe editor with a safe starter action already loaded.
- Saved recipes stay browser-local unless you deliberately share the definition.

Configure remove duplicate rows in the editor
This recipe works best when you have a reliable key such as Email, SKU, or Customer ID. Before applying, decide whether the first matching row or the last one should be preserved when duplicates appear.
- Choose the key column or columns that truly define a duplicate record.
- Pick whether to keep the first or last matching row based on which record is usually more trustworthy.
- Read the apply summary so you know how many rows were removed and can sanity-check the outcome.
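The first-versus-last choice above is the part beginners most often get wrong, so here is what it amounts to in plain terms. This is a standard-library sketch, not the product's implementation; the key column `Email` and the data are illustrative assumptions:

```python
import csv
from io import StringIO

# Hypothetical import with a duplicated key.
sample = """Email,Plan
alice@example.com,free
bob@example.com,pro
alice@example.com,enterprise
"""

rows = list(csv.DictReader(StringIO(sample)))

def dedupe(rows, key, keep="first"):
    """Keep one row per key value; 'first' or 'last' decides which copy wins."""
    seen = {}
    for row in rows:
        k = row[key]
        if keep == "last" or k not in seen:
            seen[k] = row
    return list(seen.values())

kept_first = dedupe(rows, "Email", keep="first")
kept_last = dedupe(rows, "Email", keep="last")

print(kept_first[0]["Plan"])  # free (the first alice row wins)
print(kept_last[0]["Plan"])   # enterprise (the last alice row wins)
```

Either way two rows remain, but the surviving alice record differs, which is why you should decide in advance which copy is usually more trustworthy.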

Apply the recipe and confirm the result before export
Apply the action, then compare the changed table against the expected result ("one row remains"). Use the apply summary together with the example panel below to confirm the recipe did what you intended before exporting the CSV.
- Make sure the output now matches the intended result, such as "one row remains".
- Read the apply summary and confirm that the changed row or cell count matches your expectation.
- Export the CSV only after scanning a few rows near the top, middle, and bottom of the file to catch edge cases.
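Beyond spot-checking rows, you can verify an export mechanically by confirming that no key value still repeats. This sketch assumes the key column is `Email` and uses inline stand-in data in place of your exported file:

```python
import csv
from collections import Counter
from io import StringIO

# Stand-in for the exported, deduplicated CSV (hypothetical contents).
exported = """Email,Name
alice@example.com,Alice A.
bob@example.com,Bob B.
"""

rows = list(csv.DictReader(StringIO(exported)))
counts = Counter(row["Email"] for row in rows)
leftovers = [key for key, n in counts.items() if n > 1]

# If the recipe worked, no key should appear more than once.
assert not leftovers, f"still duplicated: {leftovers}"
print("no duplicate keys remain")
```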
Quick version
- Add Remove duplicate rows and choose the key column or columns that define a duplicate.
- Pick whether the first or last matching row should be kept.
- Apply the recipe and review the removed row count in the summary panel.
Example
Choose a stable key such as Email or ID rather than a display name that may repeat legitimately.
Before you run this recipe
- Identify the exact columns or rows that Remove duplicate rows should change before you open the recipe form.
- Keep one visible example in mind, such as "two rows share alice@example.com", so you can compare the result after the recipe runs.
- If you expect to repeat this cleanup on future imports, save the recipe with a descriptive name instead of applying it only once.
Common mistakes beginners should avoid
- Using a display field such as Name as the duplicate key when legitimate repeated values can occur.
- Keeping the wrong copy because you did not decide in advance whether first or last should win.
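When a single column such as Name can repeat legitimately, a composite key across several columns is the safer choice, and the recipe form lets you pick more than one key column for exactly this reason. This sketch illustrates the idea with assumed columns `CustomerID` and `SKU`:

```python
import csv
from io import StringIO

# Hypothetical order lines: the same customer can buy several SKUs.
sample = """CustomerID,SKU,Qty
c1,sku-1,2
c1,sku-2,1
c1,sku-1,5
"""

rows = list(csv.DictReader(StringIO(sample)))

# Composite key: a row is a duplicate only if BOTH columns match.
seen = {}
for row in rows:
    key = (row["CustomerID"], row["SKU"])
    seen.setdefault(key, row)  # keep-first behavior

deduped = list(seen.values())
print(len(deduped))  # 3 input rows, 2 unique (CustomerID, SKU) pairs
```

Keying on `CustomerID` alone would wrongly collapse all three rows into one; the composite key removes only the true repeat.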
When this recipe is the right choice
Use Remove duplicate rows when you want a repeatable cleanup rule instead of manual editing across many rows. The strongest clue is the use case itself: import files where Email, SKU, or Customer ID must be unique.
In practice, this recipe is most valuable when the same cleanup problem appears in recurring exports from CRMs, spreadsheets, analytics tools, or ecommerce platforms. Saving the recipe means you can apply the same standard every time a similar CSV arrives, which is exactly what makes the guide useful for long-term workflows rather than one-off fixes.
Use this recipe in context
Open the editor, import your file, click Recipes in the toolbar, and apply this action on its own or combine it with other saved actions. If you want the recipe to run immediately when a file opens, use the Apply recipe on import dropdown in the importer first.
For the best results, treat this page as a reusable operating note: review the example, compare it to your live CSV, run the saved action, and then return to the guide whenever you need to train a teammate or document a repeatable cleanup process.