Fix Garbled CSV Characters Like Ã©, â€™, or �

By CSV Editor Team · Last updated: 2026-03-16

If your CSV suddenly shows text like JosÃ©, FranÃ§ois, â€™, or the replacement character �, the file is usually not truly ruined. In most cases, the content was saved with one text encoding and then opened with another. That mismatch makes normal names, punctuation, and symbols look broken even when the underlying rows and columns are still intact.

The safest fix is to diagnose the encoding first, re-open or re-export the file correctly, and only then continue with import work. For most modern workflows, UTF-8 is the best default. The goal is to repair the text without introducing new problems in delimiters, quoted fields, or row counts. If the parser is throwing multiple kinds of errors, use the main CSV troubleshooting guide as your parent diagnostic flow.

Quick answer

  1. Check whether the issue is garbled text, not just a wrong delimiter.
  2. Re-open the CSV using UTF-8 if your editor or import dialog lets you choose encoding.
  3. Verify a few known names, apostrophes, and symbols before saving anything.
  4. Export a fresh UTF-8 CSV and keep the original delimiter unless the destination requires another one.
  5. Test the repaired file in the destination tool before replacing the original workflow file.
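If your workflow is scriptable, the quick steps above can be sketched in a few lines of Python. This is a minimal sketch, not a definitive recipe; the file names are hypothetical, and the sketch writes its own sample export so it runs as-is.

```python
from pathlib import Path

# Write a small sample export so the sketch is self-contained;
# in practice `src` would be your real CSV file (hypothetical name here).
src = Path("contacts.csv")
src.write_bytes("name,city\nJosé,München\n".encode("utf-8"))

raw = src.read_bytes()

# Step 2: try UTF-8 first; fall back to Windows-1252 only if UTF-8 fails.
try:
    text, encoding = raw.decode("utf-8"), "utf-8"
except UnicodeDecodeError:
    text, encoding = raw.decode("cp1252"), "cp1252"

# Step 3: spot-check known names before saving anything.
assert "José" in text and "München" in text

# Step 4: export a fresh UTF-8 copy, leaving the delimiter untouched.
Path("contacts-utf8.csv").write_text(text, encoding="utf-8")
```

The try/except order matters: valid UTF-8 rarely decodes by accident, while almost any byte stream "succeeds" as Windows-1252, so UTF-8 should always be attempted first.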

What garbled characters in CSV usually mean

Garbled characters happen when byte sequences are interpreted with the wrong character set. A common example is UTF-8 text being read as Windows-1252 or ISO-8859-1. That is why a name such as José might appear as JosÃ© and a curly apostrophe might turn into â€™.

This is different from a file that is structurally broken. If the row count, columns, and quoting still look mostly correct, you are usually dealing with an encoding problem rather than a parser problem.
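A short Python round-trip demonstrates the mechanism: the same bytes, decoded two ways.

```python
# UTF-8 stores é as two bytes: 0xC3 0xA9.
data = "José".encode("utf-8")

# Decoded correctly, the name survives.
assert data.decode("utf-8") == "José"

# Decoded as Windows-1252, each byte becomes its own character: Ã©.
assert data.decode("cp1252") == "JosÃ©"
```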

Common signs of a CSV garbled characters problem

  • Accented letters become sequences like Ã©, Ã±, or Ã¼.
  • Quotes or apostrophes become â€œ, â€, or â€™.
  • Some symbols appear as �, meaning the app could not decode the bytes cleanly.
  • The CSV imports, but customer names, product titles, or addresses look damaged afterward.
  • The same file looks fine in one app and broken in another.

Encoding issue vs delimiter issue

If your CSV opens as one big column or splits into the wrong number of columns, that usually points to a delimiter mismatch. If the columns look correct but the text inside cells looks mangled, that usually points to encoding.

Sometimes both happen at once, which is why it helps to separate the checks. If structure looks wrong, review how to change a CSV delimiter safely. If structure is fine but text is not, continue with the encoding workflow below.
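One way to separate the two checks in Python is to sniff the structure and test the decode independently. This is a sketch under the assumption that your data fits in memory; the sample bytes are made up.

```python
import csv

# Hypothetical raw bytes: semicolon-delimited, UTF-8 encoded.
raw = "id;name\n1;José\n".encode("utf-8")

# Structure check: sniff the delimiter from a decoded sample.
sample = raw.decode("utf-8", errors="replace")
dialect = csv.Sniffer().sniff(sample, delimiters=",;\t")

# Encoding check: a clean UTF-8 decode (no exception, no U+FFFD)
# means the text is fine and any remaining breakage is structural.
text = raw.decode("utf-8")
assert "\ufffd" not in text

print(dialect.delimiter)  # a comma-expecting importer would mis-split this file
```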

Typical examples of garbled text

These before/after patterns are common when UTF-8 bytes are decoded incorrectly:

  • José becomes JosÃ©
  • München becomes MÃ¼nchen
  • It’s ready becomes Itâ€™s ready

If you recognize that pattern, the data is often still recoverable from the original export by reopening it with the right encoding.
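When only a mojibake copy survives, the mis-decode can sometimes be reversed in Python by re-encoding with the wrong codec and decoding with the right one. Reopening the original export is still the safer fix; this sketch only works when no bytes were already replaced with �.

```python
# Mojibake produced by decoding UTF-8 bytes as Windows-1252.
garbled = "MÃ¼nchen"

# Reverse the mistake: re-encode with the wrong codec,
# then decode the recovered bytes as UTF-8.
repaired = garbled.encode("cp1252").decode("utf-8")
assert repaired == "München"
```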

Step-by-step: how to fix garbled characters in CSV

  1. Create a backup first. Do not save over the only copy until you confirm the repair worked.
  2. Inspect sample fields you can recognize easily. Check accented names, punctuation, currency symbols, and any known multilingual values.
  3. Re-open or re-import with UTF-8 selected. If the app supports encoding selection, test UTF-8 before trying risky manual edits. The deeper explanation lives in CSV UTF-8 encoding explained.
  4. Verify delimiters and quoted fields stayed intact. Encoding repairs should not change column count or quote handling.
  5. Export a clean UTF-8 CSV. Use BOM only when the destination tool explicitly requires it, and if the importer still rejects the file, continue to fix invalid UTF-8 byte sequence issues.
  6. Run a small test import. Confirm the repaired characters survive the next step in the workflow.
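Steps 3 through 5 can be sketched with Python's csv module, which preserves quoting while the encoding is changed. The file names are hypothetical, and the sketch writes its own sample file so it runs as-is.

```python
import csv

# Self-contained fixture standing in for your real export (hypothetical name).
with open("orders.csv", "w", encoding="utf-8", newline="") as f:
    csv.writer(f).writerows([["id", "customer"], ["1", "François"]])

# Step 3: re-open with an explicit encoding, trying UTF-8 first.
with open("orders.csv", "r", encoding="utf-8", newline="") as f:
    rows = list(csv.reader(f))

# Step 4: verify structure survived: every row has the same column count.
assert all(len(r) == len(rows[0]) for r in rows)

# Step 5: export a clean UTF-8 copy; switch encoding to "utf-8-sig"
# only if the destination importer explicitly requires a BOM.
with open("orders-utf8.csv", "w", encoding="utf-8", newline="") as f:
    csv.writer(f).writerows(rows)
```

Passing `newline=""` to `open()` lets the csv module manage line endings itself, which avoids doubled blank rows on Windows.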

Mistakes that make garbled CSV text worse

  • Using find-and-replace on visible junk characters instead of fixing the source encoding mismatch.
  • Changing delimiter and encoding at the same time without isolating the real cause.
  • Opening the file in a spreadsheet that auto-formats IDs, dates, or long numbers before you verify text integrity.
  • Saving repeatedly in different tools, which can make partially broken text harder to recover.
  • Assuming every importer wants BOM when many modern tools work best with plain UTF-8.
  • Skipping a final import-readiness check before re-uploading repaired customer or product data.

Quick QA checklist before final export

  • Names and symbols display correctly in sample rows
  • Delimiter still matches the destination system
  • Quoted cells remain quoted correctly
  • Row and column counts match the source
  • Test import succeeds without new character corruption
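The checklist above can be turned into quick assertions before you re-upload. This sketch compares a legacy-encoded source against its repaired UTF-8 export; both files and their names are hypothetical fixtures created inline so the example runs as-is.

```python
import csv

def load(path, encoding):
    with open(path, newline="", encoding=encoding) as f:
        return list(csv.reader(f))

# Fixtures standing in for your real before/after files.
with open("before.csv", "w", encoding="cp1252", newline="") as f:
    f.write("name\nJosé\n")   # legacy-encoded source
with open("after.csv", "w", encoding="utf-8", newline="") as f:
    f.write("name\nJosé\n")   # repaired UTF-8 export

old = load("before.csv", "cp1252")
new = load("after.csv", "utf-8")

# Row and column counts match the source.
assert len(old) == len(new)
assert all(len(a) == len(b) for a, b in zip(old, new))

# Names display correctly, with no replacement characters left behind.
assert new[1][0] == "José" and "\ufffd" not in new[1][0]
```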

FAQ

Why does my CSV show Ã and similar sequences instead of accented letters?

That usually means UTF-8 text was decoded as a legacy encoding such as Windows-1252. The bytes are being interpreted incorrectly, so normal accented characters show up as multi-character junk.

Does the replacement symbol � mean my data is permanently lost?

Not always. It means the current app could not decode some bytes cleanly. If you still have the original source file, reopening it with the correct encoding may restore the text.
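A short Python example shows where � comes from: a decoder that hits bytes it cannot interpret substitutes U+FFFD, but the bytes on disk are untouched.

```python
data = "José".encode("utf-8")   # b'Jos\xc3\xa9'

# An app decoding these bytes as ASCII with replacement shows Jos��.
assert data.decode("ascii", errors="replace") == "Jos\ufffd\ufffd"

# The source bytes are unchanged, so the correct decoder still works.
assert data.decode("utf-8") == "José"
```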

Should I export CSV as UTF-8 or UTF-8 with BOM?

Default to UTF-8 unless the destination importer specifically asks for UTF-8 with BOM. BOM can help some legacy spreadsheet flows, but it is not universally required.
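In Python terms, the BOM is just three extra bytes at the front of the file, which the `utf-8-sig` codec writes on encode and strips on decode:

```python
text = "name\nZoë\n"

plain = text.encode("utf-8")
with_bom = text.encode("utf-8-sig")

# The BOM is three bytes (EF BB BF) prepended to the start; nothing else changes.
assert with_bom == b"\xef\xbb\xbf" + plain

# Reading with "utf-8-sig" handles both forms transparently.
assert with_bom.decode("utf-8-sig") == text
assert plain.decode("utf-8-sig") == text
```

Reading with plain `"utf-8"` would leave a stray `\ufeff` at the start of the first header cell, which is a common cause of a mysteriously unmatched first column name.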

Related guides

Canonical: https://csveditoronline.com/docs/csv-garbled-characters-fix