
CSV to SQL Converter

Turn a CSV file into a script of INSERT statements you can paste into Postgres, MySQL, or SQLite. Runs in your browser.

Need to load a CSV into a database without spinning up a `COPY` workflow, fighting the import wizard in pgAdmin, or writing a one-off Python script? This tool produces a plain `.sql` file of `INSERT` statements that you can paste directly into psql, the MySQL CLI, the SQLite shell, or your DB GUI of choice.

Each row in your CSV becomes one `INSERT INTO data (...) VALUES (...);` statement. Strings get single-quoted with `'` properly doubled. Numbers pass through as-is. Empty cells render as `NULL`. The output is portable across Postgres, MySQL, and SQLite — see the FAQ for table-name customization. Conversion runs entirely in your browser.
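
A quick sketch of what that looks like in practice, using a hypothetical two-row CSV with `name` and `age` columns:

```sql
-- Input CSV (hypothetical):
--   name,age
--   Alice,30
--   Bob,
--
-- Generated output:
INSERT INTO `data` (`name`, `age`) VALUES ('Alice', 30);
INSERT INTO `data` (`name`, `age`) VALUES ('Bob', NULL);
```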

How it works

Three steps, no signup

  1. Drop your CSV

     Drag a .csv file into the box above, or click to pick one. The first row is treated as headers and used as column names in the INSERT statements.

  2. We emit INSERT statements

     One INSERT per data row, targeting a hardcoded table named `data`. Strings are quoted, numbers pass through, empty cells become NULL, embedded single quotes are escaped.

  3. Download the .sql

     A plain SQL script is ready instantly. Open it in your DB tool, change the table name if needed (see FAQ), make sure the table exists, and run. A short sketch of that last step is shown just below.
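
A minimal sketch of that last step, assuming the example `name`/`age` columns from above and a download saved as `data.sql` (both are placeholders):

```sql
-- 1. Create the target table yourself; the converter does not emit CREATE TABLE.
CREATE TABLE data (
  name TEXT,
  age  INTEGER
);

-- 2. Run the downloaded script, e.g. in psql:
--      \i data.sql
--    or paste its contents into any SQL console and execute.
```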

FAQ

Frequently asked questions

What does the output look like?
One ``INSERT INTO `data` (`col1`, `col2`, ...) VALUES (...);`` per row. Column names are wrapped in backticks. So a CSV row `Alice,30` from columns `name,age` produces ``INSERT INTO `data` (`name`, `age`) VALUES ('Alice', 30);``. Each statement ends with a semicolon and a newline, so you can pipe the file straight to psql or mysql.
Why does it always use a table called `data`? Can I change it?
v1 hardcodes `data` as the table name. To change it, open the downloaded .sql in any text editor and find-and-replace `` INSERT INTO `data` `` with your real table name. Or use sed: ``sed -i '' 's/INTO `data`/INTO `users`/g' file.sql`` (that `-i ''` form is for BSD/macOS sed; GNU sed takes plain `-i`). A future version will accept the table name as an input on this page.
What SQL dialect is this?
Intentionally portable: the values, quoting, and escaping work unchanged in Postgres, MySQL, and SQLite. The backtick column quoting is MySQL syntax, and SQLite accepts it too, but Postgres does not; for Postgres, swap the backticks for double quotes (`"`) or drop them entirely if your column names aren't reserved words. There's no `CREATE TABLE`; you need to create the table separately.
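
For example, here is the same row in both quoting styles (a sketch reusing the `name`/`age` example from above):

```sql
-- As generated (MySQL / SQLite identifier quoting):
INSERT INTO `data` (`name`, `age`) VALUES ('Alice', 30);

-- Postgres-friendly variant: backticks swapped for double quotes (or simply removed):
INSERT INTO "data" ("name", "age") VALUES ('Alice', 30);
```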
How are strings, numbers, and NULLs handled?
Strings get single-quoted (`'value'`). Numbers (CSV cells that parse cleanly as numbers) pass through unquoted. Empty cells, `null`, and `undefined` render as `NULL` (no quotes). Booleans render as `TRUE` / `FALSE`. Note that CSV itself is type-less: every cell arrives as text, so the numeric and boolean detection is a best-effort guess based on how each cell parses. If you need strict typing, give the columns real types in the table you create and cast or validate in your DB after the load.
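
As an illustration of those rules, a hypothetical row with headers `name,age,active,nickname`, where the nickname cell is left empty, would come out roughly like this:

```sql
-- 'Alice' is quoted as a string, 30 passes through as a number,
-- the boolean becomes TRUE, and the empty nickname cell becomes NULL.
INSERT INTO `data` (`name`, `age`, `active`, `nickname`) VALUES ('Alice', 30, TRUE, NULL);
```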
What about single quotes inside cell values?
Doubled. A cell containing `O'Brien` becomes `'O''Brien'` in the SQL — that's the SQL standard for escaping a single quote inside a quoted string and works in every major dialect. No backslash escaping is used (which is good — backslash escaping varies by dialect and SQL mode).
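
In context, that looks like this (a minimal sketch with a hypothetical `name` column):

```sql
-- The doubled '' inside the quotes is one literal apostrophe, not an empty string.
INSERT INTO `data` (`name`) VALUES ('O''Brien');
-- Stored value: O'Brien
```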
Do I need to create the table before running this?
Yes. The output is INSERT statements only — no `CREATE TABLE`, no schema. Create the table with the right column names and types in your DB first (`CREATE TABLE data (name TEXT, age INTEGER);` or similar), then run the .sql to load the rows. This separation is intentional: schema decisions belong to you, not to a converter.
When should I use this vs. a real bulk-load (`\copy`, `LOAD DATA`, etc.)?
Use INSERT scripts for small to moderate data — up to maybe 100k rows. They're portable, easy to review, work in any DB GUI, and don't require server-side file access. For large loads (millions of rows), native bulk-loaders are dramatically faster: `\copy` in Postgres, `LOAD DATA INFILE` in MySQL, `.import` in SQLite. INSERT-based loads can take hours where `\copy` takes seconds.
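
For reference, the native bulk-load commands look roughly like this (a sketch: it assumes a table named `data` and a file `data.csv` with a header row; the exact options depend on your setup):

```sql
-- Postgres (psql meta-command; reads the file client-side):
\copy data FROM 'data.csv' WITH (FORMAT csv, HEADER true)

-- MySQL (LOCAL reads from the client; the server may need local_infile enabled):
LOAD DATA LOCAL INFILE 'data.csv' INTO TABLE data
  FIELDS TERMINATED BY ',' ENCLOSED BY '"'
  IGNORE 1 LINES;

-- SQLite (sqlite3 shell; --skip requires a reasonably recent sqlite3):
.import --csv --skip 1 data.csv data
```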

Need to compare two files?

Drop two spreadsheets and see every change in seconds. Free, private, runs in your browser.