SQLite is not a toy database
2021-03-25
Abstract
SQLite can be used instead of Excel, to search in JSON files, … It can also serve as the database for small websites. It is quite powerful: you can use basic SQL features on data imported from CSV files, and for applications it can be embedded directly.
Whether you are a developer, data analyst, QA engineer, DevOps person, or product manager - SQLite is a perfect tool for you. Here is why.
A few well-known facts to get started:
- SQLite is the most common DBMS in the world, shipped with all popular operating systems.
- SQLite is serverless.
- For developers, SQLite is embedded directly into the app.
- For everyone else, there is a convenient database console (REPL), provided as a single file (sqlite3.exe on Windows, sqlite3 on Linux / macOS).
Console, import, and export
The console is a killer SQLite feature for data analysis: more powerful than Excel and simpler than pandas. You can import CSV data with a single command, and the table is created automatically:
> .import --csv city.csv city
> select count(*) from city;
1117
The console supports basic SQL features and shows query results in a nice ASCII-drawn table. Advanced SQL features are also supported, but more on that later.
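For example, a query along these lines (a sketch - the exact query may differ; foundation_year comes from the imported CSV) yields the breakdown of cities by foundation century:
select
  century || ' century' as dates,
  count(*) as city_count
from (select foundation_year / 100 + 1 as century from city)
group by century
order by century desc;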
┌────────────┬────────────┐
│ dates │ city_count │
├────────────┼────────────┤
│ 21 century │ 1 │
│ 20 century │ 263 │
│ 19 century │ 189 │
│ 18 century │ 191 │
│ 17 century │ 137 │
│ ... │ ... │
└────────────┴────────────┘
Data can be exported as SQL, CSV, JSON, even Markdown and HTML. It takes just a couple of commands:
.mode json
.output city.json
select city, foundation_year, timezone from city limit 10;
.shell cat city.json
If you are more of a BI than a console person - popular data exploration tools like Metabase, Redash, and Superset all support SQLite.
Native JSON
There is nothing more convenient than SQLite for analyzing and transforming JSON. You can select data directly from a file as if it were a regular table. Or import data into the table and select from there.
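For instance, a sketch of reading a JSON file of currencies straight from disk (readfile() ships with the sqlite3 console; the file and field names here are illustrative):
select
  json_extract(value, '$.code') as code,
  json_extract(value, '$.num') as num,
  json_extract(value, '$.name') as name,
  json_extract(value, '$.unit') as unit
from json_each(readfile('currency.sample.json'))
limit 10;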
┌──────┬─────┬─────────────────┬──────────┐
│ code │ num │ name │ unit │
├──────┼─────┼─────────────────┼──────────┤
│ ARS │ 032 │ Argentine peso │ peso │
│ CHF │ 756 │ Swiss Franc │ franc │
│ EUR │ 978 │ Euro │ euro │
│ GBP │ 826 │ British Pound │ pound │
│ INR │ 356 │ Indian Rupee │ rupee │
│ JPY │ 392 │ Japanese yen │ yen │
│ MAD │ 504 │ Moroccan Dirham │ dirham │
│ RUR │ 643 │ Russian Rouble │ rouble │
│ SOS │ 706 │ Somali Shilling │ shilling │
│ USD │ 840 │ US Dollar │ dollar │
└──────┴─────┴─────────────────┴──────────┘
Doesn’t matter how deep the JSON is - you can extract any nested object:
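A sketch with json_tree(), which walks the whole document and lets you pick out id/name pairs at any depth (again, the file and field names are assumptions):
select
  json_extract(value, '$.id') as id,
  json_extract(value, '$.name') as name
from json_tree(readfile('industry.sample.json'))
where type = 'object'
  and json_extract(value, '$.id') is not null;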
┌────────┬──────────────────────┐
│ id │ name │
├────────┼──────────────────────┤
│ 7.538 │ Internet provider │
│ 7.539 │ IT consulting │
│ 7.540 │ Software development │
│ 9.399 │ Mobile communication │
│ 9.400 │ Fixed communication │
│ 9.401 │ Fiber-optics │
│ 43.641 │ Audit │
│ 43.646 │ Insurance │
│ 43.647 │ Bank │
└────────┴──────────────────────┘
CTEs and set operations
Of course, SQLite supports Common Table Expressions (the WITH clause) and JOINs, I won’t even give examples here. If the data is hierarchical (the table refers to itself through a column like parent_id), WITH RECURSIVE will come in handy. Any hierarchy, no matter how deep, can be ‘unrolled’ with a single query.
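A minimal sketch, assuming an area(id, name, parent_id) table:
with recursive tmp(id, name, level) as (
  select id, name, 1 as level
  from area
  where parent_id is null
  union all
  select area.id, tmp.name || ', ' || area.name, tmp.level + 1
  from area
  join tmp on area.parent_id = tmp.id
)
select id, name, level from tmp;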
┌──────┬──────────────────────────┬───────┐
│ id │ name │ level │
├──────┼──────────────────────────┼───────┤
│ 93 │ US │ 1 │
│ 768 │ US, Washington DC │ 2 │
│ 1833 │ US, Washington │ 2 │
│ 2987 │ US, Washington, Bellevue │ 3 │
│ 3021 │ US, Washington, Everett │ 3 │
│ 3039 │ US, Washington, Kent │ 3 │
│ ... │ ... │ ... │
└──────┴──────────────────────────┴───────┘
Sets? No problem: UNION, INTERSECT, and EXCEPT are at your service.
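A tiny sketch with two hypothetical tables of city IDs - the same pattern works for UNION and INTERSECT:
-- cities visited but not marked as favorites
select city_id from visited_cities
except
select city_id from favorite_cities;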
Calculate one column based on several others? Enter generated columns:
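Here is a minimal sketch - the table and column names are illustrative, and the generated column is virtual by default:
create table people (
  id integer primary key,
  first_name text,
  last_name text,
  full_name text as (first_name || ' ' || last_name)
);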
Generated columns can be queried in the same way as ‘normal’ ones:
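Continuing the sketch above:
select id, full_name
from people
where full_name like 'Jane%';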
Math statistics
Descriptive statistics? Easy: mean, median, percentiles, standard deviation, you name it. You’ll have to load an extension, but it’s also a single command (and a single file).
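A sketch of what that looks like, assuming a stats extension that provides median(), mode(), and percentile_NN() aggregates (such as the extension set mentioned below), and a books table with a num_pages column:
.load ./stats
select
  count(*) as book_count,
  cast(avg(num_pages) as integer) as mean,
  cast(median(num_pages) as integer) as median,
  mode(num_pages) as mode,
  percentile_90(num_pages) as p90,
  percentile_95(num_pages) as p95,
  percentile_99(num_pages) as p99
from books;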
┌────────────┬──────┬────────┬──────┬─────┬─────┬──────┐
│ book_count │ mean │ median │ mode │ p90 │ p95 │ p99 │
├────────────┼──────┼────────┼──────┼─────┼─────┼──────┤
│ 1483 │ 349 │ 295 │ 256 │ 640 │ 817 │ 1199 │
└────────────┴──────┴────────┴──────┴─────┴─────┴──────┘
Note on extensions. SQLite lacks many functions found in other DBMSs such as PostgreSQL. They are easy to add, though, and that is what people do - so it turns out to be quite a mess.
Therefore, I decided to make a consistent set of extensions, divided by domain area and compiled for major operating systems. There are only a few of them so far, but more are on their way.
More fun with statistics. You can plot the data distribution right in the console. Look how cute it is:
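One way to get such a histogram with built-in functions only (the bucket size and scale factor are arbitrary; books and num_pages are the same assumed names as above):
-- hex(zeroblob(n)) yields 2n zeros; replacing each '00' pair gives n asterisks
select
  num_pages / 100 as slot,
  count(*) as book_count,
  replace(hex(zeroblob(count(*) / 13 + 1)), '00', '*') as bar
from books
group by slot
order by slot;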
┌──────┬────────────┬────────────────────────────────┐
│ slot │ book_count │ bar │
├──────┼────────────┼────────────────────────────────┤
│ 0 │ 116 │ ********* │
│ 1 │ 254 │ ******************** │
│ 2 │ 376 │ ****************************** │
│ 3 │ 285 │ ********************** │
│ 4 │ 184 │ ************** │
│ 5 │ 90 │ ******* │
│ 6 │ 54 │ **** │
│ 7 │ 41 │ *** │
│ 8 │ 31 │ ** │
│ 9 │ 15 │ * │
│ 10 │ 11 │ * │
│ 11 │ 12 │ * │
│ 12 │ 2 │ * │
└──────┴────────────┴────────────────────────────────┘
Performance
SQLite works with hundreds of millions of records just fine. Regular INSERTs show about 240K records per second on my laptop. And if you connect the CSV file as a virtual table (there is an extension for that), inserts become 2 times faster.
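A sketch of the virtual table route, assuming the stock csv extension compiled as a loadable library:
.load ./csv
create virtual table temp.city_csv using csv(filename='city.csv', header=true);
-- bulk-copy rows from the CSV-backed virtual table into a real table
insert into city select * from city_csv;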
There is a popular opinion among developers that SQLite is not suitable for the web because it doesn’t support concurrent access. This is a myth. In write-ahead log (WAL) mode, which has been available for a long time, there can be as many concurrent readers as you want. There can be only one concurrent writer, but often one is enough.
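Enabling WAL mode is a one-liner:
pragma journal_mode = wal;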
SQLite is a perfect fit for small websites and applications. sqlite.org uses SQLite as a database, not bothering with optimization (≈200 requests per page). It handles 700K visits per month and serves pages faster than 95% of websites I’ve seen.
Documents, graphs, and search
SQLite supports partial indexes and indexes on expressions, as ‘big’ DBMSs do. You can build indexes on generated columns and even turn SQLite into a document database. Just store raw JSON and build indexes on json_extract()-ed columns:
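A minimal sketch - the table, column, and JSON field names are illustrative:
create table currency_docs (
  body text  -- raw JSON document
);

create index idx_currency_code
on currency_docs(json_extract(body, '$.code'));

-- the index kicks in when filtering on the same expression
select body
from currency_docs
where json_extract(body, '$.code') = 'EUR';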
You can also use SQLite as a graph database. A bunch of complex WITH RECURSIVE queries will do the trick, or maybe you’ll prefer to add a bit of Python.
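For the pure-SQL route, a minimal sketch that finds every node reachable from node 1, assuming an edge(source, target) table:
with recursive reachable(node) as (
  select 1
  union
  select edge.target
  from edge
  join reachable on edge.source = reachable.node
)
select node from reachable;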
Full-text search works out of the box:
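A sketch with the built-in FTS5 module (the books table is the same assumption as above):
create virtual table books_fts using fts5(author, title, publisher);

insert into books_fts
select author, title, publisher from books;

select * from books_fts
where books_fts match 'ann'
limit 5;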
┌─────────────────────┬────────────────────────────────┬────────────┐
│ author │ title │ publisher │
├─────────────────────┼────────────────────────────────┼────────────┤
│ Ruby Ann Boxcar │ Ruby Ann's Down Home Trailer P │ Citadel │
│ Ruby Ann Boxcar │ Ruby Ann's Down Home Trailer P │ Citadel │
│ Lynne Ann DeSpelder │ The Last Dance: Encountering D │ McGraw-Hil │
│ Daniel Defoe │ Robinson Crusoe │ Ann Arbor │
│ Ann Thwaite │ Waiting for the Party: The Lif │ David R. G │
└─────────────────────┴────────────────────────────────┴────────────┘
Maybe you need an in-memory database for intermediate computations? Single line of python code:
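Something like this, with the standard sqlite3 module:
import sqlite3

# the database lives in memory and disappears when the connection is closed
db = sqlite3.connect(":memory:")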
You can even access it from multiple connections:
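For example, via a shared-cache URI - a sketch, where the database name memdb1 is arbitrary:
import sqlite3

# both connections see the same named in-memory database
uri = "file:memdb1?mode=memory&cache=shared"
db1 = sqlite3.connect(uri, uri=True)
db2 = sqlite3.connect(uri, uri=True)

db1.execute("create table data(value text)")
db1.execute("insert into data values ('hello')")
db1.commit()
print(db2.execute("select value from data").fetchall())  # [('hello',)]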
And so much more
There are fancy window functions (just like in PostgreSQL). UPSERT, UPDATE FROM, and generate_series(). R-Tree indexes. Regular expressions, fuzzy search, and geo. In terms of features, SQLite can compete with any ‘big’ DBMS.
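For a taste, a running total over a generated series (generate_series is built into the sqlite3 console):
select
  value,
  sum(value) over (order by value) as running_total
from generate_series(1, 5);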
There is also great tooling around SQLite. I especially like Datasette - an open-source tool for exploring and publishing SQLite datasets. And DBeaver is an excellent open-source database IDE with support for the latest SQLite versions.