I was originally turned off by Turso given its cost basis, and, don’t get me wrong, it’s still too expensive for what it is. However, I wanted to revisit the use case for SQLite over Vitess and Neon, under the assumption that you manage the SQLite yourself. Vitess and Neon are catch-all solutions, and they are most cost-effective when all of your data must be relational. Turso’s approach, by contrast, works best when you can keep data isolated and partitioned. In this post, I’ll dive deeper into the multi-tenant model to identify the best use cases for it.
Turso itself is pretty expensive at $0.75 per billion rows read. The nuance here is that designing your own WAL (write-ahead logging) SQLite infrastructure is much easier than managing your own Vitess or Neon cluster, and in certain cases the WAL SQLite pattern will outperform both. Furthermore, the pattern is surprisingly easy to build on S3, and if designed properly it is both economic and ergonomic.
- Turso: A globally distributed, SQLite-compatible edge database designed for low-latency multi-tenant deployments.
- Vitess: An open-source clustering system that shards and orchestrates MySQL to provide seamless horizontal scaling and high availability.
- Neon: A serverless Postgres platform that decouples storage from compute, offering instant branching and on-demand, cost-efficient scaling.
The phrase "multi-tenant" is thrown around a lot, but it isn’t very clear what the use case is if you’ve never heard of it before. I think the clearest way to describe it is like this:
Assume you have an application with multiple organizations. These organizations do not interact with each other, but they all share the same tables and relational structure. If you separated each organization into its own database, there would be no impact to the application. This is when multi-tenant makes sense.
There is a very key caveat to this: even if you have this pattern in your application, your services still need to be able to load the SQLite database in memory and also be able to write concurrently and reliably (Turso solves that issue for you... at a cost...).
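To make that concrete, here is a minimal sketch of per-tenant routing, assuming the better-sqlite3 package and a pre-existing `./tenants/` directory (the names here are placeholders, not a prescribed design):

```ts
import Database from 'better-sqlite3'

// Hypothetical per-tenant router: each organization gets its own SQLite file.
// Assumes the ./tenants/ directory already exists.
const connections = new Map<string, Database.Database>()

function dbForTenant(tenantId: string): Database.Database {
  let db = connections.get(tenantId)
  if (!db) {
    db = new Database(`./tenants/${tenantId}.db`)
    db.pragma('journal_mode = WAL') // one writer, many readers per file
    connections.set(tenantId, db)
  }
  return db
}

// Assuming an items table exists: queries never cross tenant boundaries,
// so no `WHERE org_id = ?` filter is needed.
const rows = dbForTenant('acme').prepare('SELECT id, name FROM items').all()
console.log(rows)
```

Isolation falls out of the file layout, which is the whole appeal of the model.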
Looking at managed providers like PlanetScale, Neon, and Turso at scale, Turso delivers the best raw performance... The problem comes from its multi-tenant architecture, which has severe concurrency limits you won’t encounter with MySQL or Postgres. Even with the Turso fork, you’ll hit those constraints pretty quickly if you try to use it like a regular database. By contrast, Vitess and Neon offer a proper scalable MySQL/Postgres experience, at an obviously higher cost.
One nice touch is that Turso offers a free tier and a preview environment, making it easy to test before committing. However, if you are seriously considering a multi-tenant architecture, I think you will blow past their free and base plans pretty quickly.
Key takeaway: Turso’s top plan ($416.58/month) charges $0.45 per GB stored and $0.75 per billion row reads/writes. It’s fairly expensive for what you get.
If you aren’t getting heavy usage, stick to PlanetScale or Neon for flexibility. However, if you need to scale, I wouldn’t recommend Turso; instead, I would suggest rolling your own WAL SQLite architecture. It will be significantly cheaper and potentially even easier to manage. There are a few key criteria that need to be met before committing to your own SQLite infrastructure, though.
- Batch writing: Does your application require real-time writes? If so, I would avoid this pattern and opt for managing your own Vitess or Neon cluster.
- Partitioned data: Is your data organized per user or client, in isolated tables that do not interact with each other? If yes, SQLite is a very good fit.
- Redundancy: Can you back up your SQLite files reliably (e.g., to S3)? Without a clear redundancy plan, DIY SQLite infrastructure is risky; a minimal backup sketch follows this list.
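On the redundancy point, a periodic full-file copy to S3 is often enough. Here is a minimal sketch using the AWS SDK; the bucket, key, and path are placeholders, and in practice you would snapshot the file (or use SQLite’s backup API) so you never upload a half-written database:

```ts
import fs from 'fs'
import { PutObjectCommand, S3Client } from '@aws-sdk/client-s3'

const s3 = new S3Client({ region: 'us-east-1' })

// Upload a point-in-time copy of one tenant's database file.
// Reads the whole file into memory, which is fine for small per-tenant DBs.
async function backupDb(localPath: string, bucket: string, key: string): Promise<void> {
  await s3.send(
    new PutObjectCommand({
      Bucket: bucket,
      Key: key,
      Body: fs.readFileSync(localPath)
    })
  )
}

backupDb('./tenants/acme.db', 'your-backup-bucket', 'backups/acme.db')
  .catch(console.error)
```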
For clarity on how much cheaper this is than Turso, let’s look at how much S3 costs. S3 storage is $0.023 per GB-month, and requests run $0.0004 per 1,000 GETs and $0.005 per 1,000 PUTs. Because writes are batched, the per-row cost collapses: at even a conservative 100 records per PUT, one million writes is 10,000 PUTs, or about $0.05. Here’s what the cost comparison table looks like for perspective:
| Provider | Cost per GB stored | Cost per 1M row writes |
|---|---|---|
| Turso Developer Plan | $0.75 | $1.00 |
| S3 + SQLite | $0.023 | ~$0.05 |
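To sanity-check the S3 figure, the arithmetic is short enough to script. The price and batch size below are the assumptions from the paragraph above, not authoritative numbers:

```ts
// Back-of-envelope cost of 1,000,000 row writes on S3-backed SQLite,
// assuming rows are batched into PUT requests.
const PUT_PRICE_PER_1K = 0.005 // USD per 1,000 PUT requests (assumed S3 list price)
const ROWS_PER_PUT = 100 // assumed batch size from the example above

const puts = 1_000_000 / ROWS_PER_PUT // 10,000 PUT requests
const costPerMillionWrites = (puts / 1_000) * PUT_PRICE_PER_1K // ≈ $0.05
console.log(`~$${costPerMillionWrites.toFixed(2)} per million writes`)
```

Bump ROWS_PER_PUT to 100,000 and the request cost drops to fractions of a cent, which is why batch writing is the first criterion on the list above.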
You could probably cut costs down further if you needed to, assuming you keep the SQLite database in memory for a service and are just backing up data. Turso might have an edge if you need more volume for concurrent writes and multiple services need to be able to read updates. However, the multi-tenant use case usually does not have that requirement, and you can always decouple the SQLite queries into their own microservice.
If all of the above criteria are met, I think you will be pleasantly surprised by how ergonomic the solution is. Below I demonstrate how to create a simple S3Lite client with WAL. This is definitely not a production-scale implementation; it is meant more to give you a general idea of how to design your own SQLite infrastructure.
import fs from 'fs'
import os from 'os'
import path from 'path'
import S3Lite from 's3lite'
import { PutObjectCommand, S3Client } from '@aws-sdk/client-s3'
export interface Item {
id: number
name: string
}
/**
* Client for interacting with a WAL-enabled SQLite database stored on S3,
* with separate read and write clients and batch write functionality.
*
* @NOTE: THIS IS A DEMO TOY IMPLEMENTATION, PROD NEEDS REDUNDANCY AND FAULT TOLERANCE
*
* @param dbUrl - The S3 URL of the SQLite database file.
* @param bucket - The name of the S3 bucket where the database and WAL live.
* @param region - AWS region of the S3 bucket (defaults to 'us-east-1').
*/
export default class S3WALClient {
private writeDb: ReturnType<typeof S3Lite.database>
private readDb: ReturnType<typeof S3Lite.database>
private s3: S3Client
private queue: string[] = []
private batchInterval?: NodeJS.Timeout
constructor(
private dbUrl: string,
private bucket: string,
region = 'us-east-1'
) {
const commonOptions = {
s3Options: {
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!
}
}
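    // Two handles to the same S3-backed file: a long batch write on one
    // should not block reads on the other.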
this.writeDb = S3Lite.database(dbUrl, commonOptions)
this.readDb = S3Lite.database(dbUrl, commonOptions)
this.s3 = new S3Client({ region })
this.startBatchWriter()
}
  /**
* Enqueue an item name for batch insertion.
*
* @param name - The name of the item to insert.
*/
public enqueueItem(name: string): void {
this.queue.push(name)
}
  /**
* Retrieve all items from the database.
*
* @returns A promise resolving to an array of items.
*/
public async getItems(): Promise<Item[]> {
await this.readDb.open()
await this.readDb.exec('PRAGMA journal_mode=WAL;')
const items: Item[] = await this.readDb.all('SELECT id, name FROM items;')
await this.readDb.close()
return items
}
  /**
* Stop the interval that flushes the write queue.
*/
public stopBatchWriter(): void {
clearInterval(this.batchInterval)
}
  /**
* Start an interval that flushes queued writes every 1 minute.
*/
private startBatchWriter(): void {
this.batchInterval = setInterval(() => {
this.flushQueue().catch(console.error)
}, 60_000)
}
  /**
* Flush any queued item names to the database and upload the WAL file to S3.
*/
private async flushQueue(): Promise<void> {
    if (this.queue.length === 0) return
    // Drain the queue up front so items enqueued while the awaits below are
    // in flight are not lost when the batch completes.
    const batch = this.queue.splice(0, this.queue.length)
    await this.writeDb.open()
    await this.writeDb.exec('PRAGMA journal_mode=WAL;')
    await this.writeDb.exec(
      'CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT);'
    )
    for (const name of batch) {
      await this.writeDb.run('INSERT INTO items (name) VALUES (?);', name)
    }
await this.writeDb.close()
await this.uploadWAL()
}
  /**
* Upload the local WAL file generated by SQLite to S3.
*/
private async uploadWAL(): Promise<void> {
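    // Assumes s3lite materializes the database in os.tmpdir(), so SQLite's
    // WAL file sits next to it as `<basename>-wal`.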
const localDir = os.tmpdir()
const baseName = path.basename(this.dbUrl)
const dbPath = path.join(localDir, baseName)
const walPath = `${dbPath}-wal`
await this.s3.send(
new PutObjectCommand({
Bucket: this.bucket,
Key: `${baseName}-wal`,
Body: fs.createReadStream(walPath)
})
)
}
}
// Example usage:
const client = new S3WALClient(
'https://your-bucket.s3.amazonaws.com/tenant.db',
'your-bucket'
)
client.enqueueItem('Widget')
// After some time (e.g., 2 minutes) we can read back:
setTimeout(async () => {
const items = await client.getItems()
console.log(items)
client.stopBatchWriter()
}, 120_000)
I would definitely recommend implementing a solid plan for redundancy and fault tolerance. But the fact that the core logic ends up looking like this, while also offering major performance and cost gains, makes WAL SQLite a compelling choice in the right scenario.
I still lean heavily towards managing your own Vitess or Neon cluster if you are looking to scale up: a lot of the redundancy and fault tolerance is managed for you, and it is easy to corrupt SQLite files if you don’t know how to manage them properly. However, there are definitely scenarios where this SQLite model makes sense, and if you can partition your data it is both the most ergonomic and the most economic solution.