We started Instant because we believed app development needed a new kind of database. Now, with agents building more software than ever, that need has only grown.
In 2021, we wrote “Database in the Browser”, a deep exploration of the pain points developers face building modern apps: fetching data, keeping it consistent, optimistic updates, offline mode, permissions. The thesis was simple: these are all database problems in disguise.
A year later, we published “A Graph-Based Firebase”, which laid out how a triple store and a new query language could give developers the relational power of Supabase with the real-time magic of Firebase. The essay went viral and the team raised a seed round to build Instant.
Two years of heads-down development later, Instant was open-sourced. It hit the front page of Hacker News with 1,000+ upvotes. The demo of spinning up a database instantly resonated with the community.
In 2025 Instant became a full backend solution with database, auth, permissions, and storage all governed by the same data model. Then came create-instant-app and end-to-end typesafety, so getting started was as easy as a single terminal command.
In 2026 Instant entered the AI era. Modern LLMs natively know how to use it. Give them a bit of context and they can build apps like Counter-Strike and Instagram in a few prompts.
With the rise of agents, we believe more software will be built than ever. Infrastructure needs to scale to millions of apps, not just millions of users. Instant is the database for that future.
"Database in the Browser" outlines what the future of app development could look like. The ideas resonate with thousands of developers.
"A Graph-Based Firebase" goes viral on Twitter. The team lays out how a triple store and a new query language could give developers the best of Firebase and Supabase.
After two years of development, Instant is open-sourced. Hits the front page of Hacker News with 1,000+ upvotes. The demo of spinning up a database instantly captures developer imagination.
Instant becomes a complete backend as a service with database, auth, permissions, and storage.
Launch of create-instant-app and end-to-end typesafety. Getting started with Instant becomes as easy as a single terminal command.
Modern LLMs natively know how to use Instant. Agents can build full apps in a few prompts. Instant handles 10,000+ concurrent connections and 1,000+ queries per second in production.
Every app will eventually need real-time sync, optimistic updates, and offline mode. These shouldn't require a team of engineers. They should come for free.
When the right abstraction exists, it's a waste of tokens to build it again. One coherent package beats ten separate services wired together.
In the AI era, more apps will be built than ever. We need hosting that scales to millions of apps, not just millions of users.
A 12-line chat app. A single terminal command to get started. Schema, permissions, and queries all in your code. If it's not delightful, we haven't shipped.
Instant looks simple on the surface. A few lines of code and your app has a real-time backend. But there's a lot of interesting architecture that makes this possible.
All data in Instant is stored as triples: [entity, attribute, value]. A user's name, a goal's title, a relation between them. They're all expressed the same way.
This simple, uniform structure can model any entity and any relationship. Because triples work the same on both the frontend and backend, we can use the same data model everywhere.
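To make the uniformity concrete, here is a minimal sketch of a triple store in plain JavaScript. The entity ids and attribute names are illustrative, not Instant's internal schema:

```javascript
// A tiny in-memory triple store: every fact is [entity, attribute, value].
const triples = [
  ["user-1", "user/name", "Alice"],
  ["goal-1", "goal/title", "Learn to code"],
  ["goal-1", "goal/owner", "user-1"], // a relation is a triple whose value is an entity id
];

// Looking up an attribute and following a relation use the same primitive.
function lookup(entity, attribute) {
  const hit = triples.find(([e, a]) => e === entity && a === attribute);
  return hit ? hit[2] : undefined;
}

// Follow the goal -> owner relation, then read the owner's name.
const ownerName = lookup(lookup("goal-1", "goal/owner"), "user/name"); // "Alice"
```

Values, attributes, and relations all flow through one `lookup` shape, which is what lets the same data model run unchanged on the client and the server.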
Developers write InstaQL, a declarative syntax using plain JavaScript objects. You describe the shape of the data you want, and that's the shape you get back.

```javascript
// Fetch all goals with their todos
db.useQuery({
  goals: {
    todos: {},
  },
});
```
No joins, no SQL, no GraphQL resolvers. The query language was designed so that the shape of the query mirrors the shape of the result.
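For instance, a result for the `{ goals: { todos: {} } }` query nests the same way the query does. The field values below are illustrative:

```javascript
// Result shape for db.useQuery({ goals: { todos: {} } }):
// goals at the top level, todos nested under each goal,
// mirroring the nesting of the query itself.
const result = {
  goals: [
    {
      id: "goal-1",
      title: "Launch the app",
      todos: [
        { id: "todo-1", title: "Write tests", done: false },
        { id: "todo-2", title: "Ship it", done: true },
      ],
    },
  ],
};
```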
On the client, InstaQL queries compile to Datalog, a logic-based query language that runs in a lightweight engine right in the browser.
This local Datalog engine is what makes optimistic updates possible. When you mutate data, the change applies to the local triple store instantly. The engine re-evaluates affected queries and your UI updates before the server even responds.
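The core idea can be sketched with a toy pattern matcher over a local triple store. This is in the spirit of the engine described above, not Instant's actual implementation; strings starting with `"?"` are variables, everything else must match exactly:

```javascript
// A local triple store (illustrative data).
const store = [
  ["todo-1", "todo/title", "Write tests"],
  ["todo-1", "todo/done", false],
  ["todo-2", "todo/title", "Ship it"],
  ["todo-2", "todo/done", true],
];

const isVar = (p) => typeof p === "string" && p.startsWith("?");

// Return one bindings object per triple the pattern matches.
function match(pattern) {
  return store
    .filter((t) => pattern.every((p, i) => isVar(p) || p === t[i]))
    .map((t) =>
      Object.fromEntries(pattern.map((p, i) => [p, t[i]]).filter(([p]) => isVar(p)))
    );
}

match(["?e", "todo/done", true]); // one binding: { "?e": "todo-2" }

// An optimistic local write: swap in the new value, then re-run the
// query -- the UI can update before the server ever responds.
const i = store.findIndex(([e, a]) => e === "todo-1" && a === "todo/done");
store[i] = ["todo-1", "todo/done", true];

match(["?e", "todo/done", true]); // now two bindings: todo-1 and todo-2
```

A real engine indexes triples and re-evaluates only affected queries, but the contract is the same: mutate locally, re-match, render.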
```javascript
// Filter todos by completion status
{ todos: { $: { where: { done: true } } } }
```

```sql
WITH done_triples AS (
  SELECT entity_id
  FROM triples
  WHERE app_id = 'instalinear'
    AND ave
    AND attr_id = 'todo-done'
    AND value = true
),
todo_data AS (
  SELECT t.entity_id, t.attr_id, t.value
  FROM triples t
  JOIN done_triples d ON t.entity_id = d.entity_id
  WHERE t.app_id = 'instalinear'
)
SELECT * FROM todo_data
```
On the server, the same InstaQL queries take a different path. They're translated into SQL and executed against Postgres.
You get the performance and reliability of a battle-tested database. One query language, two execution paths: Datalog locally for speed, SQL on the server for truth.
For writes, developers use InstaML, a simple API for creating, updating, deleting, and linking data.
Write operations optimistically modify the client-side triple store for instant feedback, then send transactions to the server as the source of truth. If the server rejects a write, the local store rolls back automatically.
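The apply-then-confirm cycle can be sketched in a few lines. The function and parameter names here are illustrative, not Instant's actual client API:

```javascript
// A sketch of optimistic writes with automatic rollback: apply the
// transaction locally first, then let the server confirm or reject it.
function optimisticTransact(localStore, applyTx, sendToServer) {
  const snapshot = [...localStore];   // remember the pre-write state
  applyTx(localStore);                // 1. apply locally: instant feedback
  return sendToServer().then(
    (ok) => ok,                       // 2. server accepted: local state stands
    (err) => {
      localStore.length = 0;          // 3. server rejected: restore snapshot
      localStore.push(...snapshot);
      throw err;
    }
  );
}
```

If the server accepts, the local store already reflects the write; if it rejects, the snapshot restores the prior state and the error surfaces to the caller.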
```javascript
// Create a new todo
db.transact(
  db.tx.todos[id()].update({
    title: "Ship the feature",
    completed: false,
    createdAt: Date.now(),
  })
);
```

Every read and every write passes through a permission layer based on Google's Common Expression Language (CEL).

```javascript
// Only the creator can see their own todos
const rules = {
  todos: {
    allow: {
      view: "auth.id == data.creatorId",
      update: "auth.id == data.creatorId",
      delete: "auth.id == data.creatorId",
    },
  },
};
```
Permissions are expressive enough to handle complex rules like role-based access, row-level filtering, and field-level visibility, but readable enough that you can reason about them at a glance.
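As an illustration of row-level filtering through linked data, a rule set might look like the sketch below. The `team.members` link and `creatorId` field are hypothetical, and the `data.ref` lookup is an assumption patterned on the rule syntax shown above:

```javascript
// Hypothetical rules: anyone on the todo's team may view it,
// but only its creator may update or delete it.
const rules = {
  todos: {
    allow: {
      view: "auth.id in data.ref('team.members.id')",
      update: "auth.id == data.creatorId",
      delete: "auth.id == data.creatorId",
    },
  },
};
```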
We're always looking for exceptional hackers who want to work on hard problems at the intersection of databases, sync, and AI.