Database Insurance

Did you know Dolt and Doltgres can be run as replicas to your existing MySQL, MariaDB, or Postgres databases? In this mode, Dolt acts as database insurance. Each transaction commit on your primary becomes a Dolt commit on your replica, preserving the history of your database in Dolt. You can use Dolt’s version control functionality on your replica for all manner of rollback, including complex reverse patches of multiple transactions.

Dolt. This is fine.

Database insurance is becoming all the more important with the threat of agents going off the rails. It seems like not a month goes by without another “an agent deleted my database” story. Dolt can protect you from rogue agents without switching out your primary. Just add a Dolt replica. This article explains.

Agents Delete Databases

This is no longer hypothetical. Agents delete databases.

Last week, Pocket’s Claude-powered coding agent turned a staging task into a production incident. It deleted the company database and the attached backups in one shot. This is the cleanest possible example of the new failure mode: give an agent broad infra access and eventually it will speedrun your disaster recovery plan.

In March, Alexey Grigorev asked Claude to help clean up AWS resources and got an unexpected terraform destroy adventure instead. Production database gone. Snapshots gone. Support upgraded mid-incident. It reads like a normal ops postmortem except the culprit was an agent with cloud credentials.

And in July last year, after Replit’s agent deleted a database, Replit’s CEO said the lesson was that agents should not have access to production databases. Fair enough. But is that achievable? Agents are going to find their way into production workflows, whether through direct SQL, admin tools, or application APIs. The practical question is not whether mistakes will happen but rather how much damage is done when they do.

These stories get attention because they are spectacular. “Entire company database deleted in nine seconds” is a headline. But the underlying issue is not new. Any tool, human- or agent-driven, that has production credentials, broad access, and enough autonomy to make a lot of changes very quickly is dangerous.

Databases were already at risk. Agents just increased the blast radius.

The old production failure mode was a tired human typing DROP TABLE into the wrong terminal. The new production failure mode is an enthusiastic agent with API keys, shell access, a task list, and a limited context window.

Agents Write Junk

Outright deletion is the catastrophe case. It is easy to notice. It makes the news. Much more common is quiet failure when an agent writes junk to production.

Maybe it backfills the wrong values. Maybe it misinterprets a schema. Maybe it “fixes” a bug by overwriting a field everywhere. Maybe it makes a long sequence of individually valid writes that are collectively nonsense. In many cases the agent is not even talking SQL directly. It is calling your production API, which is often worse because the damage looks like legitimate application traffic. I would guess for every “the agent deleted my database” story, there are hundreds of “the agent wrote bad data into production” stories.

A recent article on the subject, “Databases Were Not Designed For This” by Arpit Bhayani, generated a lively Hacker News discussion. Arpit’s point is that databases were built for boring callers: deterministic apps, reviewed writes, short-lived connections, obvious failures. Agents break every one of those assumptions at once, so a bunch of database hygiene practices that were once “nice to have” — statement timeouts, idempotency keys, soft deletes, append-only logs, role-per-agent, query tagging, the works — suddenly become load-bearing infrastructure. I think that framing is right. The database is no longer talking to careful application code but to an agent who must fix the issue before its context fills up.

This is where normal backups start to feel insufficient. Backups are great for disaster recovery. They are less great for “between 2:13 PM and 2:21 PM, the agent wrote garbage into three related tables, and I need to surgically undo those writes specifically”.

That is a version control problem, and Dolt is the world’s first and only version-controlled database.

Protect Yourself

The easiest way to get database insurance is to run a Dolt replica on a separate host. You keep your existing primary. MySQL stays MySQL. MariaDB stays MariaDB. Postgres stays Postgres. Dolt or Doltgres sits downstream as a replica, ingesting binlog or WAL changes and turning each committed transaction into a durable Dolt commit.

Setup is the same basic shape as standard replication. No application rewrite. No cutover. No “replace your database with our database” project. Just add a replica.
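As a rough sketch: for a MySQL primary, Dolt consumes the binlog using the same replica configuration statements you would run on any MySQL replica. This assumes GTID-based replication is enabled on the primary; the hostname, user, and password below are placeholders.

```sql
-- On the MySQL primary: create a user the replica can connect as.
-- (The primary must have gtid_mode=ON and enforce_gtid_consistency=ON.)
CREATE USER 'repl'@'%' IDENTIFIED BY 'repl_password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

-- On the Dolt replica: point at the primary and start replicating.
CHANGE REPLICATION SOURCE TO
    SOURCE_HOST = 'primary.example.com',
    SOURCE_PORT = 3306,
    SOURCE_USER = 'repl',
    SOURCE_PASSWORD = 'repl_password',
    SOURCE_AUTO_POSITION = 1;
START REPLICA;

-- Verify replication is running.
SHOW REPLICA STATUS;
```

From here, every transaction that commits on the primary lands on the Dolt replica as a Dolt commit.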

The result is useful immediately. You get:

  • A continuously updated copy of production.
  • A commit history of every transaction.
  • Diffs over time.
  • Branches if you need to experiment.
  • Rollback tools much more expressive than “restore last night’s backup”.

In the pre-agent era, this was nice to have. With Claude and Codex in all your engineers' hands, it looks more like a necessity.

Catastrophe

Backups are still your number one defense here. If a machine dies, a region disappears, or an attacker wipes out infrastructure, backups are what save you. A Dolt replica is defense in depth.

It gives you another live copy of the database and a detailed transaction history. If your primary gets destroyed by a rogue agent, the Dolt replica can tell you exactly what happened and when. If you push that replica to DoltHub or another remote, you now have true off-host safety with different access credentials as well.

This is the right layering:

  1. Backups for disaster recovery.
  2. Replication for availability.
  3. Dolt replication for history, auditability, and surgical undo.
  4. Dolt remote as disaster recovery if all else fails.

You want all four.

Bad Writes

Bad writes are where a Dolt replica really shines.

First, find the bad writes. Use dolt_log to identify the suspicious time window or commit range. Use dolt_diff() to inspect exactly what changed. Because the writes are preserved as commits, you are not guessing. You are looking at history.
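For example, the investigation might look like this on the replica. The time window, orders table, and commit hashes are hypothetical stand-ins for your own incident.

```sql
-- Find commits in the suspicious window.
SELECT commit_hash, committer, date, message
FROM dolt_log
WHERE date BETWEEN '2026-02-10 14:13:00' AND '2026-02-10 14:21:00'
ORDER BY date;

-- Inspect exactly which rows changed in a table between two commits.
SELECT * FROM dolt_diff('ht1b4q5...', 'nqpd7c2...', 'orders');
```

Each row of the diff shows the before and after values plus whether the row was added, modified, or removed, so you can see precisely what the agent did.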

Then undo them on a branch.

Sometimes a simple dolt_revert() is enough. Other times you need to reverse a sequence of commits, or build a custom patch that keeps the good changes and removes only the bad ones. Dolt gives you those tools. Once you have the corrective changes on a branch, dolt_patch() generates the SQL you need to apply to your primary.
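A sketch of that workflow, again with a hypothetical branch name and placeholder commit hash:

```sql
-- Do the surgery on a branch so the replica's history stays intact.
CALL dolt_checkout('-b', 'undo-agent-writes');

-- Revert the bad commit (repeat, or pass a range, for a sequence of commits).
CALL dolt_revert('ht1b4q5...');
CALL dolt_commit('-am', 'Revert bad agent writes');

-- Generate the SQL statements that take production from its current
-- state to the corrected state, ready to apply on the primary.
SELECT statement FROM dolt_patch('main', 'undo-agent-writes');
```

Because the patch is computed from commits rather than reconstructed by hand, you are not guessing which rows to fix.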

This is a much better story than:

  1. Restore backup to staging.
  2. Diff it manually against production.
  3. Write custom SQL.
  4. Hope you found all the bad rows.
  5. Hope you didn’t revert good writes too.

With Dolt, the database history is already organized into commits. That makes complex rollback possible.

That is what “database insurance” means in practice. Not just “I have a copy somewhere”, but “I can understand what happened and undo it precisely”.

Conclusion

Use a Dolt replica for database insurance against catastrophe or more mundane bad writes. In the human operator era, a Dolt replica was nice to have. In the agentic operator era, a Dolt replica is essential. Questions? Come by our Discord. We’re happy to help get your replica set up.