Journey On Improving Indexes

Dolt is a SQL database with Git-style versioning. We are a drop-in replacement for MySQL, making Dolt very easy to use. Dolt's versioning features bring many clear advantages over MySQL; however, there is one area where we fall behind: performance. Some performance loss is intrinsically tied to how Dolt stores its data to support all of its versioning features, but there are many avenues we can explore to close that gap. One such avenue is making sure that indexes are utilized as often as possible, and this blog post details our journey to make that happen.

The SQL Engine

As mentioned earlier, Dolt is a drop-in replacement for MySQL; however, the Dolt project itself does not have a built-in way to directly interface with MySQL clients. This is handled by the go-mysql-server project (hereinafter referred to as GMS), which we adopted from source{d}. In short, Dolt uses GMS's SQL engine to handle all SQL statements, while Dolt handles the storage operations. This separation of concerns allows for greater flexibility, as the Dolt project may add new SQL interfaces (PostgreSQL, Microsoft SQL Server, etc.) in the future without a complete rewrite. It also means that there are optimization opportunities in both GMS and Dolt.

One of the best optimizations that a database can make is determining where and how to use indexes. For Dolt, indexes were only used in specific circumstances, which greatly hindered their usefulness. We recently landed a complete rewrite of how indexes are handled in GMS, which has sped up some queries by several orders of magnitude.

How Indexing Previously Worked

In May 2020, I blogged about the then-new introduction of indexes, and gave a high level overview of how indexes work. To summarize, an index is a secondary table where the key columns are the indexed columns of the indexed (source) table, and the other columns are the primary key columns of the indexed table. Whenever an index is used, we first get the row from the secondary table, grab the source table's primary key, and directly access the desired row from the source table. There are exceptions such as a covering index (where the source table access is unnecessary), but this is the general case.
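
To make that concrete, here is a minimal Go sketch of the lookup pattern, using in-memory maps as hypothetical stand-ins for Dolt's actual storage (the column names match the example table defined below):

package main

import "fmt"

// Hypothetical stand-ins for Dolt's storage. In the index table, the keys
// are the indexed columns and the values are the source table's primary key.
type PrimaryKey struct{ PK1, PK2 int64 }
type IndexKey struct{ Col1, Col2, Col3 int64 }

func main() {
    source := map[PrimaryKey][3]int64{
        {1, 10}: {100, 1000, 10000},
    }
    index := map[IndexKey]PrimaryKey{
        {100, 1000, 10000}: {1, 10},
    }

    // Index lookup: find the source table's primary key in the secondary
    // table, then access the desired row directly in the source table.
    pk := index[IndexKey{100, 1000, 10000}]
    fmt.Println(pk, source[pk]) // {1 10} [100 1000 10000]
}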

To get into how the rewrite produced such a massive performance increase, I need to go into a bit more detail about how indexes worked before it, and I'll use an example SQL statement to walk us through. I'm omitting many steps (otherwise this would be a novel rather than a blog post) so that we can focus on the steps most relevant to our recent improvements. Let's start with the following:

CREATE TABLE example (
  pk1 BIGINT,
  pk2 BIGINT,
  col1 BIGINT,
  col2 BIGINT,
  col3 BIGINT,
  PRIMARY KEY (pk1, pk2),
  INDEX cols123 (col1, col2, col3),
  INDEX cols32 (col3, col2)
);
INSERT INTO example VALUES
  (1,10,100,1000,10000),
  (2,20,200,2000,20000),
  (3,30,300,3000,30000),
  (4,40,400,4000,40000),
  (5,50,350,2500,15000);

This gives us three tables:

Source Table "example"

pk1  pk2  col1  col2  col3
1    10   100   1000  10000
2    20   200   2000  20000
3    30   300   3000  30000
4    40   400   4000  40000
5    50   350   2500  15000

Index Table "cols123"

col1  col2  col3   pk1  pk2
100   1000  10000  1    10
200   2000  20000  2    20
300   3000  30000  3    30
350   2500  15000  5    50
400   4000  40000  4    40

Index Table "cols32"

col3   col2  pk1  pk2
10000  1000  1    10
15000  2500  5    50
20000  2000  2    20
30000  3000  3    30
40000  4000  4    40

Now let's walk through the execution of the following SQL statement:

SELECT * FROM example WHERE col1 >= 300 AND col2 >= 3000 AND col3 >= 30000;

First, the SQL engine in GMS parses the statement and creates a plan that resembles the following:

Project(*)
 └─ Filter(((col1 >= 300) AND (col2 >= 3000)) AND (col3 >= 30000))
      └─ UnresolvedTable(example)

Working its way from the innermost point, the SQL engine first resolves the table by requesting a table named example from Dolt. Dolt, of course, finds the table in storage and returns it to the SQL engine.

Project(*)
 └─ Filter(((col1 >= 300) AND (col2 >= 3000)) AND (col3 >= 30000))
      └─ ResolvedTable(example)

Moving up, the SQL engine encounters a filter. As an optimization rule, the engine can attempt to reduce the filter's work by making use of an index. The engine can only use an index if all columns in the filter use the same comparator, and in this example we're using >= for all columns, so the engine proceeds. The engine queries Dolt for all indexes available on example, and Dolt returns the two indexes: cols123 and cols32. The engine does not natively support partial indexes (partial matches against indexes, where the first n columns match rather than the entire index), so for Dolt to support them, Dolt also returns three more indexes: cols1 representing INDEX (col1), cols12 representing INDEX (col1, col2), and cols3 representing INDEX (col3).

The engine looks over the columns that each index covers and attempts to exactly match them to the columns used in the filter. It ends up matching the index cols123, and then asks Dolt for an index lookup on cols123 using the operator >=. The index lookup contains the information that Dolt will later use for index access, and is opaque to the engine. The plan now looks like the following:

Project(*)
 └─ Filter(((col1 >= 300) AND (col2 >= 3000)) AND (col3 >= 30000))
      └─ IndexedTableAccess(example on [example.col1,example.col2,example.col3])
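
As a rough illustration of the matching step above (a hypothetical helper, not the engine's actual code), the old rule amounted to exact set equality between the filter's columns and an index's columns:

package main

import (
    "fmt"
    "sort"
    "strings"
)

// matchIndex sketches the old rule: an index is usable only if its column
// set exactly equals the set of columns used in the filter.
func matchIndex(filterCols []string, indexes map[string][]string) (string, bool) {
    want := canonical(filterCols)
    for name, cols := range indexes {
        if canonical(cols) == want {
            return name, true
        }
    }
    return "", false
}

func canonical(cols []string) string {
    sorted := append([]string(nil), cols...)
    sort.Strings(sorted)
    return strings.Join(sorted, ",")
}

func main() {
    indexes := map[string][]string{
        "cols123": {"col1", "col2", "col3"},
        "cols32":  {"col3", "col2"},
        // The extra indexes Dolt reported so partial matches could work:
        "cols1":  {"col1"},
        "cols12": {"col1", "col2"},
        "cols3":  {"col3"},
    }
    fmt.Println(matchIndex([]string{"col1", "col2", "col3"}, indexes)) // cols123 true
    fmt.Println(matchIndex([]string{"col2"}, indexes))                 // "" false
}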

The engine retains the filter step to verify the rows returned by the table; the point of the index is to reduce execution time, not to make the output conform to the filter. Next we move up once more to the projection, which determines which columns are returned further up. The projection is on *, which means "all columns". To determine what "all columns" are, the projection is pushed down through the filter to the table, which reports all of its columns. Our final plan looks like the following:

Filter(((example.col1 >= 300) AND (example.col2 >= 3000)) AND (example.col3 >= 30000))
 └─ Projected table access on [pk1 pk2 col1 col2 col3]
      └─ IndexedTableAccess(example on [example.col1,example.col2,example.col3])

This is also the exact plan returned from prepending EXPLAIN to the query:

EXPLAIN SELECT * FROM example WHERE col1 >= 300 AND col2 >= 3000 AND col3 >= 30000;

Now that the plan is complete, it is executed. At the innermost step, Dolt uses the aforementioned index lookup to point to the exact starting row in the index table cols123, which is (300, 3000, 30000). As the operator is >=, we simply increment forward until we reach the end of the table, returning every row along the way. Before a row is returned, Dolt converts it from its storage format into the format that the SQL engine expects. Each returned row then runs through the projection, which is almost a no-op here since all columns are used. After the projection, the row makes its final stop at the filter, either passing through if it matches, or being discarded if it does not. As you'll notice from the table, Dolt returns the row (350, 2500, 15000), which fails both (col2 >= 3000) and (col3 >= 30000), and ends up discarded. Finally, every row that passes the filter is returned to the client.
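
A simplified Go sketch of this execution path (plain slices standing in for Dolt's storage) might look like the following:

package main

import (
    "fmt"
    "sort"
)

type indexRow struct{ col1, col2, col3, pk1, pk2 int64 }

// geq reports whether the row's (col1, col2, col3) tuple is
// lexicographically at or past the seek key.
func geq(r indexRow, k [3]int64) bool {
    if r.col1 != k[0] {
        return r.col1 > k[0]
    }
    if r.col2 != k[1] {
        return r.col2 > k[1]
    }
    return r.col3 >= k[2]
}

func main() {
    // The cols123 index table, sorted on (col1, col2, col3).
    index := []indexRow{
        {100, 1000, 10000, 1, 10},
        {200, 2000, 20000, 2, 20},
        {300, 3000, 30000, 3, 30},
        {350, 2500, 15000, 5, 50},
        {400, 4000, 40000, 4, 40},
    }

    // The index lookup: seek to the starting row (300, 3000, 30000).
    start := sort.Search(len(index), func(i int) bool {
        return geq(index[i], [3]int64{300, 3000, 30000})
    })

    // Scan forward to the end of the table; the retained filter discards
    // rows like (350, 2500, 15000) that the index scan alone lets through.
    for _, r := range index[start:] {
        if r.col1 >= 300 && r.col2 >= 3000 && r.col3 >= 30000 {
            fmt.Println(r.pk1, r.pk2, r.col1, r.col2, r.col3)
        }
    }
}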

How Indexing Works Now

One of the largest limitations of the previous index code was that every column in the filter had to use the same comparator. This meant that the following query, while very similar, would not use an index:

EXPLAIN SELECT * FROM example WHERE col1 >= 300 AND col2 >= 3000 AND col3 > 30000;
Filter(((example.col1 >= 300) AND (example.col2 >= 3000)) AND (example.col3 > 30000))
 └─ Projected table access on [pk1 pk2 col1 col2 col3]
     └─ Exchange(parallelism=12)
         └─ Table(example)

This means that the engine would fall back to scanning the entire source table (parallelized by the Exchange node¹), running the filter over each row. For our example this would still be extremely quick, but not so for tables with millions or billions of rows. To work around this, we needed a new way to represent how a filter translates to an index, and we came up with ranges.

The ranges in GMS are inspired by those from the Apache Commons. Each range contains a number of columns, where each column represents a contiguous set of values. A range's column count matches the number of columns in its index, so the index cols123 would have a range with 3 columns (the first representing col1, and so on). Continuing that example, the range column representing col1 lies on the infinite number line, going from negative infinity to positive infinity. Taking the filter col1 >= 300 from earlier, our range column would end up as [300, infinity), with the 300 being inclusive. The same principles hold true for the remaining columns.
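
As a simplified sketch (bounds reduced to float64 so we can use real infinities; the actual implementation tracks typed SQL values rather than floats), a range column might look like this:

package main

import (
    "fmt"
    "math"
)

// RangeColumn is a simplified sketch of one column of a range: a contiguous
// interval of values whose bounds may be infinite and may be exclusive.
type RangeColumn struct {
    Lower, Upper                   float64
    LowerInclusive, UpperInclusive bool
}

// AllValues is the default every range column starts with: (-infinity, infinity).
func AllValues() RangeColumn {
    return RangeColumn{math.Inf(-1), math.Inf(1), false, false}
}

func main() {
    // Applying the filter col1 >= 300 narrows col1's range column
    // from (-infinity, infinity) to [300, infinity).
    col1 := AllValues()
    col1.Lower, col1.LowerInclusive = 300, true
    fmt.Printf("[%v, %v)\n", col1.Lower, col1.Upper) // [300, +Inf)
}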

Since a range column is a contiguous set of values, we can apply set operations for all of our AND (intersection) and OR (union) filter statements. In addition, as a range is a collection of range columns, we may also apply set operations to ranges in their entirety (although some of those operations require a bit more work to implement). Ranges also make partial index support trivial: all range columns start with the default (-infinity, infinity), and unused columns simply retain the infinite set. This means that integrators only need to worry about exact matches, with only the engine caring about partialness.
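
Continuing the sketch above, an AND on the same column becomes an interval intersection (union for OR works similarly, except that a union of disjoint intervals must be kept as multiple ranges):

package main

import (
    "fmt"
    "math"
)

type RangeColumn struct{ Lower, Upper float64 } // inclusive bounds, simplified

// intersect implements AND on a single column: the overlap of two intervals.
// An empty result means no row can possibly satisfy the filter.
func intersect(a, b RangeColumn) (RangeColumn, bool) {
    r := RangeColumn{math.Max(a.Lower, b.Lower), math.Min(a.Upper, b.Upper)}
    return r, r.Lower <= r.Upper
}

func main() {
    all := RangeColumn{math.Inf(-1), math.Inf(1)}

    // col1 >= 300 AND col1 <= 500 intersects down to [300, 500].
    r, _ := intersect(all, RangeColumn{300, math.Inf(1)})
    r, _ = intersect(r, RangeColumn{math.Inf(-1), 500})
    fmt.Println(r) // {300 500}

    // col1 = 1 AND col1 = 2 intersects to the empty set.
    _, ok := intersect(RangeColumn{1, 1}, RangeColumn{2, 2})
    fmt.Println(ok) // false
}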

Although the introduction of ranges for indexes is a breaking change in GMS, it also gives Dolt the ability to know the exact filter used, and to push the filtering down to the storage layer. This removes the unnecessary conversion of rows that are guaranteed to be rejected by the engine's filter, and is relatively cheap to do even for valid rows (so cheap that, even when testing millions of rows, the additional time used was within run-to-run variance). Lastly, Dolt (and every other integrator) no longer needs a separate code path for every operator, as everything is condensed into a single, uniform, and consistent value set.

To run through the detailed example again, everything is the same up until the optimization rule where a filter may utilize an index. At that time, the query plan looks like the following:

Project(*)
 └─ Filter(((col1 >= 300) AND (col2 >= 3000)) AND (col3 >= 30000))
      └─ ResolvedTable(example)

Instead of checking the operators, the engine directly asks Dolt for the table's indexes, and Dolt returns the same two: cols123 and cols32. Since partial indexes are now handled natively in the engine, Dolt no longer generates the additional partial indexes. The engine then checks the columns and finds the best index to use, based on whether there is an exact match, how many columns are included in the prefix (the first n columns of an index), etc. Next it constructs a range covering each index column, assigning each column the infinite set. From there it runs through the filter, applying each column filter as an intersection or union against the matching range column. If the filter is satisfiable (an unsatisfiable filter would be col1 = 1 AND col1 = 2), a valid set of ranges is created (note that a single filter may produce multiple ranges: col1 <> 1 is equivalent to col1 < 1 OR col1 > 1, so we need two ranges to represent it). These ranges are passed to the integrator, which either returns an index lookup if the ranges are supported (an integrator may not support having infinity as a bound, for example), or nothing if they are not, in which case the engine falls back to a table scan. For Dolt, all valid ranges return an index lookup.
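
Here is a hypothetical end-to-end sketch of that rule, reusing the simplified RangeColumn from above: every index column starts at the infinite set, filters narrow the columns they mention, and untouched columns keep the infinite set, which is how partial index matches now fall out for free:

package main

import (
    "fmt"
    "math"
)

type RangeColumn struct{ Lower, Upper float64 } // inclusive bounds, simplified

// buildRange assigns one range column per index column, starting each at the
// infinite set and intersecting in whatever filter mentions that column.
func buildRange(indexCols []string, filters map[string]RangeColumn) []RangeColumn {
    rng := make([]RangeColumn, len(indexCols))
    for i, col := range indexCols {
        rng[i] = RangeColumn{math.Inf(-1), math.Inf(1)} // default: all values
        if f, ok := filters[col]; ok {
            rng[i].Lower = math.Max(rng[i].Lower, f.Lower)
            rng[i].Upper = math.Min(rng[i].Upper, f.Upper)
        }
    }
    return rng
}

func main() {
    // WHERE col1 >= 300 against the index cols123: col2 and col3 simply
    // retain (-infinity, infinity), so the engine still gets a usable
    // (partial) range to hand to the integrator.
    fmt.Println(buildRange(
        []string{"col1", "col2", "col3"},
        map[string]RangeColumn{"col1": {300, math.Inf(1)}},
    )) // [{300 +Inf} {-Inf +Inf} {-Inf +Inf}]
}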

With an index lookup in hand, the engine continues just as before, and it finishes with a plan that looks exactly the same as it previously did.

Filter(((example.col1 >= 300) AND (example.col2 >= 3000)) AND (example.col3 >= 30000))
 └─ Projected table access on [pk1 pk2 col1 col2 col3]
      └─ IndexedTableAccess(example on [example.col1,example.col2,example.col3])

This is because all of the changes work behind the scenes, causing more filters to result in index usage. Re-running the EXPLAIN from the top of this section, we see that we're now making use of an index:

EXPLAIN SELECT * FROM example WHERE col1 >= 300 AND col2 >= 3000 AND col3 > 30000;
Filter(((example.col1 >= 300) AND (example.col2 >= 3000)) AND (example.col3 > 30000))
 └─ Projected table access on [pk1 pk2 col1 col2 col3]
     └─ IndexedTableAccess(example on [example.col1,example.col2,example.col3])

Conclusion

Although the changes seem minimal, the old indexing logic was deeply ingrained into GMS, and it was a fairly large effort to switch over to the new range implementation.

[Image: lines changed to switch indexes over to ranges]

It definitely paid off, as Dolt now makes use of indexes in many more places, bringing us that much closer to MySQL's performance. When you factor in all of our versioning features, we believe that Dolt is a serious contender in the database market, offering the characteristics that one expects from a database while possessing features that no other database has. You can stay up-to-date on our progress by following our releases, and you can interact with us directly by joining our Discord server. We're very attentive to feedback, and you can help us grow Dolt to work perfectly with your specific use case. We hope you'll join us for the ride!


  1. An exchange parallelizes table scans to increase throughput; this is done in GMS through the use of partitions. Indexes do not yet take advantage of partitioning; however, Dolt internally parallelizes scans on the secondary table, giving a similar performance benefit.
