BRIN & Correlation, Poor Partitioning, 10 Beginner Tips, Table & Index Usage | Scaling Postgres 276

In this episode of Scaling Postgres, we discuss the importance of correlation with BRIN indexes, how partitioning can kill performance, 10 tips for beginners and how to track table and index usage.

Transcript

All right. I hope you, your friends, family, and coworkers continue to do well. Now that it's the summer, there seem to be fewer Postgres posts. And just to give you a heads-up, I'm probably going to be skipping Scaling Postgres next week. But don't worry, I'll have another episode the week after to cover all the content for the following two weeks. For this week, our first piece of content is "When Good Correlation is Not Enough". This is from hakibenita.com. And he's talking about how important correlation is to a BRIN index. A BRIN index is a block range index. What it does is store, for each range of table blocks, just a min and a max value, which it uses to narrow down where a given set of data might be.

And because this index operates on ranges, having a high correlation between what's in the index and the physical order of the table is really important for being able to find the relevant data, because the index just contains a min and a max value. So in the example here, when you're trying to find E, you know it's not between A and C, you know it's not between G and I, but it could be here, between D and F. Of course, E does fall between D and F, but the index doesn't know definitively that it's there.

And when you have a poor correlation between the physical ordering and the logical ordering, you get BRIN indexes that look like this: you have a range from B to E, and A to G, and C to F. So E could be in any one of these places, which means having to hit more places in the index to try and find the value of interest. He talks about correlation, which is, as he says, quote "Statistical correlation between physical row ordering and logical ordering of column values". And he shows a BRIN index with perfect correlation. He creates a table, creates a BRIN index, and the index size for that amount of data is 520 kilobytes. Very small, because it's only recording range summaries.
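To make that concrete, here's a minimal sketch of that kind of setup. The table and column names are mine, not the article's, and the data volume is arbitrary; the point is simply a timestamp column whose values arrive in physical order, summarized by a tiny BRIN index.

-- Hypothetical append-only table where physical order matches timestamp order.
CREATE TABLE events (
    id bigint GENERATED ALWAYS AS IDENTITY,
    created_at timestamptz NOT NULL
);

-- Load perfectly correlated data: later rows always have later timestamps.
INSERT INTO events (created_at)
SELECT timestamptz '2023-01-01' + (n || ' seconds')::interval
FROM generate_series(1, 1000000) AS n;

-- The BRIN index only stores a min/max summary per block range, so it stays tiny.
CREATE INDEX events_created_at_brin ON events USING brin (created_at);

-- A range query like this is the sweet spot for BRIN.
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(*) FROM events
WHERE created_at BETWEEN '2023-01-05' AND '2023-01-06';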

And when he uses this query, the execution time is eight milliseconds. Because all the index has is the range, it then needs to go to the heap to find the actual data, and it removes rows that are unneeded through the index recheck, so you're just getting the rows of interest. Now this post goes into a lot of detail about exploring what's on the pages and what's in the index; he installs the pageinspect extension to do that. But basically, when you have a perfect correlation between the physical rows and the logical rows, that's when a BRIN index performs best and is at its smallest.
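If you want to poke at the index contents yourself the way the post does, pageinspect exposes the per-range summaries. This is a rough sketch against the hypothetical index from the sketch above; page 0 is the metapage, and the regular summary pages typically start a couple of pages in.

CREATE EXTENSION IF NOT EXISTS pageinspect;

-- Each row is one block range's stored summary (the min/max ends up in "value").
SELECT itemoffset, blknum, value
FROM brin_page_items(get_raw_page('events_created_at_brin', 2),
                     'events_created_at_brin')
LIMIT 10;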

But what happens if you throw in an outlier, something that's outside of the expected range? In this example, he added some outliers: every 70th row, he increased the timestamp by a year. So this creates a huge range. Now if you look at the correlation according to the statistics after an analyze, it's still almost one, it's 0.97. He goes ahead and creates the index, runs the query again, and suddenly something that ran in eight milliseconds with perfect correlation now has an execution time of over 3.2 seconds, and it has to throw out almost a million rows. This huge range just caused huge problems for the BRIN index, because essentially it has to scan the whole index and then large parts of the table to try and find the data.

And he goes ahead and explores info on the pages and everything to show you how this is happening. So this is why BRIN is really good for append-only tables, and when you're looking for particular ranges like dates. If you have a table recording time series data, a BRIN index is a perfect candidate for that, just as long as you're not deleting or updating the data and causing the physical rows to become out of alignment with what's in the index. And the reason he wrote this post is that he actually ran into this problem at his organization, and here's how they got around it. As of Postgres 14, there's a quote "...new set of operator classes minmax_multi_ops...".

So as opposed to storing just one min and max value per block range, the index can actually store multiple ranges per block range. So basically the index itself contains more data, and if you look at the size, it's over twice as large as the old BRIN index. It's still very small, but it is over twice the size of the original BRIN index. And if you look at the performance with the same query, it's not eight milliseconds like the perfectly correlated data was, but 29 milliseconds. It's definitely not over three seconds, and not many rows are removed by the index recheck. He also goes into a little bit of how the multi-minmax BRIN index works.
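As a rough sketch of what using the new operator class looks like, continuing with the hypothetical table from earlier; the opclass parameter value below is just an example, not something from the article.

CREATE INDEX events_created_at_brin_multi ON events
    USING brin (created_at timestamptz_minmax_multi_ops (values_per_range = 16));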

He has a results summary down here looking at the different query timings and index sizes. If you have a perfect correlation with no index, doing this particular query took a little over a second, 1.3 seconds. When you have perfect correlation and a BRIN minmax index, again, it's that eight milliseconds and about 520 kilobytes in size. When you have perfect correlation and a B-Tree index, the timing is 0.1 milliseconds, so super fast, and that's expected when pulling out particular values. But look at the index size: it's 21 megabytes, many times larger than the BRIN index. So you use the BRIN index for space savings and, again, it's great when you're looking at ranges, but if you need individual values, a B-Tree is going to be much faster.

The point at which a BRIN index actually performs faster than a B-Tree index depends on how many rows you're pulling back. But once you start introducing outliers, the BRIN index using the standard single-range minmax method goes to over three seconds, even though the index is still small. Whereas with the outliers, the multi-minmax index is at 29 milliseconds. So a little bit slower than the perfectly correlated one and a little more than twice the size, but at least it performs pretty well. So definitely check out this blog post if you want to learn more about BRIN indexes and how they work.

Next piece of content- "KILLING PERFORMANCE WITH POSTGRESQL PARTITIONING". This is from cybertec-postgresql.com. And they're talking about a scenario where you have a table with about 10 million rows. And you do a query against it, looking for value in it in 0.028 milliseconds. So super fast using an index scan. But let's say you anticipate this table growing so you want to partition it. So you go ahead and do HASH PARTITIONING by the ID and they go ahead and create eight partition tables. And they load the same data and they also create the index for it that existed before.

But now when they run this query, suddenly the timing looks much worse, which is not good. The reason is that the table is partitioned by ID, yet the query is filtering on the value column. So this is very important to remember: if you're going to do partitioning, you need to have the partition key in every query that you do, because then the planner can narrow things down and only use the partition where that key exists. And a lot of times you may want to use a multi-column index where the partition key is the first column and the second column would be, in this case, the value.

That way it would be able to find it super quickly. So this is a case where you get worse performance by partitioning, and the reason is that your queries don't have the partition key as part of the query. It needs to be there to give you that really good performance. And the problem essentially gets worse the more partitions you add, because there's more work the database has to do. As they say here, quote "...using too many partitions will significantly increase the time the planner needs to do its job".
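Here's a simplified sketch of that scenario; the table and column names are illustrative, not the exact ones from the Cybertec post.

-- Hash-partitioned table on id, with eight partitions.
CREATE TABLE t_hash (
    id  bigint NOT NULL,
    val integer
) PARTITION BY HASH (id);

DO $$
BEGIN
    FOR i IN 0..7 LOOP
        EXECUTE format(
            'CREATE TABLE t_hash_%s PARTITION OF t_hash
             FOR VALUES WITH (MODULUS 8, REMAINDER %s)', i, i);
    END LOOP;
END $$;

-- An index on val alone exists on every partition...
CREATE INDEX ON t_hash (val);

-- ...but filtering only on val means every partition must still be visited.
EXPLAIN SELECT * FROM t_hash WHERE val = 42;

-- Including the partition key lets the planner prune down to a single partition,
-- and a multi-column index with the key first covers this query nicely.
CREATE INDEX ON t_hash (id, val);
EXPLAIN SELECT * FROM t_hash WHERE id = 123456 AND val = 42;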

So check out this blog post if you want to learn more, as well as this next post, "Partitioning in Postgres and the risk of high partition counts". This is from pganalyze.com, and this is Lukas's "Five minutes of Postgres". He covers this post as well as another where they actually saw a performance benefit from switching from list partitioning to hash partitioning. So check out his content if you want to learn more about that.

Next piece of content- "10 Postgres tips for beginners". This is from postgres.ai. And let's just see what the tips are. The first is "Understand early: tuples are physical versions of rows". Correct, but as I read through it what he was communicating is that this is all about Postgres' MVCC. So what that means, is when you get a delete, it doesn't immediately remove the row. It marks for deletion and has to be vacuumed later. When you get an update, it actually inserts a new physical row and the old one is marked for deletion eventually once it's no longer needed.

And even a canceled insert creates a dead row, unless, I think, you use ON CONFLICT DO NOTHING; then I believe it doesn't create one. But he's talking about these core concepts of Postgres, that you're creating these additional row versions. He, of course, shows how you can see this actually happening by looking at the hidden columns on each table, like ctid, xmin, xmax, et cetera. This also has ramifications for why you don't want to use Postgres to update a single row many, many, many times. Essentially, it's creating new rows, unlike other database systems that have undo logs, like Oracle or MySQL, I believe.
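A quick way to watch this behavior yourself is through those hidden system columns; the table here is just a throwaway example, not something from his post.

CREATE TABLE mvcc_demo (id int PRIMARY KEY, note text);

INSERT INTO mvcc_demo VALUES (1, 'first version');
SELECT ctid, xmin, xmax, * FROM mvcc_demo;

-- An UPDATE writes a brand-new physical row version: the ctid changes, xmin becomes
-- the updating transaction, and the old version sits as a dead tuple until vacuum.
UPDATE mvcc_demo SET note = 'second version' WHERE id = 1;
SELECT ctid, xmin, xmax, * FROM mvcc_demo;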

Next tip "EXPLAIN ANALYZE- always with BUFFERS!". So basically, EXPLAIN shows you the query plan, and EXPLAIN ANALYZE runs it and shows you the results of the query and the query plan. Then BUFFER adds in how many blocks were hit, how many disks were read, et cetera. So important information to have if you're diagnosing a query. Next tip "Optimal UI tool choices: beyond pgAdmin''. So he does mention one of the most powerful clients is the PSQL command line and that's what I tend to use. But apart from pgAdmin, he also mentions DBeaver, DataGrip, and Postico as well.

I should also mention that for each of these points, he's listing episodes of Postgres FM, because he's one of the hosts of that show, so he has links to episodes where they discussed some of these tips in detail. Number four is "Logging: adjust the settings ASAP". What he's looking for immediately is getting logging set up. He wants to log all the checkpoints, all the autovacuums that are running, the temp files that are created, and the lock waits that are happening, and to set a log_min_duration_statement so that you're seeing slow queries coming in.
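As a hedged example of the settings he's talking about (the thresholds here are my own placeholders, not his recommendations):

ALTER SYSTEM SET log_checkpoints = on;
ALTER SYSTEM SET log_autovacuum_min_duration = 0;    -- log every autovacuum run
ALTER SYSTEM SET log_temp_files = 0;                 -- log every temp file created
ALTER SYSTEM SET log_lock_waits = on;
ALTER SYSTEM SET log_min_duration_statement = '1s';  -- log statements slower than this
SELECT pg_reload_conf();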

Five is "Power up observability with extensions: pg_stat_statements and more". So definitely, pg_stat_statements is huge for query analysis. It's my go-to. He also mentions a few other extensions, one is auto_explain, so it auto-explains slow queries. But two I haven't really used, and he says a lot of people have it, are pg_stat_kcache and pg_weight_sampling. So you could check this out if you want to learn more about that. Six is "Embrace DB branching for development (use DBLab)". So this is a service that Postgres AI does. What the website does, in terms of allowing you to create very fast clones of the database for development or analysis purposes.

Seven is "Ensure data checksums are enabled". So enable those so that any corruption of your data can be detected. Eight is "Tune autovacuum for frequent runs and faster execution". Yeah, particularly as you're getting more traffic, this becomes more important. And he has a good recommendation down here, which is to trigger autovacuum more often: usually it only starts when 10-20% of the tuples have changed, and dropping that down to as low as 1% or lower would be advantageous.
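One concrete way to apply that advice; the numbers here are illustrative, not his exact prescriptions (the defaults are 0.2 for vacuum and 0.1 for analyze).

ALTER SYSTEM SET autovacuum_vacuum_scale_factor = 0.01;
ALTER SYSTEM SET autovacuum_analyze_scale_factor = 0.01;
SELECT pg_reload_conf();

-- Or target just your hottest tables (table name is a placeholder from earlier sketches):
ALTER TABLE events SET (autovacuum_vacuum_scale_factor = 0.01);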

Nine "Query optimization over a config tuning". I agree with this a hundred percent. You can see a hundredfold or thousandfold changes in query performance by optimizing queries or adding indexes or things of that nature. Tuning, maybe it gets you a 2x change, maybe a 3x change. And ten "Index maintenance". So basically this is rebuilding your index periodically because no matter what you do, they're going to slowly degrade in performance. So definitely a great set of tips and check this out if you want to learn more.

Related to that, this is also the episode of Postgres FM that happened last week on "Beginner tips". So you can click here to play the episode or you can watch the YouTube video down here.

Next piece of content- "Effective PostgreSQL Monitoring: Utilizing the views pg_stat_all_tables and pg_stat_all_indexes in PostgreSQL 16". This is from enterprisedb.com. And there these are two views that give you the statistics about all the different tables in the database as well as all the indexes. And he specifically mentions two contributions that he added. One is adding the Last index scan timestamp(last_inx_scan) to the all_indexes view and the Last Sequential(last_seq_scan) scan timestamp to the all_tables view.

This gives you a great indicator of the last time a table or an index was scanned, so it helps you understand how often that index or table is being used. A lot of the stats that already exist don't really tell you when something last happened: they track how many times something was scanned, but not when the last scan was. So with these two additions, we can really see the last time indexes were used, or even the last time tables were used. If you want to learn more about that, you can check out this blog post.
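For reference, querying those two new Postgres 16 columns looks something like this; the ordering is just my choice for spotting unused objects.

-- Indexes that haven't been scanned recently (or ever) float to the top.
SELECT relname, indexrelname, idx_scan, last_idx_scan
FROM pg_stat_all_indexes
ORDER BY last_idx_scan NULLS FIRST;

-- Most recently sequentially scanned tables first.
SELECT relname, seq_scan, last_seq_scan
FROM pg_stat_all_tables
ORDER BY last_seq_scan DESC NULLS LAST;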

Next piece of content- "Remote Access Anything from Postgres". This is from crunchydata.com. And to do this, he's talking about the ogr_fdw foreign data wrapper. This is actually for doing geospatial work but apparently, this library can communicate with as he says quote "...dozens of formats and services using a single API". So once you create the extension ogr_fdw, you can access a ton of different data. You can access the Google Sheets via CSV URL, you can access an Excel document residing in S3, and you can even access data in a SQLite file over HTTP. So there are tons of different ways of accessing data using this extension. Check out this blog post if you want to learn more.

Next piece of content, PgBouncer 1.20.0 has been released. This looks like it's actually deprecating the online restart feature that uses the -R flag. I've never really used it, because all of my implementations use TLS and you can't use it with that. But apparently you can use the so_reuseport option to support online restarts instead. I haven't seen an actual procedure for doing that, but apparently it's possible. And then a number of different features were added. It doesn't look like there are any major bug or security fixes, just a feature release. So check it out if you want to learn more.

Next piece of content- "Generated columns and PostgreSQL". This is from dbi-services.com. Generated columns are great for creating a column generated from other data within that table. So like you could auto-calculate totals based on other values within that table. And this gives a simple example for that. I also tend to use it a lot for generating TSP fields for full tech search. So you can check this out if you want to learn more.

Next piece of content is actually a YouTube video called "PostgreSQL hidden gems (Tech Talk NB- April 2023)", also called "PostgreSQL can do what now?". This is a more programmer-focused talk about different capabilities such as crosstab queries, window functions, JSON operations, and full-text search queries, as well as row-level security. So if you're interested in any of that content, definitely check out this video.

And the last piece of content is "A PL/PgSQL Simple Roman Number Translator". So this is a part of a weekly challenge and he chose to use PL/PgSQL to do it. So if you want to learn more about how to do that, you can definitely check out this piece of content.
