
BRIN Pitfalls, pg_stat_statements Troubleshooting, Natural Join, Geography Type | Scaling Postgres 126


In this episode of Scaling Postgres, we discuss BRIN pitfalls, using pg_stat_statements for troubleshooting, natural full joins to compare tables, and the geography type.


Transcript



[00:00:00] In this episode of Scaling Postgres, we talk about BRIN pitfalls, pg_stat_statements troubleshooting, natural joins and the geography type. I'm Creston Jamison, and this is Scaling Postgres episode 126.


[00:00:22] Alright, I hope you, your family and coworkers continue to do well. Our first piece of content is "Avoiding the Pitfalls of BRIN Indexes in Postgres". This is from crunchydata.com, and BRIN is a block range index. Now, in this post they talk about blocks and pages, but they're pretty much interchangeable: the block is the default storage unit of Postgres, and a block range index indexes ranges of blocks. So it doesn't give you the specific data, but a range of blocks to look for the data in. That's what makes a BRIN index so much smaller, usually, and by default it stores ranges of 128 blocks. Now, to go over the pitfalls, they first create a test table using generate_series to create a series of timestamps, basically one row per second. And they use the pageinspect extension so you can actually take a look at what the index looks like. So for block number zero, it's going to be this range here, and then at block number 128, the block range that is stored is here, and it gives a little graphical representation of what you can see within each page or within each block. So that means you have multiple blocks, and that block range is zero to 128; this is the first row of the index, essentially. So when you go to search for a particular value, it has to look through this block range to find the exact value you're asking for. The index helps get you close, but not exactly there like a B-tree index would, which is why a B-tree index is faster at pulling out single values compared to a BRIN index. Then they looked at how many pages their particular table was using and at the count of rows, which was 1 million; basically there are about 20,000 records for each BRIN record, so you can see why BRIN is so small. And then they have this really great chart showing the number of records, the relative table size, the B-tree size and then the BRIN size. You can see the BRIN index is tenfold smaller at a small number of records, but here it looks like it's close to 1,000 times smaller, or maybe 10,000 times smaller at 2MB versus 21GB, once you get up to a billion records. So BRIN efficiency goes up the more records you have. Then they show how fast it is to retrieve a record at these different record counts. As you can see, the B-tree index is relatively fast at pulling out a single record, whereas with a BRIN index it seems to be ten times as slow to pull out that single record. But when you're looking at ranges, the performance gets much closer. So much so that when you're looking at a month of data in a billion-row table, the BRIN index is actually a little bit faster in retrieval than the B-tree index. And they have some conclusions here. One, BRIN space savings over B-tree are significant. Second, B-tree generally outperforms BRIN for single-record queries. Third, the query planner didn't even consider the BRIN index until there were at least 100,000 records in the table. And finally, BRIN performs better, comparatively, when looking for more records.
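
For reference, here is a minimal sketch of the kind of setup being described, assuming a simple table of one-row-per-second timestamps (the table, column and index names are illustrative, not the post's exact schema):

    -- ordered, append-only timestamps: BRIN's best case
    CREATE TABLE brin_test (created_at timestamptz);
    INSERT INTO brin_test
    SELECT g FROM generate_series('2020-01-01'::timestamptz,
                                  '2020-01-12'::timestamptz,
                                  '1 second'::interval) AS g;

    -- pages_per_range defaults to 128 blocks per index entry
    CREATE INDEX brin_test_created_at_idx ON brin_test USING brin (created_at);

    -- peek at the index entries with the pageinspect extension
    CREATE EXTENSION IF NOT EXISTS pageinspect;
    SELECT * FROM brin_page_items(
      get_raw_page('brin_test_created_at_idx', 2),
      'brin_test_created_at_idx');

Each row returned by brin_page_items shows the minimum and maximum values covered by one 128-block range, which is all the index actually stores.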

[00:03:43] So the question is, when should you use BRIN? And they have a very good statement here that says use BRIN when you have a large, insert-only, in-order table with large select results. So basically it works well when you have a lot of records in the table, because again, it's operating on block ranges; insert-only, because order is important, as we'll see in a second in this post, since if you have blocks covering all sorts of ranges, BRIN loses its efficiency; and large select results, meaning you're pulling out a lot of records. If you have a huge table but you're just pulling out one record at a time, just stick with the B-tree index. But if you're rolling up data like they show here, like a day, a month, a year, then BRIN starts to make more sense. Now, in terms of what to look out for, it's when you have things out of order, because then the block ranges don't line up neatly and your BRIN index will lose efficiency, and they have a great illustrative guide showing it. So basically this is the record layout for the pages, and at the top here it shows you the range of records included within each page; this is the range that the BRIN index will be looking at. Now, when you have a delete, you're going to remove this record from here. Once that space is vacuumed, it gets cleaned up, so that record is no longer there. But then if you have a new insert, the space can be reused, and now say this new record gets inserted here where four was. So now if you look, the new range is one to 19. So if it needs to look up the value ten, for example, it has to look in two different ranges now, and that's a lot more data to search through than just searching through one block before. That's why it's important to use a BRIN index with contiguous data values, like a sequence or an ever-increasing timestamp. And then updates also have an issue. So in terms of an update, let's say you updated twelve to four. Well, twelve gets deleted, because that's how an update works in Postgres, and a new record is inserted in this free page, and that impacts the block range. So now instead of looking in the range of 17 to 18, it's looking from four to 18. Now, if you tried to select twelve, well, twelve is now in the range of all three of these blocks, so it's going to pull all three blocks to search through. So then they did a test doing 20% deletes and 20% inserts, and you can see how the performance degrades. Looking at 100 million records, retrieving one record goes from 10 milliseconds to 3 to 8 seconds, and whether you pull an hour, a day or a month, it doesn't really matter. Even looking at 10 million records, one record goes from 8 milliseconds to 680 milliseconds because of all the additional ranges it has to search through. So in terms of the pitfalls of BRIN, you definitely want to keep the data contiguous and have few, if any, updates and deletes; BRIN works really well for append-only tables. So if you're interested in using BRIN, I highly suggest you check out this post. It was really great in terms of understanding how BRIN works and some of the pitfalls.
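
Continuing the illustrative sketch above, the trade-offs they measured are easy to probe in rough form: index sizes via pg_relation_size, and a range scan where BRIN shines (again, the names and dates are just the ones from the sketch, not the post's):

    -- compare table and BRIN index sizes
    SELECT pg_size_pretty(pg_relation_size('brin_test')) AS table_size,
           pg_size_pretty(pg_relation_size('brin_test_created_at_idx')) AS brin_size;

    -- a range query that plays to BRIN's strengths
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*)
    FROM brin_test
    WHERE created_at >= '2020-01-02' AND created_at < '2020-01-03';

If the plan shows a bitmap index scan on the BRIN index with few rows removed by "Rows Removed by Index Recheck", the block ranges are still tight; a large recheck count after churn from updates and deletes is exactly the degradation the post describes.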

[00:06:54] The next post is "A Quick pg_stat_statements Troubleshooting Hack". This is from cybertec-postgresql.com. They have a scenario where they need a monitoring system but don't have one implemented yet; they do, however, have pg_stat_statements. So basically what they do is create a snapshot table that takes snapshots of the pg_stat_statements view over time and stores them with a timestamp column. You can see here it creates a snapshots table with a timestamp column, followed by all the remaining columns from pg_stat_statements. And then, very quick and dirty, they just use \watch to repeatedly execute an insert of the new data from pg_stat_statements into the snapshot table. Now, of course, you can implement this in different ways, but this is kind of a quick and dirty troubleshooting hack, as they call it, to get some statistics over time on how queries are performing. Then they give you some different queries you can use to analyze the data. So, for example, one is: which top three queries were burning the most CPU time between times t1 and t2? As well as: which queries had the biggest absolute execution time compared to the hour before? Now, he says you can also improve the usage here by doing a couple of things. The first is to create the snapshot table as an unlogged table. What that means is it's not going to write to the WAL and it won't be sent to the replicas, but of course it's no longer crash-safe, so you could lose the data at any time. But it does give better performance and doesn't impact the system as much to do this logging, essentially. The second thing, to make it a little more efficient, is to maybe not include all the queries every time from pg_stat_statements. And the third option is to not store a new row for a query if its statistics haven't changed since the last snapshot. So again, this is a pretty interesting hack to be able to monitor the queries executing in your Postgres system. If you're interested in that, check out this post from cybertec-postgresql.com.
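
A minimal sketch of the idea, assuming pg_stat_statements is already installed (the table name, interval and timestamps here are illustrative, not the post's exact code):

    -- unlogged: cheaper writes, but not crash-safe and not replicated
    CREATE UNLOGGED TABLE stat_statements_snapshots AS
    SELECT now() AS ts, * FROM pg_stat_statements WHERE false;

    -- in psql: run the insert once, then \watch re-executes it every 60s
    INSERT INTO stat_statements_snapshots
    SELECT now(), * FROM pg_stat_statements;
    \watch 60

    -- top 3 queries by time spent between two snapshot times t1 and t2
    -- (total_time was renamed total_exec_time in Postgres 13)
    SELECT queryid, max(total_time) - min(total_time) AS ms_spent
    FROM stat_statements_snapshots
    WHERE ts BETWEEN '2020-08-03 10:00'::timestamptz
                 AND '2020-08-03 11:00'::timestamptz
    GROUP BY queryid
    ORDER BY ms_spent DESC
    LIMIT 3;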

[00:09:00] The next post is "Use NATURAL FULL JOIN to compare two tables in SQL". So he had seen a post that was using UNION and EXCEPT to compare two tables and show the differences, but he felt it was more elegant to use a natural full join. This compares all the rows between the two tables and finds the differences. So here's his implementation, and it basically shows you the differences as well as which table the differing values come from. Now, he did say this will not work if you have NULLs in your table, but he gives another implementation for data with NULL values, basically using the NULL-safe IS NOT DISTINCT FROM predicate. So if you want to use this technique to compare two different tables, definitely check out this post from jooq.org.
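
As a rough sketch of the technique (table names invented for illustration): each side gets a marker column that exists only in that subquery, so the NATURAL FULL JOIN matches on all the real, identically named columns, and any row that fails to match on every column comes back with a NULL marker for the missing side:

    SELECT *
    FROM (SELECT 'in_a' AS a_marker, a.* FROM table_a a) a
    NATURAL FULL JOIN
         (SELECT 'in_b' AS b_marker, b.* FROM table_b b) b
    WHERE a_marker IS NULL OR b_marker IS NULL;

A row identical in both tables pairs up and is filtered out; a row that differs in any column appears twice in the result, once per side, showing exactly where the tables diverge.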

[00:09:50] The next post is "PostGIS and the Geography Type". This is from crunchydata.com, and it explains the geography type that's available within PostGIS for calculating with spherical coordinates. He talks about the importance of this because normally we think in flat planes when looking at a map, like Cartesian coordinates, an X and a Y. But really we need to think spherically, because the Earth is a sphere. On a flat map it looks like New York sits right between Los Angeles and Paris, France, but that's not how it is on the Earth; the shortest path actually arcs to the north, because along the sphere that is the shortest distance. So in many cases it's important to use this geography type to deal with spherical coordinates. Now, he does say that if your working area is geographically small enough to fit in a projected system, a state, a region, a country, then using a projected system and the geometry type for all your calculations will result in simpler SQL and faster queries. But if you have global data, you'll find your system is simpler to code and reason about if you use the geography type. So if you're interested in PostGIS and learning more about the geography type, definitely check out this post.
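
As a small illustration of the difference (the coordinates are rough approximations, and this assumes PostGIS is installed):

    CREATE EXTENSION IF NOT EXISTS postgis;

    -- geography: distance computed on the spheroid, returned in meters
    SELECT ST_Distance(
      'SRID=4326;POINT(-73.97 40.78)'::geography,  -- New York
      'SRID=4326;POINT(2.35 48.86)'::geography);   -- Paris

    -- geometry with lon/lat data: a flat-plane distance in degrees,
    -- which isn't a meaningful real-world measure at this scale
    SELECT ST_Distance(
      'SRID=4326;POINT(-73.97 40.78)'::geometry,
      'SRID=4326;POINT(2.35 48.86)'::geometry);

The geography version gives roughly 5,800 km; the geometry version returns a unitless number of degrees, which is why the post recommends geography for global data.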

[00:11:11] The next piece of content is the webinar "Being Committed - A Review of Transaction Control Statements 3/3", a follow-up. This is from 2ndquadrant.com, and it's the third in a series about transactions in Postgres. They cover transactions and locking, why it's important to avoid collisions, and how you can improve transaction flow by reducing contention. It also talks about the blind spot of in-flight commits: if you issue a commit and then the system crashes, you don't know whether that commit actually went through. They show a way around that, "a way to recover after failing at the worst time", as they phrase it, and then "commit at most once", showing how to recover after failing at the worst time and in the worst way. So if you're interested in that, definitely check out this webinar from 2ndQuadrant; you can just click the link here to register for it.
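
One known way to close that blind spot, which may or may not be exactly what the webinar demonstrates, is to record the transaction ID before committing and later check it from a new connection with txid_status() (available since Postgres 10):

    BEGIN;
    SELECT txid_current();  -- save this value on the client before COMMIT
    -- ... do the work ...
    COMMIT;                 -- if the connection dies here, the outcome is unknown

    -- later, from a fresh connection, using the saved value (here 123456):
    SELECT txid_status(123456);  -- 'committed', 'aborted' or 'in progress'

That way the client can decide whether to retry without risking committing the same work twice.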

[00:12:03] The next post is "Beyond jsonb: a generalized unstructured data type for Postgres". This is talking about jsonb and how the types supported within jsonb are number, boolean, string, null, array and object. But they bring up the point: how do you handle things such as binary data? There's no really great type for it, and you're going to run into issues if you try to store it as a string. How do you handle internet types, point types, et cetera? They're arguing that perhaps Postgres should have a superset of functionality to support more within JSON, and they mention several other implementations that have added features to the JSON standard to support more complex data types. So really, this is a discussion and an opinion piece, and if you're interested in learning more, definitely check out this post from ongres.com.

[00:13:00] The next piece of content is "Continuous PostgreSQL Backups using WAL-G". So if you're interested in using WAL-G for your backup and restore procedures, definitely check out this post from supabase.io.

[00:13:16] The next piece of content is "NSS on Windows for PostgreSQL development". NSS, as they say here, is an alternative TLS library to OpenSSL, so if you're wanting to use it with Postgres on Windows, definitely check out this blog post from 2ndquadrant.com.

[00:13:35] And the last piece of content is: the PostgreSQL Person of the Week is Tatsuo Ishii. So if you're interested in learning more about Tatsuo and his contributions to Postgres, definitely check out this blog post. That does it for this episode of Scaling Postgres. You can get links to all the content mentioned in the show notes. Be sure to head over to scalingpostgres.com, where you can sign up to receive weekly notifications of each episode, or you can subscribe via YouTube or iTunes. Thanks.
