
Sequential UUIDs, Running Totals, Read Committed, TOAST | Scaling Postgres 41


In this episode of Scaling Postgres, we review articles covering sequential UUIDs, running totals with window functions, Read Committed vs. Repeatable Read, and TOAST.

Content Discussed

YouTube Video

Podcast Audio

Transcript



[00:00:00] In this episode of Scaling Postgres, we talk about sequential UUIDs, running totals, read committed versus repeatable read, and TOAST. I'm Creston Jameson, and this is Scaling Postgres, episode 41.

[00:00:23] Alright. Our first article this week is "Sequential UUID Generators", and this is from the 2ndQuadrant.com blog. Now, UUID of course means universally unique identifier, and one of the issues that can crop up with using them in a database system is that they are not sequential; they're entirely random. So if you want to use one as a primary key, which has a unique index, every time you add a new UUID the database has to find the place in the index where it belongs to keep the index properly ordered. It's much easier to do that with a sequential integer, because you're essentially just appending it to the index, whereas here it has to search through the entire index for each new UUID generated to find the right location. So what this blog post talks about is imparting a little bit less randomness so that you can get somewhat more sequential UUIDs.

Now, they talk through some of the issues that entirely random UUIDs can cause, like slow insert performance and maybe even slow select performance, because less of the recently changed data is cached, since the changes land all over the index. It also talks about write amplification, where your WAL is larger due to having to do full-page writes because, again, you're touching many different pages, for example when you're updating an index.

They then talk about making UUIDs a little bit sequential and some naive implementations you could use: maybe you put an integer in front of a UUID and append some random digits, or, as they say here, you put a 32-bit Unix timestamp in front and append some random bits. But there's actually an extension that does this for you, called sequential-uuids. Now, this supports two modes. One is uuid_sequence_nextval, where you specify the sequence you want to use, a block size, and a block count. Or you can use uuid_time_nextval with an interval length and a block count. And they show roughly what the implementation looks like for using a sequence versus using the time. What makes this a little bit different is that either the sequence or the interval actually wraps around after so many UUIDs are generated, keeping a little bit of that randomness while diminishing the downsides of the UUID being entirely random.

And this is the part where I think the blog post gets even more interesting, because they did a benchmark. They compare four UUID generators: uuid_generate_v4, which is entirely random; one based upon time, with the configuration they show; one based upon a sequence with a block size of 256; and one based upon a sequence with a 64K block size. And they tested three different ways: small means they start from an empty table and just do inserts, measuring transactions per second, so how fast can they do the inserts; medium means the table essentially fits in memory; and large means it no longer fits in memory, so it essentially has to go to disk.

And what's very interesting is that you clearly see the cost of having entirely random UUIDs versus some sort of sequence imposed upon them. You can see how much the performance drops off, approximately five times slower at the large stage, and even the 256 block size variant goes about half as fast, since it's resetting the sequence pretty quickly. So that's for inserts. Then it also looks at write amplification, again starting with the small data set.
So with an entirely empty table doing the inserts, you'll notice the WAL size difference is already substantial due to the full-page write images that have to be done. Looking at the medium data set, the WAL size gets even worse; it looks like about five times as much WAL being written. But it's even worse than that, because once you normalize over how many transactions were actually processed, you can see it's a lot worse, maybe 20 times as much compared to using the extension with a time-based generator or a sequence with a 64K block size. Then of course they show the large data set and normalize it there too, and it still gets pretty bad. They do make note that the sequential writes of the disk they're using are pretty quick, but this definitely clearly shows the WAL size ramifications of using entirely random UUIDs.

And then they look at the cache hit ratio, so essentially select performance. Again, when the table is small, there's pretty much no difference. When it's medium, you see the entirely random generator dropping off. And with large, where the data no longer fits in memory, its cache hit ratio drops to around 85%, whereas with the time-based generator and the 64K block size sequence it stays up at approximately 99%.

So I thought this post was super interesting, and I loved how they really did an analysis to see how well each performs. You can really see the cost related to random UUIDs. Now, they tested a 256 block size and a 64K block size and did see differences, but the question is: what is the best setting for you? It's actually kind of based upon the table, so it's something you may need to experiment with to find the most optimized settings. So if you're using UUIDs for primary keys, or even if you're using them in an index, this is something to potentially keep in mind. And if you don't need an entirely random set of UUIDs, could you impose some sort of sequence on them to potentially get better performance out of your database? This is definitely a blog post to check out.
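To make the extension's two modes concrete, here's a rough sketch of how it might be wired up. The two function names come from the post itself, but the exact extension name, parameter order, and defaults are from memory, so treat this as illustrative and check the extension's documentation before relying on it.

-- Assumed extension name; see the sequential-uuids project for specifics.
CREATE EXTENSION sequential_uuids;

-- Sequence-based mode: the UUID prefix comes from a sequence and
-- advances after each block of generated values (block size and
-- block count can be passed as extra arguments, per the post).
CREATE SEQUENCE events_uuid_seq;
CREATE TABLE events (
    id uuid PRIMARY KEY
        DEFAULT uuid_sequence_nextval('events_uuid_seq'::regclass),
    payload text
);

-- Time-based mode: the prefix advances after each time interval
-- and wraps around after the configured number of blocks.
CREATE TABLE events_by_time (
    id uuid PRIMARY KEY DEFAULT uuid_time_nextval(),
    payload text
);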

[00:06:24] The next post is "Easy running totals with windowing functions in PostgreSQL". This is from "Musings, rants and ponderings of a DB architect" at dennisgobo.blogspot.com. Now, this is a relatively short blog post, but he basically talks about using a window function to generate running totals. So you have charges come in, and as each charge happens you want to display the running total, essentially the balance of an account. And this post was really great in terms of going over exactly the code you need; it explains everything with regard to what ROWS BETWEEN means, what UNBOUNDED PRECEDING means, and what CURRENT ROW is. So if you haven't gotten into window functions for purposes like running totals, this was a really great intro that explained all the details with a really good example. I definitely suggest you check it out if that's of interest.
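For readers who want the shape of the query without clicking through, here's a minimal running-total sketch; the charges table and its columns are invented for illustration.

-- Hypothetical table of account charges.
CREATE TABLE charges (
    id          serial PRIMARY KEY,
    charge_date date NOT NULL,
    amount      numeric(10,2) NOT NULL
);

-- Running total: for each row, sum everything from the start of
-- the frame (UNBOUNDED PRECEDING) through the CURRENT ROW.
SELECT charge_date,
       amount,
       SUM(amount) OVER (
           ORDER BY charge_date, id
           ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
       ) AS running_total
FROM charges;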

[00:07:25] The next post is "Transactions in PostgreSQL: READ COMMITTED vs. REPEATABLE READ", and this is from the Cybertec-postgresql.com blog. Now, this post is talking about transaction visibility, and they explain what read committed is. Basically, when you're in a transaction and you're querying a table, you will be able to read whatever has been committed. So if you're in the middle of a transaction and another user commits something, you will see it the next time you run a statement within your transaction; essentially, if it's committed, you can read it. The next option is repeatable read. What that means is that when you start your transaction, you essentially save a snapshot of what the data looks like, and even if other users are committing changes, you're not going to see a difference within your repeatable read transaction. Now, by default read committed is the mode that's enabled, and it's usually the one you want to keep if you're running an online transaction processing load; however, repeatable read may have uses in the case of a data warehouse. And they have a graphic that shows an example where someone is using the read committed isolation level: you can see that when user three does an insert into the table, the read committed user's count will reflect that committed change, whereas user two, who's using repeatable read, will consistently get 17 while within that transaction. So if you've ever been curious about the difference between read committed and repeatable read for transaction isolation, this is definitely a blog post to check out.
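If you want to see the difference yourself, a quick two-session experiment along these lines works; the table name is hypothetical, and the count of 17 echoes the post's example.

-- Session 1: start a REPEATABLE READ transaction.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT count(*) FROM t_test;   -- say this returns 17

-- Session 2: insert and commit a row while session 1 stays open.
INSERT INTO t_test VALUES (18);

-- Session 1: still sees the snapshot from its first statement.
SELECT count(*) FROM t_test;   -- still 17
COMMIT;

-- Under the default READ COMMITTED level, the second SELECT would
-- return 18, because each statement sees whatever was committed
-- before that statement began.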

[00:08:57] The next post is "The best Postgres feature you are not using – CTEs aka WITH clauses", and this is from the Craigkerstiens.com blog. Essentially, it covers what CTEs, which stands for common table expressions, or essentially WITH clauses, are. I like to think of these as self-contained subqueries: you could easily write queries using subqueries, but CTEs pull those subqueries out and give them a name, almost like their own function. So, for example, the post creates what I'll call a function, users_tasks: with WITH users_tasks you define what query you want it to store, and then you can chain CTEs together and use them. Now, he does have a caveat here, that the result may not be quite as performant SQL, but it definitely wins the award for being more readable, because it's much easier to parse a SQL statement in bits and pieces. So if you haven't used CTEs and you're potentially wanting to use them, definitely a blog post to check out.
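As a sketch of the pattern, with invented table and column names, a users_tasks CTE chained into a follow-up query looks roughly like this.

-- Each CTE is a named, self-contained subquery that the main
-- query (or a later CTE) can reference as if it were a table.
WITH users_tasks AS (
    SELECT users.id AS user_id,
           users.email,
           array_agg(tasks.name) AS task_list
    FROM users
    JOIN tasks ON tasks.user_id = users.id
    GROUP BY users.id, users.email
)
SELECT email, task_list
FROM users_tasks
ORDER BY email;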

[00:10:03] The next post is "Toasting up large objects: BLOBs, CLOBs, let's talk TOAST". So this is talking about TOAST, which stands for The Oversized-Attribute Storage Technique. Now, PostgreSQL, and they mention EDB Postgres here as well, uses a fixed page size, commonly 8 KB, and it doesn't allow a tuple, essentially a row, to span multiple pages; therefore it's not possible to store very large field values directly. So essentially it creates a pointer to a TOAST table, and they say, quote, "almost every table you create has its own associated, unique TOAST table, which may or may not ever end up being used, depending on the size of the rows you insert. A table with only fixed-width columns like integers may not have an associated TOAST table." And again, most of this is transparent to the user and enabled by default. The post goes into how TOAST tables are used and a little bit of the implementation; they're typically used for bytea and text fields that can store very large values. It then talks about the difference between TOAST and BLOBs/CLOBs. So, for example, bytea and text are toastable, so those columns in a table can be placed in the TOAST table; you don't need to use a special large object API, transactions are not required, and you don't need to track an OID. And this is something particularly of interest as you're scaling: the TOASTed object storage limit is 1 GB, whereas using the separate BLOB/CLOB large object interface, the storage limit is 4 GB. So if you're wanting a little bit more information about TOAST and how it works, this is definitely a blog post to check out.
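If you're curious which TOAST table backs one of your own tables, the catalog will tell you; this is a small sketch with a hypothetical table name.

-- Find the TOAST table associated with a given table, if any
-- (tables with only fixed-width columns may not have one).
SELECT reltoastrelid::regclass AS toast_table
FROM pg_class
WHERE relname = 'my_table';

-- See how much space the TOASTed data is actually taking.
SELECT pg_size_pretty(pg_relation_size(reltoastrelid)) AS toast_size
FROM pg_class
WHERE relname = 'my_table';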

[00:11:47] The next post is "How Postgres is more than a relational database: Extensions", and this is from the Citusdata.com blog. Essentially, they're talking about how one of the things that makes Postgres particularly interesting to use as a data storage system is its ability to accept extensions. So yes, they continue to add new and great features to the core, but extensions allow other communities, groups, or even companies to develop add-ons that fully utilize the core PostgreSQL database system while adding functionality to it. And he goes over some extensions "in the real world", as he says here. One is custom data types: for example, he talks about HyperLogLog, which basically does really fast approximate counts. He talks about monitoring in terms of pg_stat_statements, which is an extension you can add to get analytics on the queries taking place (there's a small usage sketch at the end of this segment). He talks about PostGIS, which essentially, quote, "turns Postgres into the world's most advanced open source geospatial database", and about their own extension, Citus, which transforms Postgres into a distributed, horizontally scalable database. He even talks about accessing data outside of Postgres using foreign data wrappers. Basically, extensions are a way for you to add capabilities that others in the community have built, or even build your own. So if this is of interest to you, definitely a blog post to check out.

The next blog post is "GUI tools for Postgres", and this is from the PG IO blog. It's essentially a list of GUI tools. First is pgAdmin; it's the one I tend to use when I've used one. They also talk about OmniDB and DBeaver, and they have a great list of other GUI tools. So if you're in the market for one, this is a potential blog post to check out.

Now, related to that, the last post is "Webinar: Introduction to OmniDB – follow up", and this is from the 2ndQuadrant.com blog. So again, the second GUI tool listed in the previous blog post was OmniDB, so if you're in the market for a GUI tool, maybe check out this webinar to see if it has the features you're looking for.
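Here's the small pg_stat_statements sketch promised above. Note that the extension also has to be added to shared_preload_libraries in postgresql.conf with a server restart, and on newer Postgres releases the timing column is named total_exec_time rather than total_time.

-- After adding pg_stat_statements to shared_preload_libraries
-- and restarting the server:
CREATE EXTENSION pg_stat_statements;

-- Top five queries by cumulative execution time
-- (column is total_exec_time on Postgres 13 and later).
SELECT query, calls, total_time
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 5;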

[00:14:03] That does it for this episode of Scaling Postgres. You can get links to all the content mentioned in the show notes. Be sure to head over to Scalingpostgres.com, where you can sign up to receive weekly notifications of each episode, or you can subscribe via YouTube or iTunes. Thanks.
