30% Faster, UUID Downsides, Growing WAL, Processes vs. Threads | Scaling Postgres 271


In this episode of Scaling Postgres, we discuss how to get 30% faster performance, the downsides of UUIDs, having too much WAL and whether Postgres should use processes or threads.


All right. I hope you, your friends, family, and coworkers continue to do well. Our first piece of content is "Making Postgres 30 Percent Faster in Production". They're talking about a scenario where, if you're running pgbench and you use prepared statements (the prepared protocol), it can actually be 30% faster than running it with the simple or extended protocol. So basically prepared statements give you a performance boost, particularly in the planning phase, by not having to replan every query.
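As a rough illustration of the idea (the pgbench test uses protocol-level prepared statements in the wire protocol, but the SQL-level equivalent shows the same plan-once, execute-many pattern; the `accounts` table here is a made-up example):

```sql
-- Plan once, then reuse the plan on subsequent executions
PREPARE get_account (bigint) AS
  SELECT * FROM accounts WHERE id = $1;

EXECUTE get_account(42);  -- later executions skip parse/plan work
EXECUTE get_account(43);

DEALLOCATE get_account;
```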

And where this comes in is talking about connection poolers. So most clients that I work with, when they're using the connection pooler, they use it in transaction mode. Because that enables you to essentially multiplex your connections to the server. You could have maybe 10 or 20 application connections or client connections for every one server connection on the Postgres database. And generally, that gives you higher throughput.

But as a consequence, in transaction pooling mode, session-based features won't work. We even talked about that last week in Scaling Postgres. But this post talks about an enhancement to PgCat, which is another Postgres connection pooler, where it actually supports prepared statements in both session and transaction mode. So this is pretty incredible. You can still run in transaction mode and get the benefits of that mode for being able to scale up a lot of connections and generally get more throughput, but you can also get the performance benefits of using prepared statements. And they have a pgbench benchmark here going through PgCat for 1, 10, 100, and 1,000 clients, showing the simple protocol versus extended versus prepared.

And they said they got about a 15% performance boost relative to simple and about a 30% performance boost relative to extended. So if this holds true in production, that would be awesome. And how they're handling this is that the pooler actually maintains a cache of whether a statement has been prepared and then sends the follow-up execution to the very same Postgres connection. And they go into a couple of paragraphs explaining in more detail how they're actually doing the implementation.

So this is pretty incredible if it's actually going to work and give you that much of a performance boost. Now, there are some caveats. For example, I see down here where they said that if your client library prepares its statements using PREPARE explicitly, PgCat will ignore it, and that query isn't likely to work outside of session mode. So basically it looks like there are certain conditions that need to exist for this to work.

But if you're interested in this, you should also consult the next post, "Speed up Postgres with transaction pooling with prepared statements". This is Lukas's "Five minutes of Postgres". He covers that same post, but he also looks at the source code that made the change, and it looks like they based it on a pull request that has been sitting in the PgBouncer project. So it looks like PgBouncer initially had a pull request providing this very feature, but it just hasn't been rolled in yet.

But he also shows how there's additional work being done and that this type of feature may be released in PgBouncer in the future. So it looks like we may get both PgCat and PgBouncer being able to support prepared statements even while doing transaction pooling, which is awesome. He even showed someone who put the PgBouncer PR in production, and it showed a drastic reduction in CPU utilization for their particular use case. So definitely interesting things going on, and I encourage you to check out both posts if you want to learn more.

Next piece of content- "UNEXPECTED DOWNSIDES OF UUID KEYS IN POSTGRESQL". This is going back to the discussion of whether your primary keys or identifiers should be integer IDs or UUIDs. And I'm definitely in the camp of always using bigints for your IDs unless you have an explicit reason to use UUIDs. And I generally say that for performance reasons, because it's a well-known problem that random UUIDs cause write amplification and generate a ton of WAL as you're inserting data, since the values get inserted all over the index because they are essentially random IDs.

But you can avoid some of that if you use a type of UUID where the initial portion is sequential and the last portion is random. This post talks about that a little bit, but also about a specific query downside as well. They have a comment here with regard to what I was saying, quote: "Since workloads commonly are interested in recently inserted rows, this means that a random UUID-based index will need to keep essentially the whole index in its cache, whereas the sequential id index can make do with only recent pages". So that's another benefit: recent data will all be kept together on disk, and that should make querying that data much faster.

But he did an actual test where he created a table with a bigint column, a random UUID column using version 4, and a timestamp-based UUID column, where the initial portion is timestamp based and the latter portion is random. They're calling that uuid_v7. Then he inserted, I believe, 10 million rows, created indexes on each of them, and did some index-only scans to see how they perform. Querying by the bigint returned in 526 milliseconds, whereas querying by the random UUID returned in 1.2 seconds, so over twice as slow doing the count. Now, one might think that's size-related, but if you query by the time-based UUID, not the fully randomized one, it returns in 541 milliseconds, very close to the bigint.
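A sketch of that kind of test setup (my approximation with made-up names; the post's exact schema may differ):

```sql
CREATE TABLE records (
  id      bigint GENERATED ALWAYS AS IDENTITY,  -- sequential
  uuid_v4 uuid DEFAULT gen_random_uuid(),       -- fully random (version 4)
  uuid_v7 uuid                                  -- timestamp-prefixed; filled via an extension or the app
);

CREATE INDEX ON records (id);
CREATE INDEX ON records (uuid_v4);
CREATE INDEX ON records (uuid_v7);

-- After loading rows and vacuuming, these counts can each be
-- satisfied by an index-only scan on the matching index
SELECT count(id)      FROM records;
SELECT count(uuid_v4) FROM records;
SELECT count(uuid_v7) FROM records;
```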

So it's definitely not the size; it appears to be the randomization. He did an explain plan to take a look, and he says, quote, "Holy behemoth buffer count batman. What's going on with the number of buffers...". You can see that for the bigint query, the shared buffer hit count was about 27,000, while the shared buffer hit count for the random UUID was 856,000, so drastically higher. Now, they're doing an index-only scan here, and indexes don't contain information about whether a row is visible or not, so you basically have to go to the heap to determine that. But to help with that, there's a visibility map that's used during index-only scans to give you that information.
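To see those buffer numbers yourself, you'd run the count under EXPLAIN with the BUFFERS option (assuming a table `records` with an indexed `uuid_v4` column; both names are just examples):

```sql
-- ANALYZE actually runs the query; BUFFERS adds shared hit/read counts
EXPLAIN (ANALYZE, BUFFERS)
SELECT count(uuid_v4) FROM records;
-- In the plan output, compare "Heap Fetches" and "Buffers: shared hit=..."
-- between the sequential-key and random-key versions of the query
```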

Now, the performance catch is that for sequential IDs, the next value is almost always on the same page, so the visibility check is relatively efficient. But with random UUIDs, there's only a small chance that the next value will be on the same page, and he says this results in about 8.5 million extra buffer lookups for visibility checks. When using the time-based UUID, you can see the shared buffer hits were 39,000. So definitely more than the bigint, but again, drastically smaller than the random UUID.

So his conclusion, which we've seen in other posts for other reasons, is, quote, "if you want to use UUIDs, try to use a sequential variant". That's definitely a must for performance reasons. And apparently UUID v7 is going to be a new standard, and he says hopefully it's coming in PostgreSQL 17. However, he was using a function called uuid_generate_v7, and he didn't mention where it came from, but when I did a Google search, I did find a GitHub project called pg_uuidv7 that is an extension for creating these.
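If you install that extension, usage would look roughly like this (assuming pg_uuidv7 provides the `uuid_generate_v7` function as described; the `events` table is a made-up example):

```sql
CREATE EXTENSION IF NOT EXISTS pg_uuidv7;

SELECT uuid_generate_v7();  -- returns a timestamp-prefixed UUID

-- Example use as a primary key default
CREATE TABLE events (
  id      uuid PRIMARY KEY DEFAULT uuid_generate_v7(),
  payload jsonb
);
```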

And the exact function mentioned is the one he used. So he might have used this, I'm not sure, but he did say there are other implementations available. So if you want to use UUIDs, definitely use a sequential version. Hopefully it will be coming in a next major version of Postgres, or maybe you could check out this library or others.

Next piece of content- "Why is my WAL directory so large?". He's talking about the WAL, which is the Write Ahead Log, whose primary purpose is to handle crash recovery. So changes get written ahead to the Write Ahead Log first, and then the actual data files get changed. The WAL should be kept at a size, as he says here, between your configuration settings of min_wal_size and max_wal_size. But Postgres also takes wal_keep_size into account to determine how many WAL files should be retained when you're doing, say, streaming replication.
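To check those settings and the current size of the WAL directory, something along these lines works on recent Postgres versions:

```sql
-- The WAL-sizing settings he mentions
SELECT name, setting, unit
FROM pg_settings
WHERE name IN ('min_wal_size', 'max_wal_size', 'wal_keep_size');

-- Actual on-disk size of the pg_wal directory (Postgres 10+)
SELECT pg_size_pretty(sum(size)) AS wal_dir_size
FROM pg_ls_waldir();
```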

And so what happens is, when these WAL files are no longer needed, they either get deleted, if you're not archiving the WAL, or get archived to a separate location. So the first reason WAL can start to stack up is that the archiving process is failing; maybe you have a bug in the command you're using, and you're not archiving the WAL at all. The second reason is that archiving is slow: you're inserting and changing data so fast that the archiving process can't keep up. Now, I've generally only seen this if you're using some sort of over-the-network object storage.

Like trying to send an individual WAL file to S3. That can definitely be slow, and you can insert data faster than you're actually archiving it. The third reason is replication slots, or I should say, orphaned replication slots. A replication slot tells the primary database "do not get rid of this WAL until the replica using the slot has consumed it". So if you have a replica that goes offline while its slot is still in place, that slot is essentially orphaned, and basically the WAL won't be removed.
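A query along these lines shows which slots exist and roughly how much WAL each one is pinning on the primary (this is my own sketch, not necessarily the exact query from the post):

```sql
-- Inactive slots with a large retained amount are the usual culprits
SELECT slot_name,
       active,
       pg_size_pretty(
         pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)
       ) AS retained_wal
FROM pg_replication_slots;
```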

So those are the reasons, but what's great about this post is that it gives you some queries you can use to determine what's going on. The first query is great because it looks at pg_stat_archiver, showing what the last archived WAL was and what the current WAL file being written is, so you can look for any latency between those and see how far behind you are. In the next query, he shows how you can look at replication slots. And then, of course, pg_stat_replication shows you a lot of information about how replication is keeping up as well. But definitely check out this blog post if you're interested.
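The archiver check he describes might look something like this (a sketch; the post's query may differ in detail):

```sql
-- Compare the last successfully archived WAL file with the one
-- currently being written; a large gap means archiving is behind
SELECT last_archived_wal,
       last_failed_wal,
       pg_walfile_name(pg_current_wal_lsn()) AS current_wal
FROM pg_stat_archiver;
```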

Next piece of content. I'm actually showing the Postgres Weekly article that was published this week, because it linked to a post titled "Postgres Reconsiders Its Process-Based Model (Again)". That article is LWN subscriber content, so I didn't want to paste the subscriber-based link in here. But this was an interesting read, so you can subscribe to LWN, or you can click on the link in Postgres Weekly if you want to learn more about this discussion. And there are definitely people on both sides of the fence on whether Postgres should make this change, and what the advantages and disadvantages are. I found it an interesting read, so you can definitely check this out if you're interested.

Next piece of content- "When Hell Freezes Over?". This is a discussion primarily about transaction ID wraparound, in relation to what's coming in Postgres as well as how OrioleDB is handling some of this. So it looks like 64-bit transaction IDs are coming to Postgres eventually. But this post argues that even once you achieve that, you're still going to have some performance disadvantages. And he states that by taking a first-principles or greenfield approach to the code, you can actually get even more performance. He's advocating something that OrioleDB is working on, which is, quote, "a new open-source pluggable storage system for Postgres...".

So this is essentially an alternative to zheap, basically supporting an undo log. Instead of creating a new row for every update in Postgres and then eventually vacuuming the old one up, they're talking about using an undo log to record the changes and doing the update in place. They're advocating that their new storage system overcomes a lot of different problems. I think it's great that they're working on this, and if you're interested in learning more about it, definitely check out this blog post.

Next piece of content- "Native enums or CHECK constraints in PostgreSQL?". They've come to the conclusion that I've seen in a number of posts: they don't want to use native enums, and they're only going with CHECK constraints because they're just easier to work with and easier to manipulate. There are, unfortunately, just too many downsides to using enums. They go into detail about their decision process and why they decided to only use CHECK constraints, which is what I've heard from other people as well. So check out this blog post if you're interested.
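To show why the CHECK-constraint side is easier to manipulate, here's a minimal sketch (table, constraint, and status values are all made-up examples, not from the post):

```sql
-- A plain text column constrained to a known set of values
CREATE TABLE orders (
  id     bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  status text NOT NULL
    CONSTRAINT orders_status_check
    CHECK (status IN ('pending', 'shipped', 'delivered'))
);

-- Adding a new allowed value is just a constraint swap,
-- with no type surgery as you'd need for a native enum
ALTER TABLE orders DROP CONSTRAINT orders_status_check;
ALTER TABLE orders ADD CONSTRAINT orders_status_check
  CHECK (status IN ('pending', 'shipped', 'delivered', 'returned'));
```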

Next piece of content. It looks like PostgreSQL has achieved both the most desired and the most admired ranking in the 2023 Stack Overflow Survey. So kudos to Postgres. You can check out this content if you want to learn more about it.

Next piece of content- "The Do's and Don'ts of Postgres High Availability Part 2: Architecture Baseline". They have some recommendations about what you should do architecturally for high availability. The first is providing a quorum with a minimum of three nodes. You don't want to use two nodes because you're going to run into issues like false failovers, split-brain problems, routing irregularities, and partition vulnerability. So you definitely want at least three, if not more. And they show some diagrams with an example of where they're actually placing each of these items in an availability zone.

The next area they cover is rules for connecting. Basically, they suggest using a proxy between your application and the database, so the application doesn't have to know which node is the primary, which is the standby, et cetera, but just uses that routing layer to determine where to go. They have some things to watch out for in designing it: maybe you want some redundancy in that layer, and you should also be careful how you configure it. And finally, they cover backups and some guidelines with regard to those. So if you're interested, definitely check out this blog post.

Next piece of content- "Fun with PostgreSQL Puzzles: Moving Objects with Arrays, Sequences, and Aggregates". Basically, this is the solution to last December's Advent of Code for day 17. So definitely check this out if you're interested in that.

Next piece of content- "PostgreSQL as a Vector Database: Create, Store, and Query OpenAI Embeddings With pgvector". If you want to work with OpenAI and use pgvector and Postgres to store those embeddings, you can definitely check out this blog post.
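The basic shape of storing and querying embeddings with pgvector looks roughly like this (the `documents` table is a made-up example; 1536 is the dimension of OpenAI's text-embedding models):

```sql
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE documents (
  id        bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  content   text,
  embedding vector(1536)
);

-- Nearest neighbors by Euclidean distance; <-> is pgvector's
-- distance operator, and $1 is the query embedding from your app
SELECT id, content
FROM documents
ORDER BY embedding <-> $1
LIMIT 5;
```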

Next piece of content- "Deploy Pgpool on K8S as Load Balancer". They discuss how to set up Pgpool on Kubernetes for this purpose.

Next piece of content. There was another episode of Postgres FM last week. This one was on “Memory”. So if you want to learn about how it works, important settings, and how to go about tuning it, definitely listen to this episode here or look at the YouTube channel down here.

Next piece of content. The PostgreSQL person of the week is Creston Jamison, basically me. So if you want to learn more about me and my work with Postgres, definitely welcome you to check out this piece of content.

And the last piece of content, we did have another episode of the Rubber Duck Dev Show this past Thursday afternoon. This one was on “3D & Robotics, using Dragon Ruby with Kota Weaver”. So we talked about 2D versus 3D, along with a fair amount of math principles, and how it relates to the robotics he's working with in his company. And now, he's using the Dragon Ruby platform to help him with simulations. So if you're interested in that type of content, we welcome you to check out our show.