
Postgres In 2024 | Scaling Postgres 297


In this episode of Scaling Postgres, we discuss hopes for Postgres in 2024, whether you need foreign keys, incremental backups, and five ways of doing NOT EXISTS.


Transcript

Hi everyone, welcome to 2024. I hope everybody had a great holiday season and you're back and ready to go for 2024. But before we get started, I hope you, your friends, family, and coworkers continue to do well. To kick off 2024, I thought I would start with this post from Jonathan Katz on "THOUGHTS ON POSTGRESQL IN 2024". This is from jkatz05.com, and at the bottom he said he thought this was going to be a quick post, but he ended up spending multiple days on it. I think it shows; it is a really good post and I would encourage you to read it, because I can't cover everything and he has links to other supplemental material you should definitely check out. But to get started, he talks about a feature development meeting at PGCon in 2023 where the topic was: what are the biggest challenges for PostgreSQL users? He basically put those into three different buckets.
One was availability, another was performance, and another was developer features. So a lot of the blog post covers those. Now, in terms of availability, what he's talking about is having your database available, meaning that there's no downtime, potentially due to misconfigurations, that if you're doing upgrades, there's minimal downtime, or even avoiding excessively locking something when you want to make some change. Basically, making availability as high as it can possibly be. Now, the first area he talks about is logical replication and how the new features that have come out this year make it possible to support active-active. This is the new name for multimaster: basically having two active or two primary servers at the same time. So if one server goes down, you still always have access to an instance to write to.

The other one is talking about doing blue/green deployments, so you always have an instance to go to, and logical replication can provide that capability and zero-downtime upgrading. Now, I have not used a lot of logical replication in the past, but I've recently started using logical replication for Postgres upgrades. So for the last few upgrades I've done, I haven't been using pg_upgrade, but a logical replication solution instead, for a couple of reasons. One, it can help you minimize downtime even more than pg_upgrade can. I mean, with the link capability of pg_upgrade, you can really cut down your downtime, but the logical replication switchover is even faster in some of my experiences. And two, when you're dealing with collation changes that can occur across operating systems, you're going to be stuck with that on pg_upgrade and have to reindex all your indexes. With a logical replication solution, you don't have to worry about that, so that's super great. I've liked using it so much, I think this could be a path forward for doing any sort of upgrade. So not just major versions, but even if you wanted to do a minor version upgrade, I could see using this technique as well to kind of minimize some of that downtime. And could this be the future, rather than streaming from the WAL? In other words, do the standbys just become logical replicas as opposed to WAL streaming replicas? I don't know, but I see that as an interesting path forward. Now, he doesn't discuss that particularly here, but he talks all about the enhancements that happened this year in terms of the active-active feature set, as well as the things that are waiting in the wings in 2024, which I'm really excited about. One is being able to replicate sequences, since right now sequence values don't get passed over to the replica. That would be awesome to have. Two is the fact that DDL statements aren't being transferred from the primary to the replica. Having that capability would be a great boon.
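
For anyone curious what that looks like in practice, here's a minimal sketch of the logical replication setup involved in such an upgrade, with hypothetical host, database, and publication names; a real migration also needs schema copying, sequence handling, and a careful cutover.

```sql
-- On the old (source) cluster, which must have wal_level = logical:
CREATE PUBLICATION upgrade_pub FOR ALL TABLES;

-- On the new (target) cluster, after restoring the schema there
-- (for example with pg_dump --schema-only):
CREATE SUBSCRIPTION upgrade_sub
    CONNECTION 'host=old-primary dbname=appdb user=replicator'
    PUBLICATION upgrade_pub;

-- Once the subscription has caught up, stop writes on the old cluster,
-- resync sequence values manually, and point the application at the new cluster.
```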

And then the other thing that he didn't mention here that would be awesome to incorporate in Postgres is being able to transfer the state of a replication slot from a primary to a replica. So if the replica gets promoted, that replication slot is already present and at the correct location. If these particular three areas are addressed, I could see a lot more use cases for logical replication, and they would be great additions. The other thing he talks about is minimizing locking. So there are certain activities, like altering a table, that require an ACCESS EXCLUSIVE lock, and he wants to minimize the number of times that happens. For example, just as there's an option to create indexes concurrently, could you do some of these table changes concurrently as well? That would be awesome. The next bucket that he talked about is performance-related changes. So the first one he talks about is being able to support really large instance sizes: can Postgres scale to use a 448 CPU box with 24 terabytes of RAM, for example? The other area he talks about is IO. So being able to support direct IO, not relying necessarily on the operating system, but also asynchronous IO. And apparently, a lot of that work is in progress, and we'll see what comes to fruition in 2024.
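
For context on the locking point: index builds already have a concurrent option that avoids blocking writes, which is the kind of behavior he'd like to see extended to more ALTER TABLE operations. A quick sketch with a hypothetical table and column:

```sql
-- A plain CREATE INDEX blocks writes to the table while the index builds.
-- The CONCURRENTLY variant avoids that, at the cost of a slower build
-- and not being usable inside a transaction block.
CREATE INDEX CONCURRENTLY orders_customer_id_idx ON orders (customer_id);
```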

Another huge area that he mentions, though I don't know how much work is being done on it, is parallel recovery. So right now, if your primary system goes down and it needs to restart, and it starts replaying the WAL, it does that in a single process. Or if you have a replica that's trying to follow a primary and get caught up, it uses a single process to do that. But the problem is, I've had some customers where it's a struggle to get that replica caught up because it's a single process. If you could parallelize that recovery, it would allow the replicas to catch up faster. It would allow a primary to reboot faster as it replays the different WAL files. So that's not only a performance reason, but also an availability reason, so this kind of fits into both buckets. And then the next bucket is developer features. So he's praising things like the multirange data type that was recently added, as well as the work that's been done on JSON. But he said Postgres has fallen a little bit behind the current SQL/JSON specification, so they should probably apply more work to that. Also, adding features that exist in some other databases, like global temporary tables, global partitioned indexes, and things of that nature, would help people migrate to Postgres. And of course, he follows up by talking about AI and machine learning data and how the pgvector extension is great, but should some of those features go into core Postgres? He does talk briefly about security, particularly about encryption and transparent data encryption, and how working towards that is important. He talks a little bit about extensions and how it's great that we have extensions, but he says he feels the effects of extension sprawl. There are so many extensions out there. I'm not necessarily sure how you address that, but apparently, there are multiple extension repositories available, and if those could be coalesced somewhat, that may make the issue easier. And lastly, he talks about community building and basically making it as accessible as possible for everyone to join the community and make a contribution. So that's a great point of view as well. So definitely highly encourage you to check out this blog post. I think it's the best one for the new year.
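
As a brief aside on the multirange data type he praises (added back in Postgres 14), here's a minimal example with made-up dates showing a single value that holds several non-contiguous ranges:

```sql
-- Does any of the ranges in the multirange contain this date? Returns true.
SELECT '{[2024-01-01,2024-01-10),[2024-02-01,2024-02-05)}'::datemultirange
       @> '2024-02-03'::date AS covered;
```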

Next piece of content - "Do you really need Foreign Keys?". This is from shayon.dev, and I'll go out on a limb and say you don't need them until you need them. What I mean by that is, starting out, maybe they're not as important if you know your code and all the data changes that are happening to the database and you follow particular guidelines. But once you start introducing more people and more programmers to the mix, that's when constraints like foreign keys become incredibly important. Maybe you're using an ORM, but most programmers can bypass that ORM, which could defeat the constraint capabilities of some application frameworks.
And then the only thing protecting you from some of the issues you may encounter is a foreign key constraint. So that's why I say you probably don't need them until you need them. But his first reason not to use them religiously is the performance impact during writes and deletes. That's definitely the case whenever you place some sort of constraint on the system: it must be checked before whatever action you're doing, whether you're inserting, updating, or deleting data. Now, he doesn't have an INSERT example here, and I wonder what the performance difference of that would be. But he shows a DELETE example for a particular schema he set up, and it took half a second to delete a set of records with foreign key constraints. When he removed those foreign key constraints and did the same delete, it took one tenth of a second, so it was five times faster without the foreign key constraints doing DELETE operations. So there definitely is a performance cost. The next reason he gives to possibly not use them is unnecessary lock contention. So yes, checking those constraints could lead to some lock contention, so that's definitely something to be aware of in your system. Next is the complexity of database migrations and scaling. I don't know so much about scaling, but you definitely have to do database migrations more carefully when there are foreign key constraints, because you have to do things in a certain order to make sure that everything is accurate.
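
To make that trade-off concrete, here's a minimal sketch, with hypothetical tables, of the kind of schema where deleting parent rows forces Postgres to check or cascade through the referencing table; it isn't the post's exact setup, just the general shape.

```sql
CREATE TABLE users (
    id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY
);

CREATE TABLE orders (
    id      bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    user_id bigint NOT NULL REFERENCES users (id) ON DELETE CASCADE
);

-- Every user row deleted here also has to find and remove matching rows
-- in orders, which is roughly the extra work a benchmark like his measures.
DELETE FROM users WHERE id < 1000;
```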

Next is unwarranted indexes. So generally, when you put on a foreign key constraint, you're going to want an index added as well. So those are additional indexes that maybe you wouldn't ordinarily have, but you need them there on the off chance you're going to have to validate that constraint when you're doing some sort of operation. Maybe you wouldn't ordinarily need them for the queries against that particular table, so it's just an additional tax on modifications to that table. Basically, he says, practice reflection. So he says, quote "As you start on designing a new project, I think it's perfectly fine to initially incorporate foreign keys", but then revisit this periodically to make sure that it makes the most sense. I think I'd probably recommend something similar at this time. Go ahead and implement them while the project's small, but know that they could be a burden for your performance. And as you start scaling, then make the determination: should we drop the constraints for these particular tables in these cases to get the most performance, or is the data integrity more important? You can make those decisions at that time. But I thought this was a thought-provoking blog post. Definitely encourage you to check it out.
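
On the unwarranted indexes point: Postgres indexes the referenced primary key automatically (because a primary key is backed by an index), but not the referencing column, so you typically add something like this (hypothetical names again):

```sql
-- Without this index, deleting or updating a row in users requires scanning
-- orders to verify (or cascade to) any rows that still reference it.
CREATE INDEX orders_user_id_idx ON orders (user_id);
```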

Next piece of content- "PostgreSQL 17: Support for incremental backups". This is from dbi-services.com, and this is awesome: the fact that we may get incremental backups in Postgres 17. So right now, when you're doing a backup, you would use pg_basebackup. That copies all the data files and some of the WAL files, and then you would need to retain all of the WAL files generated since then so you can restore to a particular point in time. When you need to do a restore, you restore that backup that you took and apply the WAL files to bring the database back up to that point in time. So depending on how often you take that full backup, it could be one day's worth of WAL files, it could be three days, it could be seven days. But if your system goes down and you need to do a restore, you're going to be applying all of those WAL files that have been generated.
But with an incremental backup, it's a different way to do it. You take the full backup and then you only back up what has changed since the last full backup. So maybe you take a full backup once a week, incrementals every day, and you still have the WAL files, so you can restore to a particular point in time in presumably a much faster way. Or maybe you still do a full backup every day, but every couple of hours or every 4 hours you do an incremental backup, and then apply the WAL files from there. That should allow you to restore even faster. So he shows an example of that here: he creates a database, inserts some data into it, and then does a pg_basebackup, which just backs up the database to a particular directory. He adds some additional data, and then he does an incremental backup using pg_basebackup, using the incremental flag that points to the backup manifest created by the initial backup, and defining a new directory where this incremental backup should go.

Basically, he used a backups_incr1 directory. There is something that needs to be enabled in order to make this happen, and that is WAL summarization. That helps the system determine what blocks have changed since the last backup, whether full or incremental. So you do need to enable that before you take the first full backup. He went ahead and did that and did the backups again. Then he did a second incremental backup, and then to do the restore you actually use a new tool called pg_combinebackup. So basically you run pg_combinebackup, give it the main backup directory and all the incrementals you want, and output it to a particular destination directory. In this case he's using var/tmp/restore. Once this has been restored, you can simply navigate to this directory and start PostgreSQL, and it works. So like I said, restore times should be much faster using this incremental backup solution. So this is great.
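
Here's a rough sketch of the server-side piece, enabling WAL summarization before the first full backup, with the command-line steps from the post noted in comments; the parameter and flag names reflect my understanding of the Postgres 17 work and could change before release.

```sql
-- Enable WAL summarization so the server can track which blocks changed
-- between backups; this needs to be on before taking the first full backup.
ALTER SYSTEM SET summarize_wal = on;
SELECT pg_reload_conf();

-- The backup and restore steps then happen on the command line, roughly:
--   pg_basebackup -D /path/to/full                                (full backup)
--   pg_basebackup -D /path/to/incr1 \
--       --incremental=/path/to/full/backup_manifest               (incremental)
--   pg_combinebackup /path/to/full /path/to/incr1 -o /path/to/restore
```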

Now, related to this, the same topic was covered in "Five minutes of Postgres" by pganalyze.com, in "Waiting for Postgres 17: Incremental base backups". Lukas doesn't talk about the post above, but rather about the patch that was done, and he shows examples of running it in his video. So if you want to learn more about this, definitely encourage you to check out his piece of content as well.

Next piece of content- "5 Ways to implement NOT EXISTS in PostgreSQL". This is from eversql.com. He proposes an example where you have a set of users and they have sessions with the database or the application, and you want to know which users have not had any sessions. So a set of users quote "... NOT having any activity". He proposes five different ways to do this. One is using NOT EXISTS. Another is using NOT IN, so not in the set of IDs. Another is using <> ALL. Another is doing a join between them, a left outer join from users to sessions, and then checking for a NULL value on the primary key of the session to determine which users haven't had sessions. Or using the EXCEPT keyword. So he goes through and shows the different syntax of the commands to use here. I won't go into detail on this; you can definitely check out the blog post for that. He suggests cases where one technique is slow and one technique is fast, but I like the performance results at the bottom here. So for each of the cases in the columns: the first one is 10,000 users and 10,000 sessions. The next one is a million users and 10,000 sessions. The next one is 10,000 users and a million sessions. And the final column is a million users and a million sessions. Now, looking across this, clearly don't use <> ALL, and you probably don't want to use NOT IN, although with small data sets it's not terrible. But the ones that seem to have the most consistent performance are NOT EXISTS and the LEFT JOIN method, and I have tended to use the LEFT JOIN method; I haven't used NOT EXISTS too much. But what's interesting is that NOT EXISTS actually gives the lower performance of the two for the largest sizes, although it's close. So I don't know what the statistical significance of the performance difference is here, but definitely, these are the two I would probably go to if I needed to do these types of queries. But check out this blog post if you want to learn more.
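
As a quick illustration of the two approaches I'd reach for, using the post's users-and-sessions idea with hypothetical column names:

```sql
-- Users with no sessions, using NOT EXISTS:
SELECT u.id
FROM users u
WHERE NOT EXISTS (
    SELECT 1 FROM sessions s WHERE s.user_id = u.id
);

-- The same result with a LEFT JOIN plus a NULL check on the session primary key:
SELECT u.id
FROM users u
LEFT JOIN sessions s ON s.user_id = u.id
WHERE s.id IS NULL;
```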

Next piece of content- "Praise, Criticism, and Dialogue". This is from rhaas.blogspot.com, and this was an interesting post. It's talking about the Postgres community and how he feels a lot of the mailing list culture, quote, "...feels bruising". Meaning there's a lot of criticism but not as much praise, and he feels we should try to focus more on the praise. You still need criticism to build an excellent product, but it should be balanced with sufficient praise as well. And I can definitely agree. I find myself being too critical of myself and others or other situations, so having that balance is definitely important. What I found interesting is all of the comments talking about the importance of this message, essentially. So definitely a good post to talk about and think about.

Next piece of content: there was another episode of Postgres FM for each of the past two weeks. The first one was on "Partitioning by ULID", which I think of as a precursor to time-based UUIDs. But a lot of what they were talking about is using UUID version seven IDs, which are time-based in the beginning and random at the end, and how those will be really valuable in the future. People may tend to use those as opposed to bigserial columns or bigints with a sequence attached to them, because they don't have the problem of fully random UUIDs that get written all over the index and cause performance problems for Postgres, but they still give you relatively incremental IDs that you can use for primary keys. So this is actually something I hope will come out in 2024 as well: version seven UUIDs for Postgres. They said it looks like it's been delayed until the standards groups actually finalize UUID version seven, but we'll just have to see what happens. But check out their piece of content if you want to learn more.
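
To make that concrete, one common pattern today is to declare the key as a plain uuid column and have the application generate UUIDv7 values (many client libraries can already do this), so new rows land near the recent end of the index rather than being scattered across it; a hypothetical sketch:

```sql
-- The id is an ordinary uuid column; the application supplies UUIDv7 values,
-- whose time-ordered prefix keeps inserts clustered at the "newest" end of
-- the primary key index instead of spread randomly across it.
CREATE TABLE events (
    id         uuid PRIMARY KEY,
    created_at timestamptz NOT NULL DEFAULT now(),
    payload    jsonb
);
```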

The next episode from them was on "work_mem", and this is a Nikolay solo episode where he talks about using work_mem and how he likes to configure it, and I really like his opinion here. It's kind of what I do as well, where you basically set a particular value to start, make sure it's not so high that you're going to run into an out-of-memory situation, and then you basically monitor what's going on. So things like checking the logs to see if you're writing temporary files, because that may mean your work_mem needs to be a little bit higher for particular queries. Maybe you increase work_mem for the whole system, but another path you can take is to do it only for particular users. A reporting user really needs to run some complex queries, so this user gets a higher work_mem. Or maybe, you know, this particular query is going to use a lot of work_mem, so for that particular session or for that particular transaction where the report needs to be run, you give it a high work_mem. But it's definitely important to log and monitor what's going on to make sure that you minimize issues of your work_mem being too low and causing queries to write to disk, because that just diminishes your performance. But definitely another great episode. I encourage you to check it out.
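
The monitoring and per-user, per-transaction approaches he describes look roughly like this; the role name and values are just placeholders.

```sql
-- Log every query that spills to temporary files, so you can spot
-- work_mem being too low (0 means log temp files of any size).
ALTER SYSTEM SET log_temp_files = 0;
SELECT pg_reload_conf();

-- Give a reporting role a higher work_mem than the system default.
ALTER ROLE reporting_user SET work_mem = '256MB';

-- Or raise it only for the one transaction that runs a heavy report.
BEGIN;
SET LOCAL work_mem = '512MB';
-- ... run the report query here ...
COMMIT;
```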

The last piece of content is "Demand the impossible: rigorous database benchmarking". This is from erthalion.info and this is more of a thought piece on the challenges of benchmarking and all the variables that you need to take into account when doing it and how hard it really is, along with some examples that he's encountered. So if you want to learn about the challenges of benchmarking, definitely encourage you to check out this blog post.
