
11TB WAL, pgec Writes, Patroni 3 & Citus, PostgREST | Scaling Postgres 256


In this episode of Scaling Postgres, we discuss WAL growing to 11 Terabytes, pgec supporting writes, Patroni 3.0 & Citus for high availability, and setting up PostgREST.


Transcript

All right. I hope you, your friends, family, and coworkers continue to do well. Our first piece of content is "11TB WAL in PostgreSQL". This is from engineering.adjust.com. They're talking about a situation where, on a Saturday afternoon, things were going fine and then suddenly they got notified that the disk space for their main database kept increasing. Once they looked into it, they realized it was basically the WAL. The WAL had grown to 11 terabytes in size, which was significantly larger than it should be. Whenever I hear about unbounded WAL growth, my first thought is usually some sort of orphaned replication slot.

So is a replica down, or was a slot created and then never removed, so it just retained all of this WAL? Or is the WAL not being archived? That's how I like to think through what the issue might be. But in their investigation, the replicas were fine, while the logs showed some issues. The problem was not a replication slot; it was that the archive command wasn't working. They use pgBackRest to do their WAL archiving, and they were getting 'permission denied' errors. So it all came down to improper permissions set on the directory pgBackRest needed to access in order to do the WAL archiving.
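As a sketch of that kind of diagnosis, these are the stock catalog views I'd check first; the queries are generic, not taken from their incident:

```sql
-- Any replication slot holding back WAL? Look for inactive slots
-- or a large gap between the current LSN and restart_lsn.
SELECT slot_name, active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;

-- Is archiving failing? A climbing failed_count points at archive_command.
SELECT archived_count, failed_count, last_failed_wal, last_failed_time
FROM pg_stat_archiver;
```

An inactive slot with a large `retained_wal` points at a forgotten slot, while a growing `failed_count` points at the archive command, as it did in their case.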

Now they corrected the directory permissions and stopped and restarted pgBackRest, but the problem persisted. The reason was that their init script had the error. Apparently, the init script had changed, but the service had not been restarted until right when this problem happened. So basically, they deployed a change to their init script but never stopped and started the service to test it out. When the service was eventually restarted, the bug caused its problems. If you want to learn more about that, definitely check out this blog post.

The next piece of content- "pgec reads: memory replicated cache, writes: PostgreSQL with a Redis API". This is from shortishly.com. They're talking about pgec, which stands for Postgres Edge Cache. This is a new caching system that is compatible with Redis and Memcached and also offers a REST API, so you can use any of these APIs to interact with it. What it does is cache information from Postgres using logical replication. I covered this in a previous episode of Scaling Postgres and wanted to keep monitoring this new tool to see how it's coming along. One thing they're showing here that I don't believe was mentioned previously is that they now support mutating operations.

So if pgec receives a DEL or an HSET through the Redis API, it can actually apply the change: it directly updates PostgreSQL, and then logical replication eventually updates the cache on pgec itself. The post goes into more detail on the implementation so you can learn more. Again, this is built with Erlang, and it looks like they're using ETS tables as the cache. So if you want to keep up with what's going on with this project, definitely check out this blog post.
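As a rough sketch of what that looks like from the Redis side (the key layout and field names here are illustrative assumptions, not pgec's exact naming):

```
# Illustrative redis-cli session against pgec; key names are assumptions.
redis-cli HGETALL pub.grades.234            # read, served from the replicated cache
redis-cli HSET pub.grades.234 grade "B+"    # write, applied directly to PostgreSQL
redis-cli DEL pub.grades.234                # delete, removes the row upstream too
```

The write path goes straight to PostgreSQL, and the cache catches up via logical replication, so reads stay eventually consistent with the database.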

Next piece of content- "Patroni 3.0 & Citus: Scalable, Highly Available Postgres". This is from citusdata.com. We mentioned with the new version of Citus, I believe it was 11.2, that Patroni 3.0 now worked with it. So there's a new version of Patroni, and this blog post is actually written by the maintainer of Patroni. He talks about the enhancements in Patroni 3.0 and also how it works with Citus. Historically, when you set up Patroni, you would have your primary Postgres database system and one or more standbys or replicas.

Then you have the applications talking to the primary through HAProxy, so they can be redirected to a standby in case of a failure, plus some sort of consensus store; here they're using etcd, but you can use different tools for that as well. In the Citus implementation, where they're doing scale-out Postgres, there are coordinator roles and worker roles. Generally, applications talk to the coordinator, and the queries get distributed to the particular workers that have the data of interest.

Because again, you're scaling out your tables across different individual database servers. So you still have that structure, but each part of Citus now gets a replica for high availability: the coordinator gets a standby, and each worker gets a standby as well. This enables you to not only scale out the database but also get high availability using Patroni. If you want more details about this, definitely check out this blog post.
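As a sketch of how this is wired up, Patroni 3.0 adds a `citus` section to its configuration, with the coordinator and each worker forming their own Patroni group; the values below are placeholders, not from the post:

```yaml
# patroni.yml excerpt -- illustrative values
scope: citus-demo
citus:
  group: 0          # 0 = coordinator; workers use group 1, 2, ...
  database: citus   # the database where the Citus extension lives
```

Each group runs its own leader election, so a worker failover doesn't disturb the coordinator or the other workers.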

Next piece of content- "POSTGREST: GET STARTED WITH AUTOMATED POSTGRESQL APIS". This is from cybertec-postgresql.com. They're talking about PostgREST, which basically gives you a REST API for your database. It's a utility that sits between your application and the database. Instead of sending selects, inserts, updates, and deletes to your database through a database connection, you send GETs, POSTs, PATCHes, and DELETEs through PostgREST, and it makes the changes to the database.

So if you prefer working with Postgres through an HTTP API, you can use PostgREST to do that. And apparently, it generates the API from your database automatically: it looks at the system catalogs, what tables exist, and things of that nature to be able to do that. So it's a quick way to get an API for your application. They go into how to install it and get it set up with some sample data. If you're interested in that, you can check out this blog post.
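As a minimal sketch of what that setup looks like (the connection string, schema, and role names below are placeholders, not values from the post), PostgREST is driven by a small config file:

```
# postgrest.conf -- minimal sketch with placeholder values
db-uri = "postgres://authenticator:secret@localhost:5432/app_db"
db-schemas = "api"          # schemas exposed through the API
db-anon-role = "web_anon"   # role used for unauthenticated requests
```

With that running, a request like `GET /todos?id=eq.1` maps to `SELECT * FROM api.todos WHERE id = 1`, and a JSON `POST /todos` becomes an insert into that table.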

Next piece of content- "Handling integer sequence overflow without downtime in Postgres". This is from pganalyze.com, and this is the next episode of "Five minutes of Postgres". They covered the blog post "The Integer at the End of the Universe: Integer Overflow in Postgres". Lukas gives his perspective on that blog post and also showcases some other historical posts where integer IDs did run out. He tells the story behind them and how they were handled in three or four different cases. So if you want more information about that, you can check out this piece of content.
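To make the problem concrete, here's a back-of-the-envelope check for how long an int4 primary key has left given its current value and an estimated insert rate; the numbers are made-up placeholders, not figures from any of the posts discussed:

```python
# Regular integer columns top out at 2^31 - 1.
INT4_MAX = 2**31 - 1  # 2147483647

def days_until_overflow(current_value: int, inserts_per_day: int) -> float:
    """Estimate days remaining before a sequence exceeds int4 range."""
    remaining = INT4_MAX - current_value
    return remaining / inserts_per_day

# Example: 1.9 billion IDs used, 5 million inserts per day
days = days_until_overflow(1_900_000_000, 5_000_000)
print(f"{days:.0f} days left")  # roughly 49 days
```

Running that kind of estimate periodically is a cheap way to know whether you need the zero-downtime migration techniques these posts describe, or whether you have years of headroom.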

Next piece of content- "Introduction to Postgres Backups". This is from crunchydata.com. This is an introduction to the different ways you can back up your database. There's, of course, the easiest way to do it: just a pg_dump and a restore. But generally, this works best for smaller databases; a large database can take a very long time to dump and restore. It is a logical backup, though, and that has some advantages: you can back up only the schema or particular tables. The other approach is physical backups, meaning backing up the data files along with the write-ahead log files.

The tool Postgres provides to do that is pg_basebackup. It enables you to back up your database as well as the WAL files and allows you to do point-in-time recovery of that database. They go through the process of how you could set that up and then do a restore. And then they cover pgBackRest. This is a very popular tool that's not bundled with Postgres, but it has some advantages, such as being able to do differential and incremental backups. So they go over using that tool as well.
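As a quick sketch of the three approaches side by side (the file paths, database name, and stanza name are placeholders, and each command of course needs a running cluster):

```
# Logical backup and restore of a single database
pg_dump -Fc -f app_db.dump app_db
pg_restore -d app_db_restored app_db.dump

# Physical base backup, streaming WAL alongside it
pg_basebackup -D /backups/base -X stream -P

# Incremental backup with pgBackRest (requires a configured stanza)
pgbackrest --stanza=main backup --type=incr
```

The logical pair is the simplest but slowest at scale; the physical tools are what you combine with WAL archiving to get point-in-time recovery.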

Now, they do have a chart down here that compares the different backup methods: logical backups with pg_dump, and physical backups with pg_basebackup and pgBackRest, saying what each is good for. But I will say that the 'recovery to a point in time' row is actually incorrect for pg_basebackup; it should get a checkmark there, because pg_basebackup does support point-in-time recovery. If you want to learn more about how to back up your system, you can check out this blog post.

Next piece of content- "TLS setup on Postgres 15 – Common Practice". This is from highgo.ca. They're talking about the different changes you can make to Postgres to enable secure, encrypted communication between clients and servers. There are server settings you make via the postgresql.conf file where, primarily, you give it certificate information on what it should use to encrypt communications. But you also have to change the pg_hba.conf file to define what connections are allowed to speak to a given database, user, et cetera.

Primarily, this means using the hostssl option and then defining the different authentication methods, if you want to use certificates for that. And lastly, they mention the client options, where you can require SSL connections only, or allow SSL if it's available, and they discuss that here. So if you're interested in learning more about setting up TLS connections for Postgres clients talking to Postgres servers, you can check out this blog post.
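A minimal sketch of both sides of that configuration (the file names, database, and user below are placeholders):

```
# postgresql.conf -- server side
ssl = on
ssl_cert_file = 'server.crt'
ssl_key_file = 'server.key'
ssl_ca_file = 'root.crt'

# pg_hba.conf -- require TLS plus client-certificate authentication
hostssl  app_db  app_user  0.0.0.0/0  cert
```

On the client side, setting `sslmode=require` (or stricter, like `verify-full`) in the connection string enforces encryption from that end as well.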

Next piece of content- "Which is worse when working on production databases? Being drunk or tired?". This is from ledgersmbdev.blogspot.com. He's talking about, basically, human factors in database operations: while drunk or while sleepy, should you be working on your production database? The answer is probably not. But he goes into more depth in a talk he gave at FOSDEM, which is linked here so you can actually watch the video. If you're interested in that type of content, you can check this out.

Next piece of content- "PostgreSQL 14 Internals". This is from postgrespro.com. I believe the final section of the book PostgreSQL 14 Internals has been translated and is now available in a PDF version. So you can check out this link if you're interested in that.

Also from postgrespro.com is "PostgreSQL 16: part four or CommitFest 2023-01". These are the different patches that were worked on during the January CommitFest. They have details about each one; you can click through or scroll down to see more. There's a lot of detail presented here about each patch.

Related to that is "A PostgreSQL Developer's Perspective: Five Interesting Patches From January's Commitfest". This is from timescale.com. While they're not listing everything, as I think the previous post was, this one focuses on five patches in particular. So if you're interested in what's potentially coming in Postgres 16, you can check out these two blog posts.

Next piece of content- "PGSQL Phriday #006: One Thing You Wish You Knew While Learning PostgreSQL: psql commands". This is from andreas.scherbaum.la. This is an entry we missed on last week's Scaling Postgres due to the timing of our shows. The blog post covers psql's \timing option and \watch option. I highly agree and would say these are the number one and number two commands I wish I had known when I started using psql. They're definitely at the top of my list of things to be aware of and use while doing database management. Check this blog post out if you want to learn more.
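For anyone who hasn't tried them, here's what those two look like inside a psql session (the query itself is just an example):

```
\timing on                               -- report execution time after each query
SELECT count(*) FROM pg_stat_activity;
\watch 5                                 -- re-run the previous query every 5 seconds
```

\timing gives you instant feedback on query cost, and \watch turns any query into a lightweight live monitor without leaving psql.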

Related to that, this is the PGSQL Phriday Wrap-Up & Summary. This is from scarydba.com, where he goes over all the different posts that were submitted.

The next piece of content: there was another episode of Postgres FM this week. This one was on "Wait events", and they covered what they are, why Postgres has them, and how to use them to help diagnose performance issues. So if you're interested in that, you can watch their YouTube video or listen to the audio version.

Next piece of content, the PostgreSQL person of the week is Samay Sharma. If you're interested in learning more about Samay and his contributions to Postgres, definitely check out this blog post.

And the last piece of content: we did have another episode of The Rubber Duck Dev Show this past Thursday afternoon. This one was on "Monetizing Your Developer Content With Seb Wilgosz & Lucian Ghinda". So if you write a blog, have a podcast, or produce video content, we covered how you could potentially monetize it or at least get paid something for producing it. If you're interested in that, we welcome you to check out our show.
