Showing Plans, Atomic, Configuration, Migrations | Scaling Postgres 107

Join Over 1,000 Engineers & Get New Episodes Weekly!

In this episode of Scaling Postgres, we discuss how to show live query plans, the importance of atomic operations for scaling out, configuration options and DBaaS migrations.

Content Discussed

YouTube Video

Podcast Audio


[00:00:00] In this episode of Scaling Postgres, we talk about showing plans, atomic, configuration and migrations. I'm Creston Jameson, and this is Scaling Postgres, episode 117.

[00:00:20] All right, well, with the craziness still going on, I hope you, your family and coworkers are safe. And as always, I hope you're having a great week. Our first piece of content is "pg_show_plans: Watching execution plans in PostgreSQL LIVE". This is from Cybertec, and they're talking about a new extension that I believe they've released called pg_show_plans, which allows you to see active query plans. Now, they mention there will be some overhead to this, but the performance overhead of pg_show_plans will be discussed in a future blog post. This post tells you how to go ahead and get started using the extension. You have to clone the repo, set an environment variable, and do a make and install. Basically, they give you all the commands on how to get it set up. Then you need to add it to your shared_preload_libraries, so it will require a restart. But once you get that set up, you can look at query plans. They did an example where they set up a long-running query with a pg_sleep and then queried the pg_show_plans view. So you see the actual query being run here: the select all from pg_show_plans is one record being returned, and the next one is the select all from pg_stats. So you can see the exact plan being used for each query. In other words, for a long-running query, you can get its plan using this extension. This is really great; I can see a lot of use cases where it could be beneficial. The only question is what the performance hit is like, which they're going to cover in the next blog post. What would be interesting, since adding it to shared_preload_libraries requires a restart, is whether there's a way to activate or deactivate it that would only require a reload. But a very interesting new extension.
So if you want to learn more, go ahead and check out this blog post from Cybertec. The next article is "Optimizing SQL, step one: EXPLAIN in PostgreSQL, part one", and this is from Highgo.ca. This is a basic post explaining how EXPLAIN works, so if you're using the extension from the previous post, this can tell you how to interpret the results. They start from the very basics, explaining how query processing works and running through each of the stages. Then it shows a basic EXPLAIN command and goes over the different output and what it means. They also cover things like making sure your table is analyzed to get the most accurate reporting when doing an EXPLAIN, and that EXPLAIN ANALYZE actually runs the query for you. What's also interesting, in the last part of the blog post, is that they build a query from a given plan output. So they look at the plan and reverse engineer what the query is, telling you how to go through and do it. I've never seen this done before, but it's an interesting technique to really get to understand EXPLAIN output. So if you're interested in learning more about this, definitely check out this blog post.
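The two points they make about statistics and EXPLAIN ANALYZE can be sketched like this (the table name is a placeholder):

```sql
-- Analyze the table first so the planner's statistics are current,
-- then inspect the plan:
ANALYZE orders;
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

-- EXPLAIN ANALYZE actually executes the query, so for data-modifying
-- statements it's safest to wrap it in a transaction and roll back:
BEGIN;
EXPLAIN ANALYZE DELETE FROM orders WHERE customer_id = 42;
ROLLBACK;
```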

[00:03:39] The next piece of content, also from Highgo.ca, is "Atomic Commit and Atomic Visibility for PostgreSQL Explained". This is talking about a future where we want to move towards a scale-out scenario: multiple PostgreSQL instances essentially acting as a common database, using foreign data wrappers and storing part of the data on each of those PostgreSQL instances. In that scenario, you're going to want an atomic commit, basically a commit that occurs across multiple foreign servers. They go through the process, explaining why that's important. You do a PREPARE TRANSACTION in each of what they're calling the participants (essentially the foreign servers), the participants return a result, and then, depending on that result, the whole transaction is either committed or rolled back. So essentially a two-phase commit, and you need it to happen atomically, all or nothing. Then they talk about atomic visibility, meaning that when you do a query, you want atomic visibility across the different foreign servers, and how this is a feature that's going to be important. Again, they graphically demonstrate it: you have one DB node here and a second here, and depending on where different things happen, a commit could occur at a different point in the timeline. You want this visibility to be atomic, so you know whether something has been committed or not across the whole cluster of machines. Then they go a little bit into global snapshots. So again, this is kind of forward-looking planning for this feature. If you're interested in that, definitely check out this blog post.
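The two-phase commit they describe exists in plain SQL today on a single server; in the scale-out case, a coordinator would run the same steps against each foreign server. A minimal sketch (the table and transaction names are placeholders):

```sql
-- Phase 1: the participant does its work and votes "ready".
BEGIN;
INSERT INTO accounts (id, balance) VALUES (1, 100);
PREPARE TRANSACTION 'txn_participant_1';

-- Phase 2: the coordinator either commits everywhere...
COMMIT PREPARED 'txn_participant_1';
-- ...or, if any participant failed to prepare, rolls back everywhere:
-- ROLLBACK PREPARED 'txn_participant_1';
```

Note that prepared transactions require max_prepared_transactions to be set above zero on the server.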

[00:05:20] The next piece of content is "What's wrong with Postgres?". This is a follow-on post to the one that was mentioned last time, "Where is Postgres running?". The way he interprets determining what's wrong with Postgres is basically that you look in the log file. He says that on Debian and friends (so, like, Ubuntu) it's quite easy: you just run pg_lsclusters, and its output tells you exactly where the log file is. However, for Red Hat and friends it's a little bit harder, and he goes through some different techniques you can use to determine where the log file is. He even has a flowchart helping you determine it. It definitely seems to be a lot easier to do this with Debian. So if you're interested in learning more, check out this blog post.
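One alternative to hunting through the filesystem, which the post may or may not cover, is to ask a running server directly:

```sql
-- Where is logging going, and where is the data directory?
SHOW log_destination;
SHOW logging_collector;
SHOW data_directory;

-- On PostgreSQL 10 and later, when the logging collector is in use,
-- this returns the path of the current log file:
SELECT pg_current_logfile();
```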

[00:06:08] The next piece of content is actually a YouTube video, posted on the EnterpriseDB YouTube channel, on how to configure and tune the PostgreSQL server. It covers a great many different configuration options that are available, and then it also goes into actual tuning recommendations, as well as some different tuning tools you can use. The tuning recommendations start at around the 38-minute mark, but prior to that they go through the other topics in this webinar. So if you're interested in that, go ahead and check out this YouTube video.
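For configuration changes of the kind a webinar like this covers, you can avoid hand-editing postgresql.conf with ALTER SYSTEM. The values below are placeholders, not recommendations:

```sql
ALTER SYSTEM SET shared_buffers = '2GB';
ALTER SYSTEM SET effective_cache_size = '6GB';
ALTER SYSTEM SET work_mem = '32MB';

-- Many parameters take effect on reload; shared_buffers needs a restart.
SELECT pg_reload_conf();

-- See which changed settings are still waiting on a restart:
SELECT name, setting, pending_restart
FROM pg_settings
WHERE pending_restart;
```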

[00:06:46] The next piece of content is "A tale about migrating a PostgreSQL database between different DBaaS providers". Apparently a customer wanted to migrate from one database-as-a-service provider to another, and because these are essentially walled gardens or platforms, they couldn't do a direct replication. So what they did is create an intermediate database instance: they took the source DBaaS database, replicated it to an intermediate server (basically using the archived WAL files to replay what happened there), and then used that intermediate server to finally load the data into the target DBaaS provider. It's an interesting discussion of the process they used to do that. So if you have a potential need to do that, it's definitely a blog post you may want to check out.
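The intermediate server's WAL-replay setup might look roughly like this (PostgreSQL 12+ style recovery settings; the path and the exact fetch mechanism are placeholders, since the post's specifics aren't reproduced here):

```
# In the intermediate server's postgresql.conf: restore archived WAL
# files fetched from the source DBaaS and replay them continuously.
restore_command = 'cp /path/to/fetched_wal/%f %p'
recovery_target_timeline = 'latest'

# Plus an empty standby.signal file in the data directory, so the
# server starts in standby mode and keeps replaying incoming WAL.
```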

[00:07:43] The next piece of content is "Embedded SQL in C for PostgreSQL with ECPG". So, within the C language you can embed SQL, and they give an example of a sample embedded SQL program here. This bypasses libpq and uses the ECPG capabilities within Postgres. As it says down here, the preprocessor is called ecpg and is part of the PostgreSQL core distribution, and I assume the name means "embedded C for Postgres" of sorts. Then, in the conclusion, they talk about some of the disadvantages of this technique versus the advantages. So if you have a desire to potentially do this, definitely a blog post to check out. The next piece of content is "Logical replication between PostgreSQL and MongoDB". It looks like they're using a change data capture method and a MongoDB decoder plugin they developed to be able to stream data from PostgreSQL servers into MongoDB. So if you have this particular use case, you may want to check out this blog post from Highgo.ca.
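A decoder plugin like the MongoDB one they describe plugs into PostgreSQL's logical decoding machinery. A sketch of how it would be wired up, where the plugin name 'wal2mongo' and the slot name are assumptions rather than details confirmed by this episode:

```sql
-- Create a logical replication slot bound to the decoder plugin
-- (requires wal_level = logical):
SELECT * FROM pg_create_logical_replication_slot('mongo_slot', 'wal2mongo');

-- Peek at the pending changes the plugin would stream to MongoDB,
-- without consuming them from the slot:
SELECT * FROM pg_logical_slot_peek_changes('mongo_slot', NULL, NULL);
```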

[00:08:56] The next post is "Developing PostgreSQL for Windows, part three". This is part three of the series of posts about developing for PostgreSQL on Windows.

[00:09:09] The next post is "PostgreSQL GSSAPI authentication with Kerberos, part two: PostgreSQL configuration". Now, last week we presented part one, which described just setting up Kerberos on Ubuntu. This second one actually sets up the PostgreSQL configuration. They talk about setting up the key file, doing the PostgreSQL client authentication, the username maps, and user authentication. So if you have a desire to use Kerberos with PostgreSQL, definitely a blog post to check out.
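The pieces they walk through fit together roughly like this; the file paths, realm, and map name below are placeholders, not the post's actual values:

```
# postgresql.conf: point the server at its Kerberos keytab
krb_server_keyfile = '/etc/postgresql/krb5.keytab'

# pg_hba.conf: require GSSAPI authentication, applying a username map
# (hostgssenc additionally requires GSSAPI-encrypted connections, PG 12+)
hostgssenc  all  all  0.0.0.0/0  gss  map=krbmap

# pg_ident.conf: map Kerberos principals to database roles
# MAPNAME   SYSTEM-USERNAME           PG-USERNAME
krbmap      /^(.*)@EXAMPLE\.COM$      \1
```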

[00:09:43] The next piece of content is that the PostgreSQL Person of the Week is Dave Cramer. So if you're interested in learning more about his work and contributions to PostgreSQL, definitely check out this blog post.

[00:09:57] And the last piece of content is "Tile serving with dynamic geometry". Again, this topic covers pg_tileserv, their microservice for interfacing with Postgres. This one talks about building dynamic geometry, such as the series of hexagons here, and how you can do it using this tool. So if you're interested in this content, definitely a blog post to check out.
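The kind of hexagon geometry they show can be generated in SQL with PostGIS 3.1's ST_HexagonGrid; a function wrapping a query like this is the sort of thing pg_tileserv can expose as a layer. The sizes and bounds here are illustrative:

```sql
-- Generate a grid of hexagons covering a bounding box in web mercator:
SELECT hex.geom, hex.i, hex.j
FROM ST_HexagonGrid(
       1000.0,                                    -- hexagon edge size
       ST_MakeEnvelope(0, 0, 10000, 10000, 3857)  -- bounds to fill
     ) AS hex;
```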

[00:10:24] That does it for this episode of Scaling Postgres. You can get links to all the content mentioned in the show notes. Be sure to head over to our site, where you can sign up to receive weekly notifications of each episode, or you can subscribe via YouTube or iTunes. Thanks.