Index Deduplication, Fast Hierarchy, More Monitoring, Cloud-Native | Scaling Postgres 117

In this episode of Scaling Postgres, we discuss index deduplication in Postgres 13, fast hierarchical access, more essential monitoring, and cloud-native Postgres.

Content Discussed

YouTube Video

Podcast Audio

Transcript



[00:00:00] In this episode of Scaling Postgres, we talk about index deduplication, fast hierarchies, more monitoring and cloud native. I'm Kristen Jameson, and this is Scaling Postgres episode 117.

[00:00:21] All right, I hope you, your family and coworkers continue to do well during these times we live in. Our first piece of content is deduplication in PostgreSQL version 13 B-tree indexes. This is from cybertec-postgresql.com, and they're talking about an enhancement coming in version 13 that reduces duplication in B-tree indexes, basically allowing the index not to have to store every single value when it exists as a duplicate. And with this you get some pretty significant space savings. They say it's similar to how a GIN index works, in that if the index keys for different table rows are identical, a GIN index will store them as a single index entry. Now, that's not exactly how this works, but they're using the same concept of not storing every single identical value in the index to make storage more efficient. They link to the commit here, and they say, quote, the big difference between GIN and B-tree with deduplication is that duplicate index entries can still occur in B-tree indexes. At the bottom, where they test it out, they show the table and the index, insert what looks like 20 million rows, and then compare the sizes of the indexes. Now, the GIN index is pretty small compared to the B-tree index in general: in version 12 it's 51 megabytes for the GIN index versus 408 megabytes for the B-tree index, though access is a little slower than with the B-tree index, at least in this use case. But in version 13, the B-tree index is 126 megabytes. Not as small as the GIN index, because, again, duplicate entries can still occur.
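
A rough sketch of the kind of comparison the post makes (the table, column and index names here are hypothetical, and GIN on a plain integer column assumes the btree_gin extension; your sizes will vary):

```sql
CREATE EXTENSION IF NOT EXISTS btree_gin;

CREATE TABLE t_demo (val integer);

-- Insert many duplicate values so deduplication has something to work with.
INSERT INTO t_demo
SELECT x % 1000 FROM generate_series(1, 20000000) AS x;

CREATE INDEX idx_btree ON t_demo USING btree (val);
CREATE INDEX idx_gin   ON t_demo USING gin (val);

-- Compare on-disk sizes of the two indexes.
SELECT pg_size_pretty(pg_relation_size('idx_btree')) AS btree_size,
       pg_size_pretty(pg_relation_size('idx_gin'))   AS gin_size;
```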

[00:02:12] But there's a significant savings going from 408 megabytes down to 126 megabytes, so in this example roughly a three-times savings in space, which is huge. That means this index could fit in a smaller amount of memory, which may speed up access to it. So it's another great enhancement. Now, to get the benefits, you need to do something similar to how the index changes for version 12 were handled: you need to reindex. So if you do, say, a REINDEX CONCURRENTLY on your indexes once you upgrade to version 13, you should get the space savings for those indexes. But if you're interested in learning more, definitely check out this blog post.
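
For reference, that post-upgrade reindex might look something like this (index and table names are hypothetical; REINDEX ... CONCURRENTLY itself arrived in Postgres 12):

```sql
-- Rebuild a single index without blocking writes.
REINDEX INDEX CONCURRENTLY idx_btree;

-- Or rebuild every index on a table in one pass.
REINDEX TABLE CONCURRENTLY t_demo;
```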

[00:02:51] The next post, also from cybertec-postgresql.com, is PostgreSQL: speeding up recursive queries and hierarchic data. The focus of this is data that exists in a hierarchy, which a lot of the time you can query with a WITH RECURSIVE CTE. But they have another option here to speed up these types of queries, using something called ltree, which is part of the contrib package. He says, I quote, ltree implements a data type ltree for representing labels of data stored in a hierarchical tree-like structure. So since this is an extension, you go ahead and install it. He created a table and inserted some data into it, linking a parent and a child to one another. So, for example, B is the child, A is the parent, and he built this hierarchy here. Then using ltree you can get a listing of the hierarchies: each label is represented separated by a dot, and you can append rows to the end of the hierarchy. But he says to be careful whether the column is NULL or blank, because appending behaves differently in those two cases. And in terms of the labels, you have to use essentially letters, because special characters like a dot will result in an error. Now, he made a materialized view to precalculate these values, so he used WITH RECURSIVE to create them, and he's using ltree so that the table looks like this, with the path shown in the ltree column here, and then you can query this data. So you query the hierarchy using a WHERE clause with the contains operator, and it pulls out all of that set of the hierarchy. As he says, this is looking for everybody in A.B.C.G. And then you can also use an index with this. Now, he doesn't mention it, but I believe he's using a GiST index to be able to use this contains operator and speed up access. So if you have a need to query hierarchical data and you want to speed up access to it, maybe check out the ltree extension to see if it can assist you with doing that.
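
A minimal sketch of what using ltree can look like (the table and data here are hypothetical, not the post's exact example):

```sql
CREATE EXTENSION IF NOT EXISTS ltree;

-- Store the full path of each node as an ltree of dot-separated labels.
CREATE TABLE nodes (
    id   serial PRIMARY KEY,
    path ltree NOT NULL
);

INSERT INTO nodes (path) VALUES
    ('A'), ('A.B'), ('A.B.C'), ('A.B.C.G'), ('A.D');

-- A GiST index makes ancestor/descendant lookups fast.
CREATE INDEX nodes_path_gist ON nodes USING gist (path);

-- Find everything at or below A.B.C (<@ is the "is descendant of" operator).
SELECT path FROM nodes WHERE path <@ 'A.B.C';
```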

[00:05:05] The next posts are Essential PostgreSQL Monitoring Part Two and Essential PostgreSQL Monitoring Part Three. In a previous episode of Scaling Postgres we covered part one, which covered cluster-level parameters for PostgreSQL. Part two covers database-level ones. So, for example, it says one thing you want to monitor is connected clients, and it shows you exactly how to do that by just looking at the pg_stat_activity view. It gives the query you would run if you want to track the size of your database, how to track table bloat and index bloat across all tables, along with some tools you would need to pull that out, monitoring long-running transactions, looking at deadlocks, and then looking at the oldest vacuum and oldest analyze and what tables you need to query to be able to see that. Part three covers table, index and system-level metrics to monitor. That includes table size, table bloat and how many sequential scans are occurring, as well as, for indexes, their size, bloat and cache hit ratio. In terms of the system level, it covers memory used, the load average and the free disk space. So this has been a great list of things to monitor for PostgreSQL, and I definitely suggest checking out these blog posts and keeping them as a handy reference for what else you should potentially be monitoring in your PostgreSQL installation. And again, these posts are from the pgDash.io blog.
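
A couple of simple examples of the kind of queries involved (standard catalog queries, not necessarily the exact ones from the posts; the five-minute threshold is arbitrary):

```sql
-- Count currently connected clients.
SELECT count(*) FROM pg_stat_activity;

-- Track the size of the current database.
SELECT pg_size_pretty(pg_database_size(current_database()));

-- Find long-running transactions.
SELECT pid, now() - xact_start AS duration, state, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
  AND now() - xact_start > interval '5 minutes'
ORDER BY duration DESC;
```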

[00:06:43] The next piece of content is Webinar: Cloud Native BDR and PostgreSQL Follow Up. This is a webinar produced at 2ndQuadrant.com, and they're talking about their cloud-native BDR and PostgreSQL. Cloud native basically means container-enabled, I've come to understand, so it's basically using Kubernetes to manage PostgreSQL or their bi-directional replication product, BDR. It's an interesting presentation, where they talk about how a number of decades ago we were running on essentially bare metal servers, then for the last decade or so we were running on virtualized environments, and now we're moving into the container realm. Now, I definitely understand it from an application perspective; with databases, I'm still not really running them on containers, because of some of the disadvantages that potentially exist with local or fast disk access, which you definitely want with PostgreSQL. But again, that has been continuously improving, and they talk about using the operator they developed to run PostgreSQL in a Kubernetes cluster and have it working and performing well. Now, what I think is more of a game changer personally is their BDR product, because that is essentially a multi-master database, and there it makes a whole lot of sense running a workload like that on Kubernetes, because essentially one database is the same as another within the cluster. But with that product, you have to keep in mind that you have to develop your application differently. You can't do certain things in that type of environment compared to normal PostgreSQL replication, where there's always a primary and you can do things like utilize sequences to produce IDs and things of that nature. But I found it a very interesting webinar that looks toward where the future may lead. So if you're interested in checking it out, you just need to click the link here and submit some contact information, and then you get to view the webinar.

[00:08:50] The next post is Optimizing SQL: Simplifying queries with window functions. And again, another topic I like to read about is window functions. This is a pretty basic post about understanding window functions. They set up an environment with employees that have departments, insert a bunch of data, and then start querying it. The first way they show to get an average salary is using subqueries, and then they compare that to using a window function to do it. Then they go through and describe all the different window functions that are available, in terms of row_number, rank, dense_rank, lag, lead, first_value and last_value, and they show a query with a number of these different window functions in it: one to get the row number by department, one to get the rank by department, one to get the dense rank by department. Then they look at lag and lead to give you more information about the salary amounts, following up with a running total. So if you're interested in learning more about window functions, definitely check out this blog post from Highgo.ca.
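
A small sketch of the pattern being described, assuming a hypothetical employees table with name, department and salary columns (not the post's exact schema):

```sql
-- Per-department average, rank, previous value and running total,
-- all in one pass with window functions instead of subqueries.
SELECT name,
       department,
       salary,
       avg(salary) OVER (PARTITION BY department) AS dept_avg,
       rank()      OVER (PARTITION BY department ORDER BY salary DESC) AS dept_rank,
       lag(salary) OVER (PARTITION BY department ORDER BY salary DESC) AS next_higher_salary,
       sum(salary) OVER (PARTITION BY department ORDER BY salary DESC) AS running_total
FROM employees;
```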

[00:10:00] The next post is Removing PostgreSQL Bottlenecks Caused by High Traffic. This is from Percona.com, and they're basically going over a list of things you should check out and consider changing to get better performance out of your PostgreSQL installation, comparing each of the items they mention from a complexity standpoint and a potential impact standpoint. The first thing they mention is tuning your performance parameters, meaning the PostgreSQL parameters. The primary ones are effective_cache_size, shared_buffers and work_mem, and they go over how you would potentially want to set those, with some estimates listed out here. So that's definitely the first thing to check and do. Next, they talk about session connections and how to manage them, basically how many connections you're handling. Once you start getting a lot of traffic, you're probably going to want to use a connection pooler like PgBouncer, so they discuss that and the different pooling options that are available.
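
For illustration, setting these parameters might look like the following; the values are placeholders, not recommendations, since the right settings depend on your hardware and workload:

```sql
-- Commonly tuned parameters; values here are only placeholders.
ALTER SYSTEM SET shared_buffers = '8GB';         -- often ~25% of RAM; needs a restart
ALTER SYSTEM SET effective_cache_size = '24GB';  -- rough estimate of OS + PG caching
ALTER SYSTEM SET work_mem = '64MB';              -- per sort/hash operation, per connection

-- Apply the reloadable settings.
SELECT pg_reload_conf();
```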

[00:10:58] Then they talk about optimizing autovacuum. Now, interestingly, they say some of the parameters you should be changing are autovacuum_max_workers, maintenance_work_mem and autovacuum_freeze_max_age. Whereas you could do this, I generally prefer, as a first step, to make autovacuum more aggressive and, most importantly, to change the vacuum cost limit, or the autovacuum_vacuum_cost_limit, so that more work gets done by autovacuum. Because the problem with adjusting the workers is that all the workers share that same vacuum cost limit, so if you just add more workers, less work gets done by each worker process. The most important value to change, in my opinion, is the cost limit, because that allows each worker to do more work in a given unit of time. Then, if you feel you need some more workers, go ahead and increase that, but know that if you do, each worker is going to actually do less work, because the governor on it is the cost limit. So in my opinion, the number one parameter to change is the cost limit. Then they talk about advanced autovacuum, which basically looks like delving into table-based parameters, which you can definitely use to have particular tables vacuumed more often. Next they cover bloat, and their solution for that is using the pg_repack extension, but you can also minimize bloat by autovacuuming more often, or vacuuming more often, to make sure that dead tuples are cleaned out frequently. On avoiding data hotspots: basically, optimize your indexes. Make sure you don't have indexes you don't need, so you potentially get more heap-only tuple (HOT) updates. If you have very large tables with a lot of rows, consider table partitioning, optimizing for parallel querying, and potentially using denormalization techniques. Another area they mention is competing application processes: basically, if you're running your application processes on the database server, consider separating out those workloads so you have a dedicated database server and a dedicated application server or servers. They talk about replication latency, and here they're referring to synchronous replication as having a bottleneck of sorts, because a commit has to happen on two different servers, potentially crossing a network boundary, so running asynchronously gets you more performance. And, as I say, last but not least, they talk about the server environment, so basically things you can do on the hardware itself in terms of adjusting RAM, CPU and drive systems, as well as potentially disk partitioning. So if you want to learn more about their perspective on performance areas you can adjust to help manage high traffic, definitely check out this blog post.
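
As a sketch of the kind of changes being described here (the values and the table name are placeholders, not recommendations):

```sql
-- Raise the shared cost limit so each autovacuum worker gets more done per
-- unit of time (the default of -1 falls back to vacuum_cost_limit = 200).
ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 2000;
SELECT pg_reload_conf();

-- Only then consider more workers; they all split the same cost budget.
-- (This one needs a server restart to take effect.)
ALTER SYSTEM SET autovacuum_max_workers = 5;

-- Per-table tuning for a hypothetical hot table, so it's vacuumed more often.
ALTER TABLE orders SET (autovacuum_vacuum_scale_factor = 0.01);
```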

[00:13:44] The next post is PostgreSQL write-ahead logging trade-offs: bounded versus archived versus replication slots. This is talking about how, when you're setting up replication, the WAL gets to the replica, essentially. You can bound it by specifying a certain number of wal_keep_segments on the primary, so the replica has them available to pull over to keep replication going. The second method is archiving them: you archive them on the primary server and ship them somewhere the replica has access to, so they can be replayed. Now, that's not streaming, that's essentially log shipping, but it is one valid option. And lastly there are replication slots, where the primary and the replica are actually communicating with one another: the primary is aware that the replica exists and will keep all of those WAL files until it knows the replica has retrieved them, at which point it knows it can delete them. Now, they go over the different trade-offs. Basically, with bounded WAL, where you're using wal_keep_segments, the primary can go on forever and it's not going to run out of disk space, or shut down because it ran out of disk space, but it's not aware of any of the replicas. Whereas if you're using replication slots, the primary has to retain the WAL, so if there's some problem with the replica, you could start running out of disk space on the primary because the replica is not consuming it. So again, they go over some of the pros and cons of these. Essentially, wal_keep_segments allows your primary to move forward without any risk; the disadvantage is that your standbys, or your replicas, can fall behind and you have to manage that. With replication slots, you can be assured that everything is going to keep up to date, but you potentially may run out of disk on the primary, so you need to be able to mitigate that. And archived WAL is really for log shipping, not streaming. So the author's perspective is to use replication slots and very closely manage your disk space, monitor it and have alerts set up for it, as well as archiving the WAL. But again, it's up to you, and they go over the different pros and cons in this article from EnterpriseDB.com. So if you're interested, go ahead and check out this blog post.
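
A quick sketch of the replication-slot approach and the disk-space monitoring it calls for (the slot name is hypothetical):

```sql
-- Create a physical replication slot for a standby to stream from.
SELECT pg_create_physical_replication_slot('standby_1');

-- Monitor slots: an inactive slot with an old restart_lsn means WAL is piling up.
SELECT slot_name,
       active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;
```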

[00:16:07] The next post is Locks in PostgreSQL: 3. Other locks. This is from Habr.com, and I believe it's from something that may have been posted back in 2015 on postgrespro.ru, a Russian site. It covers other kinds of locks, such as deadlocks, predicate locks and advisory locks. So if you're interested in learning more about this, definitely check out this post.
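
For instance, advisory locks, one of the lock types covered, are application-defined locks keyed by a number your application chooses; a minimal illustration:

```sql
-- Session 1: take an application-defined lock on key 42.
SELECT pg_advisory_lock(42);

-- Session 2: try to take it without blocking; returns false while session 1 holds it.
SELECT pg_try_advisory_lock(42);

-- Session 1: release it.
SELECT pg_advisory_unlock(42);
```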

[00:16:35] The next piece of content is that the PostgreSQL Person of the Week is Markus Winand. So if you want to learn about Markus and his contributions to PostgreSQL, definitely check out this blog post.

[00:16:47] The next posts are Spatial Constraints with PostGIS and PostgreSQL, part two and part three, both from Crunchy Data. They're talking about, of course, database constraints in regards to PostGIS with PostgreSQL. So if you're interested in learning more about different types of constraints for spatial data, definitely check out these blog posts.
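
As a flavor of what a spatial constraint can look like (a hypothetical table; the posts go well beyond this):

```sql
-- The geometry typmod already constrains the geometry type and SRID;
-- a CHECK constraint can additionally require valid geometries.
CREATE TABLE parcels (
    id   serial PRIMARY KEY,
    geom geometry(Polygon, 4326) NOT NULL,
    CONSTRAINT parcels_geom_valid CHECK (ST_IsValid(geom))
);
```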

[00:17:11] That does it for this episode of Scaling Postgres. You can get links to all the content mentioned in the show notes. Be sure to head over to Scalingpostgres.com, where you can sign up to receive weekly notifications of each episode, or you can subscribe via YouTube or iTunes. Thanks.
