Today there were some changes on explain.depesz.com – a bugfix and a functionality improvement.
Birthday cake again
Last year I got a Jameson-themed cake. This year, I was surprised by a cake inspired by my dream car, the Mitsubishi EVO:
It was fully edible (including the 100%-chocolate tires and spoiler), and very good.
To the potential nay-sayers: I know it's not really the shape of an Evo. But it had the “three diamonds” on both the front and back, and clearly visible “Evo” signs, so it definitely counts 🙂
Tips N’ Tricks – Running your queries from within Vim
I use VIM. For more or less everything. Including writing blogposts.
Usually, when I was working on a blogpost about PostgreSQL, I would write an SQL file, switch to another console with psql running, run \i, get the output, and then copy/paste the results into the blogpost in another Vim.
It worked, but wasn't really nice.
Today, I realized that I can do something much smarter.
I can just type in Vim, and then pass the data to psql, using a simple “visual mapping”:
:vmap R :!psql -e<enter>
How does it work? When I'm in Vim, and I select (visual) some text, I press shift-R, and the selected blob is sent to psql.
Of course – psql has to know which database to connect to, as which user, and so on, but this is handled by setting PG* environment variables before running Vim.
Thanks to the “-e” option, I get all the queries printed back to me, so I don't lose them from my text file.
It works just great.
While I didn't show it in the ascii cast, I can of course also run multiple queries this way, use transactions, and everything else. The only problem might be that every such run is executed in a new psql, which means that you don't have a single session.
But that doesn't seem to be a big problem (at least for me).
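To illustrate the “multiple queries in one run” part: a block like this (with a made-up table name, just for demonstration) can be selected as a whole and sent to a single psql invocation with one press of shift-R:

BEGIN;
-- everything selected here runs within one psql call, so the transaction works:
CREATE TABLE vim_demo (id serial PRIMARY KEY, payload text);
INSERT INTO vim_demo (payload) VALUES ('foo'), ('bar');
SELECT count(*) FROM vim_demo;
ROLLBACK;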
It would be nice to have Vim as a full-blown SQL client, and I think it's perfectly possible, but I just don't care enough to spend time writing the necessary scripts.
Getting top-N rows per group
Yesterday on irc someone asked:
Hi, how do I get top 5 values from a column group by another column??
From further discussion, I learned that:
total rows in table is 2 million. It'll have unique words of less than 1 million.. (approx count)
I didn't have time yesterday, but decided to write a solution, or two, to the problem.
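One straightforward approach – sketched here with made-up table and column names (words(word, cnt)), and not necessarily one of the solutions from the post – is to rank the rows inside every group with a window function and keep only the first five:

-- For each word, number its rows from highest cnt down, then keep the top 5.
SELECT word, cnt
FROM (
    SELECT word,
           cnt,
           row_number() OVER (PARTITION BY word ORDER BY cnt DESC) AS rn
    FROM words
) AS ranked
WHERE rn <= 5;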
Concurrent REINDEX of all indexes in database
The recent release of new PostgreSQL versions suggests reindexing all of your indexes. But this will take a while, and since we don't actually have a ‘REINDEX CONCURRENTLY’ command – it's a bit tricky.
So, since I will be doing this on several databases, I decided to write a script that will handle the work for me.
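The general idea that such a script has to automate for every index is roughly the following (index and table names are made up, and indexes that back constraints need extra handling that this sketch ignores):

-- Build a duplicate index without blocking writes, then swap it in place of the old one.
CREATE INDEX CONCURRENTLY users_email_idx_new ON users (email);
DROP INDEX users_email_idx;
ALTER INDEX users_email_idx_new RENAME TO users_email_idx;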
Continue reading Concurrent REINDEX of all indexes in database
Why do I like tmux?
Filling the gaps with window functions
A couple of days ago I had a problem that I couldn't solve after ~2 hours, and decided to ask on IRC. Almost immediately after asking, I figured out the solution, but David asked me to write about it, even though it's now (for me) completely obvious.
The problem was like this:
I had two tables with a very simple structure: event_when timestamptz, event_count int4. I wanted to show them as a single recordset with columns event_when, event_count_a, event_count_b, but the problem was that the event_when values usually didn't match. Here is an example:
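Two made-up tables matching that structure, plus a sketch of one way to combine them and fill the gaps (my illustration only – carrying the last known count forward is an assumption, not necessarily what the post ends up doing):

CREATE TABLE events_a (event_when timestamptz, event_count int4);
CREATE TABLE events_b (event_when timestamptz, event_count int4);

-- Full outer join on event_when, then carry the last known count forward.
-- count(...) OVER (ORDER BY event_when) grows only on non-NULL values, so it
-- assigns every "gap" row to the group of the last row that had a value.
WITH joined AS (
    SELECT event_when,
           a.event_count AS event_count_a,
           b.event_count AS event_count_b
    FROM events_a a
    FULL OUTER JOIN events_b b USING (event_when)
), grouped AS (
    SELECT event_when,
           event_count_a,
           event_count_b,
           count(event_count_a) OVER (ORDER BY event_when) AS grp_a,
           count(event_count_b) OVER (ORDER BY event_when) AS grp_b
    FROM joined
)
SELECT event_when,
       max(event_count_a) OVER (PARTITION BY grp_a) AS event_count_a,
       max(event_count_b) OVER (PARTITION BY grp_b) AS event_count_b
FROM grouped
ORDER BY event_when;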
Waiting for 9.3 – Implement SQL-standard LATERAL subqueries.
On 7th of August, Tom Lane committed patch:
Implement SQL-standard LATERAL subqueries.

This patch implements the standard syntax of LATERAL attached to a sub-SELECT in FROM, and also allows LATERAL attached to a function in FROM, since set-returning function calls are expected to be one of the principal use-cases.

The main change here is a rewrite of the mechanism for keeping track of which relations are visible for column references while the FROM clause is being scanned. The parser "namespace" lists are no longer lists of bare RTEs, but are lists of ParseNamespaceItem structs, which carry an RTE pointer as well as some visibility-controlling flags. Aside from supporting LATERAL correctly, this lets us get rid of the ancient hacks that required rechecking subqueries and JOIN/ON and function-in-FROM expressions for invalid references after they were initially parsed. Invalid column references are now always correctly detected on sight.

In passing, remove assorted parser error checks that are now dead code by virtue of our having gotten rid of add_missing_from, as well as some comments that are obsolete for the same reason. (It was mainly add_missing_from that caused so much fudging here in the first place.)

The planner support for this feature is very minimal, and will be improved in future patches. It works well enough for testing purposes, though.

catversion bump forced due to new field in RangeTblEntry.
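As a quick, made-up illustration of what the new syntax enables (this example is mine, not from the commit or the post): a sub-SELECT in FROM can now refer to columns of relations that appear earlier in the same FROM clause, which makes “top N per row” lookups natural:

-- Hypothetical tables customers(id, name) and orders(customer_id, ordered_at, total):
-- for every customer, fetch that customer's three most recent orders.
SELECT c.name, o.ordered_at, o.total
FROM customers c,
     LATERAL (
         SELECT ordered_at, total
         FROM orders
         WHERE orders.customer_id = c.id
         ORDER BY ordered_at DESC
         LIMIT 3
     ) AS o;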
Continue reading Waiting for 9.3 – Implement SQL-standard LATERAL subqueries.
“PostgreSQL: Up and Running” by Regina Obe and Leo Hsu
Around a week ago, I got mail saying that the authors wanted me to have their book. For free. That's relatively important because getting things for free means you should pay for them in some other way (that's my position; it was not suggested in any way in the mails). Plus – I generally never read technical books (really – I just don't), so there was no chance I would have gotten it myself if it hadn't been gifted.
After some mails, I got the mobi version of PostgreSQL: Up and Running.
Continue reading “PostgreSQL: Up and Running” by Regina Obe and Leo Hsu
Pg::SQL::Parser
Some time ago I was looking (warning: post in Polish) for someone to teach me proper parsing.
One of the really great Polish Perl programmers – Dozzie – reached out and helped me. By the way – thanks a lot, Dozzie.
Based on what he taught me, I started writing a module for parsing SQL queries. By that I mean proper parsing, with a grammar (using Parse::Eyapp), and not a set of regular expressions.
My parser is not ready. To put it lightly. Very lightly.
For now, it just knows how to parse the simplest queries like:
- select 1;
- select 'a' as b;
I am working on it very slowly, so don't expect any usable version in the foreseeable future. I will get there, eventually, but it is a project that I work on in my free time, after I finish everything else that I could be working on at a given moment.
This post is intended to announce that I'm working on it (so I will have a kind of obligation to finish it). And, if anyone is interested – I more than welcome all contributors/reviewers, and perhaps even critics 🙂
Final note – if you review the code and want to comment on the ugly list of regexps in the Lexer – I know. It will eventually be replaced by one regular expression, but since it will be a regular expression built by Regexp::Optimizer, it will not really be readable (though it will be faster than the current approach).