I did my part :)

I am a Bitcoin enthusiast. This can be seen on my blog (above each post, on the post view page, there is a plea for help), or on explain.depesz.com – my best-known creation.

Aside from that, I try to talk with various people about it, explain what I can (and learn what I don't), and generally provide a positive (but hopefully not overly intrusive) channel of information.

The thing, though, was that so far I hadn't actually used the currency much. I once bought a VPN as a test, and that's about it.

But recently I got 0.07 BTC in donations for my depesz.com blog posts (thanks to whoever contributed), and since I learned that there is a burger bar relatively close to me that accepts payments in bitcoins, I decided to spend some of the donation money 🙂

Long story short – today I went to White Burger with my S.O., and we got ourselves burgers.

The place is relatively nice. It's in Warsaw/Ursynow. At the time we were there (around 1pm) it was empty. Rather comfortable, though not cozy. But it's a bar, not a secluded restaurant, so that's OK.

Prices – rather good. I've been to a couple of other burger bars in Warsaw, and this one was below the average (if memory serves right).

Food – the taste was great. I think these are in fact the best burgers I've had in Warsaw. I like them much more than the famous “Burger Bar” on Olkuska. There were two small(ish) issues, though. One – two (out of three) burgers had somewhat cold buns. Not a big problem, but still easily fixable. The other problem was that the kitchen mixed up the sauces in the burgers. My wife got her burger, with bacon, but with my sauce – Tabasco with some additions. This could have been much more problematic, but luckily she managed to eat it. The worst part, though easily avoidable, was the coffee – if the owner of the place reads this, please check/fix the coffee machine.

Now for the interesting part: the Bitcoin payment. It worked. The cashier (a very nice young lady) didn't know what to do when the transaction showed up as “unconfirmed” (it was waiting for 6 blockchain confirmations), but since we were eating in the bar, it was not a problem. She seemed genuinely interested in what was going on and how it works, and, despite not being sure what to do about the unconfirmed transaction, provided us with the food without any delay. Thanks a lot.

I think this is definitely the moment to start a company that would, for a small fee, take on some of the risk involved in BTC payments by vouching for unconfirmed transactions, so merchants could treat them as settled faster.

TL;DR: Bought burgers with bitcoins at White Burger in Warsaw, Poland. The burgers were great (small issues aside). The BTC payment took too long to confirm, but that can be improved.

Waiting for 9.4 – Support ordered-set (WITHIN GROUP) aggregates.

On 23rd of December, Tom Lane committed patch:

Support ordered-set (WITHIN GROUP) aggregates.
 
This patch introduces generic support for ordered-set and hypothetical-set
aggregate functions, as well as implementations of the instances defined in
SQL:2008 (percentile_cont(), percentile_disc(), rank(), dense_rank(),
percent_rank(), cume_dist()).  We also added mode() though it is not in the
spec, as well as versions of percentile_cont() and percentile_disc() that
can compute multiple percentile values in one pass over the data.
 
Unlike the original submission, this patch puts full control of the sorting
process in the hands of the aggregate's support functions.  To allow the
support functions to find out how they're supposed to sort, a new API
function AggGetAggref() is added to nodeAgg.c.  This allows retrieval of
the aggregate call's Aggref node, which may have other uses beyond the
immediate need.  There is also support for ordered-set aggregates to
install cleanup callback functions, so that they can be sure that
infrastructure such as tuplesort objects gets cleaned up.
 
In passing, make some fixes in the recently-added support for variadic
aggregates, and make some editorial adjustments in the recent FILTER
additions for aggregates.  Also, simplify use of IsBinaryCoercible() by
allowing it to succeed whenever the target type is ANY or ANYELEMENT.
It was inconsistent that it dealt with other polymorphic target types
but not these.
 
Atri Sharma and Andrew Gierth; reviewed by Pavel Stehule and Vik Fearing,
and rather heavily editorialized upon by Tom Lane
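
To get a rough feel for the new syntax – a minimal sketch, assuming a hypothetical players table with a numeric score column (my example, not from the commit):

SELECT
    percentile_cont( 0.5 ) WITHIN GROUP ( ORDER BY score ) AS median_score,
    mode() WITHIN GROUP ( ORDER BY score ) AS most_common_score
FROM players;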

Continue reading Waiting for 9.4 – Support ordered-set (WITHIN GROUP) aggregates.

Waiting for 9.4 – pg_prewarm, a contrib module for prewarming relation data.

On 20th of December, Robert Haas committed patch:

pg_prewarm, a contrib module for prewarming relation data.
 
Patch by me.  Review by Álvaro Herrera, Amit Kapila, Jeff Janes,
Gurjeet Singh, and others.
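
In case you wonder what using it looks like – a minimal sketch, assuming a hypothetical big_table (my example, not from the commit):

CREATE EXTENSION pg_prewarm;
SELECT pg_prewarm( 'big_table' );  -- loads big_table into shared_buffers (the default 'buffer' mode)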

Continue reading Waiting for 9.4 – pg_prewarm, a contrib module for prewarming relation data.

Waiting for 9.4 – Add ALTER SYSTEM command to edit the server configuration file.

On 18th of December, Tatsuo Ishii committed patch:

Add ALTER SYSTEM command to edit the server configuration file.
 
Patch contributed by Amit Kapila. Reviewed by Hari Babu, Masao Fujii,
Boszormenyi Zoltan, Andres Freund, Greg Smith and others.

On the next day, Fujii Masao committed patch:

Add tab completion for ALTER SYSTEM SET in psql.
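
To illustrate (my example, not from the commits) – the new command writes the setting to postgresql.auto.conf, and it takes effect after a reload:

ALTER SYSTEM SET work_mem = '64MB';
SELECT pg_reload_conf();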

Continue reading Waiting for 9.4 – Add ALTER SYSTEM command to edit the server configuration file.

Waiting for 9.4 – Allow time delayed standbys and recovery

On 12th of December, Simon Riggs committed patch:

Allow time delayed standbys and recovery
 
Set min_recovery_apply_delay to force a delay in recovery apply for commit and
restore point WAL records. Other records are replayed immediately. Delay is
measured between WAL record time and local standby time.
 
Robert Haas, Fabrízio de Royes Mello and Simon Riggs
Detailed review by Mitsumasa Kondo
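
A minimal sketch of the standby side, based on the commit message (my example; note that the delay applies only to commit and restore point records):

# recovery.conf on the standby:
standby_mode = 'on'
min_recovery_apply_delay = '5min'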

Continue reading Waiting for 9.4 – Allow time delayed standbys and recovery

OmniPITR 1.3.1

Right after releasing 1.3.0, I realized that I had forgotten about one thing.

If you're using the ext3 file system (and possibly others, I'm not sure), removal of a large file can cause problems due to heavy IO traffic.

We hit this problem earlier at one of our client sites, and devised a way to remove large files by truncating them, bit by bit, until they are small enough to be removed in one go. I wrote about it earlier, of course.

Unfortunately, I forgot about this when releasing 1.3.0, but as soon as I tried to deploy it at the client's site, I noticed the missing functionality.

So, today I released 1.3.1, which adds two options to omnipitr-backup-cleanup:

  • --truncate
  • --sleep

If --truncate is specified, and is more than 0, it will cause omnipitr-backup-slave to remove large files (larger than the --truncate value) in steps.

In simplified Perl (a sketch of the logic; variable names are mine):

use Time::HiRes qw( usleep );

if ( $truncate ) {
    my $file_size = -s $filename;
    while ( $file_size > $truncate ) {
        $file_size -= $truncate;
        # shrink the file by another --truncate bytes
        truncate( $filename, $file_size ) or die "Cannot truncate $filename: $!";
        # --sleep is given in milliseconds; usleep() wants microseconds
        usleep( 1000 * $sleep );
    }
}
unlink( $filename ) or die "Cannot unlink $filename: $!";

So, for example, specifying --truncate=1000000 will remove the file by first truncating it in ~1MB steps.

The --sleep parameter is used to delay removal of the next part of the file (it's used only in the truncating loop, so it has no meaning when the loop is not entered). Its value is in milliseconds, and defaults to 500 (0.5 second).

Hope you'll find it useful.

Waiting for 9.4 – Add new wal_level, logical, sufficient for logical decoding.

On 11th of December, Robert Haas committed patch:

Add new wal_level, logical, sufficient for logical decoding.
 
When wal_level=logical, we'll log columns from the old tuple as
configured by the REPLICA IDENTITY facility added in commit
07cacba983ef79be4a84fcd0e0ca3b5fcb85dd65.  This makes it possible for
a properly-configured logical replication solution to correctly
follow table updates even if they change the chosen key columns,
or, with REPLICA IDENTITY FULL, even if the table has no key at
all.  Note that updates which do not modify the replica identity
column won't log anything extra, making the choice of a good key
(i.e. one that will rarely be changed) important to performance
when wal_level=logical is configured.
 
Each insert, update, or delete to a catalog table will also log
the CMIN and/or CMAX values stamped by the current transaction.
This is necessary because logical decoding will require access to
historical snapshots of the catalog in order to decode some data
types, and the CMIN/CMAX values that we may need in order to judge
row visibility may have been overwritten by the time we need them.
 
Andres Freund, reviewed in various versions by myself, Heikki
Linnakangas, KONDO Mitsumasa, and many others.
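
To get a rough feel for the knobs involved – a minimal sketch, assuming a hypothetical accounts table (my example, not from the commit):

-- in postgresql.conf: wal_level = logical
ALTER TABLE accounts REPLICA IDENTITY FULL;  -- log complete old rows, even if the table has no key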

Continue reading Waiting for 9.4 – Add new wal_level, logical, sufficient for logical decoding.