On 3rd of April 2019, Alvaro Herrera committed patch:
Log all statements from a sample of transactions

This is useful to obtain a view of the different transaction types in an application, regardless of the durations of the statements each runs.

Author: Adrien Nayrat
Commit message makes it pretty clear, so let's see this in action.
I'll need a sample table:
=$ create table test (
    id serial primary key,
    int_1 int4 not null default 0,
    int_2 int4 not null default 0
);
CREATE TABLE
Now, let's make some test transactions.
Let's assume I want transactions that each insert a row and then update it.
Something like:
=$ BEGIN;
=$ insert into test (int_1, int_2) values (random() * 100000, random() * 100000);
=$ update test set int_1 = int_1 - 50, int_2 = int_2 - 100 where id = currval('test_id_seq');
=$ COMMIT;
Nothing really fancy. All of these queries will be fast. Now, let's make sure we run quite a lot of them:
$ (
    echo "BEGIN;"
    echo "insert into test (int_1, int_2) values (random() * 100000, random() * 100000);"
    echo "update test set int_1 = int_1 - 50, int_2 = int_2 - 100 where id = currval('test_id_seq');"
    echo "COMMIT;"
) > single.sql
$ for i in {1..10000}; do cat single.sql ; done > 10k.sql
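As a side note (not from the original post), you could also skip the concatenation step and let pgbench replay single.sql the required number of times; something along these lines, where dbname is a placeholder for whatever database holds the test table:

$ pgbench -n -f single.sql -t 10000 dbname

For the rest of this post, though, let's stick with the generated 10k.sql file.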
Now, running this via:
$ psql -f 10k.sql -qAtX
generated a logfile of ~ 5.5 MB (with log_min_duration_statement = 0).
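If you want to reproduce that measurement, one way (my addition, assuming the logging collector is enabled) is to ask the server for the current logfile and check its size; the returned path is relative to the data directory unless log_directory is absolute:

$ psql -qAtX -c "select pg_current_logfile()"
$ du -h "$PGDATA/<logfile-from-previous-command>"

The <logfile-from-previous-command> part is, of course, a placeholder for whatever the first command printed.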
So, let's now change the settings:
=$ alter system set log_min_duration_statement = 1000;
ALTER SYSTEM
=$ alter system set log_transaction_sample_rate = 0.001;
ALTER SYSTEM
=$ select pg_reload_conf();
 pg_reload_conf
────────────────
 t
(1 row)
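Just to be safe (this check is mine, not part of the original post), you can confirm that the reload took effect in your session before rerunning the test:

=$ show log_min_duration_statement;
=$ show log_transaction_sample_rate;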
And now, rerunning the 10k.sql file generated only ~ 6.5 kB of log, with 11 transactions in it, so pretty close to the expected 0.1%.
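If you'd like to verify the count yourself (again my addition, assuming the default stderr log format, where every logged statement shows up as a "statement: …" line), counting logged BEGINs gives the number of sampled transactions:

$ grep -c "statement: BEGIN" "$PGDATA/<current-logfile>"

Here <current-logfile> is a placeholder for whatever pg_current_logfile() reported.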
This is great. Thanks, Adrien and Alvaro.
Glad to see a Postgres user happy 🙂
FYI, I made an extension which provides a similar feature for previous Postgres versions: https://github.com/anayrat/pg_sampletolog