On 21st of October, Robert Haas committed patch:
postgres_fdw: Push down aggregates to remote servers.

Now that the upper planner uses paths, and now that we have proper hooks to inject paths into the upper planning process, it's possible for foreign data wrappers to arrange to push aggregates to the remote side instead of fetching all of the rows and aggregating them locally. This figures to be a massive win for performance, so teach postgres_fdw to do it.

Jeevan Chalke and Ashutosh Bapat. Reviewed by Ashutosh Bapat with additional testing by Prabhat Sahu. Various mostly cosmetic changes by me.
The description seems simple enough, but let's see how it works in Pg 9.6 and Pg 10.
First the old version (9.6):
$ CREATE TABLE sample_data AS
    SELECT i AS id, CAST(random() * 10 AS int4) AS group_id
    FROM generate_series(1,100000) i;
SELECT 100000

$ CREATE extension postgres_fdw;
CREATE EXTENSION

$ CREATE server origin FOREIGN DATA wrapper postgres_fdw options( dbname 'depesz' );
CREATE SERVER

$ CREATE USER mapping FOR depesz server origin options ( USER 'depesz' );
CREATE USER MAPPING

$ CREATE FOREIGN TABLE data_from_origin (id int4, group_id int4)
    server origin options ( TABLE_NAME 'sample_data' );
CREATE FOREIGN TABLE
Sanity check:
$ SELECT group_id, COUNT(*) FROM sample_data GROUP BY group_id ORDER BY group_id;
 group_id | COUNT
----------+-------
        0 |  4935
        1 | 10131
        2 | 10083
        3 | 10036
        4 |  9828
        5 | 10073
        6 |  9912
        7 | 10027
        8 | 10130
        9 |  9816
       10 |  5029
(11 ROWS)

$ SELECT group_id, COUNT(*) FROM data_from_origin GROUP BY group_id ORDER BY group_id;
 group_id | COUNT
----------+-------
        0 |  4935
        1 | 10131
        2 | 10083
        3 | 10036
        4 |  9828
        5 | 10073
        6 |  9912
        7 | 10027
        8 | 10130
        9 |  9816
       10 |  5029
(11 ROWS)
OK. The data sets are the same. Let's see explain analyze for the second query:
$ EXPLAIN (analyze ON, verbose ON) SELECT group_id, COUNT(*) FROM data_from_origin GROUP BY group_id ORDER BY group_id;
                                                                 QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------
 GroupAggregate  (cost=100.00..222.22 ROWS=200 width=12) (actual TIME=55.823..119.840 ROWS=11 loops=1)
   Output: group_id, COUNT(*)
   GROUP KEY: data_from_origin.group_id
   ->  FOREIGN Scan ON public.data_from_origin  (cost=100.00..205.60 ROWS=2925 width=4) (actual TIME=51.703..110.939 ROWS=100000 loops=1)
         Output: id, group_id
         Remote SQL: SELECT group_id FROM public.sample_data ORDER BY group_id ASC NULLS LAST
 Planning TIME: 0.056 ms
 Execution TIME: 120.534 ms
(8 ROWS)
Now, I did all of this also on Pg 10, but there the final explain analyze looks different:
$ EXPLAIN (analyze ON, verbose ON) SELECT group_id, COUNT(*) FROM data_from_origin GROUP BY group_id ORDER BY group_id;
                                                 QUERY PLAN
----------------------------------------------------------------------------------------------------------
 Sort  (cost=167.52..168.02 ROWS=200 width=12) (actual TIME=21.606..21.607 ROWS=11 loops=1)
   Output: group_id, (COUNT(*))
   Sort KEY: data_from_origin.group_id
   Sort Method: quicksort  Memory: 25kB
   ->  FOREIGN Scan  (cost=114.62..159.88 ROWS=200 width=12) (actual TIME=21.596..21.597 ROWS=11 loops=1)
         Output: group_id, (COUNT(*))
         Relations: Aggregate ON (public.data_from_origin)
         Remote SQL: SELECT group_id, COUNT(*) FROM public.sample_data GROUP BY group_id
 Planning TIME: 0.073 ms
 Execution TIME: 21.900 ms
(10 ROWS)
Please note the “Foreign Scan” node in the first explain – it shows actual…rows=100000, but in the second explain it shows actual…rows=11. Given that the data has to be transferred over the network (in my case over a unix socket on the same machine), it's a clear win, which can also be seen in the execution times for the two queries – 120ms vs. 21ms. Pretty cool.
I would assume that foreign data wrappers for other servers will have to be modified to support handling aggregates, but it is definitely cool anyway.
I wonder how you did “explain analyze for the second query” on Pg 10. Where is Pg 10? The latest version is 9.6.
@Kai:
the same way I do all my work for the “Waiting for …” blog post series, since 2008 – I test what devs are committing to the PostgreSQL source repository, which is what will become Pg version 10.
Hello, do you know anything about the timeline for implementing ‘LIMIT’ statement push down?
@Hi:
no.
That’s sad, because this feature looks not too complex to implement, but very powerful. Can you advise how to push for it?
@Hi:
write to the developers of PostgreSQL? The pgsql-hackers mailing list if you want to implement it, and pgsql-general if you want to talk about it.
Is there a way to make postgres_fdw send the ORDER BY to the remote server?
I have a table that grows by ~5 million rows/day; this table has an index on a timestamp column that tracks when each row was added.
I can easily do “select * from table order by timestamp_column desc limit 1000”
But on the foreign table, postgres_fdw executes this remote query: “select * from table” and does the sort locally.
I’m even using use_remote_estimate ‘true’, fetch_size ‘10000’ to see if it helps :'(
@Andrei:
well, sure – you’d have to add support for this operation to PostgreSQL. As far as I know, there is no support for this yet.
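For what it's worth, one possible workaround – a sketch only, with hypothetical object names (big_table, recent_rows), assuming you can create objects on the remote server – is to define a remote view that bakes in the ORDER BY and LIMIT, and point a foreign table at that view instead of at the table:

```sql
-- On the REMOTE server: a view embedding the sort and limit.
-- "big_table" and "recent_rows" are made-up names for illustration.
CREATE VIEW recent_rows AS
    SELECT * FROM big_table ORDER BY timestamp_column DESC LIMIT 1000;

-- On the LOCAL server: a foreign table pointing at the view
-- (reusing the "origin" server from the examples in the post).
CREATE FOREIGN TABLE recent_rows_from_origin (
    id int4,
    timestamp_column timestamptz
) SERVER origin OPTIONS ( table_name 'recent_rows' );

-- Selecting from recent_rows_from_origin now transfers only the
-- (at most) 1000 rows the remote view produces.
```

The obvious downside is that the limit is frozen into the remote view, so this only helps for fixed, known query shapes.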
Does this feature relate to the use_remote_estimate parameter?
When the aggregates are count()/sum()/avg(), they are pushed down to the remote side no matter whether use_remote_estimate is set to true or false.
But when the aggregates are min()/max():
when use_remote_estimate is set to true, they are pushed down to the remote side;
when use_remote_estimate is set to false, they are not pushed down to the remote side.
Why?
@puq:
this parameter is for estimates, not for actual work.
Check: https://www.postgresql.org/docs/current/static/postgres-fdw.html#idm46428634363056
The feature I described in this blogpost is about actually running the aggregate on the remote side.
@depesz
> this parameter is for estimates, not for actual work.
Since the parameter is not for actual work, why does the setting of this parameter affect the choice of pushing down to the remote side?
@puq:
I would guess that it might, but you will probably get better help from the official support channels for PostgreSQL.
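For anyone who wants to reproduce what puq describes, the behavior can be checked directly: toggle use_remote_estimate on the foreign table and compare the “Remote SQL” line in the plans. This sketch reuses the data_from_origin table from the examples in the post:

```sql
-- Turn remote estimates on for the foreign table:
ALTER FOREIGN TABLE data_from_origin
    OPTIONS ( ADD use_remote_estimate 'true' );

-- If the aggregate is pushed down, the "Remote SQL" line of the plan
-- will contain min(id); otherwise it will fetch raw rows.
EXPLAIN (VERBOSE) SELECT min(id) FROM data_from_origin;

-- Flip the option back and compare the two plans:
ALTER FOREIGN TABLE data_from_origin
    OPTIONS ( SET use_remote_estimate 'false' );
EXPLAIN (VERBOSE) SELECT min(id) FROM data_from_origin;
```

Whether the plans actually differ may depend on the server version and on the cost estimates involved, so treat this as a diagnostic recipe, not a guaranteed outcome.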