On 22nd of September, Tom Lane committed this patch:
Make EXPLAIN ANALYZE report the numbers of rows rejected by filter steps.

This provides information about the numbers of tuples that were visited but not returned by table scans, as well as the numbers of join tuples that were considered and discarded within a join plan node.

There is still some discussion going on about the best way to report counts for outer-join situations, but I think most of what's in the patch would not change if we revise that, so I'm going to go ahead and commit it as-is. Documentation changes to follow (they weren't in the submitted patch either).

Marko Tiikkaja, reviewed by Marc Cousin, somewhat revised by Tom
How does it work?
Let's see. On 9.0, when I run a simple query, I get an explain which looks like this:
$ EXPLAIN analyze SELECT * FROM pg_class WHERE relname ~ 'a';
                                                QUERY PLAN
---------------------------------------------------------------------------------------------------------
 Seq Scan ON pg_class  (cost=0.00..10.16 ROWS=202 width=190) (actual TIME=0.013..0.169 ROWS=197 loops=1)
   FILTER: (relname ~ 'a'::text)
 Total runtime: 0.200 ms
(3 ROWS)
Which is fine – it shows how many rows have been returned. But it doesn't show how many rows were actually scanned. And that is important: returning 197 rows via a seq scan from a table of 200 rows is OK, but from a table that has 1000000 rows – not so much.
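Before 9.2, to get that information you had to check it by hand – for example with a query along these lines (just a sketch; the exact counts will of course differ between installations):

-- Manual version of the information 9.2 now reports for free:
-- how many rows the filter kept vs. how many rows the scan had to visit.
SELECT sum(CASE WHEN relname ~ 'a' THEN 1 ELSE 0 END) AS returned,
       count(*)                                       AS scanned
FROM pg_class;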
In 9.2 it looks like this:
$ EXPLAIN analyze SELECT * FROM pg_class WHERE relname ~ 'a';
                                                QUERY PLAN
---------------------------------------------------------------------------------------------------------
 Seq Scan ON pg_class  (cost=0.00..11.60 ROWS=215 width=194) (actual TIME=0.027..0.388 ROWS=236 loops=1)
   FILTER: (relname ~ 'a'::text)
   ROWS Removed BY FILTER: 72
 Total runtime: 0.486 ms
(4 ROWS)
Which means that the Seq Scan worked on 236 + 72 = 308 rows, and 72 of them were removed by the filter.
Of course, in the case of whole-table scans it's nearly irrelevant, since we can always check how large the table is. The thing is – this also works in a bit more complex situations:
$ EXPLAIN analyze SELECT * FROM test WHERE i < 100 AND i::text ~ '9';
                                                 QUERY PLAN
-----------------------------------------------------------------------------------------------------------
 INDEX Scan USING q ON test  (cost=0.00..10.54 ROWS=91 width=4) (actual TIME=0.057..0.147 ROWS=19 loops=1)
   INDEX Cond: (i < 100)
   FILTER: ((i)::text ~ '9'::text)
   ROWS Removed BY FILTER: 80
 Total runtime: 0.171 ms
(5 ROWS)
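(The test table itself isn't shown here; the names test and q are taken from the plan above, the rest is just a guess, but a setup roughly like this produces a very similar plan:)

-- Guessed setup, enough to reproduce an "INDEX Scan ... ROWS Removed BY FILTER" plan:
CREATE TABLE test (i int4);
INSERT INTO test SELECT generate_series(1, 100000);
CREATE INDEX q ON test (i);
ANALYZE test;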
The one thing I didn't understand was – what will be shown in the case of multiple runs of a given node. So:
$ EXPLAIN analyze SELECT * FROM generate_series(1,4) AS q, test WHERE i < 100 AND i::text ~ q::text AND q< 3;
                                                         QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=0.00..1829.17 ROWS=152 width=8) (actual TIME=0.031..0.441 ROWS=38 loops=1)
   JOIN FILTER: ((test.i)::text ~ (q.q)::text)
   ROWS Removed BY JOIN FILTER: 160
   ->  INDEX Scan USING q ON test  (cost=0.00..9.85 ROWS=91 width=4) (actual TIME=0.009..0.053 ROWS=99 loops=1)
         INDEX Cond: (i < 100)
   ->  FUNCTION Scan ON generate_series q  (cost=0.00..12.50 ROWS=333 width=4) (actual TIME=0.000..0.001 ROWS=2 loops=99)
         FILTER: (q < 3)
         ROWS Removed BY FILTER: 2
 Total runtime: 0.478 ms
(9 ROWS)
As we can see, the generate_series was run 99 times, and each time the filter removed 2 rows, so in total we had 198 rows removed from generate_series' output – the number shown is the per-loop average, just like the actual rows count.
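A quick plain-SQL sanity check of that per-loop number (nothing here is specific to the patch – it's just the filter condition inverted):

-- Per loop, generate_series(1,4) produces 4 rows; the filter q < 3
-- keeps 2 of them and removes 2 – the "2" shown in the plan.
SELECT count(*) AS removed_per_loop
FROM generate_series(1,4) AS q
WHERE NOT (q < 3);
-- With loops=99, that is 2 * 99 = 198 rows removed in total.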
Sweet. This will definitely help when diagnosing some explains.
Also – there was a (short, but still) discussion about whether the change would break explain.depesz.com – well, clearly it doesn't :).