More than once I've seen a situation where there is a table with a serial primary key, whose rows also contain some kind of creation timestamp, which is usually monotonic, or close to it.
Typical examples are comments or posts in forums – each gets its ID, but also a creation timestamp. And it is usually the case that rows with higher ids were added later than rows with lower ids.
So, let's assume you have such a table, and somebody asks you for a report on data from the last month. How do you do it?
Let's create such a table:
CREATE TABLE posts AS
WITH RECURSIVE whatever AS (
    SELECT 1000000 AS i, now() AS creation
    UNION ALL
    SELECT i - 1, creation - '5 minutes'::INTERVAL * random()
    FROM whatever
    WHERE i > 1
)
SELECT * FROM whatever;
ALTER TABLE posts ADD PRIMARY KEY (i);
Let's see some example rows:
# SELECT * FROM posts ORDER BY i ASC LIMIT 5;
 i |           creation
---+-------------------------------
 1 | 2005-07-14 22:03:52.523848+02
 2 | 2005-07-14 22:04:09.939084+02
 3 | 2005-07-14 22:07:00.015705+02
 4 | 2005-07-14 22:09:22.22041+02
 5 | 2005-07-14 22:13:20.954915+02
(5 ROWS)

# SELECT * FROM posts ORDER BY i DESC LIMIT 5;
    i    |           creation
---------+-------------------------------
 1000000 | 2010-04-15 11:16:20.682417+02
  999999 | 2010-04-15 11:16:06.188803+02
  999998 | 2010-04-15 11:11:09.265581+02
  999997 | 2010-04-15 11:10:48.04405+02
  999996 | 2010-04-15 11:09:11.24254+02
(5 ROWS)
Now, if I ask PostgreSQL for the count of posts from the last month, it will take a long time:
EXPLAIN analyze SELECT COUNT(*) FROM posts WHERE creation > now() - '1 month'::INTERVAL;
                                                     QUERY PLAN
--------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=23235.33..23235.35 ROWS=1 width=0) (actual TIME=749.215..749.216 ROWS=1 loops=1)
   ->  Seq Scan ON posts  (cost=0.00..22402.00 ROWS=333333 width=0) (actual TIME=0.011..746.232 ROWS=17905 loops=1)
         FILTER: (creation > (now() - '1 mon'::INTERVAL))
 Total runtime: 749.240 ms
(4 ROWS)
Of course, 750 ms is perfectly reasonable for a seq scan over 1M rows.
This query can be optimized in many ways – trigger-based counts being the fastest – but one could simply create an index on the creation column and get much better times:
CREATE INDEX wrong ON posts (creation);
EXPLAIN analyze SELECT COUNT(*) FROM posts WHERE creation > now() - '1 month'::INTERVAL;
                                                           QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=17812.36..17812.37 ROWS=1 width=0) (actual TIME=8.225..8.225 ROWS=1 loops=1)
   ->  Bitmap Heap Scan ON posts  (cost=6243.70..16979.02 ROWS=333333 width=0) (actual TIME=1.587..5.237 ROWS=17903 loops=1)
         Recheck Cond: (creation > (now() - '1 mon'::INTERVAL))
         ->  Bitmap INDEX Scan ON wrong  (cost=0.00..6160.36 ROWS=333333 width=0) (actual TIME=1.567..1.567 ROWS=17903 loops=1)
               INDEX Cond: (creation > (now() - '1 mon'::INTERVAL))
 Total runtime: 8.252 ms
Not bad. But what can we do if adding a new index is not an option? Perhaps you're on an old Pg that can't create indexes concurrently, or you don't want another index because it slows down writes.
Well, we can (ab)use the fact that both i and creation are monotonic.
That is – if we find the maximal i whose row is older than 1 month, we know we can take all rows with i greater than this calculated i.
How to do it?
SELECT MAX(i) FROM posts WHERE creation < now() - '1 month'::INTERVAL;
This simple query returns the maximal i that I wrote about. And it's pretty fast:
                                                                     QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------
 Result  (cost=0.15..0.16 rows=1 width=0) (actual time=14.819..14.819 rows=1 loops=1)
   InitPlan 1 (returns $0)
     ->  Limit  (cost=0.00..0.15 rows=1 width=4) (actual time=14.817..14.817 rows=1 loops=1)
           ->  Index Scan Backward using posts_pkey on posts  (cost=0.00..50900.34 rows=333333 width=4) (actual time=14.815..14.815 rows=1 loops=1)
                 Index Cond: (i IS NOT NULL)
                 Filter: (creation < (now() - '1 mon'::interval))
 Total runtime: 14.838 ms
(7 rows)
Now, I can use this in a bit more complex query:
SELECT COUNT(*)
FROM posts
WHERE i > (
    SELECT MAX(i) FROM posts WHERE creation < now() - '1 month'::INTERVAL
);
Which is also reasonably fast:
                                                                         QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=15417.83..15417.84 rows=1 width=0) (actual time=22.282..22.282 rows=1 loops=1)
   InitPlan 2 (returns $1)
     ->  Result  (cost=0.15..0.16 rows=1 width=0) (actual time=14.179..14.179 rows=1 loops=1)
           InitPlan 1 (returns $0)
             ->  Limit  (cost=0.00..0.15 rows=1 width=4) (actual time=14.177..14.177 rows=1 loops=1)
                   ->  Index Scan Backward using posts_pkey on posts  (cost=0.00..50900.34 rows=333333 width=4) (actual time=14.177..14.177 rows=1 loops=1)
                         Index Cond: (i IS NOT NULL)
                         Filter: (creation < (now() - '1 mon'::interval))
   ->  Bitmap Heap Scan on posts  (cost=5515.67..14584.33 rows=333333 width=0) (actual time=15.698..19.235 rows=17899 loops=1)
         Recheck Cond: (i > $1)
         ->  Bitmap Index Scan on posts_pkey  (cost=0.00..5432.34 rows=333333 width=0) (actual time=15.679..15.679 rows=17899 loops=1)
               Index Cond: (i > $1)
 Total runtime: 22.309 ms
(13 rows)
This is still slower than a scan on a dedicated index, and even worse compared to summing over a pre-calculated summary table updated by triggers. But the above trick has helped me more than once, so I thought you might like it as well.
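To see why the trick is fast, note that the backward index scan on the primary key is essentially a binary search over monotonic data. Here's a minimal sketch in Python (with hypothetical in-memory data standing in for the posts table, and perfectly monotonic timestamps – in a real table you'd have to account for small deviations from monotonicity):

import bisect
from datetime import datetime, timedelta

# Hypothetical stand-in for the posts table: ids are the list indices,
# creation timestamps increase monotonically (one row every 3 minutes).
start = datetime(2010, 1, 1)
creations = [start + timedelta(minutes=3 * i) for i in range(1_000_000)]

def count_newer_than(cutoff):
    # Because creation is sorted, the last row at or before the cutoff
    # can be found by binary search – the moral equivalent of
    # SELECT MAX(i) ... WHERE creation < cutoff via the pkey index.
    first_newer = bisect.bisect_right(creations, cutoff)
    # Everything after that position is "newer than cutoff".
    return len(creations) - first_newer

cutoff = start + timedelta(minutes=3 * 999_000)
print(count_newer_than(cutoff))  # -> 999

The binary search touches only O(log n) entries, just like the Index Scan Backward above, instead of checking the timestamp of every row.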