in a previous post i described how to find overlapping ranges in a way that works with indexes.
now, i'd like to discuss something else.
how to check how many ranges contain a given moment?
generally it is simple. let's create a test table:
CREATE TABLE test AS
    SELECT least(a, b) AS a, greatest(a, b) AS b
    FROM (
        SELECT
            ( '1900-01-01'::DATE + 50000 * random() * '1 day'::INTERVAL )::DATE AS a,
            ( '1900-01-01'::DATE + 50000 * random() * '1 day'::INTERVAL )::DATE AS b
        FROM generate_series(1, 10000) i
    ) x
    WHERE a <> b;
CREATE INDEX i_a ON test (a);
CREATE INDEX i_b ON test (b);
CREATE UNIQUE INDEX i_ab ON test (a, b);
CREATE UNIQUE INDEX i_ba ON test (b, a);
for any given day i can:
SELECT COUNT(*) FROM test WHERE '2007-11-01' BETWEEN a AND b;
which of course uses indexes:
                                                       QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=181.37..181.38 rows=1 width=0) (actual time=16.394..16.397 rows=1 loops=1)
   ->  Bitmap Heap Scan on test  (cost=69.85..172.23 rows=3655 width=0) (actual time=1.433..9.651 rows=3386 loops=1)
         Recheck Cond: ('2007-11-01'::date <= b)
         Filter: ('2007-11-01'::date >= a)
         ->  Bitmap Index Scan on i_b  (cost=0.00..68.94 rows=3825 width=0) (actual time=1.384..1.384 rows=3828 loops=1)
               Index Cond: ('2007-11-01'::date <= b)
 Total runtime: 16.481 ms
but the question is – how do i find the dates that are covered by the most ranges?
of course i could:
SELECT
    t1.a,
    ( SELECT COUNT(*) FROM test t2 WHERE t1.a BETWEEN t2.a AND t2.b ) AS count
FROM test t1
ORDER BY count DESC
LIMIT 1;
but it's slow:
                                                           QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=1553307.10..1553307.10 rows=1 width=4) (actual time=171289.465..171289.467 rows=1 loops=1)
   ->  Sort  (cost=1553307.10..1553332.10 rows=10000 width=4) (actual time=171289.460..171289.460 rows=1 loops=1)
         Sort Key: ((subplan))
         Sort Method:  top-N heapsort  Memory: 17kB
         ->  Seq Scan on test t1  (cost=0.00..1553257.10 rows=10000 width=4) (actual time=35.276..171259.246 rows=10000 loops=1)
               SubPlan
                 ->  Aggregate  (cost=155.30..155.31 rows=1 width=0) (actual time=17.115..17.117 rows=1 loops=10000)
                       ->  Bitmap Heap Scan on test t2  (cost=57.53..152.52 rows=1111 width=0) (actual time=2.331..11.220 rows=3313 loops=10000)
                             Recheck Cond: ($0 <= b)
                             Filter: ($0 >= a)
                             ->  Bitmap Index Scan on i_b  (cost=0.00..57.25 rows=3333 width=0) (actual time=2.292..2.292 rows=8313 loops=10000)
                                   Index Cond: ($0 <= b)
 Total runtime: 171289.581 ms
painful. what's more – it scales badly: one containment subquery per row makes the whole thing roughly quadratic in the number of ranges.
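to see why, the correlated-subquery approach can be transcribed into a short python sketch (a hypothetical `naive_best` helper, with integer "dates" for brevity – not code from this post): for every range start it rescans the whole list, so the work grows with the square of the number of ranges.

```python
# naive transcription of the correlated-subquery SQL:
# for every range start a, count how many ranges [a2, b2] contain it.
def naive_best(ranges):
    best_day, best_count = None, -1
    for a, _b in ranges:                    # outer: SELECT t1.a FROM test t1
        count = sum(1 for a2, b2 in ranges  # inner: WHERE t1.a BETWEEN t2.a AND t2.b
                    if a2 <= a <= b2)
        if count > best_count:
            best_day, best_count = a, count
    return best_day, best_count             # ~n*n comparisons for n ranges

print(naive_best([(1, 5), (2, 6), (4, 7), (8, 9)]))  # prints (4, 3)
```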
is there any other choice?
actually – there is.
let's write a function for this task:
CREATE OR REPLACE FUNCTION get_most_busy_day(OUT start_date DATE, OUT end_date DATE, OUT ranges INT4) AS $BODY$
DECLARE
    current_count INT4 := 0;
    current_ok    bool := FALSE;
    temprec       record;
BEGIN
    ranges := 0;
    FOR temprec IN
        SELECT 's' AS code, a AS x FROM test
        UNION ALL
        SELECT 'f' AS code, b AS x FROM test
        ORDER BY x ASC, code DESC
    LOOP
        IF temprec.code = 'f' THEN
            current_count := current_count - 1;
            IF current_ok = TRUE THEN
                current_ok := FALSE;
                end_date := temprec.x;
            END IF;
            CONTINUE;
        END IF;
        current_count := current_count + 1;
        IF current_count > ranges THEN
            start_date := temprec.x;
            ranges := current_count;
            current_ok := TRUE;
        END IF;
    END LOOP;
    RETURN;
END;
$BODY$ LANGUAGE plpgsql;
this function returns the start and end of the “virtual” range of days that has the largest number of concurrent ranges in the source table.
example:
# SELECT * FROM get_most_busy_day();
 start_date |  end_date  | ranges
------------+------------+--------
 1968-02-12 | 1968-02-13 |   4993
(1 row)
it means that in my data, every day from 1968-02-12 to 1968-02-13 is covered by 4993 concurrent ranges from the test table.
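the trick is a classic sweep line: turn every range into a “start” and a “finish” event, sort all events by date, and keep a running counter. a python sketch of the same logic (hypothetical names, integer “dates” for brevity):

```python
def most_busy(ranges):
    """sweep-line version of get_most_busy_day() for inclusive ranges (a, b)."""
    # 0 = start, 1 = finish; tuple sorting puts starts before finishes on the
    # same date, matching ORDER BY x ASC, code DESC ('s' > 'f') in the plpgsql
    # function, so a range starting exactly where another ends counts as concurrent.
    events = sorted([(a, 0) for a, b in ranges] + [(b, 1) for a, b in ranges])
    current = best = 0
    start = end = None
    best_open = False             # mirrors current_ok in the plpgsql version
    for x, kind in events:
        if kind == 1:             # a range finishes
            current -= 1
            if best_open:         # first finish after a new maximum ends it
                best_open = False
                end = x
        else:                     # a range starts
            current += 1
            if current > best:    # new maximum: remember where it began
                best, start, best_open = current, x, True
    return start, end, best

print(most_busy([(1, 5), (2, 6), (4, 7), (8, 9)]))  # prints (4, 5, 3)
```

one pass over 2n sorted events, so the cost is dominated by the sort – which is exactly why the function below finishes in milliseconds where the correlated subquery takes minutes.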
and what about speed?
# EXPLAIN ANALYZE SELECT * FROM get_most_busy_day();
                                                      QUERY PLAN
---------------------------------------------------------------------------------------------------------------------
 Function Scan on get_most_busy_day  (cost=0.00..0.26 rows=1 width=12) (actual time=270.127..270.130 rows=1 loops=1)
 Total runtime: 270.175 ms
quite fast – especially when compared to the pure-sql approach 🙂