
Push the runtime filter from HashJoin down to SeqScan or AM. #724

Open — wants to merge 10 commits into main from runtime_filter
Conversation


@zhangyue-hashdata (Contributor) commented Nov 21, 2024:

+----------+  AttrFilter   +------+  ScanKey   +------------+
| HashJoin | ------------> | Hash | ---------> | SeqScan/AM |
+----------+               +------+            +------------+

If "gp_enable_runtime_filter_pushdown" is on, three steps are run:

Step 1. In ExecInitHashJoin(), try to find the mapping between the var in
the hashclauses and the var in the SeqScan. If found, we save the mapping in
an AttrFilter and push it down to the Hash node;

Step 2. We create the range/bloom filters in the AttrFilter while building
the hash table; when the build finishes, these filters are converted to a
list of ScanKeys and pushed down to the SeqScan;

Step 3. If the AM supports SCAN_SUPPORT_RUNTIME_FILTER, these ScanKeys are
pushed further down into the AM module; otherwise they are used to filter
slots in the SeqScan;

perf:
CPU E5-2680 v2, 10 cores, 32GB memory, 3 segments

  1. tpcds 10s:  off 939s,  on 779s  (~17% faster)
  2. tpcds 100s: off 5270s, on 4365s (~17% faster)

tpcds 10s details

NO. off on
1 2959.5 1312.9
2 4522.2 2995.9
3 11924.1 9170.6
4 1678.4 1653.4
5-1 17433.1 14723.7
5-2 17244.1 14499.7
6 5541.8 4443.8
7 3144.0 1726.7
8 3895.7 2010.0
9 5991.1 3270.7
10 29981.7 19975.5
11 3113.9 2293.0
12 2166.9 1543.3
13 1258.8 726.0
14 11745.1 7037.3
15 3878.6 2568.8
16 5420.8 3200.1
17 3291.2 2117.3
18 2103.7 979.8
19 4400.1 2371.1
20 14048.3 11135.6
21 7739.5 7489.6
22 5524.1 3083.1
23 5978.4 4041.7
24 2355.7 1993.0
25 5561.1 3355.0
26 6973.0 3180.3
27 3536.0 2134.0
28 6899.6 3141.5
29 6065.5 3766.1
30 15443.3 15137.7
31 9197.8 6387.3
32 3345.3 1815.2
33 2488.1 2451.7
34 10840.0 9291.0
35 52991.2 52504.5
36 3386.1 3381.4
37 1834.0 994.2
38 10047.6 8915.8
39 3035.8 1961.5
40 1591.5 795.5
41 9284.0 7623.0
42 10255.5 10051.7
43 45356.5 44808.1
44 1989.3 1015.8
45 3021.2 1222.5
46 19865.7 18714.8
47 8438.0 4972.7
48 4520.7 3117.3
49 6244.7 6150.8
50 5926.0 4983.4
51 20423.2 20287.7
52 84.5 85.5
53 9273.4 6599.0
54 10214.5 8094.6
55 38920.5 36340.8
56 4248.9 2761.0
57 1980.7 1264.9
58 3954.6 1806.1
59 2982.5 1234.5
60 4099.5 1852.3
61 3080.7 1237.5
62 167.6 162.6
63 691.0 661.1
64 1283.0 549.9
65 2033.5 1017.7
66 8816.5 5342.4
67 10124.9 6664.9
68-1 53775.3 53665.4
68-2 55517.4 54534.5
69-1 20955.6 18578.4
69-2 17315.6 15543.9
70 6269.5 6134.9
71 13798.9 12300.0
72 5199.1 2861.3
73 4017.0 2236.1
74 5578.1 5341.3
75 1056.9 687.7
76 16922.5 16252.2
77 10491.0 8369.5
78 4590.9 2885.0
79 3660.0 2081.5
80 967.8 516.1
81 6620.6 1254.1
82 3072.2 1214.4
83 5111.3 2710.2
84 7458.4 7442.9
85 2989.0 1955.6
86 2546.8 1522.6
87 45634.8 43461.0
88 3555.6 1858.1
89 5630.1 4286.9
90 5321.3 2992.1
91 4748.6 4522.8
92-1 6264.3 4859.7
92-2 6355.4 5334.0
93 31174.3 30890.3
94 3971.3 3955.2
95 5097.4 3004.3
96 1999.7 1100.9
97 5055.0 2974.5
98 7319.9 5527.3
99 1895.3 1838.4
total 939 779 ~17%

tpcds 100s details

NO. off on
1 18410.5 8966.4
2 29278.7 17942.0
3 68332.8 52880.0
4 9753.3 9171.7
5-1 13129.4 11131.0
5-2 13198.6 11041.2
6 33375.2 27429.7
7 19269.8 10077.7
8 22145.1 12336.3
9 64808.8 56969.8
10 241285.3 161582.3
11 12075.9 10860.6
12 9563.9 8114.5
13 2687.4 1484.8
14 8932.8 5325.5
15 28111.9 20017.6
16 30523.0 19822.1
17 3168.7 3126.4
18 8063.8 5925.4
19 19826.5 10147.2
20 85671.7 80051.7
21 42448.4 38791.5
22 27746.9 15857.7
23 38362.9 25001.5
24 14143.9 13982.5
25 38145.1 18836.1
26 21448.6 12437.5
27 22101.6 14043.4
28 20168.0 12036.7
29 25105.1 13663.3
30 129563.3 129378.8
31 13768.9 7181.7
32 19039.5 11568.3
33 10040.4 9460.8
34 66348.7 57294.4
35 202825.5 197756.5
36 28703.6 28536.2
37 7065.0 6596.1
38 82447.6 73465.7
39 16568.5 10866.6
40 5632.2 2801.4
41 65710.0 54836.4
42 63198.1 61843.4
43 154716.5 151916.7
44 10148.0 5314.8
45 18080.4 8917.2
46 92069.5 84997.4
47 29166.7 21165.4
48 23630.6 20430.5
49 43623.1 43465.4
50 33574.3 24155.7
51 140174.1 136511.3
52 121.1 108.4
53 61777.5 53161.0
54 82215.5 70357.6
55 28835.3 26623.2
56 24767.9 15975.6
57 11274.8 7005.6
58 20008.8 11185.9
59 19055.5 8743.5
60 30663.6 13416.8
61 18951.8 8765.2
62 205.8 170.3
63 22338.1 13134.9
64 5539.4 3077.0
65 9817.0 5337.0
66 52164.9 26808.1
67 20848.3 9974.1
68-1 462977.0 458799.8
68-2 468924.8 464181.1
69-1 109512.7 103476.2
69-2 96662.2 88990.1
70 29996.5 28361.0
71 90191.5 75458.1
72 35578.6 20232.3
73 23311.9 13611.2
74 33299.4 30755.5
75 4249.5 3892.5
76 107381.3 102310.4
77 83335.9 72471.8
78 24223.1 15703.0
79 22379.5 14098.0
80 2753.5 1177.6
81 39643.7 15849.0
82 18515.7 8568.5
83 21661.9 11949.9
84 55856.0 55736.4
85 15100.4 9251.1
86 14383.5 9051.7
87 136009.8 134747.4
88 20819.7 12613.0
89 40187.3 31457.0
90 34545.7 12278.3
91 34661.6 32747.8
92-1 102286.6 31937.6
92-2 103260.1 31959.5
93 240605.7 235582.7
94 26150.7 25874.4
95 31199.3 20272.8
96 4154.5 2226.1
97 26195.2 17051.1
98 37977.6 24553.3
99 16624.3 16603.3
total 5270 4365 ~17%

Fixes #ISSUE_Number

What does this PR do?

Type of Change

  • Bug fix (non-breaking change)
  • New feature (non-breaking change)
  • Breaking change (fix or feature with breaking changes)
  • Documentation update

Breaking Changes

Test Plan

  • Unit tests added/updated
  • Integration tests added/updated
  • Passed make installcheck
  • Passed make -C src/test installcheck-cbdb-parallel

Impact

Performance:

User-facing changes:

Dependencies:

Checklist

Additional Context




/* append new runtime filters to target node */
SeqScanState *sss = castNode(SeqScanState, attr_filter->target);
sss->filters = list_concat(sss->filters, scankeys);
Member:

Can we merge the filters on the same attno here?

@zhangyue-hashdata (Contributor, Author) commented Nov 25, 2024:

  1. Combining Bloom filters results in a higher false positive rate (FPR) than using each individual Bloom filter separately, so it is not recommended;
  2. Combining range filters has the same problem as combining Bloom filters;
  3. In many cases there is only one Bloom filter and one range filter on the same attribute;

@yjhjstz (Member) commented Nov 25, 2024:

create table t1(a int, b int) with(parallel_workers=2);
create table rt1(a int, b int) with(parallel_workers=2);
create table rt2(a int, b int);
create table rt3(a int, b int);
insert into t1 select i, i from generate_series(1, 100000) i;
insert into t1 select i, i+1 from generate_series(1, 10) i;
insert into rt1 select i, i+1 from generate_series(1, 10) i;
insert into rt2 select i, i+1 from generate_series(1, 10000) i;
insert into rt3 select i, i+1 from generate_series(1, 10) i;
analyze t1;
analyze rt1;
analyze rt2;
analyze rt3;

explain analyze select * from rt1 join t1 on rt1.a = t1.b join rt3 on rt3.a = t1.b;

postgres=# explain select * from rt1 join t1 on rt1.a = t1.b join rt3 on rt3.a = t1.b;
                                   QUERY PLAN                                   
--------------------------------------------------------------------------------
 Gather Motion 3:1  (slice1; segments: 3)  (cost=2.45..428.51 rows=17 width=24)
   ->  Hash Join  (cost=2.45..428.29 rows=6 width=24)
         Hash Cond: (t1.b = rt1.a)
         ->  Hash Join  (cost=1.23..427.00 rows=6 width=16)
               Hash Cond: (t1.b = rt3.a)
               ->  Seq Scan on t1  (cost=0.00..342.37 rows=33337 width=8)
               ->  Hash  (cost=1.10..1.10 rows=10 width=8)
                     ->  Seq Scan on rt3  (cost=0.00..1.10 rows=10 width=8)
         ->  Hash  (cost=1.10..1.10 rows=10 width=8)
               ->  Seq Scan on rt1  (cost=0.00..1.10 rows=10 width=8)
 Optimizer: Postgres query optimizer
(11 rows)

You can try this case; you will get two range filters on the same attribute.

Contributor (Author):

got it

continue;

val = slot_getattr(slot, sk->sk_attno, &isnull);
if (isnull)
Member:

CREATE TABLE distinct_1(a int);
CREATE TABLE distinct_2(a int);
INSERT INTO distinct_1 VALUES(1),(2),(NULL);
INSERT INTO distinct_2 VALUES(1),(NULL);
SELECT * FROM distinct_1, distinct_2 WHERE distinct_1.a IS NOT DISTINCT FROM distinct_2.a;

This test returns a wrong result.

Contributor (Author):

I will fix it.

return slot;

if (node->filter_in_seqscan && node->filters &&
!PassByBloomFilter(node, slot))
Member:

For TPC-DS at 1TB, will the Bloom filter lose efficacy, or fail to be created, due to the large number of rows?

Contributor (Author):

Yes, when creating the Bloom filter, the system evaluates the estimated number of rows that this hash join will process and the amount of available memory during the execution plan generation. It determines whether using a Bloom filter for filtering data would be effective based on this evaluation. If it is assessed that the Bloom filter would not sufficiently enhance performance, then the Bloom filter will not be created.

Contributor:

> It determines whether using a Bloom filter for filtering data would be effective based on this evaluation

That makes sense, but where is the related code? I just didn't see it in this PR.
Does it compare the number of rows between the output of the hash table and the data in the probe table? If the rows of the hash table are far fewer than those of the probe table, then use the runtime filter?

src/backend/executor/nodeHashjoin.c (outdated, resolved)
{
match = false;

if (!IsA(lfirst(lc), Var))
Contributor:

Could it support other expressions, where one arg is the column attr and the other is a const?

Contributor (Author):

I think that expressions like t1.c1 = 5 should be pushed down by the optimizer to operators such as SeqScan for early processing. Therefore, this feature does not handle expressions of the form t1.c1 = 5.

Contributor:

Sorry, I didn't make it clear. I don't mean a predicate on the var; see the SQL below.

 EXPLAIN (ANALYZE, COSTS OFF, SUMMARY OFF, TIMING OFF)
SELECT t1.c3 FROM t1, t2 WHERE t1.c2 = (t2.c2 + 10);
                                        QUERY PLAN
-------------------------------------------------------------------------------------------
 Gather Motion 3:1  (slice1; segments: 3) (actual rows=0 loops=1)
   ->  Hash Join (actual rows=0 loops=1)
         Hash Cond: (t1.c2 = (t2.c2 + 10))
         Extra Text: (seg2)   Hash chain length 8.0 avg, 8 max, using 4 of 524288 buckets.
         ->  Seq Scan on t1 (actual rows=128 loops=1)
         ->  Hash (actual rows=32 loops=1)
               Buckets: 524288  Batches: 1  Memory Usage: 4098kB
               ->  Seq Scan on t2 (actual rows=32 loops=1)
 Optimizer: Postgres query optimizer
(9 rows)

As t2.c2 + 10 is not a Var but a T_OpExpr, the runtime filter cannot handle it.
Could we just iterate the expression tree and check whether it contains only Vars and Consts?

@fanfuxiaoran (Contributor):

Looks interesting. And I have some questions to discuss.

  • Besides the seqscan, can the runtime filter apply to other types of scan, such as the index scan?
  • It looks like the runtime filter can only be used when the hashjoin node and the seqscan node run in the same process. That means the tables should have the same distribution policy on the join columns, or one of the tables is replicated.

* result (hash filter)
* seqscan on t1, t1 is replicated table
*/
if (!IsA(child, HashJoinState) && !IsA(child, ResultState))
Member:

Hash Join  (cost=0.00..4019.55 rows=37 width=9) (actual time=3203.012..9927.435 rows=1399 loops=1)
  Hash Cond: (web_sales_1_prt_2.ws_item_sk = item.i_item_sk)
  Join Filter: (web_sales_1_prt_2.ws_ext_discount_amt > ((1.3 * avg(web_sales_1_prt_2_1.ws_ext_discount_amt))))
  Rows Removed by Join Filter: 4763
  Extra Text: (seg2)   Hash chain length 1.0 avg, 1 max, using 198 of 2097152 buckets.
  ->  Append  (cost=0.00..676.44 rows=2399189 width=13) (actual time=16.899..5572.473 rows=3090021 loops=1)
        ->  Seq Scan on web_sales_1_prt_2  (cost=0.00..676.44 rows=2399189 width=13) (actual time=16.895..1138.267 rows=662149 loops=1)
        ->  Seq Scan on web_sales_1_prt_3  (cost=0.00..676.44 rows=2399189 width=13) (actual time=8.947..1102.409 rows=662136 loops=1)
        ->  Seq Scan on web_sales_1_prt_4  (cost=0.00..676.44 rows=2399189 width=13) (actual time=8.822..1100.839 rows=662148 loops=1)
        ->  Seq Scan on web_sales_1_prt_5  (cost=0.00..676.44 rows=2399189 width=13) (actual time=11.391..1083.785 rows=662179 loops=1)
        ->  Seq Scan on web_sales_1_prt_6  (cost=0.00..676.44 rows=2399189 width=13) (actual time=13.030..649.141 rows=441409 loops=1)
        ->  Seq Scan on web_sales_1_prt_7  (cost=0.00..676.44 rows=2399189 width=13) (never executed)
        ->  Seq Scan on web_sales_1_prt_others  (cost=0.00..676.44 rows=2399189 width=13) (actual time=1.213..3.203 rows=1788 loops=1)
  ->  Hash  (cost=2432.09..2432.09 rows=109 width=12) (actual time=3177.768..3177.770 rows=198 loops=1)
        Buckets: 2097152  Batches: 1  Memory Usage: 16392kB
        ->  Broadcast Motion 3:3  (slice3; segments: 3)  (cost=

We need to consider partitioned tables.

Contributor (Author):

I will try.

Contributor (Author):

Supported partitioned tables in c701092.

Comment on lines 4285 to 4286
attr_filter->min = LLONG_MAX;
attr_filter->max = LLONG_MIN;
Contributor:

LLONG_MAX and LLONG_MIN are platform-specific values, i.e. the bounds of long long, which may not be exactly the same width as Datum. For safety, a static assert could be considered.

Contributor (Author):

I see StaticAssertDecl(SIZEOF_DATUM == 8, "sizeof datum is not 8"); in postgres.h, so it's better to use INT64_MAX/INT64_MIN here.

Member:

Use LONG_MAX/LONG_MIN instead?

Contributor (Author):

fix with 99eabb2

Comment on lines 2194 to 2206
/*
 * Only applicable for inner, right and semi joins.
 */
Contributor:

Could you explain a bit more about why these join types are supported and others are not?

Contributor (Author):

Added more comments to explain why only inner, right and semi joins are allowed with the runtime filter.
Fixed in 98dac6d.

Comment on lines 2283 to 2284
if (!IsA(expr, OpExpr) && !IsA(expr, FuncExpr))
return false;
Contributor:

These two lines duplicate the following if-else-if-else code and could be deleted.

Contributor (Author):

fix it

Contributor (Author):

fix with 99eabb2

Comment on lines 2302 to 2315
break;

var = lfirst(lc);
if (var->varno == INNER_VAR)
*rattno = var->varattno;
else if (var->varno == OUTER_VAR)
*lattno = var->varattno;
else
break;

match = true;
}

return match;
Contributor:

The match flag makes the code hard to read (it is modified in several places). The break statements could be replaced by return false;. If the foreach loop completes, all conditions matched, so return true.

Contributor (Author):

I refactored the code in a more intuitive way, like below:

/* check the first arg */
...

/* check the second arg */
...

return true;

Contributor (Author):

fix with 99eabb2

Comment on lines 106 to 107
if (TupIsNull(slot))
return slot;
Contributor:

Can this ever be true?

Contributor (Author):

It seems that slot is never NULL here, so Assert(!TupIsNull(slot)); would be better, or the check could be removed.

Contributor (Author):

fix with 99eabb2

Comment on lines +451 to +470
/*
 * SK_EMPYT marks the end of the ScanKey array
 */
sk[*num].sk_flags = SK_EMPYT;
Contributor:

How to check the boundary of the ScanKey array in rescan? In normal rescan, the number of ScanKeys is the same as begin_scan. If the number of ScanKeys is larger in rescan than that in begin_scan, the boundary value might be invalid and dangerous to access.

@avamingli (Contributor):

There are codes changed in MultiExecParallelHash, please add some parallel tests with runtime filter.

@zhangyue-hashdata (Contributor, Author):

> There are codes changed in MultiExecParallelHash, please add some parallel tests with runtime filter.

got it.

@zhangyue-hashdata (Contributor, Author):

> Looks interesting. And I have some questions to discuss.
>
> • Besides the seqscan, can the runtime filter apply to other types of scan, such as the index scan?

Theoretically, it is feasible to apply runtime filters to operators such as Index Scan. However, because Index Scan already reduces data volume by leveraging an optimized storage structure, the performance gains from applying runtime filters to Index Scan would likely be minimal. Thus, I think that applying runtime filters to Index Scan would not yield significant performance benefits.

In subsequent work, when we discover that other scan operators can achieve notable performance improvements from pushdown runtime filters, we will support these operators. Our focus will be on operators where runtime filters can substantially decrease the amount of data processed early in the query execution, leading to more pronounced performance enhancements.

> • It looks like the runtime filter can only be used when the hashjoin node and the seqscan node run in the same process, which means the tables should have the same distribution policy on the join columns or one of the tables is replicated.

Yes, the current pushdown runtime filter only supports in-process pushdown, which means that the Hash Join and SeqScan need to be within the same process. The design and implementation of cross-process pushdown runtime filters are much more complex.

This limitation arises because coordinating and sharing data structures like Bloom filters or other runtime filters across different processes involves additional challenges such as inter-process communication (IPC), synchronization, and ensuring consistency and efficiency of the filters across process boundaries. Addressing these issues requires a more sophisticated design that can handle the complexities of distributed computing environments.

@avamingli (Contributor):

Hi, with gp_enable_runtime_filter_pushdown = on, executing the SQL below will get a crash:

gpadmin=# show gp_enable_runtime_filter_pushdown;
 gp_enable_runtime_filter_pushdown
-----------------------------------
 on
(1 row)
CREATE TABLE test_tablesample (dist int, id int, name text) WITH (fillfactor=10) DISTRIBUTED BY (dist);
-- use fillfactor so we don't have to load too much data to get multiple pages

-- Changed the column length in order to match the expected results based on relation's blocksz
INSERT INTO test_tablesample SELECT 0, i, repeat(i::text, 875) FROM generate_series(0, 9) s(i) ORDER BY i;
INSERT INTO test_tablesample SELECT 3, i, repeat(i::text, 875) FROM generate_series(10, 19) s(i) ORDER BY i;
INSERT INTO test_tablesample SELECT 5, i, repeat(i::text, 875) FROM generate_series(20, 29) s(i) ORDER BY i;
EXPLAIN (COSTS OFF)
  SELECT id FROM test_tablesample TABLESAMPLE SYSTEM (50) REPEATABLE (2);
FATAL:  Unexpected internal error (assert.c:48)
DETAIL:  FailedAssertion("IsA(planstate, SeqScanState)", File: "explain.c", Line: 4154)
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
psql (14.4, server 14.4)

@zhangyue-hashdata (Contributor, Author):

> (quoted the TABLESAMPLE crash reproduction from the previous comment)

Thanks, I'll reproduce the issue and fix it.

@fanfuxiaoran (Contributor):

Thanks for your detailed explanation.

> (quoted the index-scan question and answer from the previous comment)

Makes sense. When doing a hash join, an index scan or index-only scan is often not used on the probe side.

> (quoted the in-process pushdown explanation from the previous comment)

Exactly, and if a lock is used to solve the problem, it may even lead to bad performance.

@fanfuxiaoran (Contributor):

explain analyze
SELECT count(t1.c3) FROM t1, t3 WHERE t1.c1 = t3.c1 ;
                                                              QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------
 Finalize Aggregate  (cost=1700.07..1700.08 rows=1 width=8) (actual time=32119.566..32119.571 rows=1 loops=1)
   ->  Gather Motion 3:1  (slice1; segments: 3)  (cost=1700.02..1700.07 rows=3 width=8) (actual time=30.967..32119.550 rows=3 loops=1)
         ->  Partial Aggregate  (cost=1700.02..1700.03 rows=1 width=8) (actual time=32119.131..32119.135 rows=1 loops=1)
               ->  Hash Join  (cost=771.01..1616.68 rows=33334 width=4) (actual time=14.059..32116.962 rows=33462 loops=1)
                     Hash Cond: (t3.c1 = t1.c1)
                     Extra Text: (seg0)   Hash chain length 1.0 avg, 3 max, using 32439 of 524288 buckets.
                     ->  Seq Scan on t3  (cost=0.00..387.34 rows=33334 width=4) (actual time=0.028..32089.490 rows=33462 loops=1)
                     ->  Hash  (cost=354.34..354.34 rows=33334 width=8) (actual time=13.257..13.259 rows=33462 loops=1)
                           Buckets: 524288  Batches: 1  Memory Usage: 5404kB
                           ->  Seq Scan on t1  (cost=0.00..354.34 rows=33334 width=8) (actual time=0.180..4.877 rows=33462 loops=1)
 Planning Time: 0.227 ms

The runtime filter has been pushed down to the seqscan on t3, but EXPLAIN ANALYZE doesn't print it.

\d t1
                 Table "public.t1"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 c1     | integer |           |          |
 c2     | integer |           |          |
 c3     | integer |           |          |
 c4     | integer |           |          |
 c5     | integer |           |          |
Checksum: t
Indexes:
    "t1_c2" btree (c2)
Distributed by: (c1)
 \d t3
                 Table "public.t3"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 c1     | integer |           |          |
 c2     | integer |           |          |
 c3     | integer |           |          |
 c4     | integer |           |          |
 c5     | integer |           |          |
Distributed by: (c1)

@zhangyue-hashdata force-pushed the runtime_filter branch 2 times, most recently from 76a003a to 98dac6d (December 5, 2024 14:37)
@zhangyue-hashdata (Contributor, Author):

 explain analyze
SELECT count(t1.c3) FROM t1, t3 WHERE t1.c1 = t3.c1 ;
                                                                  QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------
 Finalize Aggregate  (cost=1700.07..1700.08 rows=1 width=8) (actual time=32119.566..32119.571 rows=1 loops=1)
   ->  Gather Motion 3:1  (slice1; segments: 3)  (cost=1700.02..1700.07 rows=3 width=8) (actual time=30.967..32119.550 rows=3 loops=1)
         ->  Partial Aggregate  (cost=1700.02..1700.03 rows=1 width=8) (actual time=32119.131..32119.135 rows=1 loops=1)
               ->  Hash Join  (cost=771.01..1616.68 rows=33334 width=4) (actual time=14.059..32116.962 rows=33462 loops=1)
                     Hash Cond: (t3.c1 = t1.c1)
                     Extra Text: (seg0)   Hash chain length 1.0 avg, 3 max, using 32439 of 524288 buckets.
                     ->  Seq Scan on t3  (cost=0.00..387.34 rows=33334 width=4) (actual time=0.028..32089.490 rows=33462 loops=1)
                     ->  Hash  (cost=354.34..354.34 rows=33334 width=8) (actual time=13.257..13.259 rows=33462 loops=1)
                           Buckets: 524288  Batches: 1  Memory Usage: 5404kB
                           ->  Seq Scan on t1  (cost=0.00..354.34 rows=33334 width=8) (actual time=0.180..4.877 rows=33462 loops=1)
 Planning Time: 0.227 ms

The runtime filter has been pushed down to the seq scan on t3, but 'explain analyze' doesn't print it.

\d t1
                 Table "public.t1"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 c1     | integer |           |          |
 c2     | integer |           |          |
 c3     | integer |           |          |
 c4     | integer |           |          |
 c5     | integer |           |          |
Checksum: t
Indexes:
    "t1_c2" btree (c2)
Distributed by: (c1)
 \d t3
                 Table "public.t3"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 c1     | integer |           |          |
 c2     | integer |           |          |
 c3     | integer |           |          |
 c4     | integer |           |          |
 c5     | integer |           |          |
Distributed by: (c1)

Thanks for your test case. Based on it, I rewrote the code so that the debug info is always displayed, even when the number of filtered rows is zero, and added the test case to gp_runtime_filter.sql too.
fix in 98dac6d

@zhangyue-hashdata
Contributor Author

zhangyue-hashdata commented Dec 5, 2024

Hi, with gp_enable_runtime_filter_pushdown = on, executing the SQL below causes a crash:

gpadmin=# show gp_enable_runtime_filter_pushdown;
 gp_enable_runtime_filter_pushdown
-----------------------------------
 on
(1 row)
CREATE TABLE test_tablesample (dist int, id int, name text) WITH (fillfactor=10) DISTRIBUTED BY (dist);
-- use fillfactor so we don't have to load too much data to get multiple pages

-- Changed the column length in order to match the expected results based on relation's blocksz
INSERT INTO test_tablesample SELECT 0, i, repeat(i::text, 875) FROM generate_series(0, 9) s(i) ORDER BY i;
INSERT INTO test_tablesample SELECT 3, i, repeat(i::text, 875) FROM generate_series(10, 19) s(i) ORDER BY i;
INSERT INTO test_tablesample SELECT 5, i, repeat(i::text, 875) FROM generate_series(20, 29) s(i) ORDER BY i;
EXPLAIN (COSTS OFF)
  SELECT id FROM test_tablesample TABLESAMPLE SYSTEM (50) REPEATABLE (2);
FATAL:  Unexpected internal error (assert.c:48)
DETAIL:  FailedAssertion("IsA(planstate, SeqScanState)", File: "explain.c", Line: 4154)
server closed the connection unexpectedly
        This probably means the server terminated abnormally
        before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
psql (14.4, server 14.4)

Thanks for your test case. I fixed it in 98dac6d and added the test case to gp_runtime_filter.sql too.

@Smyatkin-Maxim

Hi @zhangyue-hashdata
I see that the previous runtime filter implementation relies on some cost model in try_runtime_filter(). Do I understand correctly that this PR does not do any cost evaluation?
Also, for TPC-H/TPC-DS, can you provide results for each query separately?

Asking mostly out of curiosity; I see there are quite a few reviewers here already :)

@zhangyue-hashdata
Contributor Author

Hi @zhangyue-hashdata, I see that the previous runtime filter implementation relies on some cost model in try_runtime_filter(). Do I understand correctly that this PR does not do any cost evaluation? Also, for TPC-H/TPC-DS, can you provide results for each query separately?

Asking mostly out of curiosity; I see there are quite a few reviewers here already :)

Basically, you're correct. Our goal is to filter out as much data as possible right at the point where the data is produced, but a full cost evaluation would be very complex, so we only make a simple estimate based on the row count and work memory when creating the Bloom filter.
Furthermore, I have placed the detailed test results for TPC-DS 10s in the PR description.

@zhangyue-hashdata
Contributor Author

There is code changed in MultiExecParallelHash; please add some parallel tests with the runtime filter.

Fixed in 7ab040a.

if (table_scan_getnextslot(scandesc, direction, slot))
while (table_scan_getnextslot(scandesc, direction, slot))
{
if (node->filter_in_seqscan && node->filters &&
Member


if (!node->filter_in_seqscan || !node->filters)
{
     if (table_scan_getnextslot(scandesc, direction, slot))
         return slot;
}
else 
{
     while (table_scan_getnextslot(scandesc, direction, slot))
     {
            .....
     }
}

Does this make the original path more efficient and readable?

Contributor Author


Good idea! Fixed in bcf93e6.

@yjhjstz
Member

yjhjstz commented Dec 20, 2024

From the tpcds 10s details table, there are some bad cases.

@zhangyue-hashdata
Contributor Author

From the tpcds 10s details table, there are some bad cases.

21,24,30,42,49,54,68-1,99

I retested these SQL statements that had shown a performance regression, and the latest results show no noticeable performance difference when toggling gp_enable_runtime_filter_pushdown. So I suspect the regression in these SQL statements was an artifact of the testing method: previously, I tested by running the entire suite of 99 TPC-DS queries with gp_enable_runtime_filter_pushdown enabled, and then again with it disabled.

A more appropriate method is to execute the same SQL statement multiple times with gp_enable_runtime_filter_pushdown enabled and disabled, respectively, and then compare the averages of those runs. I will retest with this method and observe whether any performance regression remains.

@zhangyue-hashdata
Contributor Author

From the tpcds 10s details table, there are some bad cases.

21,24,30,42,49,54,68-1,99

I retested these SQL statements that had shown a performance regression, and the latest results show no noticeable performance difference when toggling gp_enable_runtime_filter_pushdown. So I suspect the regression in these SQL statements was an artifact of the testing method: previously, I tested by running the entire suite of 99 TPC-DS queries with gp_enable_runtime_filter_pushdown enabled, and then again with it disabled.

A more appropriate method is to execute the same SQL statement multiple times with gp_enable_runtime_filter_pushdown enabled and disabled, respectively, and then compare the averages of those runs. I will retest with this method and observe whether any performance regression remains.

I have retested the performance of tpcds 10s using the previously mentioned testing method. Please see the description part for the latest results.

@yjhjstz
Member

yjhjstz commented Jan 6, 2025

I have retested the performance of tpcds 10s using the previously mentioned testing method. Please see the description part for the latest results

Cool, what about tpcds 100 sf?

@zhangyue-hashdata
Contributor Author

I have retested the performance of tpcds 10s using the previously mentioned testing method. Please see the description part for the latest results

Cool, what about tpcds 100 sf?

Please see the description part for the results of tpcds 100s.

+----------+  AttrFilter   +------+  ScanKey   +------------+
| HashJoin | ------------> | Hash | ---------> | SeqScan/AM |
+----------+               +------+            +------------+

If "gp_enable_runtime_filter_pushdown" is on, three steps are run:

Step 1. In ExecInitHashJoin(), try to find the mapper between the var in the
        hashclauses and the var in the SeqScan. If found, we save the mapper in
        an AttrFilter and push it down to the Hash node;

Step 2. While building the hash table, we create the range/bloom filters in the
        AttrFilter; when the build finishes, these filters are converted to a
        list of ScanKeys and pushed down to the SeqScan;

Step 3. If the AM supports SCAN_SUPPORT_RUNTIME_FILTER, these ScanKeys are
        pushed down further into the AM module; otherwise they are used to
        filter slots in the SeqScan.