This repository has been archived by the owner on Mar 11, 2021. It is now read-only.

Provide join lock down #2211

Merged

kwk merged 4 commits into fabric8-services:master from fix_2210 on Aug 7, 2018

Conversation

Collaborator

@kwk kwk commented Aug 3, 2018

This adds an extra condition to the ON part of the table JOINs for areas, codebases, and iterations, so that only rows from those tables that match the space ID are joined. I'm not sure, though, if this really fixes the problem (see #2210 (comment)).
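For illustration, here is a minimal SQL sketch of that kind of join lock-down. The table and column names (work_items, space_id, the *_id join keys) are assumptions made for the example, not the actual fabric8-wit schema:

```sql
-- Sketch only: table and column names are assumptions, not the real schema.
-- The extra "AND <table>.space_id = wi.space_id" term on each ON clause is the
-- join lock-down: areas, iterations and codebases are only joined when they
-- belong to the same space as the work item, instead of being matched against
-- rows from every space.
SELECT wi.*
FROM work_items wi
LEFT JOIN areas      a  ON a.id  = wi.area_id      AND a.space_id  = wi.space_id
LEFT JOIN iterations it ON it.id = wi.iteration_id AND it.space_id = wi.space_id
LEFT JOIN codebases  cb ON cb.id = wi.codebase_id  AND cb.space_id = wi.space_id;
```

Without the extra terms the joins can touch rows from every space, which is presumably what makes the query so expensive on stage.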

TODO

As of yesterday's (07.08.2018) discussion with @aslakknutsen, we did experiments and found that, in order to keep the number of rows in the search small, we have to establish a condition on the final SQL WHERE clause that limits the selection to work items from a particular space. At the moment, the /api/search endpoint is so generic that it doesn't require a limitation by space at the root of the WHERE clause. That's why @aslakknutsen and I agreed to create a search endpoint under /api/spaces/<SPACE-UUID>/search in order to automatically add the space ID to the query condition (sketched below).

This will be implemented in another PR and is tracked in openshiftio/openshift.io#4124
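For comparison, a sketch (using the same hypothetical schema as above) of what the planned space-scoped search would add: the space restriction sits at the root of the WHERE clause, so the candidate row set is cut down to a single space before any other filtering happens:

```sql
-- Sketch of the planned follow-up (tracked in openshiftio/openshift.io#4124):
-- a search rooted at /api/spaces/<SPACE-UUID>/search would always inject the
-- space ID from the URL at the top level of the WHERE clause, on top of
-- whatever filter expression the search query itself compiles to.
SELECT wi.*
FROM work_items wi
LEFT JOIN areas a ON a.id = wi.area_id AND a.space_id = wi.space_id
WHERE wi.space_id = '00000000-0000-0000-0000-000000000000';  -- placeholder for <SPACE-UUID>
```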

See #2210.

@kwk kwk self-assigned this Aug 3, 2018
@kwk kwk requested a review from aslakknutsen August 3, 2018 12:26
@alien-ike alien-ike changed the title WIP: Fix: SQL causing 100% CPU usage on stage (not seen on prod yet, stage has more data) Fix: SQL causing 100% CPU usage on stage (not seen on prod yet, stage has more data) Aug 3, 2018

alien-ike commented Aug 3, 2018

Ike Plugins (test-keeper)

Thank you @kwk for this contribution!

It appears that no tests have been added or updated in this PR.

Automated tests give us confidence in shipping reliable software. Please add some as part of this change.

If you are an admin or the reviewer of this PR and you are sure that no test is needed, you can use the command /ok-without-tests as a comment to make the status green.

For more information, please head over to the official documentation, where you can find how to configure the plugin.

@kwk kwk changed the title Fix: SQL causing 100% CPU usage on stage (not seen on prod yet, stage has more data) WIP: Fix: SQL causing 100% CPU usage on stage (not seen on prod yet, stage has more data) Aug 3, 2018
@alien-ike alien-ike changed the title WIP: Fix: SQL causing 100% CPU usage on stage (not seen on prod yet, stage has more data) Fix: SQL causing 100% CPU usage on stage (not seen on prod yet, stage has more data) Aug 3, 2018

codecov-io commented Aug 3, 2018

Codecov Report

Merging #2211 into master will increase coverage by 0.22%.
The diff coverage is 100%.

Impacted file tree graph

@@            Coverage Diff             @@
##           master    #2211      +/-   ##
==========================================
+ Coverage   69.77%   69.99%   +0.22%     
==========================================
  Files         170      170              
  Lines       15846    15913      +67     
==========================================
+ Hits        11057    11139      +82     
+ Misses       3778     3761      -17     
- Partials     1011     1013       +2
Impacted Files                    Coverage Δ
workitem/expression_compiler.go   86.76% <100%> (+4.71%) ⬆️
controller/workitem.go            79.64% <0%> (+0.19%) ⬆️
workitem/workitem_repository.go   69.9% <0%> (+0.27%) ⬆️
remoteworkitem/scheduler.go       60.97% <0%> (+7.31%) ⬆️
remoteworkitem/jira.go            100% <0%> (+25%) ⬆️

Continue to review full report at Codecov.

Legend - Click here to learn more
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 18d2d47...e377603. Read the comment docs.

@kwk kwk changed the title Fix: SQL causing 100% CPU usage on stage (not seen on prod yet, stage has more data) WIP: Fix: SQL causing 100% CPU usage on stage (not seen on prod yet, stage has more data) Aug 7, 2018
@kwk kwk changed the title WIP: Fix: SQL causing 100% CPU usage on stage (not seen on prod yet, stage has more data) WIP: SQL causing 100% CPU usage on stage (not seen on prod yet, stage has more data) Aug 7, 2018
@kwk kwk changed the title WIP: SQL causing 100% CPU usage on stage (not seen on prod yet, stage has more data) SQL causing 100% CPU usage on stage (not seen on prod yet, stage has more data) Aug 7, 2018
@kwk kwk changed the title SQL causing 100% CPU usage on stage (not seen on prod yet, stage has more data) Provide join lock down Aug 7, 2018
@aslakknutsen
Contributor

/ok-without-tests

@kwk kwk merged commit 2661cf8 into fabric8-services:master Aug 7, 2018
@kwk kwk deleted the fix_2210 branch August 7, 2018 13:41
kwk added a commit to openshiftio/saas-openshiftio that referenced this pull request Aug 7, 2018
commit fabric8-services/fabric8-wit@2661cf8
Author: Konrad Kleine <[email protected]>
Date:   Tue Aug 7 15:40:55 2018 +0200

Provide join lock down  (fabric8-services/fabric8-wit#2211)
    
This includes an extra-condition in the `ON` part of the table `JOINS` for areas, codebases and iterations to only join those tables filtered by their space ID. I'm not sure though if this really fixes the problem (see fabric8-services/fabric8-wit#2210 (comment)).
    
## TODO
    
As of yesterday's (07.08.2018) discussion with @aslakknutsen we did experiments and found that in order to keep the rows in the search small, we have to establish a condition on the final SQL `WHERE` clause that limits the selection to work items from a particular space. At the moment, the current `/api/search` endpoint is so generic that it doesn't require a limitation by space on the root of the `WHERE` clause. That's why @aslakknutsen and I agreed to create a search endpoint under `/api/spaces/<SPACE-UUID>/search` in order to automatically add the space ID to the query condition. This will be implemented in another PR and is tracked in openshiftio/openshift.io#4124
    
See fabric8-services/fabric8-wit#2210.
aslakknutsen pushed a commit to openshiftio/saas-openshiftio that referenced this pull request Aug 7, 2018

5 participants