Test racing conditions in bulkblocks API calls #30
Dennis,
we should not experience a DB lock if we use transactions properly. My suggestion is to investigate how SQLite should be initialized so that transactions are applied to tables rather than to the DB itself.
I also think we need to triple-check that all statements and rows are closed in the bulkblocks API. It would be useful to identify on which part of the bulkblocks API the lock occurred, i.e. does the lock appear at a specific step, and if so, which step.
Finally, according to this GitHub issue [1] we may try the following options:
```
db.SetMaxOpenConns(1)
```
and `journal_mode=WAL`, which seems to address the database lock issue.
[1] mattn/go-sqlite3#569
@vkuznet I implemented the suggestions from the
This states that the block already exists in the database. Is this the type of racing condition we are trying to resolve?
I'm glad that we resolved the database lock issue and can now move forward. The error you got is not a racing condition: it clearly states that you are trying to insert a block which is already in the DB. The racing condition should happen as follows:
The question is how to simulate it. I think you need to create multiple JSONs with different blocks but the same common data, such as the dataset configuration, and then inject them concurrently. The more HTTP requests you send concurrently, the more likely you are to hit the racing condition.
We should come up with integration tests which will allow us to test racing conditions of the bulkblocks API. They happen when there are competing (concurrent) calls to the bulkblocks API which provide almost identical data (the data differ only at the block/file level, and all other parameters remain the same). In this scenario there are common data, such as physics group, dataset access type, and processed dataset, where we either need to insert or obtain IDs; see https://github.com/dmwm/dbs2go/blob/master/dbs/bulkblocks2.go#L506-L571
I identified that for small tables, like physics group or data tiers, the probability of racing conditions is rather small, while for larger ones, like processed datasets, which contains 149970 entries in the ORACLE DB, there is a real possibility of racing conditions if two competing HTTP requests try to insert/check the processed dataset ID.
We need to come up with an integration test for this use case.