Pool.release() received invalid connection #1811
@d3QUone thank you for the detailed report! I couldn't reproduce the issue with a runnable test script. Is there anything specific about how you configure the database connection, or about how the database server is configured? Could you provide the code for the model?
I ran into a similar issue, but with nested transaction blocks. The current connection is stored in the context. When a coroutine is scheduled as a task, the current context gets copied and the task runs in that copy. So when you have nested transaction blocks, the inner coroutine reuses the same connection as the outer one, which is logical: you now have nested transactions running on the same connection. The problem is in rollback:

```python
async def rollback(self) -> None:
    if self._finalized:
        raise exceptions.TransactionManagementError("Transaction already finalised")
    await self._connection.rollback()
    await self._connection.set_autocommit(True)
    self._finalized = True
```

When the inner block tries to set autocommit back to True on a connection that is still inside the outer transaction, you get an error.
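This context-copying behavior is plain asyncio/contextvars semantics and can be demonstrated without a database. In the sketch below, `current_conn` is a hypothetical stand-in for the context variable Tortoise keeps internally; it is not Tortoise's actual API:

```python
import asyncio
import contextvars

# Hypothetical stand-in for the connection tracked in the context.
current_conn: contextvars.ContextVar[str] = contextvars.ContextVar(
    "current_conn", default="fresh-connection-from-pool"
)

async def inner():
    # gather() wraps each coroutine in a task, and a task runs in a
    # *copy* of the caller's context, so the outer value is visible here.
    print("inner sees:", current_conn.get())

async def outer():
    current_conn.set("outer-transaction-connection")
    await asyncio.gather(inner(), inner())

asyncio.run(outer())  # both tasks print "outer-transaction-connection"
```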
I was able to reproduce the issue using this code:

```python
from asyncio import gather, get_event_loop
from contextlib import asynccontextmanager
import contextvars

from tortoise.connection import connections
from tortoise import Tortoise
from tortoise.transactions import in_transaction, atomic

from tournaments.domains.buy_ins.models import BuyIn
from tournaments.entrypoints.tortoise_init import TORTOISE_CONFIG

loop = get_event_loop()
null_context = contextvars.Context()


async def outer_coro():
    async with in_transaction() as conn:
        print("outer_coro")
        b = await BuyIn.all().select_for_update().using_db(conn)
        try:
            # coros = [
            #     loop.create_task(inner_coro(), context=null_context.copy())
            #     for i in range(9)
            # ]
            coros = (inner_coro(), inner_coro(),
                     inner_coro(), inner_coro(),
                     inner_coro(), inner_coro())
            await gather(*coros)
        except Exception as exc:
            print(f"Inner error {exc!r}")
            raise Exception("error2") from exc


async def inner_coro():
    print("inner coro")
    async with in_transaction() as conn:
        b = await BuyIn.all()
        raise ValueError("Error")


async def main():
    await Tortoise.init(**TORTOISE_CONFIG)
    await outer_coro()


loop.run_until_complete(main())
```
The code even gets stuck in one of the inner coroutines if you try to open one more transaction, even with a pool minsize of 50. If you instead create the tasks with a new context (the commented-out lines above), the inner coroutines use new connections and everything runs fine. Using the same connection in nested coroutines is logical and right, but rolling back and resetting autocommit on that shared connection causes this problem. Moreover, I can't find a proper public API for using a new connection/client in nested blocks. [related: https://github.com//issues/1128]
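For reference, here is a self-contained sketch of why the commented-out `create_task(..., context=...)` variant behaves differently. It uses an illustrative context variable rather than Tortoise's internals, and the `context=` keyword of `loop.create_task()` requires Python 3.11+:

```python
import asyncio
import contextvars

# Illustrative variable; stands in for the connection tracked in the context.
current_conn: contextvars.ContextVar[str] = contextvars.ContextVar(
    "current_conn", default="fresh-connection-from-pool"
)

async def inner():
    # In an empty context the variable was never set, so the default is
    # returned, i.e. a nested transaction would acquire its own connection.
    print("inner sees:", current_conn.get())

async def outer():
    current_conn.set("outer-transaction-connection")
    loop = asyncio.get_running_loop()
    null_context = contextvars.Context()
    tasks = [
        loop.create_task(inner(), context=null_context.copy())
        for _ in range(3)
    ]
    await asyncio.gather(*tasks)

asyncio.run(outer())  # each task prints "fresh-connection-from-pool"
```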
@andrewshkovskii, thank you for reporting! I think it's a different issue though. Would you mind opening a new issue with your report? I believe this is going to be fixed with #1810.
@henadzit, hi! Thank you for your response! I believe the model is pretty simple; actually, I can reproduce it with any model in my project:

```python
import enum

from tortoise import fields, models


class ServiceName(enum.StrEnum):
    METRICS = "metrics"
    NOTIFICATIONS = "notifications"


class OrderEventType(enum.StrEnum):
    START_INCIDENT = "start_incident"
    FINISH_INCIDENT = "finish_incident"


class OrderState(enum.StrEnum):
    NEW = "new"
    SENT = "sent"
    COMPLETED = "completed"


class Incident(models.Model):
    id = fields.UUIDField(
        primary_key=True,
    )


class Order(models.Model):
    id = fields.BigIntField(
        primary_key=True,
    )
    incident = fields.ForeignKeyField(
        model_name="models.Incident",
        related_name="orders",
        on_delete=fields.OnDelete.CASCADE,
    )
    created_at = fields.DatetimeField(
        auto_now_add=True,
    )
    updated_at = fields.DatetimeField(
        auto_now=True,
    )
    sent_at = fields.DatetimeField(
        null=True,
    )
    completed_at = fields.DatetimeField(
        null=True,
    )
    state = fields.CharEnumField(
        enum_type=OrderState,
        max_length=16,
        db_index=True,
    )
    order_type = fields.CharEnumField(
        enum_type=OrderEventType,
        max_length=64,
        db_index=True,
    )
    receiver = fields.CharEnumField(
        enum_type=ServiceName,
        max_length=64,
        db_index=True,
    )
    queue_message = fields.JSONField()
```

Maybe it's related to the connection settings; I've specified timeouts and pool sizes there:

```python
{
    "connections": {
        "default": {
            "engine": "tortoise.backends.asyncpg",
            "credentials": {
                "application_name": "not_set",
                "minsize": 10,
                "maxsize": 20,
                "timeout": 1,
                "command_timeout": 1,
                "server_settings": {
                    "statement_timeout": 1000,
                },
                "port": 5732,
                "database": "db_local",
                "host": "127.0.0.1",
                "user": "admin",
                "password": "admin",
            },
        },
    },
    "apps": {
        "models": {
            "models": [
                "src.core.incidents.models",
                "aerich.models",
            ],
            "default_connection": "default",
        },
    },
    "use_tz": True,
    "timezone": "Europe/London",
}
```
@d3QUone, I was finally able to reproduce the issue! Basically it happens because multiple tasks try to initialize the connection pool concurrently, while there should be only one pool. The fix would be to introduce a lock, or to initialize the pool in [...].

@andrewshkovskii, it's quite a coincidence that at the time of your comment I was working on implementing transaction savepoints, which seem to address your concerns. Here's the PR.
I think initializing the pool inside [...]
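For illustration, here is a generic double-checked-locking sketch of the lock idea mentioned above. The names are hypothetical and this is not Tortoise's actual code:

```python
import asyncio

class PoolHolder:
    """Guard pool creation so concurrent tasks cannot each create a pool."""

    def __init__(self, create_pool):
        self._create_pool = create_pool  # async factory, e.g. asyncpg.create_pool
        self._pool = None
        self._lock = asyncio.Lock()

    async def get_pool(self):
        if self._pool is None:  # fast path once the pool exists
            async with self._lock:
                # Re-check under the lock: another task may have created
                # the pool while we were waiting.
                if self._pool is None:
                    self._pool = await self._create_pool()
        return self._pool
```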
Describe the bug

The situation: several tasks are created via asyncio.create_task right after startup, and some of them fail with an exceptions.InterfaceError ("Pool.release() received invalid connection") error.

To Reproduce

Minimal code to reproduce the bug:

Exception:

Expected behavior

Additional context

Versions:

How to fix: I've found a workaround, but I think it should be at least documented, at most fixed. Run a "SELECT" and wait before running any business-logic code: