AIP-72: Handling task retries in task SDK + execution API #45106

Conversation

@amoghrajesh (Contributor) commented on Dec 20, 2024

closes: #44351

"Retries" are majorly handled in airflow 2.x in here: https://github.com/apache/airflow/blob/main/airflow/models/taskinstance.py#L3082-L3101.

The idea here is that in case a task is retry able, defined by https://github.com/apache/airflow/blob/main/airflow/models/taskinstance.py#L1054-L1073, the task is marked as "up_for_retry". Rest of the part is taken care by the scheduler loop normally if the ti state is marked correctly.
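For reference, the eligibility check roughly boils down to the following (a simplified sketch of the idea, not the actual taskinstance.py code; the real TaskInstance.is_eligible_to_retry also considers states such as RESTARTING, and the standalone function signature here is purely illustrative):

def is_eligible_to_retry(task_retries: int | None, try_number: int, max_tries: int) -> bool:
    # A TI may retry only if the task defines retries and it still has tries left
    return bool(task_retries) and try_number <= max_tries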

Coming to the Task SDK, we cannot perform validations such as https://github.com/apache/airflow/blob/main/airflow/models/taskinstance.py#L1054-L1073 on the task runner / SDK side, because we do not have (and should not have) access to the database.

[Task instance state change diagram]

Using the state change diagram above, we can handle the retry state as part of handling the failed state. Instead of adding a dedicated API handler and state for "up_for_retry", we can deal with it while handling failures, which we already do by calling the https://github.com/apache/airflow/blob/main/airflow/api_fastapi/execution_api/routes/task_instances.py#L160-L212 API endpoint. If we send enough data to that handler in the execution API, we should be able to handle the retry cases well.
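For illustration, the extra information could look roughly like this (a hypothetical sketch only; the model name, field names, and the presence of a task_retries field are assumptions, not the final schema of this PR):

from datetime import datetime, timezone

from pydantic import BaseModel


class TerminalStateWithRetries(BaseModel):
    # Hypothetical stand-in for the real terminal-state payload
    state: str                       # e.g. "failed"
    end_date: datetime
    task_retries: int | None = None  # lets the API decide between failed and up_for_retry


payload = TerminalStateWithRetries(
    state="failed",
    end_date=datetime.now(tz=timezone.utc),
    task_retries=3,
)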

What needs to be done to port this to the Task SDK?

  1. Defining "try_number" and "max_tries" for task instances ---> not needed, because this is already handled on the scheduler side at parsing time rather than at execution time. It happens here https://github.com/apache/airflow/blob/main/airflow/models/dagrun.py#L1445-L1471 when a DAG run is created: the TI is initialised with max_tries (https://github.com/apache/airflow/blob/main/airflow/models/taskinstance.py#L1809) and try_number (https://github.com/apache/airflow/blob/main/airflow/models/taskinstance.py#L1808).

  2. We need a mechanism for the task runner to signal that retries are defined. The flow is:
    the task runner informs the supervisor, while failing, that it needs to retry -> the supervisor sends a normal request to the client (but with task_retries set) -> the client sends a normal API request (TITerminalStatePayload) to the execution API, but with task_retries included

  3. At the execution API, we receive the request and check whether the TI is eligible for retry; if it is, we mark it as "up_for_retry" and the scheduler takes care of the rest (see the sketch after this list).
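A minimal sketch of that decision on the execution API side (illustrative only, not the endpoint code from this PR; payload and ti here stand in for the real request model and the TI row loaded from the DB):

from airflow.utils.state import TaskInstanceState


def resolve_terminal_state(payload, ti) -> TaskInstanceState:
    # Illustrative: decide between "failed" and "up_for_retry" for a failing TI
    eligible = bool(payload.task_retries) and ti.try_number <= ti.max_tries
    if payload.state == TaskInstanceState.FAILED and eligible:
        # The scheduler loop picks up "up_for_retry" TIs and reschedules them
        return TaskInstanceState.UP_FOR_RETRY
    return TaskInstanceState(payload.state)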

Testing results

Right now the PR only handles BaseException; it will be extended to all other eligible TI exceptions in follow-ups.

Scenario 1: With retries = 3 defined.

DAG:

from airflow import DAG
from airflow.providers.standard.operators.python import PythonOperator


def print_hello():
    # Intentionally raise ZeroDivisionError so the task fails and retry handling kicks in
    1 // 0


with DAG(
    dag_id="abcd",
    schedule=None,
    catchup=False,
    tags=["demo"],
) as dag:
    hello_task = PythonOperator(
        task_id="say_hello",
        python_callable=print_hello,
        retries=3,
    )

Rightly marked as "up_for_retry"

TI details with max_tries

Try number in grid view

Scenario 2: With retries not defined.

DAG:

from airflow import DAG
from airflow.providers.standard.operators.python import PythonOperator


def print_hello():
    # Intentionally raise ZeroDivisionError so the task fails; no retries are defined
    1 // 0


with DAG(
    dag_id="abcd",
    schedule=None,
    catchup=False,
    tags=["demo"],
) as dag:
    hello_task = PythonOperator(
        task_id="say_hello",
        python_callable=print_hello,
    )

Rightly marked as "failed"

TI details with max_tries = 0:

Try number in grid view

============

Pending:

  • UT coverage for the execution API for various scenarios
  • UT coverage for the supervisor, task_runner, and client
  • Extending to other scenarios where a retry is needed, e.g. AirflowTaskTimeout / AirflowException


@amoghrajesh amoghrajesh requested review from kaxil and ashb and removed request for ephraimbuddy and pierrejeambrun December 20, 2024 10:26
@amoghrajesh amoghrajesh added the area:task-execution-interface-aip72 AIP-72: Task Execution Interface (TEI) aka Task SDK label Dec 20, 2024
@amoghrajesh (Contributor, Author) commented:

If we agree on the approach, I will work on the tests.
