Skip to content

Conversation

@obostjancic
Copy link
Member

Fixes: SENTRY-5BD5

  • Removes sampling mode from the conversation query
  • Adds extra logging
  • Temporarily narrows down date time params

@obostjancic obostjancic requested a review from a team as a code owner October 24, 2025 09:42
@github-actions github-actions bot added the Scope: Backend Automatically applied to PRs that change backend components label Oct 24, 2025
Comment on lines 126 to 130
limit=limit,
referrer=Referrer.API_AI_CONVERSATIONS.value,
config=SearchResolverConfig(auto_fields=True),
sampling_mode="HIGHEST_ACCURACY",
sampling_mode=None,
)
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Bug: First query uses sampling_mode=None instead of HIGHEST_ACCURACY for AI conversation ID discovery.
Severity: CRITICAL | Confidence: 1.00

🔍 Detailed Analysis

The initial query for discovering AI conversation IDs uses sampling_mode=None. This violates explicit repository requirements stating that AI conversation data queries must use sampling_mode="HIGHEST_ACCURACY" to ensure complete and non-extrapolated results. This leads to silently returning incomplete conversation data.

💡 Suggested Fix

Change sampling_mode=None to sampling_mode="HIGHEST_ACCURACY" for the initial conversation ID discovery query to comply with data completeness requirements.

🤖 Prompt for AI Agent
Fix this bug. In src/sentry/api/endpoints/organization_ai_conversations.py at lines
126-130: The initial query for discovering AI conversation IDs uses
`sampling_mode=None`. This violates explicit repository requirements stating that AI
conversation data queries must use `sampling_mode="HIGHEST_ACCURACY"` to ensure complete
and non-extrapolated results. This leads to silently returning incomplete conversation
data.

Did we get this right? 👍 / 👎 to inform future reviews.

Copy link
Member Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The initial query for discovering AI conversation IDs uses sampling_mode=None. This violates explicit repository requirements stating that AI conversation data queries must use sampling_mode="HIGHEST_ACCURACY" to ensure complete and non-extrapolated results. This leads to silently returning incomplete conversation data.

While it is true that with sampling_mode=None some results might be sampled out, the "repository requirements" seem to be completely hallucinated

cursor[bot]

This comment was marked as outdated.

@codecov
Copy link

codecov bot commented Oct 24, 2025

❌ 1 Tests Failed:

Tests completed Failed Passed Skipped
41361 1 41360 250
View the top 1 failed test(s) by shortest run time
tests.sentry.api.endpoints.test_organization_ai_conversations.OrganizationAIConversationsEndpointTest::test_complete_conversation_data_across_time_range
Stack Traces | 4.78s run time
#x1B[1m#x1B[.../api/endpoints/test_organization_ai_conversations.py#x1B[0m:473: in test_complete_conversation_data_across_time_range
    assert conversation["llmCalls"] == 2
#x1B[1m#x1B[31mE   assert 1 == 2#x1B[0m

To view more test analytics, go to the Test Analytics Dashboard
📋 Got 3 mins? Take this short survey to help us improve Test Analytics.

from sentry.search.eap.types import SearchResolverConfig
from sentry.snuba.referrer import Referrer
from sentry.snuba.spans_rpc import Spans
from sentry.utils import json, logger
Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

this should be imported from python std lib, right?

cursor[bot]

This comment was marked as outdated.

logger.info(
"[ai-conversations] Collecting traces and flows",
extra={"all_spans_results": json.dumps(all_spans_results)},
)
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Bug: Logging Fails with Non-Serializable Data

The json.dumps calls in logging statements (lines 181, 255, 262) operate on raw Snuba query results. These results can include non-JSON-serializable types like Decimal, which causes a TypeError and crashes the API endpoint.

Fix in Cursor Fix in Web

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

Scope: Backend Automatically applied to PRs that change backend components

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants