Overview
The NHL GPT model currently hallucinates data when processing large JSON data arrays. This significantly impacts the accuracy and reliability of the model's output, especially in scenarios involving complex or extensive datasets. Reported by a user in issue #1.
Problem Description
When the NHL GPT model processes large JSON data arrays, it tends to generate incorrect or fabricated data that does not accurately represent the original dataset. The problem grows more pronounced as the size and complexity of the JSON data increase.
Problem Analysis
Token window limitations: large JSON responses can exceed the model's context window, so parts of the data are truncated or lost (see the token-count sketch after this list).
The Python analysis interpreter sometimes assumes data rather than reading it, especially for larger lists.
We do not control the NHL API, so we cannot modify the request parameters in hopes of reducing the response length.
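As a rough illustration of the token-window constraint, the snippet below estimates how many tokens a raw NHL API game-log response would consume. The endpoint path, the player ID, and the `cl100k_base` encoding are assumptions for this sketch, not part of this project's code:

```python
import json

import requests
import tiktoken  # pip install tiktoken

# Hypothetical example: fetch a player's game log from the public NHL web API.
# Player ID 8478402 and season 20232024 are illustrative values only.
url = "https://api-web.nhle.com/v1/player/8478402/game-log/20232024/2"
payload = requests.get(url, timeout=10).json()

# Estimate how many tokens the raw JSON would occupy in the model's context.
encoding = tiktoken.get_encoding("cl100k_base")
token_count = len(encoding.encode(json.dumps(payload)))
print(f"Raw game-log JSON: ~{token_count} tokens")
```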
Expected Behavior
Ideally, the model should accurately interpret and utilize large JSON data arrays without altering, omitting, or fabricating information. The ability to handle complex datasets is essential for maintaining the integrity and usefulness of the model in various NHL data analysis scenarios.
Suggested Enhancement
To address this issue, the following enhancements are proposed:
Optimize Data Parsing: Update the GPT configuration instructions to emphasize using only the necessary data fields while parsing responses.
Create an API to Transform Data: Build a custom API that transforms NHL API responses, and update the GPT Actions to use these more specific endpoints. This would act as a buffer or transformation layer that restructures and trims the data (see the sketch below).
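As a sketch of what such a transformation layer could look like, the FastAPI service below proxies the NHL game-log endpoint and returns only a handful of fields. The framework choice, route name, and response field names (`gameLog`, `gameDate`, `opponentAbbrev`, etc.) are assumptions for illustration, not the project's actual design:

```python
import requests
from fastapi import FastAPI, HTTPException

app = FastAPI()
NHL_API = "https://api-web.nhle.com/v1"

@app.get("/game-log/{player_id}/{season}/{game_type}")
def slim_game_log(player_id: int, season: str, game_type: int) -> list[dict]:
    """Fetch a player's game log from the NHL API and strip it to essentials."""
    resp = requests.get(
        f"{NHL_API}/player/{player_id}/game-log/{season}/{game_type}", timeout=10
    )
    if resp.status_code != 200:
        raise HTTPException(status_code=resp.status_code, detail="NHL API error")

    # Keep only the fields the GPT actually needs, dropping the bulk of the
    # payload so the response fits comfortably in the model's context window.
    return [
        {
            "date": game.get("gameDate"),
            "opponent": game.get("opponentAbbrev"),
            "goals": game.get("goals"),
            "assists": game.get("assists"),
            "points": game.get("points"),
        }
        for game in resp.json().get("gameLog", [])
    ]
```

Pointing the GPT Actions at endpoints like this, rather than at the raw NHL API, keeps each response small and specific, which reduces the pressure on the token window that drives the hallucination behavior.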
Adds a BloodLineAlpha API wrapper for the game-logs endpoint and removes the direct NHL Web API endpoint from the GPT Actions. This improves the accuracy of game-log responses and results.
Part of #2
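A quick way to sanity-check the wrapper's effect is to compare payload sizes between the raw NHL endpoint and the wrapper. The wrapper URL below is a placeholder for the deployed BloodLineAlpha host, which is not specified here:

```python
import requests

PLAYER_ID, SEASON, GAME_TYPE = 8478402, "20232024", 2  # illustrative values

raw = requests.get(
    f"https://api-web.nhle.com/v1/player/{PLAYER_ID}/game-log/{SEASON}/{GAME_TYPE}",
    timeout=10,
)
# Placeholder URL: substitute the deployed wrapper's actual base URL.
slim = requests.get(
    f"http://localhost:8000/game-log/{PLAYER_ID}/{SEASON}/{GAME_TYPE}", timeout=10
)

print(f"raw NHL API response: {len(raw.content):>8} bytes")
print(f"wrapper response:     {len(slim.content):>8} bytes")
```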