Add support for Responses API #541
base: main
Conversation
Full support to the new Responses API https://platform.openai.com/docs/api-reference/responses
I started going through this, but it doesn't really follow the pattern at all :/
Some top level comments
- Place the Contract into src/Contracts/Resources
- Drop the Transporter/Payload iterations - you can reuse that logic.
- Move the Response API into src/Resources
- Split the Response/Resource into src/Resources & src/Responses
- Introduce some heavy tests.
Thanks for reviewing the PR, I will go over it again and let you know.
Can you run some of the tooling locally? Without tests (the coverage bar has to be at 100%) and with some linting errors, this has a bit further to go.
I put them in order of complexity.
Please find attached test results; sorry, I don't have time to fix all errors. PHPStan already gave me a lot of headaches. test:unit.txt
@iBotPeaches Please check my last commit 412f35c. With the above commit I tested all models live and all are working. Thank you for your understanding.
@momostafa - thanks for your work so far. I triggered the pipeline so you can see the failures right now. If you've tapped out, I'll look for some time to take this further.
You are welcome, and thank you for your time looking into my PR. I am seriously overloaded, but since there are only 2 failures I will work on it tonight and get back to you.
Hi, test:lint now passes on my local machine; please check momostafa@29436e8
Sorry... I will check the other failures.
Getting closer! A few things here and there, but the major aspect I see missing is the tests. You produced some fixtures, but nothing that asserts any of the work you did actually works.
- You'll need a Resource test, ie like Assistants
- You'll need a Response test for all types of responses, ie like Assistants
- Finally, a simple test to assert the mocking/pass-through all works with a more full body Response test - ie w/ Assistants again
src/Testing/Responses/Fixtures/Responses/CreateStreamedResponseFixture.txt
Yeah, getting closer :) thank you for your patience and detailed input on what needs to be fixed. I will try to resolve it today.
No worries, I'm excited to get this as well. Thanks for continuing to work on it. I know this is a big new endpoint, which I'll probably migrate all my Chat endpoints towards once completed.
@momostafa, thank you for this great contribution! I am also looking forward to seeing this merged. Question: does the documentation in the description reflect the current state of this PR? In your readme file this can be found:
According to the Responses API specification,
Also, it is not mentioned what the implied role would be if it is not given. Updating the repository README file as part of this PR would be very helpful.
I had to modify OpenAI\Testing\Responses\Concerns\Fakeable, because $class = str_replace('Responses\\', 'Testing\\Responses\\Fixtures\\', static::class).'Fixture'; conflicted with the newly added Responses folder. I added a docblock explaining the modification and tested against all files. The updated readme can be found at README-RESPONSES.md. I also added a dedicated ClientFake for the Responses tests: tests/Testing/ClientFakeResponses.php.
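To illustrate the collision (a minimal sketch; the Chat and Responses class names below are assumptions based on the package's namespace layout, not copied from the PR):

```php
<?php

// The original fixture-discovery one-liner mapped a response class to its fixture:
//   $class = str_replace('Responses\\', 'Testing\\Responses\\Fixtures\\', static::class).'Fixture';

// For most resources that works:
$chat = 'OpenAI\\Responses\\Chat\\CreateResponse';
echo str_replace('Responses\\', 'Testing\\Responses\\Fixtures\\', $chat)."Fixture\n";
// OpenAI\Testing\Responses\Fixtures\Chat\CreateResponseFixture

// But the new endpoint's namespace contains 'Responses\' twice, and a global
// str_replace rewrites both segments, producing a class that does not exist:
$responses = 'OpenAI\\Responses\\Responses\\CreateResponse';
echo str_replace('Responses\\', 'Testing\\Responses\\Fixtures\\', $responses)."Fixture\n";
// OpenAI\Testing\Responses\Fixtures\Testing\Responses\Fixtures\CreateResponseFixture (wrong)

// Limiting the replacement to the first occurrence avoids the collision:
echo preg_replace('/Responses\\\\/', 'Testing\\\\Responses\\\\Fixtures\\\\', $responses, 1)."Fixture\n";
// OpenAI\Testing\Responses\Fixtures\Responses\CreateResponseFixture
```

This is only a sketch of the kind of first-occurrence-only replacement that resolves the conflict; the actual modification in the PR is documented in the docblock mentioned above.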
You are most welcome, I am glad to be able to make a small contribution to the community. Sorry for the delay since the last update; it was quite a challenge to pass the Pest tests until I finally found the fake function in the Fakeable trait. I have submitted a review and I hope it will pass this time. Thank you.
@iBotPeaches Hi, Please check last commit caf4413
Please review, and I would appreciate it if you could fix what is left so we can all benefit from using the Responses API. Thank you for your understanding and guidance through the process.
Streaming should be fully typed now. I can take over whatever you have left @momostafa if you push. I have a few hours tonight that I think can finish this up. |
Sorry for the long delay; I got a little bit sick. I just pushed.
Ah sorry you aren't feeling well. I'll take it from here. Hopefully will get this solved today/tomorrow. |
Thanks, I feel better now. Is there anything I can help with to speed up the process?
Okay, I finished all classes. Pint/PHPStan should be 100%. Just working on 100% coverage and tests now. |
Sounds great! good luck with the rest |
Everything is done 💯
Thank you for jump-starting this @momostafa - I'm happy with this now after our combined effort. Nearly 100 files touched and ready to roll.
I'm +1.
You are most welcome. Big thanks to you for doing the heavy lifting that I wouldn't have been able to accomplish without you jumping in. It's amazing to have come this far! I am excited for the merge; I have a large project awaiting...
Thank you both 🙏🏻 Been following your progress |
Just waiting on another maintainer now. With the power outages across chunks of Europe though - tough to tell who was affected. Looking for a +1 or some feedback @gehrisandro when you have a moment. |
Thanks for the feedback. I hope everyone is safe! |
What:
Adds the new Responses API feature, supporting:
Nerdy Breakdown:
This endpoint was massively larger than any other OpenAI endpoint to date, rivaling the complexity of the Threads feature. This meant as the typed objects expanded we ran into issues with PHPStan due to sealed/unsealed arrays not supported alongside massive unions breaking type checking.
This module introduced `@phpstan-type` and `@phpstan-import-type`, which allowed the base type to be defined on a child object and then imported by the parent to either use or expand. This continued all the way up to the base Response class.

Due to the name being `Responses`, it collided with some of the automatic logic for discovering fixtures. A few changes were made to allow this class to exist.

Finally, the streaming logic changed once again, with no exact common format between Chat, Threads, and now Responses. This iteration has no event; instead, the type of response is attached in the data payload. This explains the adaptation to the root parser, allowing this extraction logic to work without breaking Chat & Threads.
Authored via: @momostafa & @iBotPeaches
Fixes: #535
Laravel PR: openai-php/laravel#147