Add parameter description to all AI endpoints [20 LPT] #40
Comments
I'd like to take on this bounty. This will provide an excellent opportunity to familiarize myself with some new concepts.
I have completed this bounty, please review the pull request here
@EAsuperstar thank you for getting this going! A number of the descriptions could use updates to better match the current implementation. See below:

- For the model_id fields, can you update the example model IDs to be the default models set here?
- For the input file fields (image, audio), we only support file inputs. Numpy arrays, tensors, and latents are not supported. Can you update these to reference only a file input?
- For prompts, we only accept strings; prompt embeds, tensors, or latents cannot be passed through this field. There are also a couple of other things that could be added to this description.
@ad-astra-video @rickstaa I have made changes to the descriptions as requested and made another commit, see here.
This was merged in livepeer/ai-worker#144 and has already been paid out on-chain 🎉. All bounty transactions can be found on the AI SPE wallet. @EAsuperstar thanks a lot for improving the documentation 🚀.
Overview
Important
This can be viewed as a retroactive bounty. I have been in discussion with a beginner bounty hunter who asked for beginner tasks, and he already completed this task yesterday (see here). Unfortunately, I didn't have time to post the bounty earlier.
The current route parameters don't have descriptions set, resulting in documentation that lacks explanations for each parameter. To fix this, we are looking for an entry-level Python programmer who can research the underlying models on Hugging Face and add descriptions to each route. This is a great first issue for anyone starting as a Livepeer Bounty Hunter 🪙⛏️ and will significantly improve the user experience of applications built on the subnet. We look forward to your submission and contributions to enhancing our documentation 🚀!
Required Skillset
This bounty is aimed at beginner programmers with little to no experience.
Bounty Requirements
Add descriptions for all parameters in the T2I, I2I, Upscale, I2V, and A2T pipelines. Ensure that these descriptions appear correctly in the documentation.
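As a sketch of what "adding a description" looks like in practice (the Livepeer AI runner uses FastAPI with Pydantic request models; the field names and description texts below are illustrative, not the project's actual definitions), a `description` set via Pydantic's `Field` flows straight through to the JSON schema that FastAPI uses to generate the documentation:

```python
from pydantic import BaseModel, Field


class TextToImageParams(BaseModel):
    """Hypothetical request model; the real pipelines define their own fields."""

    model_id: str = Field(
        default="",
        description="Hugging Face model ID of the diffusion model to use.",
    )
    prompt: str = Field(
        description=(
            "Text prompt to guide image generation. Only plain strings are "
            "accepted; prompt embeds, tensors, or latents cannot be passed "
            "through this field."
        ),
    )


# FastAPI builds its OpenAPI docs from this JSON schema, so the
# descriptions surface in the generated documentation automatically.
try:
    schema = TextToImageParams.model_json_schema()  # Pydantic v2
except AttributeError:
    schema = TextToImageParams.schema()  # Pydantic v1

print(schema["properties"]["prompt"]["description"])
```

Once a field carries a description like this, regenerating the docs is enough for it to appear alongside the parameter.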
Implementation Tips
All but two parameters are directly forwarded to the underlying Hugging Face diffusers pipelines. As a result, you can find the descriptions of the route parameters in the Hugging Face documentation. Here’s how to locate the relevant documentation:
Check which Hugging Face pipeline is assigned to the `self.ldfm` attribute in the relevant pipeline file (e.g., the T2I pipeline), then look up that pipeline's documentation on Hugging Face.
Note
Some Livepeer AI pipelines use multiple Hugging Face pipelines, so please ensure you check all relevant pipelines.
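Because the route parameters are forwarded to the underlying pipeline's `__call__` method, their meanings can usually be read straight from that method's docstring in the diffusers documentation. A minimal stdlib sketch of pulling one parameter's doc lines out of a Google-style docstring (`FakePipeline` is a stand-in here; with diffusers installed you would inspect e.g. a real pipeline's `__call__` instead):

```python
import inspect
import re


class FakePipeline:
    """Stand-in for a Hugging Face diffusers pipeline."""

    def __call__(self, prompt, num_inference_steps=50, guidance_scale=7.5):
        """Run the pipeline.

        Args:
            prompt (str): The prompt to guide image generation.
            num_inference_steps (int): Number of denoising steps. More steps
                usually mean higher quality at the cost of slower inference.
            guidance_scale (float): How strongly the image should conform to
                the prompt.
        """


def param_doc(func, name):
    """Return the doc text for one parameter from a Google-style docstring."""
    doc = inspect.getdoc(func) or ""
    # Capture from "name (" up to the next "word (" entry or end of docstring.
    match = re.search(
        rf"{name} \(.*?\):\s*(.*?)(?=\n\s*\w+ \(|\Z)", doc, re.S
    )
    return " ".join(match.group(1).split()) if match else None


print(param_doc(FakePipeline.__call__, "num_inference_steps"))
```

The extracted text can then be adapted into the `description` string for the corresponding route parameter.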
How to Apply
Thank you for your interest in contributing to our project 💛!
Warning
Please wait for the issue to be assigned to you before starting work. To prevent duplication of effort, submissions for unassigned issues will not be accepted.