What's Changed
- [curator-viewer] Enable a toast instead of an alert for copy/paste, and fix the streaming toast by @CharlieJCJ in #165
- Use the Hugging Face modified pickler to fix path-dependent caching by @vutrung96 in #230
- Lower the default rpm and tpm values and allow them to be set manually by @RyanMarten in #234
- Various fixes to increase the reliability of batch processing by @vutrung96 in #231
- Graceful error handling for missing requests by @vutrung96 in #244
- OpenAIOnline: error out immediately if api_key is missing by @CharlieJCJ in #237
- Increase the default tpm/rpm values; otherwise there is no progress by @madiator in #245
- refactor: rename Prompter class to LLM by @devin-ai-integration in #242
- Rename Prompter, simplify prompt_formatter, and add a test by @madiator in #246
- Raise error on failed responses by @RyanMarten in #251
- Add a SimpleLLM interface, and update documentation. by @madiator in #255
- Cool down when hitting rate limits with online processors by @RyanMarten in #256
- Lower Gemini safety constraints by @CharlieJCJ in #259
- Raise an error on a None response message by @RyanMarten in #262
- Add metadata dict + cache verification by @GeorgiosSmyrnis in #257
- Default all online requests to a 10-minute timeout by @RyanMarten in #265
- Retry only on the "max_length" and "content_filter" finish reasons by @RyanMarten in #267
- Retry on response format failure by @RyanMarten in #266
- Add prism.js types to dev dependencies by @RyanMarten in #270
New Contributors
- @devin-ai-integration made their first contribution in #242
- @GeorgiosSmyrnis made their first contribution in #257
Full Changelog: v0.1.11...v0.1.12