2 changes: 2 additions & 0 deletions bin/console
@@ -23,6 +23,8 @@ RubyLLM.configure do |config|
config.openai_api_key = ENV.fetch('OPENAI_API_KEY', nil)
config.openrouter_api_key = ENV.fetch('OPENROUTER_API_KEY', nil)
config.perplexity_api_key = ENV.fetch('PERPLEXITY_API_KEY', nil)
config.replicate_api_key = ENV.fetch('REPLICATE_API_KEY', nil)
config.replicate_webhook_url = ENV.fetch('REPLICATE_WEBHOOK_URL', nil)
config.vertexai_location = ENV.fetch('GOOGLE_CLOUD_LOCATION', nil)
config.vertexai_project_id = ENV.fetch('GOOGLE_CLOUD_PROJECT', nil)
end
4 changes: 2 additions & 2 deletions docs/_core_features/image-generation.md
@@ -119,7 +119,7 @@ image_portrait = RubyLLM.paint(
)
```

> Not all models support size customization. If a size is specified for a model that doesn't support it (like Google Imagen), RubyLLM may log a debug message indicating the size parameter is ignored. Check the provider's documentation or the [Available Models Guide]({% link _reference/available-models.md %}) for supported sizes.
Author comment on this change:
I’ve removed the log because we’d now support passing arbitrary params. It’d be up to the user to make sure the model supports what they’re passing in.

It’s a bit of a sharp knife, but given Replicate opens the door to tons of models with tons of input signatures, allowing arbitrary params is the only way I can think of to implement it. I’m open to other ideas, though.

I’m wrapping up for today but will add this to the docs later.

> Not all models support size customization. Check the provider's documentation or the [Available Models Guide]({% link _reference/available-models.md %}) for supported sizes.
{: .note }

## Working with Generated Images
@@ -277,4 +277,4 @@ Image generation can take several seconds (typically 5-20 seconds depending on t

* [Chatting with AI Models]({% link _core_features/chat.md %}): Learn about conversational AI.
* [Embeddings]({% link _core_features/embeddings.md %}): Explore text vector representations.
* [Error Handling]({% link _advanced/error-handling.md %}): Master handling API errors.
* [Error Handling]({% link _advanced/error-handling.md %}): Master handling API errors.
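
As the author's comment above explains, keyword arguments beyond the named ones are now forwarded to the provider rather than being validated or logged. A minimal usage sketch, assuming a Replicate-hosted model whose inputs include `aspect_ratio` and `num_outputs` (the model slug and input names here are illustrative assumptions, not taken from this PR):

```ruby
# Sketch only: the model slug and input names are assumptions about what a
# Replicate model might accept; RubyLLM forwards them unvalidated.
image = RubyLLM.paint(
  'a watercolor lighthouse at dusk',
  model: 'black-forest-labs/flux-schnell', # assumed Replicate model slug
  provider: :replicate,
  assume_model_exists: true,               # skip the registry check for an unlisted model
  aspect_ratio: '16:9',                    # provider-specific input, passed through as-is
  num_outputs: 1                           # also passed through
)
```

Since nothing is checked client-side, an unsupported or misspelled input would come back as an error from the provider rather than from RubyLLM.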
4 changes: 3 additions & 1 deletion docs/_getting_started/configuration.md
@@ -59,6 +59,8 @@ RubyLLM.configure do |config|
config.mistral_api_key = ENV['MISTRAL_API_KEY']
config.perplexity_api_key = ENV['PERPLEXITY_API_KEY']
config.openrouter_api_key = ENV['OPENROUTER_API_KEY']
config.replicate_api_key = ENV['REPLICATE_API_KEY']
config.replicate_webhook_url = ENV['REPLICATE_WEBHOOK_URL']

# Local providers
config.ollama_api_base = 'http://localhost:11434/v1'
@@ -363,4 +365,4 @@ Now that you've configured RubyLLM, you're ready to:

- [Start chatting with AI models]({% link _core_features/chat.md %})
- [Work with different providers and models]({% link _advanced/models.md %})
- [Set up Rails integration]({% link _advanced/rails.md %})
- [Set up Rails integration]({% link _advanced/rails.md %})
8,546 changes: 8,424 additions & 122 deletions docs/_reference/available-models.md

Large diffs are not rendered by default.

1 change: 1 addition & 0 deletions lib/ruby_llm.rb
@@ -93,6 +93,7 @@ def logger
RubyLLM::Provider.register :openrouter, RubyLLM::Providers::OpenRouter
RubyLLM::Provider.register :perplexity, RubyLLM::Providers::Perplexity
RubyLLM::Provider.register :vertexai, RubyLLM::Providers::VertexAI
RubyLLM::Provider.register :replicate, RubyLLM::Providers::Replicate

if defined?(Rails::Railtie)
require 'ruby_llm/railtie'
3 changes: 3 additions & 0 deletions lib/ruby_llm/configuration.rb
@@ -23,6 +23,9 @@ class Configuration
:gpustack_api_base,
:gpustack_api_key,
:mistral_api_key,
:replicate_api_key,
:replicate_webhook_url,
:replicate_webhook_events_filter,
# Default models
:default_model,
:default_embedding_model,
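
For reference, the new attributes would presumably be set the same way as the other provider settings. A minimal sketch, assuming `replicate_webhook_events_filter` takes the event names Replicate's webhook API uses (that mapping isn't shown in this diff):

```ruby
RubyLLM.configure do |config|
  config.replicate_api_key = ENV.fetch('REPLICATE_API_KEY', nil)
  config.replicate_webhook_url = ENV.fetch('REPLICATE_WEBHOOK_URL', nil) # e.g. a publicly reachable HTTPS endpoint
  # Assumed to be an array of Replicate webhook event names; not confirmed by this diff.
  config.replicate_webhook_events_filter = %w[start completed]
end
```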
6 changes: 3 additions & 3 deletions lib/ruby_llm/image.rb
@@ -35,15 +35,15 @@ def self.paint(prompt, # rubocop:disable Metrics/ParameterLists
model: nil,
provider: nil,
assume_model_exists: false,
size: '1024x1024',
context: nil)
context: nil,
**model_params)
config = context&.config || RubyLLM.config
model ||= config.default_image_model
model, provider_instance = Models.resolve(model, provider: provider, assume_exists: assume_model_exists,
config: config)
model_id = model.id

provider_instance.paint(prompt, model: model_id, size:)
provider_instance.paint(prompt, model: model_id, **model_params)
end
end
end
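
Note that dropping `size: '1024x1024'` from the signature removes the default size entirely: `size` is now just another pass-through keyword collected by `**model_params`. A rough sketch of the resulting call, assuming the chosen provider still accepts a `size` input:

```ruby
# Hedged sketch: `size` is no longer defaulted by RubyLLM::Image.paint, so callers
# that rely on a specific size must pass it explicitly; it rides along in model_params.
image = RubyLLM.paint('a red balloon over the sea', size: '1792x1024')
```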