Use GenAI package for google (#17939)
ex0ns authored Mar 8, 2025
1 parent adfe31d commit 48e5010
Showing 19 changed files with 2,321 additions and 0 deletions.
2 changes: 2 additions & 0 deletions docs/docs/examples/llm/gemini.ipynb
@@ -18,6 +18,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"**NOTE:** Gemini has largely been replaced by Google GenAI. Visit the [Google GenAI page](https://docs.llamaindex.ai/en/stable/examples/llm/google_genai/) for the latest examples and documentation.\n",
"\n",
"In this notebook, we show how to use the Gemini text models from Google in LlamaIndex. Check out the [Gemini site](https://ai.google.dev/) or the [announcement](https://deepmind.google/technologies/gemini/).\n",
"\n",
"If you're opening this Notebook on colab, you will need to install LlamaIndex 🦙 and the Gemini Python SDK."
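The install step that the note above refers to would look roughly like the cell below; the package names are assumptions based on the Gemini integration and do not appear in this hunk:

```bash
# Colab install sketch (assumed package names, not shown in this diff)
%pip install -q llama-index llama-index-llms-gemini google-generativeai
```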
693 changes: 693 additions & 0 deletions docs/docs/examples/llm/google_genai.ipynb

Large diffs are not rendered by default.

3 changes: 3 additions & 0 deletions docs/docs/examples/llm/vertex.ipynb
@@ -6,6 +6,9 @@
"metadata": {},
"source": [
"# Vertex AI\n",
"\n",
"**NOTE:** Vertex has largely been replaced by Google GenAI, which supports the same functionality from Vertex using the `google-genai` package. Visit the [Google GenAI page](https://docs.llamaindex.ai/en/stable/examples/llm/google_genai/) for the latest examples and documentation.\n",
"\n",
"## Installing Vertex AI \n",
"To Install Vertex AI you need to follow the following steps\n",
"* Install Vertex Cloud SDK (https://googleapis.dev/python/aiplatform/latest/index.html)\n",
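Since the note above says the `google-genai` package covers the Vertex path as well, here is a hedged sketch of pointing the new integration at Vertex AI; the `vertexai_config` keyword and its fields are assumptions, not something shown in this diff, so check the linked Google GenAI page for the actual constructor signature:

```python
from llama_index.llms.google_genai import GoogleGenAI

# Hypothetical keyword arguments -- the underlying google-genai SDK accepts
# a Vertex project/location pair, but the wrapper's exact parameters may differ.
llm = GoogleGenAI(
    model="gemini-2.0-flash",
    vertexai_config={"project": "my-gcp-project", "location": "us-central1"},
)
print(llm.complete("Hello from Vertex"))
```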
@@ -1,5 +1,7 @@
# LlamaIndex Llms Integration: Gemini

**NOTE:** Gemini has largely been replaced by Google GenAI. Visit the [Google GenAI page](https://docs.llamaindex.ai/en/stable/examples/llm/google_genai/) for the latest examples and documentation.

## Installation

1. Install the required Python packages:
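For readers following the note above, a minimal migration sketch from the deprecated class to the package this commit adds; the old import path is assumed from the existing Gemini integration and is not part of this diff:

```python
# Old (deprecated) integration -- assumed import path, not shown in this diff
# from llama_index.llms.gemini import Gemini
# llm = Gemini()

# New integration introduced by this commit
from llama_index.llms.google_genai import GoogleGenAI

llm = GoogleGenAI()  # reads GOOGLE_API_KEY from the environment
print(llm.complete("Write a haiku about llamas"))
```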
153 changes: 153 additions & 0 deletions llama-index-integrations/llms/llama-index-llms-google-genai/.gitignore
@@ -0,0 +1,153 @@
llama_index/_static
.DS_Store
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
bin/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
etc/
include/
lib/
lib64/
parts/
sdist/
share/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
.ruff_cache

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints
notebooks/

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
pyvenv.cfg

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# Jetbrains
.idea
modules/
*.swp

# VsCode
.vscode

# pipenv
Pipfile
Pipfile.lock

# pyright
pyrightconfig.json
@@ -0,0 +1 @@
poetry_requirements(name="poetry", module_mapping={"google-genai": ["google"]})
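For context, `module_mapping` tells Pants dependency inference that the `google-genai` distribution provides the `google` import namespace. A small sketch of the import pattern it resolves; the client construction is illustrative only and is not taken from this diff:

```python
# The google-genai distribution is imported under the `google` namespace,
# which is why the BUILD file maps "google-genai" -> ["google"].
from google import genai

# Illustrative client construction (expects GOOGLE_API_KEY in the environment).
client = genai.Client()
```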
@@ -0,0 +1 @@
# CHANGELOG — llama-index-llms-google-genai
@@ -0,0 +1,17 @@
GIT_ROOT ?= $(shell git rev-parse --show-toplevel)

help: ## Show all Makefile targets.
@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[33m%-30s\033[0m %s\n", $$1, $$2}'

format: ## Run code autoformatters (black).
pre-commit install
git ls-files | xargs pre-commit run black --files

lint: ## Run linters: pre-commit (black, ruff, codespell) and mypy
pre-commit install && git ls-files | xargs pre-commit run --show-diff-on-failure --files

test: ## Run tests via pytest.
pytest tests

watch-docs: ## Build and watch documentation.
sphinx-autobuild docs/ docs/_build/html --open-browser --watch $(GIT_ROOT)/llama_index/
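The targets above are the standard per-integration developer entry points; a typical invocation from this package's directory (assumes `pre-commit` and `pytest` are installed):

```bash
make help     # list all documented targets
make format   # install pre-commit hooks and run black on tracked files
make lint     # run the full pre-commit lint suite with diffs on failure
make test     # run pytest against the tests/ directory
```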
112 changes: 112 additions & 0 deletions llama-index-integrations/llms/llama-index-llms-google-genai/README.md
@@ -0,0 +1,112 @@
# LlamaIndex Llms Integration: Google GenAI

## Installation

1. Install the required Python packages:

```bash
%pip install llama-index-llms-google-genai
!pip install -q llama-index google-genai
```

2. Set the Google API key as an environment variable:

```bash
%env GOOGLE_API_KEY=your_api_key_here
```

## Usage

### Basic Content Generation

To generate a poem using the Gemini model, use the following code:

```python
from llama_index.llms.google_genai import GoogleGenAI

resp = GoogleGenAI().complete("Write a poem about a magic backpack")
print(resp)
```

### Chat with Messages

To simulate a conversation, send a list of messages:

```python
from llama_index.core.llms import ChatMessage
from llama_index.llms.google_genai import GoogleGenAI

messages = [
ChatMessage(role="user", content="Hello friend!"),
ChatMessage(role="assistant", content="Yarr what is shakin' matey?"),
ChatMessage(
role="user", content="Help me decide what to have for dinner."
),
]
resp = GoogleGenAI().chat(messages)
print(resp)
```

### Streaming Responses

To stream content responses in real-time:

```python
from llama_index.llms.google_genai import GoogleGenAI

llm = GoogleGenAI()
resp = llm.stream_complete(
"The story of Sourcrust, the bread creature, is really interesting. It all started when..."
)
for r in resp:
print(r.text, end="")
```

To stream chat responses:

```python
from llama_index.llms.google_genai import GoogleGenAI
from llama_index.core.llms import ChatMessage

llm = GoogleGenAI()
messages = [
ChatMessage(role="user", content="Hello friend!"),
ChatMessage(role="assistant", content="Yarr what is shakin' matey?"),
ChatMessage(
role="user", content="Help me decide what to have for dinner."
),
]
resp = llm.stream_chat(messages)
for r in resp:
    print(r.delta, end="")
```

### Specific Model Usage

To use a specific model, you can configure it like this:

```python
from llama_index.llms.google_genai import GoogleGenAI

llm = GoogleGenAI(model="gemini-2.0-flash")
resp = llm.complete("Write a short, but joyous, ode to LlamaIndex")
print(resp)
```

### Asynchronous API

To use the asynchronous completion API:

```python
from llama_index.llms.google_genai import GoogleGenAI

llm = GoogleGenAI()
resp = await llm.acomplete("Llamas are famous for ")
print(resp)
```

For asynchronous streaming of responses:

```python
resp = await llm.astream_complete("Llamas are famous for ")
async for chunk in resp:
print(chunk.text, end="")
```
@@ -0,0 +1,6 @@
python_sources()

resource(
name="py_typed",
source="py.typed",
)
@@ -0,0 +1,3 @@
from llama_index.llms.google_genai.base import GoogleGenAI

__all__ = ["GoogleGenAI"]