
Merge pull request xtekky#2313 from kqlio67/main
Major Provider Updates and Documentation Restructuring
xtekky authored Nov 15, 2024
2 parents 5ec0302 + b377931 commit f65ebd9
Showing 98 changed files with 1,611 additions and 3,597 deletions.
38 changes: 24 additions & 14 deletions README.md
@@ -1,4 +1,5 @@


![248433934-7886223b-c1d1-4260-82aa-da5741f303bb](https://github.com/xtekky/gpt4free/assets/98614666/ea012c87-76e0-496a-8ac4-e2de090cc6c9)

<a href="https://trendshift.io/repositories/1692" target="_blank"><img src="https://trendshift.io/api/badge/repositories/1692" alt="xtekky%2Fgpt4free | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
@@ -28,7 +29,7 @@ docker pull hlohaus789/g4f
```

## 🆕 What's New
- **For comprehensive details on new features and updates, please refer to our [Releases](https://github.com/xtekky/gpt4free/releases) page**
- **For comprehensive details on new features and updates, please refer to our** [Releases](https://github.com/xtekky/gpt4free/releases) **page**
- **Installation Guide for Windows (.exe):** 💻 [Installation Guide for Windows (.exe)](#installation-guide-for-windows-exe)
- **Join our Telegram Channel:** 📨 [telegram.me/g4f_channel](https://telegram.me/g4f_channel)
- **Join our Discord Group:** 💬 [discord.gg/XfybzPXPH5](https://discord.gg/5E39JUWUFa)
@@ -70,6 +71,13 @@ Is your site on this repository and you want to take it down? Send an email to t
- [Interference API](#interference-api)
- [Local Inference](docs/local.md)
- [Configuration](#configuration)
- [Full Documentation for Python API](#full-documentation-for-python-api)
- **New:**
- [Async Client API from G4F](docs/async_client.md)
- [Client API like the OpenAI Python library](docs/client.md)
- **Legacy**
- [Legacy API with python modules](docs/legacy/legacy.md)
- [Legacy AsyncClient API from G4F](docs/legacy/legacy_async_client.md)
- [🚀 Providers and Models](docs/providers-and-models.md)
- [🔗 Powered by gpt4free](#-powered-by-gpt4free)
- [🤝 Contribute](#-contribute)
@@ -166,7 +174,7 @@ from g4f.client import Client

client = Client()
response = client.chat.completions.create(
model="gpt-3.5-turbo",
model="gpt-4o-mini",
messages=[{"role": "user", "content": "Hello"}],
# Add any other necessary parameters
)
@@ -183,7 +191,7 @@ from g4f.client import Client

client = Client()
response = client.images.generate(
model="dall-e-3",
model="flux",
prompt="a white siamese cat",
# Add any other necessary parameters
)
@@ -194,10 +202,14 @@ print(f"Generated image URL: {image_url}")

[![Image with cat](/docs/cat.jpeg)](docs/client.md)

**Full Documentation for Python API**
- **Async Client API from G4F:** [/docs/async_client](docs/async_client.md)
- **Client API like the OpenAI Python library:** [/docs/client](docs/client.md)
- **Legacy API with python modules:** [/docs/legacy](docs/legacy.md)
#### **Full Documentation for Python API**
- **New:**
- **Async Client API from G4F:** [/docs/async_client](docs/async_client.md)
- **Client API like the OpenAI Python library:** [/docs/client](docs/client.md)

- **Legacy:**
- **Legacy API with python modules:** [/docs/legacy/legacy](docs/legacy/legacy.md)
- **Legacy AsyncClient API from G4F:** [/docs/legacy/legacy_async_client](docs/legacy/legacy_async_client.md)

#### Web UI
**To start the web interface, run the following code in Python:**
@@ -290,20 +302,18 @@ To utilize the OpenaiChat provider, a .har file is required from https://chatgpt

- Place the exported .har file in the `./har_and_cookies` directory if you are using Docker. Alternatively, if you are using Python from a terminal, you can store it in a `./har_and_cookies` directory within your current working directory.

Note: Ensure that your .har file is stored securely, as it may contain sensitive information.
> **Note:** Ensure that your .har file is stored securely, as it may contain sensitive information.
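The placement step above can be sketched in Python as follows; the directory name comes from the docs, while moving the exported `.har` file itself is left to you (any filename works):

```python
import os

# Create the cookies directory in the current working directory
# (name taken from the instructions above). The exported .har file
# is then copied into it by hand or with shutil.move().
os.makedirs("./har_and_cookies", exist_ok=True)
print(os.path.isdir("./har_and_cookies"))  # True
```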
#### Using Proxy

If you want to hide or change your IP address for the providers, you can set a proxy globally via an environment variable:

- On macOS and Linux:

- **On macOS and Linux:**
```bash
export G4F_PROXY="http://host:port"
```

- On Windows:

- **On Windows:**
```bash
set G4F_PROXY=http://host:port
```
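At runtime the `G4F_PROXY` variable set above is read from the environment. As a minimal sketch (the helper name here is hypothetical, not part of g4f), such a value can be turned into a requests-style proxies mapping:

```python
import os

def proxies_from_env():
    # Read the proxy URL that the docs above export as G4F_PROXY.
    proxy = os.environ.get("G4F_PROXY")
    if not proxy:
        return {}
    # Route both HTTP and HTTPS traffic through the same proxy.
    return {"http": proxy, "https": proxy}

os.environ["G4F_PROXY"] = "http://host:port"
print(proxies_from_env())
# {'http': 'http://host:port', 'https': 'http://host:port'}
```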
@@ -770,10 +780,10 @@ set G4F_PROXY=http://host:port
We welcome contributions from the community. Whether you're adding new providers or features, or simply fixing typos and making small improvements, your input is valued. Creating a pull request is all it takes – our co-pilot will handle the code review process. Once all changes have been addressed, we'll merge the pull request into the main branch and release the updates at a later time.

###### Guide: How do I create a new Provider?
- Read: [Create Provider Guide](docs/guides/create_provider.md)
- **Read:** [Create Provider Guide](docs/guides/create_provider.md)

###### Guide: How can AI help me with writing code?
- Read: [AI Assistance Guide](docs/guides/help_me.md)
- **Read:** [AI Assistance Guide](docs/guides/help_me.md)

## 🙌 Contributors
A list of all contributors is available [here](https://github.com/xtekky/gpt4free/graphs/contributors)
9 changes: 5 additions & 4 deletions docs/async_client.md
@@ -57,7 +57,7 @@ client = Client(
**Here’s an improved example of creating chat completions:**
```python
response = await async_client.chat.completions.create(
model="gpt-3.5-turbo",
model="gpt-4o-mini",
messages=[
{
"role": "user",
@@ -99,7 +99,7 @@ async def main():
client = Client()

response = await client.chat.completions.async_create(
model="gpt-3.5-turbo",
model="gpt-4o-mini",
messages=[
{
"role": "user",
@@ -230,7 +230,7 @@ async def main():
client = Client()

task1 = client.chat.completions.async_create(
model="gpt-3.5-turbo",
model="gpt-4o-mini",
messages=[
{
"role": "user",
@@ -262,6 +262,7 @@ The G4F AsyncClient supports a wide range of AI models and providers, allowing y

### Models
- GPT-3.5-Turbo
- GPT-4o-Mini
- GPT-4
- DALL-E 3
- Gemini
@@ -306,7 +307,7 @@ Implementing proper error handling and following best practices is crucial when
```python
try:
response = await client.chat.completions.async_create(
model="gpt-3.5-turbo",
model="gpt-4o-mini",
messages=[
{
"role": "user",
4 changes: 2 additions & 2 deletions docs/client.md
@@ -62,7 +62,7 @@ client = Client(
**Here’s an improved example of creating chat completions:**
```python
response = client.chat.completions.create(
model="gpt-3.5-turbo",
model="gpt-4o-mini",
messages=[
{
"role": "user",
@@ -104,7 +104,7 @@ from g4f.client import Client
client = Client()

response = client.chat.completions.create(
model="gpt-3.5-turbo",
model="gpt-4o-mini",
messages=[
{
"role": "user",
2 changes: 1 addition & 1 deletion docs/docker.md
@@ -71,7 +71,7 @@ import requests

url = "http://localhost:1337/v1/chat/completions"
body = {
"model": "gpt-3.5-turbo",
"model": "gpt-4o-mini",
"stream": False,
"messages": [
{"role": "assistant", "content": "What can you do?"}
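The docs/docker.md hunk above shows only the start of the request body. Reassembled in full (the trailing fields are inferred from the matching interference-API example later in this diff), the payload serializes cleanly to JSON:

```python
import json

# Request body from docs/docker.md; fields beyond the visible hunk are
# assumptions based on the interference-API example elsewhere in this diff.
body = {
    "model": "gpt-4o-mini",
    "stream": False,
    "messages": [
        {"role": "assistant", "content": "What can you do?"}
    ],
}

# This body would be POSTed to http://localhost:1337/v1/chat/completions;
# here we only check that it round-trips through JSON intact.
payload = json.dumps(body)
print(json.loads(payload)["model"])  # gpt-4o-mini
```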
2 changes: 1 addition & 1 deletion docs/git.md
@@ -95,7 +95,7 @@ from g4f.client import Client
client = Client()

response = client.chat.completions.create(
model="gpt-3.5-turbo",
model="gpt-4o-mini",
messages=[
{
"role": "user",
6 changes: 3 additions & 3 deletions docs/interference-api.md
@@ -68,7 +68,7 @@ curl -X POST "http://localhost:1337/v1/chat/completions" \
"content": "Hello"
}
],
"model": "gpt-3.5-turbo"
"model": "gpt-4o-mini"
}'
```

@@ -108,7 +108,7 @@ client = OpenAI(
)

response = client.chat.completions.create(
model="gpt-3.5-turbo",
model="gpt-4o-mini",
messages=[{"role": "user", "content": "Write a poem about a tree"}],
stream=True,
)
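The streaming hunk above cuts off before the loop that consumes the response. As a self-contained sketch, streamed chunks are typically accumulated like this; the fake chunks below stand in for the server response, and the chunk shape assumes the OpenAI Python library's streaming interface:

```python
from types import SimpleNamespace

def collect_stream(chunks):
    # Concatenate the incremental delta fragments into the full reply.
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:
            parts.append(delta)
    return "".join(parts)

# Fake chunks standing in for what the OpenAI-compatible endpoint streams back.
fake_chunks = [
    SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=text))])
    for text in ["A tree ", "stands ", "tall."]
]
print(collect_stream(fake_chunks))  # A tree stands tall.
```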
@@ -135,7 +135,7 @@ import requests
url = "http://localhost:1337/v1/chat/completions"

body = {
"model": "gpt-3.5-turbo",
"model": "gpt-4o-mini",
"stream": False,
"messages": [
{"role": "assistant", "content": "What can you do?"}
File renamed without changes.
