diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 2ccd080d189..eefe1da79a1 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -2,7 +2,7 @@
### Please, follow these steps to contribute:
1. Reverse engineer a website from this list: [sites-to-reverse](https://github.com/xtekky/gpt4free/issues/40)
-2. Add it to [./testing](https://github.com/xtekky/gpt4free/tree/main/testing)
+2. Add it to [./etc/unittest/](https://github.com/xtekky/gpt4free/tree/main/etc/unittest/)
3. Refactor it and add it to [./g4f](https://github.com/xtekky/gpt4free/tree/main/g4f)
### We will be grateful to see you as a contributor!
\ No newline at end of file
diff --git a/README.md b/README.md
index 730c87954ed..0ca1578fca3 100644
--- a/README.md
+++ b/README.md
@@ -163,6 +163,7 @@ How do I load the project using git and installing the project requirements?
Read this tutorial and follow it step by step: [/docs/git](docs/git.md)
##### Install using Docker:
+
How do I build and run the Docker Compose image from source?
Use docker-compose: [/docs/docker](docs/docker.md)
@@ -212,7 +213,9 @@ print(f"Generated image URL: {image_url}")
- **Legacy API with python modules:** [/docs/legacy](docs/legacy.md)
#### Web UI
+
**To start the web interface, run the following code in Python:**
+
```python
from g4f.gui import run_gui
@@ -223,10 +226,15 @@ or execute the following command:
python -m g4f.cli gui -port 8080 -debug
```
-#### Interference API
-You can use the Interference API to serve other OpenAI integrations with G4F.
-**See docs:** [/docs/interference](docs/interference-api.md)
-**Access with:** http://localhost:1337/v1
+### Interference API
+
+The **Interference API** enables seamless integration with OpenAI-compatible clients and tools through G4F: existing OpenAI integrations can simply point at a local G4F endpoint.
+
+- **Documentation**: [Interference API Docs](docs/interference-api.md)
+- **Endpoint**: `http://localhost:1337/v1`
+- **Swagger UI**: Explore the OpenAPI documentation via Swagger UI at `http://localhost:1337/docs`
+
+This API is designed for straightforward implementation and enhanced compatibility with other OpenAI integrations.
### Configuration
@@ -252,7 +260,7 @@ set_cookies(".google.com", {
#### Using .har and Cookie Files
-You can place `.har` and cookie files in the default `./har_and_cookies` directory. To export a cookie file, use the [EditThisCookie Extension](https://chromewebstore.google.com/detail/editthiscookie/fngmhnnpilhplaeedifhccceomclgfbg) available on the Chrome Web Store.
+You can place `.har` files and cookie files (`.json`) in the default `./har_and_cookies` directory. To export a cookie file, use the [EditThisCookie Extension](https://chromewebstore.google.com/detail/editthiscookie/fngmhnnpilhplaeedifhccceomclgfbg) available on the Chrome Web Store.
#### Creating .har Files to Capture Cookies
@@ -827,7 +905,7 @@ A list of all contributors is available [here](https://github.com/xtekky/gpt4fre
- The [`Vercel.py`](https://github.com/xtekky/gpt4free/blob/main/g4f/Provider/Vercel.py) file contains code from [vercel-llm-api](https://github.com/ading2210/vercel-llm-api) by [@ading2210](https://github.com/ading2210)
- The [`har_file.py`](https://github.com/xtekky/gpt4free/blob/main/g4f/Provider/openai/har_file.py) has input from [xqdoo00o/ChatGPT-to-API](https://github.com/xqdoo00o/ChatGPT-to-API)
-- The [`PerplexityLabs.py`](https://github.com/xtekky/gpt4free/blob/main/g4f/Provider/openai/har_file.py) has input from [nathanrchn/perplexityai](https://github.com/nathanrchn/perplexityai)
+- The [`PerplexityLabs.py`](https://github.com/xtekky/gpt4free/blob/main/g4f/Provider/PerplexityLabs.py) has input from [nathanrchn/perplexityai](https://github.com/nathanrchn/perplexityai)
- The [`Gemini.py`](https://github.com/xtekky/gpt4free/blob/main/g4f/Provider/needs_auth/Gemini.py) has input from [dsdanielpark/Gemini-API](https://github.com/dsdanielpark/Gemini-API)
- The [`MetaAI.py`](https://github.com/xtekky/gpt4free/blob/main/g4f/Provider/MetaAI.py) file contains code from [meta-ai-api](https://github.com/Strvm/meta-ai-api) by [@Strvm](https://github.com/Strvm)
- The [`proofofwork.py`](https://github.com/xtekky/gpt4free/blob/main/g4f/Provider/openai/proofofwork.py) has input from [missuo/FreeGPT35](https://github.com/missuo/FreeGPT35)
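The Interference API changes above document an OpenAI-compatible endpoint at `http://localhost:1337/v1`. A minimal sketch of how a client might target it; the `/chat/completions` path, the payload shape, and the model name follow the usual OpenAI convention and are assumptions here, not confirmed by this diff:

```python
# Build an OpenAI-style chat request aimed at the local Interference API.
# The base URL comes from the README change above; the path, payload shape,
# and default model name are assumptions based on the OpenAI convention.

def build_chat_request(messages, model="gpt-4o-mini", base_url="http://localhost:1337/v1"):
    """Return the URL and JSON payload for a chat completion call."""
    url = f"{base_url}/chat/completions"
    payload = {"model": model, "messages": messages}
    return url, payload

url, payload = build_chat_request([{"role": "user", "content": "Hello"}])
print(url)  # the documented endpoint plus the assumed completions path
```

Sending the request would then be a plain `requests.post(url, json=payload)` once a G4F server is listening on that port.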
diff --git a/docker/Dockerfile-slim b/docker/Dockerfile-slim
index dfc3344dcd0..b001f7d319f 100644
--- a/docker/Dockerfile-slim
+++ b/docker/Dockerfile-slim
@@ -14,8 +14,6 @@ RUN apt-get update && apt-get upgrade -y \
# Add user and user group
&& groupadd -g $G4F_USER_ID $G4F_USER \
&& useradd -rm -G sudo -u $G4F_USER_ID -g $G4F_USER_ID $G4F_USER \
- && mkdir -p /var/log/supervisor \
- && chown "${G4F_USER_ID}:${G4F_USER_ID}" /var/log/supervisor \
&& echo "${G4F_USER}:${G4F_USER}" | chpasswd \
&& python -m pip install --upgrade pip \
&& apt-get clean \
@@ -32,8 +30,7 @@ RUN mkdir -p $G4F_DIR
COPY requirements-slim.txt $G4F_DIR
# Upgrade pip for the latest features and install the project's Python dependencies.
-RUN pip install --no-cache-dir -r requirements-slim.txt \
- && pip install --no-cache-dir duckduckgo-search>=5.0
+RUN pip install --no-cache-dir -r requirements-slim.txt
# Copy the entire package into the container.
ADD --chown=$G4F_USER:$G4F_USER g4f $G4F_DIR/g4f
\ No newline at end of file
diff --git a/docs/client.md b/docs/client.md
index c318bee3612..5e37880c3de 100644
--- a/docs/client.md
+++ b/docs/client.md
@@ -185,12 +185,15 @@ print(base64_text)
**Create variations of an existing image:**
```python
from g4f.client import Client
+from g4f.Provider import OpenaiChat
-client = Client()
+client = Client(
+ image_provider=OpenaiChat
+)
response = client.images.create_variation(
image=open("cat.jpg", "rb"),
- model="bing"
+ model="dall-e-3",
# Add any other necessary parameters
)
diff --git a/docs/legacy.md b/docs/legacy.md
index 393e3c397b5..ccd8ab6f67c 100644
--- a/docs/legacy.md
+++ b/docs/legacy.md
@@ -7,7 +7,7 @@ import g4f
g4f.debug.logging = True # Enable debug logging
g4f.debug.version_check = False # Disable automatic version checking
-print(g4f.Provider.Gemini.params) # Print supported args for Bing
+print(g4f.Provider.Gemini.params) # Print supported args for Gemini
# Using an automatic provider for the given model
## Streamed completion
diff --git a/etc/tool/contributers.py b/etc/tool/contributers.py
index 76fa461ddcc..31ac64183d4 100644
--- a/etc/tool/contributers.py
+++ b/etc/tool/contributers.py
@@ -1,6 +1,6 @@
import requests
-url = "https://api.github.com/repos/xtekky/gpt4free/contributors"
+url = "https://api.github.com/repos/xtekky/gpt4free/contributors?per_page=100"
for user in requests.get(url).json():
print(f'')
\ No newline at end of file
diff --git a/etc/tool/copilot.py b/etc/tool/copilot.py
index 6e9a42cce90..4732e341ff7 100644
--- a/etc/tool/copilot.py
+++ b/etc/tool/copilot.py
@@ -144,7 +144,7 @@ def analyze_code(pull: PullRequest, diff: str)-> list[dict]:
else:
changed_lines.append(f"{offset_line}:{line}")
offset_line += 1
-
+
return comments
def create_analyze_prompt(changed_lines: list[str], pull: PullRequest, file_path: str):
diff --git a/etc/tool/provider_init.py b/etc/tool/provider_init.py
deleted file mode 100644
index 22f21d4d9ed..00000000000
--- a/etc/tool/provider_init.py
+++ /dev/null
@@ -1,33 +0,0 @@
-from pathlib import Path
-
-
-def main():
- content = create_content()
- with open("g4f/provider/__init__.py", "w", encoding="utf-8") as f:
- f.write(content)
-
-
-def create_content():
- path = Path()
- paths = path.glob("g4f/provider/*.py")
- paths = [p for p in paths if p.name not in ["__init__.py", "base_provider.py"]]
- classnames = [p.stem for p in paths]
-
- import_lines = [f"from .{name} import {name}" for name in classnames]
- import_content = "\n".join(import_lines)
-
- classnames.insert(0, "BaseProvider")
- all_content = [f' "{name}"' for name in classnames]
- all_content = ",\n".join(all_content)
- all_content = f"__all__ = [\n{all_content},\n]"
-
- return f"""from .base_provider import BaseProvider
-{import_content}
-
-
-{all_content}
-"""
-
-
-if __name__ == "__main__":
- main()
diff --git a/g4f/Provider/You.py b/g4f/Provider/You.py
index 095d638fed0..2d4f7ca5551 100644
--- a/g4f/Provider/You.py
+++ b/g4f/Provider/You.py
@@ -139,9 +139,9 @@ async def create_async_generator(
else:
yield ImageResponse(match.group(2), match.group(1))
else:
- yield data["t"]
+ yield data["t"]
else:
- yield data["t"]
+ yield data["t"]
@classmethod
async def upload_file(cls, client: StreamSession, cookies: Cookies, file: bytes, filename: str = None) -> dict:
diff --git a/g4f/Provider/needs_auth/OpenaiChat.py b/g4f/Provider/needs_auth/OpenaiChat.py
index 9378a8c74ff..9ad52a94966 100644
--- a/g4f/Provider/needs_auth/OpenaiChat.py
+++ b/g4f/Provider/needs_auth/OpenaiChat.py
@@ -128,10 +128,13 @@ async def upload_image(
data=data_bytes,
headers={
"Content-Type": image_data["mime_type"],
- "x-ms-blob-type": "BlockBlob"
+ "x-ms-blob-type": "BlockBlob",
+ "x-ms-version": "2020-04-08",
+ "Origin": "https://chatgpt.com",
+ "Referer": "https://chatgpt.com/",
}
) as response:
- await raise_for_status(response, "Send file failed")
+ await raise_for_status(response)
# Post the file ID to the service and get the download URL
async with session.post(
f"{cls.url}/backend-api/files/{image_data['file_id']}/uploaded",
@@ -162,7 +165,7 @@ def create_messages(cls, messages: Messages, image_request: ImageRequest = None,
"id": str(uuid.uuid4()),
"create_time": int(time.time()),
"id": str(uuid.uuid4()),
- "metadata": {"serialization_metadata": {"custom_symbol_offsets": []}, "system_hints": system_hints},
+ "metadata": {"serialization_metadata": {"custom_symbol_offsets": []}, "system_hints": system_hints},
} for message in messages]
# Check if there is an image response
@@ -407,7 +410,8 @@ async def iter_messages_line(cls, session: StreamSession, line: bytes, fields: C
if isinstance(line, dict) and "v" in line:
v = line.get("v")
if isinstance(v, str) and fields.is_recipient:
- yield v
+ if "p" not in line or line.get("p") == "/message/content/parts/0":
+ yield v
elif isinstance(v, list) and fields.is_recipient:
for m in v:
if m.get("p") == "/message/content/parts/0":
@@ -420,7 +424,7 @@ async def iter_messages_line(cls, session: StreamSession, line: bytes, fields: C
fields.conversation_id = v.get("conversation_id")
debug.log(f"OpenaiChat: New conversation: {fields.conversation_id}")
m = v.get("message", {})
- fields.is_recipient = m.get("recipient") == "all"
+ fields.is_recipient = m.get("recipient", "all") == "all"
if fields.is_recipient:
c = m.get("content", {})
if c.get("content_type") == "multimodal_text":
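The `is_recipient` change above makes messages that lack a `recipient` field default to `"all"` instead of evaluating to `None` and being skipped. The effect in isolation (a standalone sketch, not the provider's actual class):

```python
def is_recipient(message: dict) -> bool:
    # Old behavior: message.get("recipient") == "all" -> a missing key
    # returned None, so the message was treated as not for the user.
    # New behavior: a missing key defaults to "all", so such messages
    # are forwarded.
    return message.get("recipient", "all") == "all"

print(is_recipient({"recipient": "all"}))   # True, unchanged
print(is_recipient({}))                     # True under the new default
print(is_recipient({"recipient": "tool"}))  # still False
```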
diff --git a/g4f/cookies.py b/g4f/cookies.py
index 0c62d697ecf..52f7a40f403 100644
--- a/g4f/cookies.py
+++ b/g4f/cookies.py
@@ -101,7 +101,7 @@ def load_cookies_from_browsers(domain_name: str, raise_requirements_error: bool
raise MissingRequirementsError('Install "browser_cookie3" package')
return {}
cookies = {}
- for cookie_fn in [_g4f, chrome, chromium, opera, opera_gx, brave, edge, vivaldi, firefox]:
+ for cookie_fn in browsers:
try:
cookie_jar = cookie_fn(domain_name=domain_name)
if len(cookie_jar) and debug.logging:
@@ -188,20 +188,4 @@ def get_domain(v: dict) -> str:
for domain, new_values in new_cookies.items():
if debug.logging:
print(f"Cookies added: {len(new_values)} from {domain}")
- CookiesConfig.cookies[domain] = new_values
-
-def _g4f(domain_name: str) -> list:
- """
- Load cookies from the 'g4f' browser (if exists).
-
- Args:
- domain_name (str): The domain for which to load cookies.
-
- Returns:
- list: List of cookies.
- """
- if not has_platformdirs:
- return []
- user_data_dir = user_config_dir("g4f")
- cookie_file = os.path.join(user_data_dir, "Default", "Cookies")
- return [] if not os.path.exists(cookie_file) else chrome(cookie_file, domain_name)
+ CookiesConfig.cookies[domain] = new_values
\ No newline at end of file
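The loop above now iterates a configurable `browsers` list instead of a hard-coded set of loader functions. The underlying pattern, try each loader, merge what succeeds, skip what raises, can be sketched standalone (the loader names here are illustrative, not g4f's):

```python
def load_cookies(loaders, domain_name: str) -> dict:
    """Merge cookies from every loader that succeeds; earlier loaders win on conflicts."""
    cookies = {}
    for cookie_fn in loaders:
        try:
            for name, value in cookie_fn(domain_name):
                if name not in cookies:
                    cookies[name] = value
        except Exception:
            continue  # a missing browser or locked profile just skips this loader

    return cookies

def fake_chrome(domain):           # stand-in for a browser_cookie3 loader
    return [("session", "abc")]

def broken_browser(domain):        # stand-in for an uninstalled browser
    raise RuntimeError("browser not installed")

print(load_cookies([broken_browser, fake_chrome], "example.com"))  # {'session': 'abc'}
```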
diff --git a/g4f/gui/client/static/css/style.css b/g4f/gui/client/static/css/style.css
index 144de7c31a7..d5546f480c4 100644
--- a/g4f/gui/client/static/css/style.css
+++ b/g4f/gui/client/static/css/style.css
@@ -1064,6 +1064,47 @@ a:-webkit-any-link {
width: 1px;
}
+.hljs-iframe-button, .hljs-iframe-close {
+ position: absolute;
+ bottom: 1rem;
+ right: 1rem;
+ padding: 7px;
+ border-radius: .25rem;
+ border: 1px solid #ffffff22;
+ background-color: #2d2b57;
+ color: #fff;
+ cursor: pointer;
+ width: 32px;
+ height: 32px;
+}
+
+.hljs-iframe-button:hover, .hljs-iframe-close:hover {
+ border-color: #ffffff44;
+ color: #ffffff77;
+}
+
+.hljs-iframe-container {
+    position: absolute;
+ left: 0;
+ width: 100%;
+ height: 100%;
+ z-index: 1000001;
+ background-color: #fff;
+ padding: 0;
+ margin: 0;
+ overflow: hidden;
+}
+
+.hljs-iframe {
+ width: 100%;
+ height: 100%;
+ padding: 0;
+ margin: 0;
+ border: none;
+ overflow: auto;
+}
+
.white {
--blur-bg: transparent;
--accent: #007bff;
diff --git a/g4f/gui/client/static/img/site.webmanifest b/g4f/gui/client/static/img/site.webmanifest
index f8eab4d70e5..7cd220ecd5f 100644
--- a/g4f/gui/client/static/img/site.webmanifest
+++ b/g4f/gui/client/static/img/site.webmanifest
@@ -3,12 +3,12 @@
"short_name": "",
"icons": [
{
- "src": "/assets/img/android-chrome-192x192.png",
+ "src": "/static/img/android-chrome-192x192.png",
"sizes": "192x192",
"type": "image/png"
},
{
- "src": "/assets/img/android-chrome-512x512.png",
+ "src": "/static/img/android-chrome-512x512.png",
"sizes": "512x512",
"type": "image/png"
}
diff --git a/g4f/gui/client/static/js/chat.v1.js b/g4f/gui/client/static/js/chat.v1.js
index a1975dd0118..d1fd886d1b7 100644
--- a/g4f/gui/client/static/js/chat.v1.js
+++ b/g4f/gui/client/static/js/chat.v1.js
@@ -44,6 +44,9 @@ appStorage = window.localStorage || {
removeItem: (key) => delete self[key],
length: 0
}
+
+if (appStorage.getItem("darkMode") == "false") document.body.classList.add("white");
+
let markdown_render = () => null;
if (window.markdownit) {
const markdown = window.markdownit();
@@ -56,6 +59,7 @@ if (window.markdownit) {
.replaceAll('', '')
}
}
+
function filter_message(text) {
return text.replaceAll(
/[\s\S]+/gm, ""
@@ -81,7 +85,52 @@ function fallback_clipboard (text) {
document.body.removeChild(textBox);
}
+const iframe_container = Object.assign(document.createElement("div"), {
+ className: "hljs-iframe-container hidden",
+});
+const iframe = Object.assign(document.createElement("iframe"), {
+ className: "hljs-iframe",
+});
+iframe_container.appendChild(iframe);
+const iframe_close = Object.assign(document.createElement("button"), {
+ className: "hljs-iframe-close",
+ innerHTML: '',
+});
+iframe_close.onclick = () => iframe_container.classList.add("hidden");
+iframe_container.appendChild(iframe_close);
+chat.appendChild(iframe_container);
+
+class HtmlRenderPlugin {
+ constructor(options = {}) {
+        this.hook = options.hook;
+        this.callback = options.callback;
+ }
+ "after:highlightElement"({
+ el,
+ text
+ }) {
+ if (!el.classList.contains("language-html")) {
+ return;
+ }
+ let button = Object.assign(document.createElement("button"), {
+ innerHTML: '',
+ className: "hljs-iframe-button",
+ });
+ el.parentElement.appendChild(button);
+ button.onclick = async () => {
+ let newText = text;
+        if (typeof this.hook === "function") {
+            newText = this.hook(text, el) || text;
+ }
+ iframe.src = `data:text/html;charset=utf-8,${encodeURIComponent(newText)}`;
+ iframe_container.classList.remove("hidden");
+        if (typeof this.callback === "function") return this.callback(newText, el);
+ }
+ }
+}
+
hljs.addPlugin(new CopyButtonPlugin());
+hljs.addPlugin(new HtmlRenderPlugin());
let typesetPromise = Promise.resolve();
const highlight = (container) => {
    container.querySelectorAll('code:not(.hljs)').forEach((el) => {
@@ -371,16 +420,17 @@ document.querySelector(".media_player .fa-x").addEventListener("click", ()=>{
});
const prepare_messages = (messages, message_index = -1) => {
- if (message_index >= 0) {
- messages = messages.filter((_, index) => message_index >= index);
- }
-
- // Removes none user messages at end
- let last_message;
- while (last_message = messages.pop()) {
- if (last_message["role"] == "user") {
- messages.push(last_message);
- break;
+ if (message_index != null) {
+ if (message_index >= 0) {
+ messages = messages.filter((_, index) => message_index >= index);
+ }
+ // Removes none user messages at end
+ let last_message;
+ while (last_message = messages.pop()) {
+ if (last_message["role"] == "user") {
+ messages.push(last_message);
+ break;
+ }
}
}
@@ -1313,9 +1363,6 @@ async function on_api() {
}
const darkMode = document.getElementById("darkMode");
if (darkMode) {
- if (!darkMode.checked) {
- document.body.classList.add("white");
- }
darkMode.addEventListener('change', async (event) => {
if (event.target.checked) {
document.body.classList.remove("white");
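The reworked `prepare_messages` above keeps the index filter and the trailing-cleanup loop, but only runs them when a `message_index` is supplied; passing `null` now returns the history untouched. The same trimming logic in Python form (a sketch, not the UI's actual code):

```python
def prepare_messages(messages, message_index=-1):
    """Trim the history: keep messages up to message_index, then drop
    non-user messages from the end. message_index=None skips trimming."""
    messages = list(messages)
    if message_index is not None:
        if message_index >= 0:
            messages = messages[:message_index + 1]
        # remove non-user messages at the end, keeping the last user message
        while messages:
            last = messages.pop()
            if last["role"] == "user":
                messages.append(last)
                break
    return messages

msgs = [{"role": "user", "content": "hi"}, {"role": "assistant", "content": "hello"}]
print(prepare_messages(msgs))  # trailing assistant message removed
```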
diff --git a/g4f/providers/retry_provider.py b/g4f/providers/retry_provider.py
index 5bd5790db32..b7ed8f802ab 100644
--- a/g4f/providers/retry_provider.py
+++ b/g4f/providers/retry_provider.py
@@ -272,6 +272,7 @@ async def create_async_generator(
timeout=kwargs.get("timeout", DEFAULT_TIMEOUT),
)
if chunk:
+ yield chunk
started = True
elif hasattr(provider, "create_async_generator"):
async for chunk in provider.create_async_generator(model, messages, stream=stream, **kwargs):
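The one-line fix above restores the missing `yield chunk` before `started = True`: without it, the first streamed chunk was consumed to set the flag but never forwarded to the caller. The pattern in isolation (a sketch with a stand-in provider, not the retry provider itself):

```python
import asyncio

async def fake_provider():
    # stand-in stream: an empty keep-alive chunk followed by real content
    for piece in ["", "Hel", "lo"]:
        yield piece

async def forward_stream(source):
    """Re-yield chunks, marking the stream as started once a non-empty chunk arrives."""
    started = False
    async for chunk in source:
        if chunk:
            yield chunk          # the line the fix adds back
            started = True
    if not started:
        raise RuntimeError("provider produced no output")

async def main():
    return [c async for c in forward_stream(fake_provider())]

print(asyncio.run(main()))  # ['Hel', 'lo']
```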
diff --git a/g4f/stubs.py b/g4f/stubs.py
deleted file mode 100644
index 49cf8a88ff7..00000000000
--- a/g4f/stubs.py
+++ /dev/null
@@ -1,107 +0,0 @@
-
-from __future__ import annotations
-
-from typing import Union
-
-class Model():
- ...
-
-class ChatCompletion(Model):
- def __init__(
- self,
- content: str,
- finish_reason: str,
- completion_id: str = None,
- created: int = None
- ):
- self.id: str = f"chatcmpl-{completion_id}" if completion_id else None
- self.object: str = "chat.completion"
- self.created: int = created
- self.model: str = None
- self.provider: str = None
- self.choices = [ChatCompletionChoice(ChatCompletionMessage(content), finish_reason)]
- self.usage: dict[str, int] = {
- "prompt_tokens": 0, #prompt_tokens,
- "completion_tokens": 0, #completion_tokens,
- "total_tokens": 0, #prompt_tokens + completion_tokens,
- }
-
- def to_json(self):
- return {
- **self.__dict__,
- "choices": [choice.to_json() for choice in self.choices]
- }
-
-class ChatCompletionChunk(Model):
- def __init__(
- self,
- content: str,
- finish_reason: str,
- completion_id: str = None,
- created: int = None
- ):
- self.id: str = f"chatcmpl-{completion_id}" if completion_id else None
- self.object: str = "chat.completion.chunk"
- self.created: int = created
- self.model: str = None
- self.provider: str = None
- self.choices = [ChatCompletionDeltaChoice(ChatCompletionDelta(content), finish_reason)]
-
- def to_json(self):
- return {
- **self.__dict__,
- "choices": [choice.to_json() for choice in self.choices]
- }
-
-class ChatCompletionMessage(Model):
- def __init__(self, content: Union[str, None]):
- self.role = "assistant"
- self.content = content
-
- def to_json(self):
- return self.__dict__
-
-class ChatCompletionChoice(Model):
- def __init__(self, message: ChatCompletionMessage, finish_reason: str):
- self.index = 0
- self.message = message
- self.finish_reason = finish_reason
-
- def to_json(self):
- return {
- **self.__dict__,
- "message": self.message.to_json()
- }
-
-class ChatCompletionDelta(Model):
- content: Union[str, None] = None
-
- def __init__(self, content: Union[str, None]):
- if content is not None:
- self.content = content
-
- def to_json(self):
- return self.__dict__
-
-class ChatCompletionDeltaChoice(Model):
- def __init__(self, delta: ChatCompletionDelta, finish_reason: Union[str, None]):
- self.delta = delta
- self.finish_reason = finish_reason
-
- def to_json(self):
- return {
- **self.__dict__,
- "delta": self.delta.to_json()
- }
-
-class Image(Model):
- url: str
-
- def __init__(self, url: str) -> None:
- self.url = url
-
-class ImagesResponse(Model):
- data: list[Image]
-
- def __init__(self, data: list) -> None:
- self.data = data
\ No newline at end of file