This repository provides the tools to deploy streaming inference endpoints for large language models (LLMs) hosted on Hugging Face, using FastAPI. It combines Hugging Face models with Intel's optimization libraries to deliver efficient, scalable real-time inference.
The project includes a Python script (`serve.py`) that sets up a FastAPI server to handle streaming requests to LLMs, providing an efficient way to interact with models in real time. It is optimized for Intel hardware, using the `intel_extension_for_transformers` library to improve performance on compatible CPUs.
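As a rough illustration of this pattern (not the repo's exact `serve.py`), a FastAPI endpoint that streams tokens from a Hugging Face model as they are generated might look like the sketch below; the model ID, generation settings, and single-model setup are placeholder assumptions.

```python
# Minimal sketch of a streaming endpoint; illustrative only, not the repo's serve.py.
from threading import Thread

from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from transformers import AutoTokenizer, TextIteratorStreamer
# ITREX's AutoModelForCausalLM is a drop-in replacement tuned for Intel CPUs.
from intel_extension_for_transformers.transformers import AutoModelForCausalLM

MODEL_ID = "Intel/neural-chat-7b-v3-1"  # placeholder model

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, trust_remote_code=True)

app = FastAPI()

@app.get("/query-stream/")
def query_stream(query: str, selected_model: str = MODEL_ID):
    # The real server maps selected_model to a model loaded via loader.py;
    # a single preloaded model is used here for brevity.
    streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
    inputs = tokenizer(query, return_tensors="pt")
    # Generate in a background thread so tokens can be yielded as they arrive.
    Thread(target=model.generate, kwargs=dict(**inputs, streamer=streamer, max_new_tokens=256)).start()
    return StreamingResponse(streamer, media_type="text/plain")
```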
To get started, clone this repository and install the required dependencies:

```bash
git clone https://github.com/eduand-alvarez/FastAPI_LLM_Streaming.git
cd FastAPI_LLM_Streaming
pip install -r requirements.txt
```
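The repository's `requirements.txt` is the source of truth for dependencies; for orientation only, a stack like this typically pulls in packages along these lines (illustrative, version pins omitted):

```text
fastapi
uvicorn
torch
transformers
intel-extension-for-transformers
```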
- Start the FastAPI server:

Run the `serve.py` script to start the server:

```bash
python serve.py
```

This command starts the FastAPI application on port 5004, making it accessible on your network.
- Interact with the API:

Once the server is running, you can make HTTP GET requests to the `/query-stream/` endpoint to interact with the deployed LLM. The request should include the `query` parameter and the `selected_model` parameter, which specifies the model you wish to use for inference.
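For example, a streaming request from the command line might look like this; the model name is an illustrative placeholder, so substitute one of the models configured in `serve.py`:

```bash
# -N disables curl's buffering so tokens print as they stream in.
curl -N "http://localhost:5004/query-stream/?query=What%20is%20streaming%20inference%3F&selected_model=Intel/neural-chat-7b-v3-1"
```

The same endpoint can be consumed from Python with `requests`, reading the response incrementally:

```python
import requests

# Placeholder model name; use one of the models configured in serve.py.
params = {
    "query": "What is streaming inference?",
    "selected_model": "Intel/neural-chat-7b-v3-1",
}
with requests.get("http://localhost:5004/query-stream/", params=params, stream=True) as resp:
    for chunk in resp.iter_content(chunk_size=None):
        # Print each chunk of generated text as it arrives.
        print(chunk.decode("utf-8", errors="ignore"), end="", flush=True)
```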
Currently, the server is configured to support specific models from Hugging Face. These models are defined within the `serve.py` script, and the supported set can be extended by modifying the `ITREXLoader` function in `loader.py`.
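The actual `ITREXLoader` lives in `loader.py`; as a rough illustration of the pattern, a loader that maps a model name to an ITREX-optimized model and tokenizer might look like the following (the registry contents, function signature, and return value are assumptions, not the repo's exact code):

```python
# Illustrative sketch only; see loader.py in this repo for the real ITREXLoader.
from transformers import AutoTokenizer
from intel_extension_for_transformers.transformers import AutoModelForCausalLM

# Hypothetical registry of supported Hugging Face model IDs; add entries to extend it.
SUPPORTED_MODELS = {
    "neural-chat": "Intel/neural-chat-7b-v3-1",
    "mistral": "mistralai/Mistral-7B-Instruct-v0.1",
}

def ITREXLoader(selected_model: str):
    """Load the requested model with ITREX optimizations and return (model, tokenizer)."""
    model_id = SUPPORTED_MODELS.get(selected_model, selected_model)
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
    return model, tokenizer
```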