transwarp
Working from home

Pinned
- vllm-project/vllm: A high-throughput and memory-efficient inference and serving engine for LLMs
- opendatalab/MinerU: Transforms complex documents like PDFs into LLM-ready markdown/JSON for your Agentic workflows.
- k2-fsa/sherpa-onnx: Speech-to-text, text-to-speech, speaker diarization, speech enhancement, source separation, and VAD using next-gen Kaldi with onnxruntime, without an Internet connection. Supports embedded systems, Andr…
- intel/ipex-llm: Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, DeepSeek, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V, etc.) on Intel XPU (e.g., local PC with iGPU and NPU, discr…