Speed up AI development using Intel®-optimized software on the latest Intel® Core™ Ultra processors, Intel® Xeon® processors, Intel® Gaudi® AI accelerators, and Intel GPUs. You can get started right away on the Intel® Tiber™ AI Cloud for free.
As a participant in the open source software community since 1989, Intel uses industry collaboration, co-engineering, and open source contributions to deliver a steady stream of code and optimizations that work across multiple platforms and use cases. We push our contributions upstream so developers get the most current, optimized, and secure software.
Check out the following repositories to jumpstart your development work on Intel:
- OPEA GenAI Examples - Examples such as ChatQnA and Copilot that illustrate the pipeline capabilities of the Open Platform for Enterprise AI (OPEA) project
- AI PC Notebooks - A collection of notebooks designed to showcase generative AI workloads on AI PCs
- Open3D - A modern library for 3D data processing
- Optimum Intel - Accelerate inference with Intel optimization tools
- Optimum Habana - Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU)
- Intel Neural Compressor - SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
- OpenVINO Notebooks - 📚 Jupyter notebook tutorials for OpenVINO™
- SetFit - Efficient few-shot learning with Sentence Transformers
- fastRAG - Efficient retrieval-augmented generation framework
Join us on the Intel DevHub Discord server to chat with other developers in channels like #dev-projects, #gaudi, and #large-language-models.