High-efficiency floating-point neural network inference operators for mobile, server, and Web
Expressive Vector Engine - SIMD in C++ Goes Brrrr
Single-header libraries from the Magnum engine
C++11 multiplatform utility library
A fast implementation of single-pattern substring search using SIMD acceleration.
A blazingly fast JSON serializing & deserializing library
DR3 enables users to write vectorised code using generic lambdas and filters; switch instruction sets simply by changing the enclosing namespace
ncnn is a high-performance neural network inference framework optimized for the mobile platform
A streaming SQL engine, a fast and lightweight alternative to ksqlDB and Apache Flink, 🚀 powered by ClickHouse.
Jlama is a modern LLM inference engine for Java
Performance-portable, length-agnostic SIMD with runtime dispatch
REOS: a Radar and Electro-Optical Simulation framework written in Fortran.
QuestDB is an open source time-series database for fast ingest and SQL queries
A modern C++17 glTF 2.0 library focused on speed, correctness, and usability
A lightweight platform-accelerated library for biological motif scanning using position weight matrices.