English | 简体中文
NTCore helps data scientists and machine learning engineers easily version, deploy and monitor AI/ML models.
- Auto-records models, with their metadata, from various ML frameworks, e.g., sklearn, TensorFlow and Keras.
- One-click deployment with Docker, Kubernetes and cloud providers, e.g., AWS, Azure, Alicloud.
- Clean dashboards to monitor and report ML model performance metrics.
- Easy-to-integrate Python clients that automatically version models from multiple AI/ML frameworks, including sklearn, TensorFlow and Keras.
- Model auditability and reproducibility through training metadata, e.g., recall and precision.
- Out-of-the-box RESTful endpoints that are callable through curl, Postman and other HTTP clients.
- One-click production deployment for models using Docker, Kubernetes and cloud providers, e.g., Amazon EKS and Microsoft AKS.
- Easy-to-scale, highly available prediction services that support state-of-the-art web and mobile application architectures.
- Serving multiple endpoints with one endpoint per model.
- Model performance monitoring with integration to Prometheus (roadmap).
- Clean UI dashboards to manage ML model versions, deployments and performance metrics (roadmap).
- High-level APIs to automate ML workflows, with integration to workflow managers, e.g., Apache Airflow.
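The auditability bullet above cites recall and precision as examples of training metadata. As a minimal sketch of how such metrics are typically computed before being recorded, using plain sklearn (this is standard sklearn usage, not an NTCore-specific API):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

# Train a small classifier on a held-out split, then compute the
# metrics that could be recorded as experiment metadata.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(max_depth=2, random_state=0).fit(X_train, y_train)
pred = clf.predict(X_test)

# Macro-average across the three iris classes.
precision = precision_score(y_test, pred, average="macro")
recall = recall_score(y_test, pred, average="macro")
```

Metrics like these are what make an experiment reproducible and comparable after the fact.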
- Install Docker Engine with Docker Compose.
- Clone this repository and start NTCore via Docker Compose:
docker-compose -f docker-compose.yml up
- Install the ntcore client:
pip install ntcore
- Navigate to http://localhost:8000/dsp/console/workspaces and create your first workspace.
- Version an ML model. More examples can be found here.
# Load the iris dataset.
from sklearn import datasets
iris = datasets.load_iris()

# Initialize the model.
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(max_depth=2, random_state=0)

# Start an experiment run and train the model inside it.
from ntcore import Client
client = Client()
with client.start_run('my_workspace_id') as exper:
    clf.fit(iris.data, iris.target_names[iris.target], experiment=exper)
- View the model versions and register one for pre-production deployment.
- Deploy your registered model version and invoke the RESTful endpoint after deployment succeeds.
curl -H "Content-Type: application/json" -X POST --data '{"data": [[5.1,3.5,1.4,0.2]]}' http://localhost:8000/s/{workspace_id}/predict
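The same call can be made from Python. A minimal standard-library sketch; the endpoint path and payload shape come from the curl example above, while the helper names and the JSON response format are assumptions:

```python
import json
from urllib import request

def build_payload(samples):
    """Serialize feature rows into the JSON body the endpoint expects."""
    return json.dumps({"data": samples})

def predict(workspace_id, samples, host="http://localhost:8000"):
    """POST samples to the prediction endpoint and return the parsed response.

    Hypothetical helper mirroring the curl call; assumes the endpoint
    returns JSON.
    """
    req = request.Request(
        f"{host}/s/{workspace_id}/predict",
        data=build_payload(samples).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

For example, `predict("my_workspace_id", [[5.1, 3.5, 1.4, 0.2]])` sends the same request as the curl command above.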
NTCore documentation: https://docs.thenetron.com/#/.
Imagine you are a data scientist optimizing AI/ML models for 10 different scenarios, each of which requires 100 iterations. How can you retain the inputs and outputs of these 1,000 experiments, compare them to find the best models, and reproduce them? I hear you, it's not easy. But that's not the end of your nightmare. If you want to deploy the "best" models as prediction endpoints, you have to refactor your code to create APIs before the DevOps team can deploy. This process usually takes days. Worse still, the pain compounds when the whole cycle repeats hourly, daily or monthly.
NTCore is a platform built to relieve that pain. It provides UI tools as well as APIs to help data scientists continuously and seamlessly ship their trained models to production environments with minimal interaction with DevOps teams. It also provides monitoring functionality so that data scientists can quickly access the latest performance metrics of their models.
For Getting Started guides, tutorials, and the API reference, check out our docs.
To report a bug, file a documentation issue, or submit a feature request, please open a GitHub issue.
NTCore is licensed under Apache 2.0.