
[Training] Using C++ to train ONNX models #23170

Open
tzsz0 opened this issue Dec 20, 2024 · 1 comment
Labels
training issues related to ONNX Runtime training; typically submitted using template

Comments


tzsz0 commented Dec 20, 2024

Describe the issue

Hello,
technically this also touches [Documentation] and [Building]; this template just feels closest.
I'd like to train an ONNX model from my C++ application. I have found multiple examples that seem to cover this use case, the closest being the MNIST example from the repo. However, the namespaces/declarations used there are not available when using the precompiled archive from the release page. I downloaded the most recent version, 1.20.1.
As far as I am aware, only onnxruntime_c_api.h and onnxruntime_cxx_api.h are intended for direct use there.

Using those APIs, I found the following documentation: Doxygen doc. That code sample doesn't work either: there is no onnxruntime_training_api.h header, and the following code returns a nullptr.

    #include <iostream>
    #include <onnxruntime_c_api.h>

    // Query the base API, then ask it for the training API.
    OrtApi const * ortapi = OrtGetApiBase()->GetApi(ORT_API_VERSION);
    OrtTrainingApi const * trainapi = ortapi->GetTrainingApi(ORT_API_VERSION);

    // Prints "unavailable" when the build has no training support compiled in.
    std::cout << (trainapi == nullptr ? "unavailable" : "available") << std::endl;

According to the documentation in the header file, this means that the training API is not available in this build.

Now I am left a bit confused, because I don't know how to start the training process. Maybe I overlooked some documentation? Most of it covers Python, so it does not really apply to my case.
Primarily, I don't understand:

  • Which API should I use? The one from the precompiled binaries or the one that uses the project's internals (namespace onnxruntime)?
  • If I should use GetTrainingApi(), how do I enable it? Do I need to recompile ONNXRuntime?
  • What is the best way to include onnxruntime in a C++ project that uses training?

To reproduce

  • Download the latest precompiled release (here v1.20.1) of onnxruntime
  • Extract it
  • Link it with the above code

Urgency

The project mentioned above is part of my final/thesis project at university. That makes it not super urgent, but I need to get it done in the next few months.

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

v1.20.1

PyTorch Version

Didn't get that far

Execution Provider

Other / Unknown

Execution Provider Library Version

No response

@tzsz0 tzsz0 added the training issues related to ONNX Runtime training; typically submitted using template label Dec 20, 2024

xadupre commented Dec 20, 2024

onnxruntime-training is no longer released automatically. You can get the latest released version of onnxruntime-training here: https://github.com/microsoft/onnxruntime/releases/tag/v1.19.2 (1.19.2). If you need a newer version, you need to build it from source; you can refer to this page: https://onnxruntime.ai/docs/build/training.html.
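For reference, a minimal sketch of such a from-source build, assuming a Linux host and the `--enable_training_apis` flag described on that build page (flag names and the exact tag can differ between versions, so check the linked docs for your release):

```shell
# Clone a specific release tag of ONNX Runtime (v1.20.1 here as an example).
git clone --recursive --branch v1.20.1 https://github.com/microsoft/onnxruntime.git
cd onnxruntime

# Build the shared library with the on-device training APIs enabled.
# With training compiled in, GetTrainingApi(ORT_API_VERSION) should no
# longer return nullptr in the snippet above.
./build.sh --config Release --build_shared_lib --parallel --enable_training_apis
```

After the build finishes, link your application against the resulting shared library and headers instead of the prebuilt release archive.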
