
[Performance] Do oneDNN executors depend on Intel platform? #23795

Open
Serenagirl opened this issue Feb 24, 2025 · 0 comments
Labels
ep:oneDNN (questions/issues related to DNNL EP)
performance (issues related to performance regressions)

Comments

@Serenagirl

Describe the issue

oneDNN supports compilation for ARM, so why does the Intel SDK need to be installed in order to use the oneDNN execution provider?
https://onnxruntime.ai/docs/build/eps.html#onednn
When will an arm64 build of the oneDNN EP be supported? Also, how does the default CPU execution provider execute Conv on ARM64? Does it call NEON instructions?
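For context, here is a minimal Python sketch of the usage the question assumes: checking which execution providers the installed onnxruntime build exposes, and requesting the oneDNN (Dnnl) provider with a fallback to the default CPU provider. The provider names are the standard onnxruntime strings; "model.onnx" is a placeholder for the quantized model in question, and whether "DnnlExecutionProvider" appears at all depends on whether the wheel was built with oneDNN enabled (per the build docs linked above).

```python
import onnxruntime as ort

# List the execution providers compiled into this onnxruntime build.
# On an arm64 build without oneDNN enabled, "DnnlExecutionProvider" will not appear here.
print(ort.get_available_providers())

# Request the oneDNN EP first, falling back to the default CPU EP.
# "model.onnx" is a placeholder path for the reporter's quantized model.
sess = ort.InferenceSession(
    "model.onnx",
    providers=["DnnlExecutionProvider", "CPUExecutionProvider"],
)

# Show which providers were actually assigned to the session.
print(sess.get_providers())
```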

To reproduce

None

Urgency

No response

Platform

Linux

OS Version

openEuler

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

1.19.0

ONNX Runtime API

Python

Architecture

ARM64

Execution Provider

Default CPU

Execution Provider Library Version

No response

Model File

No response

Is this a quantized model?

Yes

@Serenagirl added the performance (issues related to performance regressions) label on Feb 24, 2025
@github-actions bot added the ep:oneDNN (questions/issues related to DNNL EP) label on Feb 24, 2025