[Tensornet backends] Make controlled rank a configurable setting #2446

Open
1tnguyen wants to merge 2 commits into base: main

Conversation

@1tnguyen (Collaborator) commented Dec 3, 2024

Description

  • For small controlled ops (e.g., singly controlled), expand the gate matrix to its full form and apply it with cutensornetStateApplyTensorOperator.

  • Add a CUDAQ_TENSORNET_CONTROLLED_RANK threshold that determines when cutensornetStateApplyControlledTensorOperator should be used instead. For MPS, this is fixed at 1, since MPS cannot handle gate ops acting on more than 2 qubits. A sketch of the resulting dispatch follows this list.

  • Add documentation for the new setting, and remove some stale notes about random seeds from the docs (fixed in Support measurement sampling seed for cutensornet backends #2398).
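
For illustration, a minimal sketch of how the threshold could gate the dispatch (a simplified reconstruction, not the PR's actual code: readControlledRankThreshold and useFullTensorExpansion are hypothetical helpers, and the default value of 1 is an assumption not stated in this excerpt):

#include <cstddef>
#include <cstdlib>
#include <string>

// Hypothetical helper: read the CUDAQ_TENSORNET_CONTROLLED_RANK threshold.
// For MPS the value is pinned to 1, since MPS cannot handle gate tensors
// acting on more than 2 qubits (per the PR description).
std::size_t readControlledRankThreshold(bool isMps) {
  if (isMps)
    return 1;
  if (const char *env = std::getenv("CUDAQ_TENSORNET_CONTROLLED_RANK"))
    return std::stoull(std::string(env));
  return 1; // assumed default
}

// At or below the threshold, expand the controlled gate into its full matrix
// and apply it with cutensornetStateApplyTensorOperator; above it, keep the
// controls implicit and use cutensornetStateApplyControlledTensorOperator.
bool useFullTensorExpansion(std::size_t numControls, bool isMps) {
  return numControls <= readControlledRankThreshold(isMps);
}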

1tnguyen and others added 2 commits December 3, 2024 20:37
For small controlled ops, i.e., singly controlled, expand the gate matrix and use cutensornetStateApplyTensorOperator.

Add CUDAQ_TENSORNET_CONTROLLED_RANK threshold to determine when cutensornetStateApplyControlledTensorOperator is used. For MPS, this is fixed at 1 as it cannot handle gate ops with more than 2 qubits.

Add doc for the setting and also remove some stale notes about random seeds in the docs.

Signed-off-by: Thien Nguyen <[email protected]>
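
For context on "expand the gate matrix" above: for the singly-controlled case this means forming the block-diagonal matrix [[I, 0], [0, U]] in the control qubit's basis. A standalone sketch under that reading (expandSinglyControlled is a hypothetical helper, not the backend's actual routine):

#include <complex>
#include <vector>

using Mat = std::vector<std::complex<double>>; // row-major square matrix

// Expand a single-qubit gate U (2x2, row-major) into its controlled 4x4 form:
// identity on the control=|0> block, U on the control=|1> block.
Mat expandSinglyControlled(const Mat &u) {
  Mat full(16, {0.0, 0.0});
  full[0 * 4 + 0] = 1.0;
  full[1 * 4 + 1] = 1.0;
  for (int r = 0; r < 2; ++r)
    for (int c = 0; c < 2; ++c)
      full[(r + 2) * 4 + (c + 2)] = u[r * 2 + c];
  return full;
}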

github-actions bot commented Dec 3, 2024

CUDA Quantum Docs Bot: A preview of the documentation can be found here.

github-actions bot pushed a commit that referenced this pull request Dec 3, 2024
// Cache the gate matrix in device memory on first use.
if (iter == m_gateDeviceMemCache.end()) {
  void *dMem = allocateGateMatrix(task.matrix);
  m_gateDeviceMemCache[gateKey] = dMem;
}
// At or below the configured threshold, apply the fully-expanded gate tensor.
if (controls.size() <= m_maxControlledRankForFullTensorExpansion) {
Collaborator

Can we add a test to cover this scenario?
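
For reference, a minimal sketch of what such a test might look like (hedged: the kernel below uses the standard CUDA-Q C++ API, but the repository's actual test harness and any environment-variable plumbing for CUDAQ_TENSORNET_CONTROLLED_RANK may differ):

#include <cudaq.h>

// Doubly-controlled X: with a threshold >= 2 this should take the
// full-tensor-expansion path; otherwise it should go through
// cutensornetStateApplyControlledTensorOperator. Either way, the sampled
// result must be "111".
struct ccxKernel {
  void operator()() __qpu__ {
    cudaq::qvector q(3);
    x(q[0]);
    x(q[1]);
    x<cudaq::ctrl>(q[0], q[1], q[2]);
    mz(q);
  }
};

int main() {
  auto counts = cudaq::sample(ccxKernel{});
  counts.dump(); // expect all shots in "111"
  return 0;
}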

@khalatepradnya (Collaborator) left a comment

Changes look good to me. Thanks, Thien!
