Fix measurement #290
base: dev
Conversation
Thanks for the correction, and I appreciate adding the tests! I'm not very familiar with the function myself, so I figured that if the tests passed it would be fine, though our tests are certainly not complete. I'll give a more stylistic/code review shortly.
Can this file be removed?
It is an empty file in the examples section; I do not really see what purpose it serves, so I removed it, but I can put it back if you are not comfortable removing it.
@@ -23,7 +23,6 @@
     "expval",
     "MeasureAll",
     "MeasureMultipleTimes",
-    "MeasureMultiPauliSum",
Why are we no longer adding this as a function to export?
I removed the function because it was not correctly implemented. You can check the example pauli_sum_op.py: changing the coefficients has no effect on the outcome of the measurement (see my first post), because the coefficient part of the input is not taken into account.
Thinking about it, I think we could implement this easily by using
torchquantum/torchquantum/algorithm/hamiltonian.py, lines 93 to 112 in 611cc2a:
class Hamiltonian(object):
    """Hamiltonian class."""
    def __init__(self,
                 coeffs: List[float],
                 paulis: List[str],
                 endianness: str = "big",
                 ) -> None:
        """Initialize the Hamiltonian.
        Args:
            coeffs: The coefficients of the Hamiltonian.
            paulis: The operators of the Hamiltonian, described in strings.
            endianness: The endianness of the operators. Default is big. Qubit 0 is the most significant bit.
        Example:
        .. code-block:: python
            coeffs = [1.0, 1.0]
            paulis = ["ZZ", "ZX"]
            hamil = tq.Hamiltonian(coeffs, paulis)
        """
and using the matrix of the Hamiltonian.
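For concreteness, here is a rough sketch of that idea in plain PyTorch. The helpers `PAULI`, `pauli_sum_matrix` and `expval_pauli_sum` are names I made up for this comment, not part of the torchquantum API; in practice the matrix would come from tq.Hamiltonian, which holds the same ingredients (coeffs and Pauli strings):

```python
import torch

# Sketch only: single-qubit Pauli matrices.
PAULI = {
    "I": torch.eye(2, dtype=torch.complex64),
    "X": torch.tensor([[0, 1], [1, 0]], dtype=torch.complex64),
    "Y": torch.tensor([[0, -1j], [1j, 0]], dtype=torch.complex64),
    "Z": torch.tensor([[1, 0], [0, -1]], dtype=torch.complex64),
}

def pauli_sum_matrix(coeffs, paulis):
    """Return sum_k coeffs[k] * kron of the Pauli string paulis[k]."""
    dim = 2 ** len(paulis[0])
    matrix = torch.zeros(dim, dim, dtype=torch.complex64)
    for coeff, string in zip(coeffs, paulis):
        term = PAULI[string[0]]
        for op in string[1:]:
            term = torch.kron(term, PAULI[op])
        matrix = matrix + coeff * term
    return matrix

def expval_pauli_sum(states, coeffs, paulis):
    """<psi|H|psi> for batched state vectors of shape [bsz, 2**n_wires]."""
    hamil = pauli_sum_matrix(coeffs, paulis)
    return torch.einsum("bi,ij,bj->b", states.conj(), hamil, states).real
```

Something along these lines would actually respect the coefficients, unlike the removed MeasureMultiPauliSum.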
Alternatively, we could change expval_joint_analytical to not only take the string of observables but also coefficient info, and multiply in the coefficients at the point where the Kronecker product is taken here:
torchquantum/torchquantum/measurement/measurements.py, lines 270 to 271 in 611cc2a:
for op in observable[1:]:
    hamiltonian = torch.kron(hamiltonian, pauli_dict[op].to(states.device))
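For illustration, the coefficient-aware variant could look roughly like this (hedged sketch; `coefficient` would be a new argument that expval_joint_analytical does not currently take):

```python
# Hypothetical variant: fold a per-observable coefficient into the matrix as
# it is assembled. Since c * kron(A, B) == kron(c * A, B), multiplying the
# first factor is enough.
hamiltonian = coefficient * pauli_dict[observable[0]].to(states.device)
for op in observable[1:]:
    hamiltonian = torch.kron(hamiltonian, pauli_dict[op].to(states.device))
```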
But then we would have to change the other expval functions as well to match this. If you think there is a real use case for this measurement and it is worth changing the expval functions, we can do that; otherwise we can just remove it, as done here.
Yes, it also seems the tests are not run in the GitHub Actions, only the examples folder; it might be worth fixing that by adding the tests to the workflow. Edit: thanks for your comment, I just saw that the test is running correctly!
This is sort of a follow-up to #288; I misunderstood the functions, so the fix there was wrong (as was the original function).

If I understand correctly, the example in the docstring means that we measure the expectation value of the X x Y x Z x I tensor product and then of X x Y x Z x I (i.e. the same state). We expect this to return [batch_size, len(obs_list)] and not [batch_size, len(obs_list), num_wires]: you cannot get anything separated by wires from here, since the expectation value of the tensor product is just a single real number, so expval_joint_analytical is needed here. I have removed the prod from MeasureMultiQubitPauliSum. I have added a test in test/measurements/ to make sure that the analytical result is recovered; please check for yourself.
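To make the shape argument concrete, here is a small self-contained sketch in plain PyTorch (not the torchquantum API): each joint observable yields one real number per batch element, so stacking over obs_list can only give [batch_size, len(obs_list)], with no wire axis.

```python
import torch

Z = torch.tensor([[1, 0], [0, -1]], dtype=torch.complex64)
X = torch.tensor([[0, 1], [1, 0]], dtype=torch.complex64)

bsz, n_wires = 4, 2
states = torch.randn(bsz, 2 ** n_wires, dtype=torch.complex64)
states = states / states.norm(dim=-1, keepdim=True)

obs_list = [torch.kron(Z, Z), torch.kron(X, Z)]  # e.g. "ZZ" and "XZ"
expvals = torch.stack(
    [torch.einsum("bi,ij,bj->b", states.conj(), obs, states).real for obs in obs_list],
    dim=-1,
)
print(expvals.shape)  # torch.Size([4, 2]), i.e. [batch_size, len(obs_list)]
```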
In fact, I don't really see any use case for expval as implemented here. IMO it should be renamed to expval_per_wire, and it should be made explicit that the observables are looped over and the i-th return value is the expectation value of the i-th observable w.r.t. the i-th wire (with the rest traced out). What is the use case for this over just expval_joint_analytical?

Further, I have removed MeasureMultiPauliSum: it passes obs_list to MeasureMultipleTimes, but MeasureMultipleTimes does not take into account the coefficient entry that is specified in MeasureMultiPauliSum. Thus, MeasureMultiPauliSum will not fail (since it is a correct dict with too many keys) but will also not return the correct value. Since I have no interest in implementing this correctly, I would remove it. You can check that the coefficient entry has no effect by changing the coefficients in the pauli_sum_op.py example file. I have changed this file to use MeasureMultiQubitPauliSum instead (and also removed the empty example in this directory).
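For reference, this is what I mean by the expval_per_wire semantics; a hedged sketch in plain PyTorch (the function name and signature are hypothetical, and I assume the big-endian convention where qubit 0 is the most significant bit): the i-th output is Tr(rho_i O_i), with rho_i the reduced density matrix of wire i.

```python
import torch

def expval_per_wire(states, observables):
    """Hypothetical per-wire expectation.

    states: [bsz, 2**n_wires] state vectors, qubit 0 as the most significant bit.
    observables: list of n_wires single-qubit (2x2) operators.
    Returns a tensor of shape [bsz, n_wires].
    """
    bsz = states.shape[0]
    n_wires = len(observables)
    psi = states.reshape(bsz, *([2] * n_wires))
    results = []
    for i, obs in enumerate(observables):
        # Bring wire i to the front and flatten the remaining wires.
        perm = [0, i + 1] + [d for d in range(1, n_wires + 1) if d != i + 1]
        psi_i = psi.permute(perm).reshape(bsz, 2, -1)
        # Reduced density matrix of wire i (other wires traced out).
        rho = torch.einsum("bik,bjk->bij", psi_i, psi_i.conj())
        # Expectation value Tr(rho @ obs) for each batch element.
        results.append(torch.einsum("bij,ji->b", rho, obs.to(states.dtype)).real)
    return torch.stack(results, dim=-1)
```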