Correct changebasis as well as implement the metrics in #42 #56
base: main
Conversation
…d down to one loop
…rmation in symplectic error message.
… commits in CHANGELOG.
…the need for matrix multiplication in the changebasis function.
Thanks a lot @Hamiltonian-Action, this is great. I left some minor comments in the first round of review. We should add some tests for these new metrics. Could you add a new file test/test_metrics.jl for these tests?
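For concreteness, here is a minimal sketch of what such a file could start from; the testset names are placeholders, and the `using` line for the package itself is left out since it is not named explicitly in this thread.

```julia
# Hypothetical skeleton for test/test_metrics.jl; names are placeholders.
using Test

@testset "metrics" begin
    @testset "entropy_vn" begin
        # analytic checks against known Gaussian states go here
    end
    @testset "fidelity" begin
        # e.g. the fidelity of a state with itself should be 1
    end
    @testset "logarithmic negativity" begin
        # e.g. separable states should give zero
    end
end
```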
Will do so as soon as time permits.
Tidy up QuantumInterface import statements. Co-authored-by: Andrew Kille <[email protected]>
Fix incorrect function invocation. Co-authored-by: Andrew Kille <[email protected]>
Format metric documentation strings. Co-authored-by: Andrew Kille <[email protected]>
…ault partition on first mode, fixed issue with subsequent function calls.
src/metrics.jl (Outdated)
_entropy_vn(x) = (x+(1/2)) * log(x+(1/2)) - (x-(1/2)) * log(x-(1/2))
Something I've wondered but haven't ever gotten around to checking is whether log( ((x+(1/2))^(x+(1/2))) / ((x-(1/2))^(x-(1/2))) ) would be more/less numerically stable or faster, especially with values just above 1/2+tol.
Also I should note that currently this returns entropy in units of nats. I would guess that bits is the more familiar unit 😁 and might suggest using that convention or at the very least documenting the information units this will return.
> documenting the information units this will return.
The docstring does state that it is indeed the natural logarithm, though whether this should be featured more prominently is a matter of personal preference.
Regarding the expression you provided, I would hazard that it might be slower due to the need to perform two exponentiations and a logarithm instead of just two logarithms. One could also explore automated floating-point optimisation software such as Herbie for assistance in further enhancing performance/stability.
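To make the stability question concrete, a standalone sketch along these lines could be used to probe both formulations just above x = 1/2 (the function names here are illustrative and not part of the package):

```julia
# f_logs mirrors the helper in the diff; f_ratio is the single-logarithm
# rewrite suggested in the review.
f_logs(x)  = (x + 1/2) * log(x + 1/2) - (x - 1/2) * log(x - 1/2)
f_ratio(x) = log(((x + 1/2)^(x + 1/2)) / ((x - 1/2)^(x - 1/2)))

# Probe values just above 1/2, where (x - 1/2) is tiny and rounding dominates.
for δ in (1e-8, 1e-12, 1e-15)
    x = 1/2 + δ
    println((δ, f_logs(x), f_ratio(x), f_logs(x) - f_ratio(x)))
end
```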
Given that QuantumOpticsBase.jl also uses units of nats for entropy_vn, I suggest we go with this convention as well.
This is a pick-your-poison kind of situation. On one hand, going against the QuantumOpticsBase.jl precedent creates a considerable possibility for mistakes due to the API mismatch. On the other hand, using different logarithm bases within the same package, in a way that is not explicit from the function names, creates an equally heinous tripping hazard.
My vote would be to err towards internal uniformity over external compliance.
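For reference, switching conventions is a one-line change either way: the bits value is the nats value divided by log(2), or equivalently the helper can be written with log2. A hypothetical base-2 variant (the `_bits` suffix is not an existing name in the package):

```julia
# Nats-based helper as quoted above, plus a hypothetical base-2 variant.
_entropy_vn(x)      = (x + 1/2) * log(x + 1/2)  - (x - 1/2) * log(x - 1/2)
_entropy_vn_bits(x) = (x + 1/2) * log2(x + 1/2) - (x - 1/2) * log2(x - 1/2)

x = 1.3
@assert isapprox(_entropy_vn_bits(x), _entropy_vn(x) / log(2))
```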
While writing some tests, an interesting issue arises with respect to typing and how tightly the tolerance should be specified. Is a tolerance of …
…ical stability of helper functions in metrics.jl, and removed tolerances until a consensus is reached regarding their inclusion and form.
I don't think this is necessary; we can just leave it to the user to specify the tolerance if they want to (as is similarly done in QuTiP and QuantumOptics.jl).
Given that the Float64 epsilon is …
…cally matches zero.
Just to keep this on record, I ended up removing all the tolerances in …
Sorry for being slow on review; I've been traveling all week. I will look over everything more in-depth tonight.
If this still rings true after adding the test file, then let's just have the kwarg …
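As a purely illustrative sketch of what exposing such a keyword could look like (the name `tol`, its default, and the `entropy_from_spectrum` helper are all assumptions, written against a plain vector of symplectic eigenvalues rather than the package's state type):

```julia
# Hypothetical sketch: expose the cutoff above the vacuum value 1/2 as a
# keyword argument with a default tied to machine epsilon.
_entropy_vn(x) = (x + 1/2) * log(x + 1/2) - (x - 1/2) * log(x - 1/2)

function entropy_from_spectrum(spectrum; tol = 128 * eps(Float64))
    ν = filter(x -> x > 1/2 + tol, spectrum)    # drop (near-)vacuum eigenvalues
    return sum(_entropy_vn, ν; init = 0.0)
end

entropy_from_spectrum([0.5, 0.5 + 1e-15, 0.7])  # the near-vacuum value is dropped
```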
Don't worry about it. You decide your own pace. The issue with the test file is about the tolerance of the output of the function, since we are matching the generic Gaussian method against a definitive theoretical value and there is absolutely no chance of maintaining every single bit of precision between the two, especially for the exponentials (this is why I kept the factor in the squeezed and EPR state tests limited to …).
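As a concrete example of the kind of analytic comparison described here, a hedged sketch written only against the helper (not the package API): for a two-mode squeezed vacuum with squeezing parameter r and the vacuum normalised to 1/2, the reduced single-mode state has symplectic eigenvalue cosh(2r)/2 and the closed-form entropy cosh(r)^2 log(cosh(r)^2) - sinh(r)^2 log(sinh(r)^2).

```julia
using Test

# Helper copied from the diff above; the rtol below is illustrative.
_entropy_vn(x) = (x + 1/2) * log(x + 1/2) - (x - 1/2) * log(x - 1/2)

r = 0.5                  # squeezing parameter, arbitrary choice
ν = cosh(2r) / 2         # symplectic eigenvalue of the reduced single-mode state
S_theory = cosh(r)^2 * log(cosh(r)^2) - sinh(r)^2 * log(sinh(r)^2)

@test isapprox(_entropy_vn(ν), S_theory; rtol = 1e-12)
```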
@Hamiltonian-Action Thanks for clarifying! After reading your helpful comments and playing around with the tests, I agree this is an issue and should be addressed carefully. Regarding the tests, at the risk of stating the obvious, all of the issues you mentioned are avoided if the user invokes the … and sample outside of …
Could you give a quick example of such a filter? I'm curious how this would look.
I am trying to assume the bare minimum regarding the end-user, as well as maintaining type conformity whenever possible. All of the tests actually pass at … Without a clear image of who the target audience is and what the target use-case is, I cannot condone testing against …
function entropy_vn(state::GaussianState, provided_filter::Function = x -> true)
    T = symplecticform(state.basis) * state.covar
    T = filter(x -> x > 1/2 && provided_filter(x), imag.(eigvals(T)) ./ state.ħ)
    return reduce(+, _entropy_vn.(T))
end
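For what it's worth, a call using that filter argument might look like the line below; `state` stands for an already constructed GaussianState and the atol value is purely illustrative.

```julia
# Drop symplectic eigenvalues that only exceed 1/2 through rounding noise.
S = entropy_vn(state, x -> !isapprox(x, 1/2; atol = 1e-12))
```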
Fair enough. Let's proceed with the user-provided filter then (just make the kwarg …).
The initial use-case for this package was to serve as a backend tool for QuantumSavory.jl, which is intended to be an all-encompassing quantum networking simulator used by networking scientists within the NSF's Center for Quantum Networks. The overarching idea with that library is that users can dispatch a general networking simulation onto different backend representations (stabilizer tableaus, state vectors, Gaussian formalism, etc). So some of the needs there are the following: clean integration with QuantumInterface.jl and the QuantumSavory ecosystem, guaranteed automatic differentiation support with ForwardDiff.jl and Zygote.jl, and GPU acceleration. But of course, continuous-variable quantum information is a deep research field, and there's a TON of other interesting use-cases to address and tools to implement. Just to name a few: substantial symbolic capabilities, symplectic analysis, large-scale photonic circuit simulations, simulating slightly non-Gaussian states, etc. I appreciate your concern in assuming the bare minimum regarding the end-user; it's hard to state a "typical use-case", so it's probably best to assume that the user isn't extremely familiar with the Julia language (while of course avoiding architectural limitations that prevent them from using more sophisticated tools).
Will do so in the coming days, should I get the opportunity. Though I strongly disagree on using the kwarg …
I may deal with this in the coming months, depending on how my own situation evolves.
Fixes a bug in changebasis that results in an incorrect operation (the conditional expression should have originally read + nmodes rather than + 2*nmodes) and collapses it down to one loop rather than two (assuming this does not upset the @inbounds macro).
Optimises changebasis to avoid taking any matrix multiplication entirely.
Additionally, implements the von Neumann entropy, joint fidelity, and log negativity metrics for evaluation on Gaussian state objects.
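For readers curious how a basis change can avoid matrix multiplication entirely, here is a hedged, generic sketch (not the implementation in this PR): converting between the (x1, p1, ..., xN, pN) and (x1, ..., xN, p1, ..., pN) orderings reduces to an index permutation built in a single loop over modes, with the `i + nmodes` offset being exactly the kind of index arithmetic the bug fix above refers to.

```julia
using LinearAlgebra

# Build the permutation taking pair ordering (x1, p1, x2, p2, ...) to block
# ordering (x1, ..., xN, p1, ..., pN) with one loop over modes.
function pair_to_block_permutation(nmodes::Int)
    perm = Vector{Int}(undef, 2 * nmodes)
    @inbounds for i in 1:nmodes
        perm[i]          = 2i - 1   # position coordinate of mode i
        perm[i + nmodes] = 2i       # momentum coordinate of mode i
    end
    return perm
end

# Applying the permutation by indexing avoids any matrix-matrix product.
nmodes = 2
mean_pair  = [1.0, 2.0, 3.0, 4.0]            # (x1, p1, x2, p2)
covar_pair = Matrix{Float64}(I, 4, 4)        # placeholder covariance
p = pair_to_block_permutation(nmodes)
mean_block  = mean_pair[p]                   # (x1, x2, p1, p2)
covar_block = covar_pair[p, p]
```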