Correct changebasis as well as implement the metrics in #42 #56

Open · Hamiltonian-Action wants to merge 22 commits into base: main
Conversation

@Hamiltonian-Action commented Mar 7, 2025

Fixes a bug in changebasis that resulted in an incorrect operation (the conditional expression should have originally read + nmodes rather than + 2*nmodes), and collapses the two loops down to one (assuming this does not upset the @inbounds macro). Also optimises changebasis to avoid any matrix multiplication entirely.
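
For illustration, the basis change can be carried out with a single index loop instead of a permutation-matrix multiplication; a minimal sketch, assuming a conversion from pair ordering (x1, p1, …, xN, pN) to block ordering (x1, …, xN, p1, …, pN) on a mean vector (the function name and layout here are illustrative, not the actual Gabs internals):

function pair_to_block(v::AbstractVector)
    nmodes = length(v) ÷ 2
    out = similar(v)
    @inbounds for i in 1:2*nmodes
        # Odd slots hold positions x_k, even slots hold momenta p_k;
        # in the block ordering the momenta are offset by + nmodes
        # (the bug was an offset of + 2*nmodes here).
        j = isodd(i) ? (i + 1) ÷ 2 : nmodes + i ÷ 2
        out[j] = v[i]
    end
    return out
end

# For a covariance matrix, the same index map is applied to both rows and
# columns, so no multiplication by a permutation matrix is ever needed.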

Additionally, implements the von Neumann entropy, joint fidelity, and logarithmic negativity metrics for evaluation on Gaussian state objects.

codecov bot commented Mar 7, 2025

Codecov Report

Attention: Patch coverage is 81.81818% with 14 lines in your changes missing coverage. Please review.

Files with missing lines    Patch %    Lines
src/metrics.jl              72.54%     14 Missing ⚠️

Files with missing lines    Coverage Δ
src/Gabs.jl                 100.00% <ø> (ø)
src/states.jl               98.86% <100.00%> (+0.02%) ⬆️
src/metrics.jl              72.54% <72.54%> (-27.46%) ⬇️

@apkille self-requested a review March 7, 2025 16:58
@apkille (Owner) left a comment


Thanks a lot @Hamiltonian-Action, this is great. I left some minor comments in the first round of review. We should add some tests for these new metrics. Could you add a new file test/test_metrics.jl for these tests?

@Hamiltonian-Action (Author) commented Mar 9, 2025

> Could you add a new file test/test_metrics.jl for these tests?

Will do so as soon as time permits.

Hamiltonian-Action and others added 6 commits March 9, 2025 20:34
  • Tidy up QuantumInterface import statements. (Co-authored-by: Andrew Kille <[email protected]>)
  • Fix incorrect function invocation. (Co-authored-by: Andrew Kille <[email protected]>)
  • Format metric documentation strings. (Co-authored-by: Andrew Kille <[email protected]>)
  • …ault partition on first mode, fixed issue with subsequent function calls.
src/metrics.jl (outdated diff excerpt):
end

_entropy_vn(x) = (x+(1/2)) * log(x+(1/2)) - (x-(1/2)) * log(x-(1/2))


Something I've wondered but haven't ever gotten around to checking is whether log( ((x+(1/2))^(x+(1/2))) / ((x-(1/2))^(x-(1/2))) ) would be more or less numerically stable or faster, especially with values just above 1/2 + tol.

Also I should note that currently this returns entropy in units of nats. I would guess that bits is the more familiar unit 😁 and might suggest using that convention or at the very least documenting the information units this will return.
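
(For reference, converting between the two conventions is a constant rescaling; a trivial hypothetical helper:)

# nats → bits: log2(x) = log(x) / log(2), so the entropy rescales by 1/log(2)
entropy_bits(s_nats) = s_nats / log(2)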

@Hamiltonian-Action (Author)

> documenting the information units this will return.

The docstring does state that it is indeed the natural logarithm, though whether this should be featured more prominently is a matter of personal preference.

Regarding the expression you provided, I would hazard that it might be slower due to the need to perform two exponentials and a logarithm rather than just two logarithms. One could also explore automated floating-point optimisation software such as Herbie for assistance in further enhancing performance/stability.
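
A quick empirical comparison of the two forms (a throwaway sketch, not part of the PR):

# Two mathematically equivalent forms of the per-eigenvalue entropy term;
# compare them near the x = 1/2 branch point and at moderate values.
f_logs(x)  = (x + 1/2) * log(x + 1/2) - (x - 1/2) * log(x - 1/2)
f_ratio(x) = log(((x + 1/2)^(x + 1/2)) / ((x - 1/2)^(x - 1/2)))

for x in (nextfloat(0.5), 0.5 + 1e-8, 1.0, 10.0)
    println((x, f_logs(x), f_ratio(x)))
end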

@apkille (Owner) commented Mar 13, 2025


Given that QuantumOpticsBase.jl also uses units of nats for entropy_vn, I suggest we go with this convention as well.

@Hamiltonian-Action (Author)

This is a pick-your-poison kind of situation. On one hand, going against the QuantumOpticsBase.jl precedent creates a considerable possibility for mistakes due to the API mismatch. On the other hand, utilising different logarithms within the same package in a way that is not explicitly clear from the function name creates another equally heinous tripping hazard.
My vote would be to err towards internal uniformity over external compliance.

@Hamiltonian-Action (Author)

While writing some tests, an interesting issue arises with respect to typing and how tightly the tolerance should be specified. Is a tolerance of 1e-10 too tight or too loose for the metric functions? It is fairly okay for Float64 values but less than machine epsilon for Float32. Should the default tolerances be dynamically determined based on parameter types and/or adopt a similar convention to isapprox?
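
For concreteness, the isapprox-style convention referenced here would look something like the following hypothetical helper (Base.isapprox similarly defaults its rtol to sqrt(eps) of the argument type):

# Hypothetical type-aware default tolerance, scaling with machine epsilon.
default_tol(::Type{T}) where {T<:AbstractFloat} = sqrt(eps(T))

default_tol(Float64)  # ≈ 1.49e-8
default_tol(Float32)  # ≈ 3.45f-4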

…ical stability of helper functions in metrics.jl, and removed tolerances until a consensus is reached regarding their inclusion and form.
@apkille (Owner) commented Mar 13, 2025

1e-10 feels a bit loose to me, although if it only works fairly okay for Float64 in the tests, then we should just stick with that.

> Should the default tolerances be dynamically determined based on parameter types and/or adopt a similar convention to isapprox?

I don't think this is necessary; we can just leave it to the user to specify the tolerance if they want to (as is similarly done in QuTiP and QuantumOptics.jl).

@Hamiltonian-Action (Author)

> 1e-10 feels a bit loose to me, although if it only works fairly okay for Float64 in the tests, then we should just stick with that.

Given that the Float64 epsilon is 2e-16 and the functionality requires a good deal of linear algebra with a possibly quite egregious condition number, it would not be too extraordinary to lose a handful of bits of accuracy over the course of the computation. This is especially true for squeezed states and EPR states, which have given me quite a pain, as numerical accuracy quickly vanishes due to the exponential factor involved.

@apkille self-requested a review March 18, 2025 22:37
@Hamiltonian-Action (Author)

Just to keep this on record, I ended up removing all the tolerances in metrics.jl since, as far as I understood, no consensus was reached regarding their form. If anyone wishes to re-introduce them then they may do so at their discretion, though I presently reckon the most dynamic implementation would be to include an additional parameter f::Function that accepts a user-provided filter and defaults to x->true.

@apkille (Owner) commented Mar 19, 2025

Sorry for being slow on review, I've been traveling all week. I will look over everything more in-depth tonight.

> Is a tolerance of 1e-10 too tight or too loose for the metric functions? It is fairly okay for Float64 values but less than machine epsilon for Float32.

If this still rings true after adding the test file, then let's just have the kwarg tol = 1e-10 for the relevant metric functions. It's a concise (and frequently used) approach to handling numerical tolerances that I'm fine with.
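
A sketch of that kwarg convention (illustrative name and body, not the PR's actual code):

# Illustrative kwarg-based tolerance: sum log(x) only over values strictly
# inside (tol, 1 - tol), mirroring the suggested tol = 1e-10 default.
function lognegativity_sketch(vals; tol = 1e-10)
    sum(log, filter(x -> tol < x < 1 - tol, vals); init = 0.0)
end

lognegativity_sketch([0.3, 0.9, 1.2])  # only 0.3 and 0.9 contribute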

@Hamiltonian-Action (Author) commented Mar 19, 2025

Don't worry about it. You decide your own pace.

The issue with the test file is about the tolerance on the output of the functions, since we are matching the generic Gaussian method against a definitive theoretical value and there is absolutely no chance of maintaining every single bit of precision between the two, especially through the exponentials (this is why I kept the factor in the squeezed and EPR state tests limited to [0, 1) rather than scaling them like the rest). It has almost nothing to do with the tolerance in the metric functions, which are about as good as I could get them without having to implement our own mathematical library.

  • entropy_vn does a sum of (x + (1/2)) * log(x + (1/2)) - (x - (1/2)) * log(x - (1/2)) for x > 1/2. This underflows gradually and only contributes roughly 4.19e-15 when x = nextfloat(0.5), so adding a tolerance x - (1/2) > tolerance only has an impact if the entropy is already small AND there is a tremendous number of such terms. It could certainly help combat noise in some calculations, but unless it is dynamically determined depending on the floating-point type, any sensible value will have absolutely no impact on Float32 calculations since nextfloat(0.5f0) - 0.5f0 = ~5.96f-8.
  • fidelity does a product of x + sqrt(x^2 - 1) for x > 1. This one does indeed depend considerably on the tolerance, due to the square root function. When x = nextfloat(1.0) the term equals 1.0 + ~2.11e-8, which further compounds quasi-geometrically (not quite exactly, since the additional fluff squares to 2eps and so may possibly be dropped). Again, if it is not determined dynamically depending on the floating-point type, any sensible value will have absolutely no impact on Float32 calculations since nextfloat(1.0f0) - 1.0f0 = eps(Float32) = ~1.19f-7.
  • logarithmic_negativity does a sum of log(x) for 0 < x < 1 (though, theoretically, it ought to include 0, depending on how you wish to view it). Once more, this underflows gradually at the upper end, log(prevfloat(1.0)) = ~-1.11e-16, so adding a tolerance 1 - x > tolerance leads to similar conclusions as with entropy_vn. On the other hand, the eigenvalues near zero are precisely those with the greatest contribution, so adding a tolerance is counterproductive depending on your views regarding operators that are not trace-class. (These boundary values are reproduced in the sketch below.)
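
The boundary values quoted above are straightforward to reproduce (Float64 unless stated):

x = nextfloat(0.5)
(x + 1/2) * log(x + 1/2) - (x - 1/2) * log(x - 1/2)  # ≈ 4.19e-15 (entropy_vn term)

y = nextfloat(1.0)
y + sqrt(y^2 - 1)                                    # ≈ 1.0 + 2.11e-8 (fidelity term)

log(prevfloat(1.0))                                  # ≈ -1.11e-16 (log-negativity term)

nextfloat(0.5f0) - 0.5f0                             # ≈ 5.96f-8
nextfloat(1.0f0) - 1.0f0                             # = eps(Float32) ≈ 1.19f-7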

@apkille (Owner) commented Mar 21, 2025

@Hamiltonian-Action Thanks for clarifying! After reading your helpful comments and playing around with the tests, I agree this is an issue and should be addressed carefully. Regarding the tests, at the risk of stating the obvious, all of the issues you mentioned are avoided if the user invokes the BigFloat type and uses the GenericLinearAlgebra package (to perform linear algebra routines on Matrix{BigFloat}), i.e., the tests pass if we write

using GenericLinearAlgebra  # generic linear algebra routines for Matrix{BigFloat}
squeezed(x, y) = squeezedstate(basis1, BigFloat(x), BigFloat(y))
EPR(x, y) = eprstate(basis2, BigFloat(x), BigFloat(y))

and sample outside of [0, 1). So, perhaps naively speaking, a potential solution is to simply document this workaround clearly. Then again, if this issue arises any time the metrics implemented in this PR are computed, then we should probably further implement something under the hood that deals with the numerical instabilities.

> If anyone wishes to re-introduce them then they may do so at their discretion, though I presently reckon the most dynamic implementation would be to include an additional parameter f::Function that accepts a user-provided filter and defaults to x->true.

Could you give a quick example of such a filter? I'm curious how this would look.

@Hamiltonian-Action (Author)

> all of the issues you mentioned are avoided if the user invokes the BigFloat type and uses the GenericLinearAlgebra package

I am trying to assume the bare minimum regarding the end-user, as well as maintain type conformity whenever possible. All of the tests actually pass at 1e-12, but I stuck to 1e-10 so that the guarantees are more stringent than the tests themselves, in case any future issues arise. If we exclude the exponentials, then all the tests also pass at 1e-14 if fidelity_thermal_thermal is replaced with a more accurate function -- fidelity is actually superior to the function it is being compared against. I could rectify fidelity_thermal_thermal, but it would become some horrendous monstrosity with half a dozen branches.
The deal with squeezed and EPR states (at this tolerance level) ties back to what I said earlier about entropy_vn, since occasionally a few stray values ever so slightly larger than 1/2 add noise to the result, which is already incredibly small since it would ideally be zero. For even larger values of the squeezing parameter, the functions constructing the states become the limiting factor, as their validity is diminished in the region where tanh(r) --> 1; as a matter of fact, squeezed(10.0, 0.0) does not have a permissible covariance matrix.

Without a clear image of who the target audience is and what the target use-case is, I cannot condone testing against BigFloat, as the results would be unrepresentative of what is to be expected from what is considered "typical use".

> Could you give a quick example of such a filter? I'm curious how this would look.

function entropy_vn(state::GaussianState, provided_filter::Function = x -> true)
    # Symplectic eigenvalues: imaginary parts of the eigenvalues of the
    # symplectic form times the covariance matrix, rescaled by ħ.
    T = symplecticform(state.basis) * state.covar
    # Keep only eigenvalues above the vacuum value 1/2 that also pass the
    # user-provided filter (which defaults to accepting everything).
    T = filter(x -> x > 1/2 && provided_filter(x), imag.(eigvals(T)) ./ state.ħ)
    # init guards against the empty case (e.g. a pure state with no
    # eigenvalues above 1/2), where a bare reduce(+, ...) would throw.
    return reduce(+, _entropy_vn.(T); init = zero(eltype(T)))
end
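
For example, a caller could reinstate a fixed cutoff through the filter (hypothetical usage, assuming some Gaussian state object state):

# Discard symplectic eigenvalues within 1e-10 of the vacuum value 1/2.
entropy_vn(state, x -> x - 1/2 > 1e-10)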

@apkille (Owner) commented Mar 21, 2025

Fair enough. Let's proceed with the user-provided filter then (just make the kwarg filter rather than provided_filter). In the docstrings, could you give an example of a filter that one could pass to the metric functions?

> Without a clear image of who the target audience is and what the target use-case is, I cannot condone testing against BigFloat, as the results would be unrepresentative of what is to be expected from what is considered "typical use".

The initial use-case for this package was to serve as a backend tool for QuantumSavory.jl, which is intended to be an all-encompassing quantum networking simulator used by networking scientists within the NSF's Center for Quantum Networks. The overarching idea with that library is that users can dispatch a general networking simulation onto different backend representations (stabilizer tableaus, state vectors, Gaussian formalism, etc.). So some of the needs there are the following: clean integration with QuantumInterface.jl and the QuantumSavory ecosystem, ensured automatic differentiation support with ForwardDiff.jl and Zygote.jl, and GPU acceleration. But of course, continuous-variable quantum info is a deep research field, and there's a TON of other interesting use-cases to address and tools to implement, just to name a few: substantial symbolic capabilities, symplectic analysis, large-scale photonic circuit simulations, simulating slightly non-Gaussian states, etc. I appreciate your concern in assuming the bare minimum regarding the end-user; it's hard to state a "typical" use-case, so it's probably best to assume that the user isn't extremely familiar with the Julia language (while of course avoiding architectural limitations that prevent them from using more sophisticated tools).

@Hamiltonian-Action (Author)

Will do so in the coming days, should I get the opportunity. Though I strongly disagree with using the kwarg filter, since it clashes with the core Julia namespace (Base.filter); maybe fltr or some other entirely different name would be more suitable.
Based on the description you provided, one could hope to assume that the user is at least somewhat familiar with the salient floating-point issues, whether or not they've programmed in Julia beforehand (as they ought to be if they're working at the intersection of these disciplines), so a slight bias towards Float64 is not the absolute worst thing imaginable.

> GPU acceleration
> simulating slightly non-Gaussian states

May possibly deal with this in the coming months, depending on how my own situation evolves.
