
Bayesian model dimensionality #37

Open · wants to merge 11 commits into master

Conversation

williamjameshandley (Contributor)

Description

Implementation of Bayesian model dimensionality calculations:

  • lsbi.stats.bmd
  • lsbi.model.LinearModel.bmd
  • lsbi.model.LinearModel.mutual_information
  • lsbi.model.LinearModel.dimensionality

After some effort, I was able to derive the KL divergence, mutual information, Bayesian model dimensionality and average bmd for our general case. This gives accurate and reliable estimates.
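
For reference (restating the standard definitions rather than the derivation itself): writing the posterior as $\mathcal{P}(\theta) = P(\theta|D)$ and the prior as $\pi(\theta)$,

$$\mathcal{D}_\mathrm{KL} = \int \mathcal{P}\,\ln\frac{\mathcal{P}}{\pi}\,\mathrm{d}\theta, \qquad \frac{d_\mathrm{G}}{2} = \int \mathcal{P}\left(\ln\frac{\mathcal{P}}{\pi} - \mathcal{D}_\mathrm{KL}\right)^{2}\mathrm{d}\theta,$$

with the mutual information and the dimensionality being $\mathcal{D}_\mathrm{KL}$ and $d_\mathrm{G}$ averaged over data drawn from the evidence $P(D)$.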

@ngm29 may be interested in eqs 10-14 of this cheat sheet.

Feedback on names would probably be helpful here. If we're being consistent with anesthetic, we should probably move bmd -> d_G and dkl -> D_KL. I'm not sure what we should call the mutual information (average DKL over data) or the dimensionality (average d_G over data). A usage sketch of the new methods follows below.
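
A minimal usage sketch (the call signatures below are inferred from the diff and may differ from the final API, particularly if the renaming above goes ahead):

import numpy as np
from lsbi.model import LinearModel

# Hypothetical example model; shapes chosen arbitrarily for illustration
model = LinearModel(M=np.random.default_rng(0).normal(size=(10, 100)))
D = model.evidence().rvs()  # mock data drawn from the evidence

kl = model.dkl(D)                # KL divergence of the posterior from the prior
d_G = model.bmd(D)               # Bayesian model dimensionality for this data
mi = model.mutual_information()  # dkl averaged over the evidence
dim = model.dimensionality()     # bmd averaged over the evidence

# Monte Carlo estimate with N samples (N=0 gives the analytic result)
d_G_mc = model.bmd(D, N=500)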

Checklist:

  • I have performed a self-review of my own code
  • My code is black compliant (black . --check)
  • My code is isort compliant (isort . --profile black --filter-files)
  • My code contains compliant docstrings (pydocstyle --convention=numpy lsbi)
  • New and existing unit tests pass locally with my changes (python -m pytest)
  • I have added tests that prove my fix is effective or that my feature works
  • I have appropriately incremented the semantic version number in both README.rst and lsbi/_version.py


codecov bot commented Mar 6, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 100.00%. Comparing base (2901f25) to head (ba3abce).

Additional details and impacted files
@@            Coverage Diff            @@
##            master       #37   +/-   ##
=========================================
  Coverage   100.00%   100.00%           
=========================================
  Files            6         6           
  Lines          546       633   +87     
=========================================
+ Hits           546       633   +87     


@yallup (Collaborator) left a comment:

Testing numeric approximations of bmd vs the analytic call:

[figure: numeric bmd approximations compared against the analytic call]

My suggestion would be that we may be starting to need a guide for some of the more esoteric concepts we introduce; that is probably something to strategize independently of this PR. My comments on this PR are mostly about the cases where we have to resort to MC estimates of bmd: can we get some kind of error on these, since it seems a very sensitive quantity?

Parameters
----------
D : array_like, shape (..., d)
    Data to form the posterior
n : int, optional
    Number of samples for a Monte Carlo estimate, defaults to 0
"""
return dkl(self.posterior(D), self.prior(), n)
return bmd(self.posterior(D), self.prior(), N)

Should mcerror be accessible from the model?

p = self.posterior(D)
q = self.prior()
x = p.rvs(size=(N, *self.shape[:-1]), broadcast=True)
return (p.logpdf(x, broadcast=True) - q.logpdf(x, broadcast=True)).var(axis=0)

Is an "mcerror" also possible on these numeric estimates? I guess resampling some percentage of N could generate some kind of error, but perhaps there is something more principled.
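
One possible sketch of the resampling idea (not part of this PR; it assumes the Monte Carlo bmd is formed as twice the variance of log(posterior) - log(prior) over posterior draws, as in the snippet above):

import numpy as np


def bmd_bootstrap_error(logR, n_boot=200, seed=None):
    """Crude bootstrap error bar for a Monte Carlo bmd estimate.

    logR : 1d array of log(posterior) - log(prior) at posterior samples.
    """
    rng = np.random.default_rng(seed)
    n = len(logR)
    # Resample the draws with replacement and recompute the bmd each time
    estimates = [2 * logR[rng.integers(0, n, n)].var() for _ in range(n_boot)]
    return np.std(estimates)

A more principled alternative might propagate the sampling variance of the variance estimator directly (it depends on the fourth central moment of logR), avoiding the resampling loop.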

C = self._C
return np.broadcast_to(logdet(C + MΣM) / 2 - logdet(C) / 2, self.shape)

def dimensionality(self, N=0, mcerror=False):

This may need a more informative (I would even suggest a more colloquial) comment, or just a write-up of this somewhere, as it took me a minute to remind myself that this is the bmd averaged over the evidence. The same goes for the mutual information.
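
For example, the summary line could spell this out in plain language, something along these lines (wording is only a suggestion):

def dimensionality(self, N=0, mcerror=False):
    """Expected Bayesian model dimensionality.

    This is the bmd (d_G) of the posterior relative to the prior, averaged
    over data drawn from the evidence: roughly, the number of parameters
    the data are expected to constrain. Analogously, mutual_information is
    the KL divergence averaged over the evidence.
    """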

@yallup (Collaborator) commented Jul 25, 2024

I would expect the bmd estimate from the two models to be consistent:

[figure bmd_test: histogram of mixture-model bmd estimates against the analytic LinearModel bmd]

from lsbi.model import LinearModel, MixtureModel
import numpy as np
from matplotlib import pyplot as plt

d = 100
t = 10
k = 1

rng = np.random.RandomState(0)

model_matrix = rng.normal(size=(t, d))

# A one-component mixture should be equivalent to the plain linear model
mixture_model = MixtureModel(
    M=model_matrix[None, ...],
)

linear_model = LinearModel(
    M=model_matrix,
)

true_data = mixture_model.evidence().rvs()

# Repeated Monte Carlo estimates of the bmd from the mixture model
bmds = []
for i in range(100):
    bmds.append(mixture_model.bmd(true_data, N=500))

plt.hist(bmds, density=True, label="Mixture estimate BMD")
plt.vlines(linear_model.bmd(true_data), 0, 2, color="black", label="Analytic BMD")
plt.ylim(0, 1.1)
plt.legend()
plt.show()
