Evaluation of the hypergeometric function of a matrix argument (Koev & Edelman's algorithm)
Let $(a_1, \ldots, a_p)$ and $(b_1, \ldots, b_q)$ be two vectors of real or
complex numbers, possibly empty, let $\alpha > 0$, and let $X$ be a real
symmetric or complex Hermitian matrix.
The corresponding hypergeometric function of a matrix argument is defined by
$${}_pF_q^{(\alpha)}(a_1, \ldots, a_p; b_1, \ldots, b_q; X) = \sum_{k=0}^{\infty} \frac{1}{k!} \sum_{\kappa \vdash k} \frac{{(a_1)}_{\kappa}^{(\alpha)} \cdots {(a_p)}_{\kappa}^{(\alpha)}}{{(b_1)}_{\kappa}^{(\alpha)} \cdots {(b_q)}_{\kappa}^{(\alpha)}} C_{\kappa}^{(\alpha)}(X).$$
The inner sum is over the integer partitions $\kappa$ of $k$ (which we also
denote by $|\kappa| = k$). The symbol ${(\cdot)}_{\kappa}^{(\alpha)}$ is the
generalized Pochhammer symbol, defined by
$${(c)}_{\kappa}^{(\alpha)} = \prod_{i=1}^{\ell} \prod_{j=1}^{\kappa_i} \left(c - \frac{i-1}{\alpha} + j - 1\right)$$
when $\kappa = (\kappa_1, \ldots, \kappa_\ell)$.
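As a concrete illustration, the generalized Pochhammer symbol can be computed directly from this double product. This is a minimal sketch; the function name is ours, not part of any package:

```python
from math import prod

def gen_pochhammer(c, kappa, alpha):
    """Generalized Pochhammer symbol (c)_kappa^(alpha),
    where kappa = (kappa_1, ..., kappa_l) is an integer partition."""
    return prod(
        c - (i - 1) / alpha + j - 1
        for i, kappa_i in enumerate(kappa, start=1)
        for j in range(1, kappa_i + 1)
    )

# For alpha = 1 and a one-part partition (k,), this reduces to the
# ordinary Pochhammer symbol (c)_k = c (c+1) ... (c+k-1):
# gen_pochhammer(3, (2,), 1) = 3 * 4 = 12
```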
Finally, $C_{\kappa}^{(\alpha)}$ is a Jack function.
Given an integer partition $\kappa$ and $\alpha > 0$, and a
real symmetric or complex Hermitian matrix $X$ of order $n$,
the Jack function $C_{\kappa}^{(\alpha)}(X)$
is a symmetric homogeneous polynomial of degree $|\kappa|$ in the
eigenvalues $x_1$, $\ldots$, $x_n$ of $X$.
The series defining the hypergeometric function does not always converge.
See the references for a discussion about the convergence.
The inner sum in the definition of the hypergeometric function is over
all partitions $\kappa \vdash k$ but actually
$C_{\kappa}^{(\alpha)}(X) = 0$ when $\ell(\kappa)$, the number of non-zero
entries of $\kappa$, is strictly greater than $n$.
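Because only partitions with at most $n$ non-zero parts contribute, an implementation can enumerate exactly those. Here is a hedged sketch of such an enumeration (our own helper, not the package API):

```python
def partitions(k, max_part=None, max_len=None):
    """Yield the integer partitions of k, in weakly decreasing order,
    with largest part <= max_part and at most max_len parts."""
    if max_part is None:
        max_part = k
    if max_len is None:
        max_len = k
    if k == 0:
        yield ()  # the empty partition
        return
    if max_len == 0:
        return  # k > 0 but no parts left to place
    for first in range(min(k, max_part), 0, -1):
        for rest in partitions(k - first, first, max_len - 1):
            yield (first,) + rest

# Partitions of 4 with at most 2 parts (i.e. those not killed by n = 2):
# list(partitions(4, max_len=2)) = [(4,), (3, 1), (2, 2)]
```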
For $\alpha=1$, $C_{\kappa}^{(\alpha)}$ is a Schur polynomial and it is
a zonal polynomial for $\alpha = 2$.
In random matrix theory, the hypergeometric function appears for $\alpha = 2$,
and $\alpha$ is then omitted from the notation, implicitly assumed to be $2$.
Koev and Edelman (2006) provided an efficient algorithm for the evaluation
of the truncated series
$$\sum_{k=0}^{m} \frac{1}{k!} \sum_{\kappa \vdash k} \frac{{(a_1)}_{\kappa}^{(\alpha)} \cdots {(a_p)}_{\kappa}^{(\alpha)}}{{(b_1)}_{\kappa}^{(\alpha)} \cdots {(b_q)}_{\kappa}^{(\alpha)}} C_{\kappa}^{(\alpha)}(X).$$
Hereafter, $m$ is called the truncation weight of the summation
(because $|\kappa|$ is called the weight of $\kappa$), the vector
$(a_1, \ldots, a_p)$ is called the vector of upper parameters while
the vector $(b_1, \ldots, b_q)$ is called the vector of lower parameters.
The user has to supply the vector $(x_1, \ldots, x_n)$ of the eigenvalues
of $X$.
As stated above, the hypergeometric function is defined for a real symmetric
or complex Hermitian matrix $X$, so the eigenvalues of $X$ are real.
However, pyhypergeomatrix does not impose this restriction:
the user can supply any list of real or complex numbers as the eigenvalues.
Univariate case
For $n = 1$, the hypergeometric function of a matrix argument is known as the
generalized hypergeometric function.
It does not depend on $\alpha$. The best-known case is ${}_2F_1$,
the Gauss hypergeometric function. Let's check a value. It is known that
$${}_2F_1(1, 1; 2; x) = -\frac{\log(1-x)}{x}, \qquad 0 < |x| < 1.$$
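For $n = 1$, the only contributing partition of $k$ is $(k)$, with $C_{(k)}^{(\alpha)}(x) = x^k$ and ordinary Pochhammer symbols, so the truncated series reduces to the classical one. A minimal sketch of a numerical check under that assumption, against the known identity ${}_2F_1(1, 1; 2; x) = -\log(1-x)/x$ (the function below is ours, not the package API):

```python
from math import log, prod

def hyp_series_univariate(a, b, x, m=60):
    """Truncated generalized hypergeometric series pFq(a; b; x)
    for a scalar argument x, summed up to weight m.
    Each term is updated by the ratio term_{k+1} / term_k."""
    total, term = 0.0, 1.0
    for k in range(m + 1):
        total += term
        num = prod(ai + k for ai in a)
        den = prod(bj + k for bj in b) * (k + 1)
        term *= num / den * x
    return total

x = 0.5
approx = hyp_series_univariate([1.0, 1.0], [2.0], x)
exact = -log(1 - x) / x  # = 2 log(2)
# approx and exact agree to within about 1e-10 at x = 0.5
```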
Plamen Koev and Alan Edelman.
The efficient evaluation of the hypergeometric function of a matrix argument.
Mathematics of Computation, vol. 75, no. 254, 833-846, 2006.
Robb Muirhead.
Aspects of multivariate statistical theory.
Wiley Series in Probability and Mathematical Statistics.
John Wiley & Sons, New York, 1982.
A. K. Gupta and D. K. Nagar.
Matrix variate distributions.
Chapman and Hall, 1999.