
Dev #283 (Open)

wants to merge 49 commits into base: main

Commits (49)
f538629
[Test]Single qubit density matrix test compared with qiskit.
Feb 3, 2024
4f90e92
[Test] Add two qubit gate tests for density matrix.
Feb 3, 2024
70e4224
[Test] Add three qubit gate tests for density matrix.
Feb 3, 2024
9b441c1
[Fix] Test code for density matrix operation on arbitrary num of qubits.
Feb 3, 2024
dfde51e
[Test] Add one,two,three qubit gate random layer tests.
Feb 3, 2024
0593e68
[Test] Mix random layer test for density matrix module.
Feb 3, 2024
d529c51
[Test] Add trace preserving test for density matrix.
Feb 3, 2024
c42dbb9
[Bug]Fix a small bug. The mat_dict reference in sx.py should be _sx_m…
Feb 3, 2024
3b38095
[Example] Add the mnist example that runs on a noise device
Feb 3, 2024
135446b
[Bug] Fix a minor bug in batch multiplication of density matrix.
Feb 3, 2024
8f0bbd0
[minor] update OneQubitEulerDecomposer
01110011011101010110010001101111 Feb 4, 2024
d39dece
[Fix] Fix a bug of matrix conjugation.
Feb 10, 2024
09f3452
[Test] Add test for 1 and 2 qubit parameter gates with only one param…
Feb 10, 2024
1f472ec
[File] Add a density-measurements.py file.
Feb 10, 2024
00e6dab
[Func] Implement three measurement for density matrix.
Feb 11, 2024
e077d2f
[Fix] Fix the dimension bug of expeval of density matrix.
Feb 16, 2024
204b706
[Feat] Add noise model to the density device.
Feb 16, 2024
88b67c8
[FIX] Pass a parameter number to the gate wrapper
Feb 18, 2024
81a181f
Change measurements.py
Feb 18, 2024
ad88a79
[Test] Add test for density matrix measurement
Feb 25, 2024
865b836
[Rename] Rename.
Feb 25, 2024
b48c966
[Test] Density measurement test pass.
Feb 25, 2024
a1c4012
[Fix] Encoding for density matrix. An example in mnist_new_noise.py
Feb 25, 2024
fb35f20
[Examples] Add many noise examples.
Feb 25, 2024
264dbe3
Merge branch 'main' into density-measure
Feb 25, 2024
c43c4bf
Merge branch 'dev' of https://github.com/mit-han-lab/torchquantum int…
Feb 25, 2024
590ab51
Merge branch 'dev' into density-measure
Feb 25, 2024
f1c3dfe
[Fix] Fix some bugs.
Feb 26, 2024
a35aa2c
[minor] added the correct version of a dependency
01110011011101010110010001101111 Mar 10, 2024
e2b190f
Add support for fixed params in GeneralEncoder
nikhilkhatri Mar 24, 2024
090e2dc
[minor] adding aer as well
01110011011101010110010001101111 Mar 24, 2024
37bc46e
Merge pull request #247 from GenericP3rson/bugfix
01110011011101010110010001101111 Mar 24, 2024
a5d47d2
Add pre-parameterised example to GeneralEncoder
nikhilkhatri Mar 25, 2024
b7aa9f2
Fix minor typo
singular-value Mar 28, 2024
69f0699
Add unit tests for GeneralEncoder
nikhilkhatri Mar 31, 2024
8e9de4e
Merge pull request #255 from singular-value/patch-1
01110011011101010110010001101111 Apr 19, 2024
81d995f
Update pypi dependencies
not-lain Jun 19, 2024
0bd54f7
update links
not-lain Jun 19, 2024
ba8321c
Merge pull request #280 from not-lain/improvements
01110011011101010110010001101111 Jul 3, 2024
c6f8e8b
[minor] rm the __all__ to more cleanly fix operator alias bug (#257)
01110011011101010110010001101111 Jul 3, 2024
8555b83
Unitary hack (#281)
01110011011101010110010001101111 Jul 17, 2024
a69808e
Fix MeasureMulti*PauliSum
yannick-couzinie Nov 12, 2024
46aa4e0
Use numpy<2
yannick-couzinie Nov 12, 2024
f62d372
set quantum device batch size
ankhoa1212 Nov 18, 2024
d054208
import CliffordQuantizer
ankhoa1212 Nov 18, 2024
79ce996
Merge pull request #288 from yannick-couzinie/patch-1
01110011011101010110010001101111 Nov 18, 2024
f6a074b
Merge pull request #252 from nikhilkhatri/main
01110011011101010110010001101111 Nov 18, 2024
3f76570
make import lazy
ankhoa1212 Nov 19, 2024
6ff80a8
Merge pull request #289 from ankhoa1212/main
01110011011101010110010001101111 Nov 19, 2024
8 changes: 4 additions & 4 deletions .github/workflows/functional_tests.yaml
@@ -4,7 +4,7 @@
name: Python package

on:
push:
push:
pull_request:

jobs:
@@ -17,16 +17,16 @@ jobs:
python-version: ["3.8", "3.9", "3.10", "3.11", "3.12"]

steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v3
uses: actions/setup-python@v5
with:
python-version: ${{ matrix.python-version }}
- name: Install dependencies
run: |
python -m pip install --upgrade pip
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
python -m pip install flake8 pytest qiskit-aer qiskit_ibm_runtime
python -m pip install flake8 pytest
- name: Lint with flake8
run: |
# stop the build if there are Python syntax errors or undefined names
4 changes: 2 additions & 2 deletions .github/workflows/lint.yaml
@@ -14,9 +14,9 @@ jobs:
lint:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- name: Setup Python 3.8
uses: actions/setup-python@v4
uses: actions/setup-python@v5
with:
python-version: ${{ env.PYTHON_VERSION }}
- name: Update pip and install lint utilities
4 changes: 2 additions & 2 deletions .github/workflows/pull_request.yaml
@@ -9,8 +9,8 @@ jobs:
pre-commit:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: ${{ env.PYTHON_VERSION }}
- uses: pre-commit/[email protected]
4 changes: 2 additions & 2 deletions README.md
@@ -1,5 +1,5 @@
<p align="center">
<img src="torchquantum_logo.jpg" alt="torchquantum Logo" width="450">
<img src="https://github.com/mit-han-lab/torchquantum/blob/main/torchquantum_logo.jpg?raw=true" alt="torchquantum Logo" width="450">
</p>

<h2><p align="center">Quantum Computing in PyTorch</p></h2>
@@ -55,7 +55,7 @@ Simulate quantum computations on classical hardware using PyTorch. It supports s
Researchers on quantum algorithm design, parameterized quantum circuit training, quantum optimal control, quantum machine learning, quantum neural networks.
#### Differences from Qiskit/Pennylane

Dynamic computation graph, automatic gradient computation, fast GPU support, batch model tersorized processing.
Dynamic computation graph, automatic gradient computation, fast GPU support, batch model tensorized processing.

## News
- TorchQuantum was used by the winning team of the ACM Quantum Computing for Drug Discovery Challenge.
6 changes: 3 additions & 3 deletions examples/ICCAD22_tutorial/README.md
@@ -1,6 +1,6 @@
# ICCAD 2022 Tutorial [[slides]](./iccad_tutorial.pdf)
## Section 1: TorchQuantum Basic Usage: [![](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mit-han-lab/torchquantum/blob/master/ICCAD22_tutorial/sec1_basic.ipynb)
## Section 1: TorchQuantum Basic Usage: [![](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mit-han-lab/torchquantum/blob/master/examples/ICCAD22_tutorial/sec1_basic.ipynb)

## Section 2: Use TorchQuantum on Pulse Level Optimizations: [![](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mit-han-lab/torchquantum/blob/master/ICCAD22_tutorial/sec2_pulse.ipynb)
## Section 2: Use TorchQuantum on Pulse Level Optimizations: [![](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mit-han-lab/torchquantum/blob/master/examples/ICCAD22_tutorial/sec2_pulse.ipynb)

## Section 3: Use TorchQuantum on Gate Level Optimizations: [![](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mit-han-lab/torchquantum/blob/master/ICCAD22_tutorial/sec3_gate.ipynb)
## Section 3: Use TorchQuantum on Gate Level Optimizations: [![](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mit-han-lab/torchquantum/blob/master/examples/ICCAD22_tutorial/sec3_gate.ipynb)
Empty file.
42 changes: 42 additions & 0 deletions examples/QCBM/README.md
@@ -0,0 +1,42 @@
# Quantum Circuit Born Machine
(Implementation by: [Gopal Ramesh Dahale](https://github.com/Gopal-Dahale))

Quantum Circuit Born Machine (QCBM) [1] is a generative modeling algorithm that uses the Born rule from quantum mechanics to sample from a quantum state $|\psi \rangle$ learned by training an ansatz $U(\theta)$ [1][2]. In this tutorial we show how `torchquantum` can be used to model a Gaussian mixture with a QCBM.
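
In the notation of [1], the ansatz prepares $|\psi(\theta)\rangle = U(\theta)\,|0^{\otimes n}\rangle$ and the model distribution follows from the Born rule; the script below trains it by minimizing a maximum mean discrepancy (MMD) loss against the target distribution $\pi$:

$$
p_\theta(x) = \left|\langle x \,|\, U(\theta) \,|\, 0^{\otimes n} \rangle\right|^2,
\qquad
\mathcal{L}_{\mathrm{MMD}}(\theta) = \Big\lVert \mathbb{E}_{x \sim p_\theta}[\phi(x)] - \mathbb{E}_{x \sim \pi}[\phi(x)] \Big\rVert^2 ,
$$

where $\phi$ is the feature map of a radial basis function kernel (the `MMDLoss` used in `qcbm_gaussian_mixture.py`).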

## Setup

Below is the usage of `qcbm_gaussian_mixture.py`, which can be obtained by running `python qcbm_gaussian_mixture.py -h`.

```
usage: qcbm_gaussian_mixture.py [-h] [--n_wires N_WIRES] [--epochs EPOCHS] [--n_blocks N_BLOCKS] [--n_layers_per_block N_LAYERS_PER_BLOCK] [--plot] [--optimizer OPTIMIZER] [--lr LR]

options:
-h, --help show this help message and exit
--n_wires N_WIRES Number of wires used in the circuit
--epochs EPOCHS Number of training epochs
--n_blocks N_BLOCKS Number of blocks in ansatz
--n_layers_per_block N_LAYERS_PER_BLOCK
Number of layers per block in ansatz
--plot Visualize the predicted probability distribution
--optimizer OPTIMIZER
optimizer class from torch.optim
--lr LR
```

For example:

```
python qcbm_gaussian_mixture.py --plot --epochs 100 --optimizer RMSprop --lr 0.01 --n_blocks 6 --n_layers_per_block 2 --n_wires 6
```

Using the command above gives an output similar to the plot below.

<p align="center">
<img src='./assets/sample_output.png' width=500 alt='sample output of QCBM'>
</p>


## References

1. Liu, Jin-Guo, and Lei Wang. “Differentiable learning of quantum circuit born machines.” Physical Review A 98.6 (2018): 062324.
2. Gili, Kaitlin, et al. "Do quantum circuit born machines generalize?." Quantum Science and Technology 8.3 (2023): 035021.
Binary file added examples/QCBM/assets/sample_output.png
255 changes: 255 additions & 0 deletions examples/QCBM/qcbm_gaussian_mixture.ipynb

Large diffs are not rendered by default.

129 changes: 129 additions & 0 deletions examples/QCBM/qcbm_gaussian_mixture.py
@@ -0,0 +1,129 @@
import matplotlib.pyplot as plt
import numpy as np
import torch
from torchquantum.algorithm import QCBM, MMDLoss
import torchquantum as tq
import argparse
import os
from pprint import pprint


# Reproducibility
def set_seed(seed: int = 42) -> None:
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
# When running on the CuDNN backend, two further options must be set
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
# Set a fixed value for the hash seed
os.environ["PYTHONHASHSEED"] = str(seed)
print(f"Random seed set as {seed}")


def _setup_parser():
parser = argparse.ArgumentParser()
parser.add_argument(
"--n_wires", type=int, default=6, help="Number of wires used in the circuit"
)
parser.add_argument(
"--epochs", type=int, default=10, help="Number of training epochs"
)
parser.add_argument(
"--n_blocks", type=int, default=6, help="Number of blocks in ansatz"
)
parser.add_argument(
"--n_layers_per_block",
type=int,
default=1,
help="Number of layers per block in ansatz",
)
parser.add_argument(
"--plot",
action="store_true",
help="Visualize the predicted probability distribution",
)
parser.add_argument(
"--optimizer", type=str, default="Adam", help="optimizer class from torch.optim"
)
parser.add_argument("--lr", type=float, default=1e-2)
return parser


# Function to create a gaussian mixture
def gaussian_mixture_pdf(x, mus, sigmas):
mus, sigmas = np.array(mus), np.array(sigmas)
vars = sigmas**2
values = [
(1 / np.sqrt(2 * np.pi * v)) * np.exp(-((x - m) ** 2) / (2 * v))
for m, v in zip(mus, vars)
]
values = np.sum([val / sum(val) for val in values], axis=0)
return values / np.sum(values)


def main():
set_seed()
parser = _setup_parser()
args = parser.parse_args()

print("Configuration:")
pprint(vars(args))

# Create a gaussian mixture
n_wires = args.n_wires
assert n_wires >= 1, "Number of wires must be at least 1"

x_max = 2**n_wires
x_input = np.arange(x_max)
mus = [(2 / 8) * x_max, (5 / 8) * x_max]
sigmas = [x_max / 10] * 2
data = gaussian_mixture_pdf(x_input, mus, sigmas)

# This is the target distribution that the QCBM will learn
target_probs = torch.tensor(data, dtype=torch.float32)

# Ansatz
layers = tq.RXYZCXLayer0(
{
"n_blocks": args.n_blocks,
"n_wires": n_wires,
"n_layers_per_block": args.n_layers_per_block,
}
)

qcbm = QCBM(n_wires, layers)

# To train QCBMs, we use MMDLoss with radial basis function kernel.
bandwidth = torch.tensor([0.25, 60])
space = torch.arange(2**n_wires)
mmd = MMDLoss(bandwidth, space)

# Optimization
optimizer_class = getattr(torch.optim, args.optimizer)
optimizer = optimizer_class(qcbm.parameters(), lr=args.lr)

for i in range(args.epochs):
optimizer.zero_grad(set_to_none=True)
pred_probs = qcbm()
loss = mmd(pred_probs, target_probs)
loss.backward()
optimizer.step()
print(i, loss.item())

# Visualize the results
if args.plot:
with torch.no_grad():
pred_probs = qcbm()

plt.plot(x_input, target_probs, linestyle="-.", label=r"$\pi(x)$")
plt.bar(x_input, pred_probs, color="green", alpha=0.5, label="samples")
plt.xlabel("Samples")
plt.ylabel("Prob. Distribution")

plt.legend()
plt.show()


if __name__ == "__main__":
main()
74 changes: 74 additions & 0 deletions examples/QuantumGan/ README.md
@@ -0,0 +1,74 @@
# Quantum Generative Adversarial Network (QGAN) Example

This repository contains an example implementation of a Quantum Generative Adversarial Network (QGAN) using PyTorch and TorchQuantum. The example is provided in a Jupyter Notebook for interactive exploration.

## Overview

A QGAN consists of two main components:

1. **Generator:** This network generates fake quantum data samples.
2. **Discriminator:** This network tries to distinguish between real and fake quantum data samples.

The goal is to train the generator to produce quantum data that is indistinguishable from real data, according to the discriminator. This is achieved through an adversarial training process, where the generator and discriminator are trained simultaneously in a competitive manner.
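
To make the generator's role concrete, here is a minimal sketch of a quantum generator in TorchQuantum. The `tq.QuantumModule`, `tq.QuantumDevice`, and `torchquantum.functional` APIs are from the library, but the class name `ToyQuantumGenerator` and the circuit layout are illustrative assumptions, not the implementation in `qgan_script.py`: latent noise is encoded into rotation angles, and the Born-rule probabilities of the resulting state serve as the fake sample.

```python
# Illustrative sketch only (hypothetical names, not qgan_script.py).
import torch
import torch.nn as nn
import torchquantum as tq
import torchquantum.functional as tqf


class ToyQuantumGenerator(tq.QuantumModule):
    def __init__(self, n_wires: int):
        super().__init__()
        self.n_wires = n_wires
        # one trainable RY rotation per wire
        self.rys = nn.ModuleList(
            tq.RY(has_params=True, trainable=True) for _ in range(n_wires)
        )

    def forward(self, noise: torch.Tensor) -> torch.Tensor:
        # noise: [batch, n_wires] latent vector
        qdev = tq.QuantumDevice(
            n_wires=self.n_wires, bsz=noise.shape[0], device=noise.device
        )
        for w in range(self.n_wires):
            tqf.ry(qdev, wires=w, params=noise[:, w])  # encode latent noise
            self.rys[w](qdev, wires=w)                 # trainable rotation
        for w in range(self.n_wires - 1):
            tqf.cnot(qdev, wires=[w, w + 1])           # entangle neighbours
        # Born-rule probabilities of the generated state: [batch, 2**n_wires]
        return torch.abs(qdev.get_states_1d()) ** 2


fake = ToyQuantumGenerator(n_wires=3)(torch.rand(8, 3))  # -> [8, 8] probabilities
```

The discriminator can then be any network (classical or quantum) that maps such a sample to a real/fake score.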

## Repository Contents

- `qgan_notebook.ipynb`: Jupyter Notebook demonstrating the QGAN implementation.
- `qgan_script.py`: Python script containing the QGAN model and a main function for initializing the model with command-line arguments.

## Installation

To run the examples, you need to have the following dependencies installed:

- Python 3
- PyTorch
- TorchQuantum
- Jupyter Notebook
- ipywidgets

You can install the required Python packages using pip:

```bash
pip install torch torchquantum jupyter ipywidgets
```


## Running the Examples

### Jupyter Notebook

1. Open the `qgan_notebook.ipynb` file in Jupyter Notebook.
2. Execute the notebook cells to see the QGAN model in action.

### Python Script

You can also run the QGAN model using the Python script. The script uses argparse to handle command-line arguments.

```bash
python qgan_script.py <n_qubits> <latent_dim>
```

Replace `<n_qubits>` and `<latent_dim>` with the desired number of qubits and latent dimensions.

## Notebook Details

The Jupyter Notebook is structured as follows:

- **Introduction:** Provides an overview of the QGAN and its components.
- **Import Libraries:** Imports the necessary libraries, including PyTorch and TorchQuantum.
- **Generator Class:** Defines the quantum generator model.
- **Discriminator Class:** Defines the quantum discriminator model.
- **QGAN Class:** Combines the generator and discriminator into a single QGAN model.
- **Main Function:** Initializes the QGAN model and prints its structure.
- **Interactive Model Creation:** Uses ipywidgets to create an interactive interface for adjusting the number of qubits and latent dimensions.

## Understanding QGANs

QGANs are a type of Generative Adversarial Network (GAN) that operate in the quantum domain. They leverage quantum circuits to generate and evaluate data samples. The adversarial training process involves two competing networks:

- The **Generator** creates fake quantum data samples from a latent space.
- The **Discriminator** attempts to distinguish these fake samples from real quantum data.

Through training, the generator improves its ability to create realistic quantum data, while the discriminator enhances its ability to identify fake data. This process results in a generator that can produce high-quality quantum data samples.
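
A schematic adversarial training step is sketched below with standard binary cross-entropy losses; the names `generator`, `discriminator`, `g_opt`, and `d_opt` are hypothetical, and the notebook's actual loop may differ (it assumes the discriminator outputs a probability of shape `[batch, 1]`).

```python
# Schematic GAN training step (hypothetical names; not the notebook's exact loop).
import torch
import torch.nn.functional as F


def train_step(generator, discriminator, g_opt, d_opt, real_batch, latent_dim):
    bsz = real_batch.shape[0]
    noise = torch.rand(bsz, latent_dim)

    # 1) Discriminator update: push real samples toward label 1, fakes toward 0.
    d_opt.zero_grad()
    fake = generator(noise).detach()
    d_loss = (
        F.binary_cross_entropy(discriminator(real_batch), torch.ones(bsz, 1))
        + F.binary_cross_entropy(discriminator(fake), torch.zeros(bsz, 1))
    )
    d_loss.backward()
    d_opt.step()

    # 2) Generator update: try to make the discriminator output 1 on fresh fakes.
    g_opt.zero_grad()
    g_loss = F.binary_cross_entropy(discriminator(generator(noise)), torch.ones(bsz, 1))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```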


## QGAN Implementation for CIFAR-10 Dataset

This implementation trains a QGAN on the CIFAR-10 dataset to generate fake images. It follows a similar structure to the TorchQuantum QGAN, with the addition of data loading and processing specific to the CIFAR-10 dataset.
Generated images can be seen in the folder.
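
For reference, a minimal CIFAR-10 loading pipeline with `torchvision` could look like the sketch below; the example's actual preprocessing, image size, and normalization may differ.

```python
# Minimal CIFAR-10 data pipeline sketch (assumes torchvision is installed).
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.ToTensor(),
    # scale images to [-1, 1], a common choice for GAN training
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])

train_set = datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

real_batch, _ = next(iter(train_loader))  # real images: [64, 3, 32, 32]
```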

This `README.md` file explains the purpose of the repository, the structure of the notebook, and how to run the examples, along with a brief overview of the QGAN concept for those unfamiliar with it.


## Reference
- https://arxiv.org/abs/2312.09939