
[QUESTION] Example for warp.sparse.BsrMatrix #371

Open
SangHunHan92 opened this issue Nov 29, 2024 · 1 comment
Assignees
Labels
question The issue author requires information

Comments


SangHunHan92 commented Nov 29, 2024

This link shows the documentation for Warp's sparse matrix module:
https://nvidia.github.io/warp/modules/sparse.html
However, I am having difficulty using it because there are no examples.

How do I declare a BsrMatrix (or set its initial values) so that I can assign values to the matrix and do matrix multiplication with it?
I want to create a Jacobian matrix and calculate the Hessian using BsrMatrix.
I see functions like "bsr_diag" and "bsr_zeros", but I wonder whether there are any other functions that can efficiently build a Jacobian matrix.
Can you give me a simple example for this?

Additionally, is it possible to set values in a BsrMatrix in parallel (via wp.launch), using wp.atomic_add() inside a @wp.kernel function?

# Example code
@wp.kernel
def set_Jacobian(Jacobian: BsrMatrix):
    tid = wp.tid()
    Jacobian_element_matrix = ~~~  # calculated matrix with format like [wp.array]

    Jacobian[tid] = Jacobian_element_matrix                # Is it working? 1
    wp.atomic_add(Jacobian, tid, Jacobian_element_matrix)  # Is it working? 2

wp.launch(
    kernel=set_Jacobian,
    dim=num_of_error,
    inputs=[Jacobian],
    device=device,
)

The Jacobian matrix would look something like this:
[Image: sketch of the Jacobian matrix's block structure]

@SangHunHan92 SangHunHan92 added the question The issue author requires information label Nov 29, 2024
Contributor

gdaviet commented Nov 29, 2024

Hi @SangHunHan92, the easiest way to build a BSR matrix is from three Warp arrays (row_indices, col_indices, block_values). Those arrays can be populated in parallel using the kernel of your choice. Then call bsr_set_from_triplets to convert to the compressed BSR representation, which can be used to perform matrix-vector and matrix-matrix products, among other things.

Here is an example snippet:

import numpy as np
import warp as wp
import warp.sparse as sp

block_shape = (2, 3)
block_type = wp.mat(block_shape, dtype=float)

rows_of_blocks = 3
cols_of_blocks = 4

# initialize BSR matrix
bsr_mat = sp.bsr_zeros(rows_of_blocks, cols_of_blocks, block_type)

# populate from COO triplets
row_indices = wp.array([0, 1, 1, 2], dtype=int)
col_indices = wp.array([1, 2, 2, 3], dtype=int)
values = wp.array(np.random.rand(4, *block_shape), dtype=block_type)
sp.bsr_set_from_triplets(bsr_mat, row_indices, col_indices, values)

# now matrix is in compressed format, duplicate indices have been merged
assert bsr_mat.nnz_sync() == 3

# it can be used to perform matrix-vector and matrix-matrix products
row_vec = wp.ones(shape=(rows_of_blocks, block_shape[0]), dtype=float)
col_vec = wp.ones(shape=(cols_of_blocks, block_shape[1]), dtype=float)

print(bsr_mat @ col_vec)
print(row_vec @ bsr_mat)
print((bsr_mat.transpose() @ bsr_mat) @ col_vec)

Note that the block size must be constant across the matrix, so if you want to merge matrices with different block sizes, you first need to convert them to a common block size (e.g., their greatest common divisor).
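To make the duplicate-merging behavior concrete without needing Warp or a GPU, here is a small NumPy stand-in (not Warp itself) that mimics what bsr_set_from_triplets does when the same (row, col) block index appears more than once — the duplicate blocks are summed, which is why the snippet above asserts nnz_sync() == 3 from four input triplets:

```python
import numpy as np

block_shape = (2, 3)

# same COO block triplets as the Warp snippet above; note the duplicated
# (row=1, col=2) entry, which the conversion merges by summation
row_indices = [0, 1, 1, 2]
col_indices = [1, 2, 2, 3]
rng = np.random.default_rng(42)
values = rng.random((4, *block_shape))

# accumulate duplicate (row, col) blocks, mimicking the merge step
merged = {}
for r, c, v in zip(row_indices, col_indices, values):
    merged[(r, c)] = merged.get((r, c), np.zeros(block_shape)) + v

print(len(merged))  # 3 distinct blocks remain, matching nnz_sync() == 3
```

This also shows why populating the triplet arrays from a kernel with wp.atomic_add is safe even when several threads write the same block index: the subsequent merge sums the contributions.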
