MLP in Warp with wp.tile_matmul #817

Thank you for your response. I managed to make it work (full code below): the trick was to load both the input and the (weight, bias) tensors in small tiles, as you suggested. However, I notice some minor differences between the results computed by Warp and NumPy. Why is this the case?
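For context, the tile-by-tile accumulation can be sketched in plain NumPy (a simplified model of repeated `wp.tile_matmul` accumulation; the sizes and names here are illustrative, not Warp's actual API):

```python
import numpy as np

# Illustrative NumPy model of tiled matrix multiplication: the output is
# built by accumulating small K-tile products one at a time.
rng = np.random.default_rng(0)
M, K, N, TILE = 64, 32, 48, 8

A = rng.standard_normal((M, K)).astype(np.float32)
B = rng.standard_normal((K, N)).astype(np.float32)

C = np.zeros((M, N), dtype=np.float32)
for k0 in range(0, K, TILE):
    # Each iteration loads one K-tile of A and B and accumulates its
    # partial product into the output tile.
    C += A[:, k0:k0 + TILE] @ B[k0:k0 + TILE, :]

# The tiled result matches a single full matmul to float32 precision,
# but need not match it bitwise: the summation order differs.
print(np.allclose(C, A @ B, rtol=1e-4, atol=1e-4))
```

Because the tiled loop sums partial products in a different order than a single monolithic matmul, bit-exact agreement is not expected; agreement within float32 tolerance is.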

import warp as wp 
import numpy as np 

DIM_IN = 19
DIM_HID1 = 512
DIM_HID2 = 256
DIM_HID3 = 128
DIM_OUT = 12

NUM_THREADS = 64
batch_size = 1024
TILESIZE = 8

dtype = wp.float32

# MLP ---------------------------
@wp.func
def relu(x: dtype):
    return wp.max(x, dtype(0.0))

@wp.kernel
def mlp_layer1(
    inputs: wp.array2d(dtype=dtype),     # (DIM_IN, batch_size)
    weights: wp.array2d(dtype=dtyp…
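The small Warp-vs-NumPy discrepancies are expected: float32 addition is not associative, so any change in accumulation order (tiled GPU matmul vs. NumPy's internal summation) shifts the result by a few ulps. A minimal NumPy-only sketch of this effect, using the layer shapes from the snippet above (the data here is random and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
DIM_IN, DIM_HID1, batch_size = 19, 512, 1024

# float64 reference layer: out = relu(W @ x + b)
x64 = rng.standard_normal((DIM_IN, batch_size))
w64 = rng.standard_normal((DIM_HID1, DIM_IN))
b64 = rng.standard_normal((DIM_HID1, 1))
ref = np.maximum(w64 @ x64 + b64, 0.0)

# Same layer in float32, as Warp computes it.
x32, w32, b32 = (a.astype(np.float32) for a in (x64, w64, b64))
out = np.maximum(w32 @ x32 + b32, np.float32(0.0))

# Elementwise equality fails, but the results agree to float32
# precision (maximum absolute error is typically well below 1e-3).
print(np.abs(out - ref).max())
print(np.allclose(out, ref, rtol=1e-4, atol=1e-4))
```

The practical takeaway: compare Warp output against the NumPy reference with `np.allclose` and float32-appropriate tolerances rather than expecting exact equality.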

Answer selected by DINHQuangDung1999