-
Hi @ecarl65, thanks for stopping by!
The first is important because, using C++17, we're able to convert large algebraic expressions into a single kernel that should be as good as or better than what you'd write yourself. It's also a lot more modular: hand-writing a single kernel locks you into specific data types, sizes, strides, etc., whereas writing the expression generically in MatX lets you change those parameters without changing the expression itself. The second is important because in almost all cases you will not beat the CCCL or CUDA Math libraries, so we take the same types you used in your algebraic expressions and translate the calls into those CUDA libraries. This GTC talk goes into more depth: https://www.nvidia.com/en-us/on-demand/session/gtcfall21-a31410/
-
I'm a total newb to MatX and CUDA in general. I'm a DSP engineer and program in C++ and Python, but just have a hole in my experience regarding GPUs. Anyway, I thought MatX might be a good way to dip my toes in the water.
I'm just wondering a little bit about the model. It's awesome that it has a lot of tools like filtering, FFTs, etc. Let's say I wanted to make a new algorithm, like a Hough transform, for example. I guess there are a couple of ways of going about it?
Either build it out of the existing MatX operators, or write a custom `hough` transform algorithm with appropriate low-level CUDA calls for addition into the baseline. Do I have the general idea right?