Description
At the driver level, users are free to create as many `cudaMemPool_t` instances as they like, because each pool can be configured differently. The cuda.core semantics of `dev.memory_resource` therefore read: "Give me the memory resource that wraps the device's current (or default, if nobody has touched it) mempool." However, users can also create a side `DeviceMemoryResource` instance wrapping a new `cudaMemPool_t` (say, by passing a pool config or by wrapping a foreign `cudaMemPool_t` pointer) and allocate from it instead of from `dev.memory_resource`, just as they could at the C level with the raw driver APIs. In fact, #446 demonstrates such a use case: allocating a new mempool for IPC purposes.
Originally posted by @leofang in #717 (comment)
I think this has always been the intent but I realize we don't have a dedicated tracking issue.
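
A minimal sketch of the two allocation paths described above, assuming the `cuda.core.experimental` import path and that `MemoryResource.allocate()` takes a size and a stream. The constructor arguments for the side `DeviceMemoryResource` (a pool config or a foreign pool handle) are hypothetical and are exactly what this issue would track, not a confirmed API:

```python
# Sketch only; the side-pool constructor arguments below are assumptions.
from cuda.core.experimental import Device, DeviceMemoryResource

dev = Device()
dev.set_current()
stream = dev.create_stream()

# Path 1: the memory resource wrapping the device's current/default mempool.
default_mr = dev.memory_resource
buf_a = default_mr.allocate(1 << 20, stream=stream)  # 1 MiB from the default pool

# Path 2 (hypothetical): a side DeviceMemoryResource wrapping a brand-new
# cudaMemPool_t, e.g. one configured for IPC as in #446. The "options"
# keyword is illustrative only.
# side_mr = DeviceMemoryResource(dev, options=...)  # pool config or foreign pool handle
# buf_b = side_mr.allocate(1 << 20, stream=stream)

buf_a.close()
```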