Add allocated CPU and GPU memory reporting #81
base: main
Conversation
It looks fine, but without knowledge of the use case it's hard to review. This is only giving you the amount of memory tracked by the RuntimeClient at any given point in time. I hope you wouldn't be calling this often, since you're adding it up on the fly. If you need specific memory stats, we could build something a bit more robust.
@@ -773,6 +773,9 @@ class AllocTracker {
  /// Return true if the tracker's map contains `ptr`.
  bool contains(uintptr_t ptr) const;

  /// Report total CPU and GPU memory allocated by runtime client.
  std::pair<int64_t, int64_t> reportAllocatedMemory() const;
There actually are more types than just these two, so I'd prefer if we separate it into a struct or array. The array could be indexed by all the potential values of PointerType.
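A minimal sketch of the shape this suggestion implies, assuming a hypothetical set of PointerType values (the runtime's real enumeration may have different or additional members):

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// Hypothetical PointerType values for illustration only; the runtime's
// actual enum likely differs.
enum class PointerType : size_t { host = 0, pinned_host, device, unified, count };

// Totals indexed by PointerType, so adding a new pointer kind does not
// require changing the return type of reportAllocatedMemory().
struct AllocatedMemoryReport {
  std::array<uint64_t, static_cast<size_t>(PointerType::count)> bytes{};

  uint64_t &operator[](PointerType t) { return bytes[static_cast<size_t>(t)]; }
  uint64_t operator[](PointerType t) const { return bytes[static_cast<size_t>(t)]; }
};
```

This keeps the API stable as pointer kinds are added, at the cost of callers having to pick the categories they care about.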
@@ -429,6 +429,24 @@ PointerInfo AllocTracker::lookupOrDefault(uintptr_t ptr) const {
  return map.at(ptr);
}

std::pair<int64_t, int64_t> AllocTracker::reportAllocatedMemory() const {
  int64_t totalCpuMemory = 0;
We should use uint64_t here.
MTRT_Status s = mtrtReportAllocatedMemory(self, &totalCpuMemory, &totalGpuMemory);
THROW_IF_MTRT_ERROR(s);
py::object namedtuple = py::module::import("collections").attr("namedtuple");
py::object MemoryUsage = namedtuple("MemoryUsage", "cpu_memory gpu_memory");
You'll need to update the stubs so users can see this type information in the IDE.
MLIR_CAPI_EXPORTED MTRT_RuntimeClient
mtrtMemRefGetClient(MTRT_MemRefValue memref);

/// Retrieve the runtime client allocated cpu and gpu memory.
This isn't quite accurate. You're reporting the CPU/GPU memory that is being tracked by the RuntimeClient. It can track buffers that are externally allocated.
/// Retrieve the runtime client allocated cpu and gpu memory.
MTRT_Status mtrtReportAllocatedMemory(MTRT_RuntimeClient client,
                                      int64_t *totalCpuMemory,
                                      int64_t *totalGpuMemory);
Let's use uint64_t or size_t here.
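With that suggestion applied, the C API would carry unsigned totals. A self-contained sketch, where MTRT_RuntimeClient and MTRT_Status are simplified stand-ins for the real C API types, not their actual definitions:

```cpp
#include <cstdint>

// Stand-ins for the real C API types, for illustration only.
struct MTRT_RuntimeClient {
  uint64_t trackedCpuBytes;
  uint64_t trackedGpuBytes;
};
enum MTRT_Status { mtrtStatusOk = 0 };

// Revised signature: the only change relative to the diff is
// int64_t* -> uint64_t* for the out-parameters.
MTRT_Status mtrtReportAllocatedMemory(MTRT_RuntimeClient client,
                                      uint64_t *totalCpuMemory,
                                      uint64_t *totalGpuMemory) {
  *totalCpuMemory = client.trackedCpuBytes;
  *totalGpuMemory = client.trackedGpuBytes;
  return mtrtStatusOk;
}
```

An unsigned type also matches the fact that a byte total can never be negative, and avoids overflow at half the range.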
for (const auto &entry : map) {
  const PointerInfo &info = entry.second;
  if (info.isExternallyManaged())
@christopherbate Is this sufficient for tracking only internally managed/allocated pointers?
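A self-contained sketch of the summation loop under discussion, skipping externally managed buffers and using unsigned totals as suggested in the review; this PointerInfo (its size, onDevice, and externallyManaged fields) is a simplified stand-in for the runtime's real structure:

```cpp
#include <cstdint>
#include <unordered_map>
#include <utility>

// Simplified stand-in for the runtime's PointerInfo; field names here are
// assumptions for illustration.
struct PointerInfo {
  uint64_t size = 0;
  bool onDevice = false;          // true => counts toward the GPU total
  bool externallyManaged = false; // tracked, but not owned by the client
  bool isExternallyManaged() const { return externallyManaged; }
};

// Sum tracked CPU/GPU bytes, reporting only client-owned allocations.
std::pair<uint64_t, uint64_t>
reportAllocatedMemory(const std::unordered_map<uintptr_t, PointerInfo> &map) {
  uint64_t cpu = 0, gpu = 0;
  for (const auto &entry : map) {
    const PointerInfo &info = entry.second;
    if (info.isExternallyManaged())
      continue; // externally allocated buffers are tracked but not counted
    (info.onDevice ? gpu : cpu) += info.size;
  }
  return {cpu, gpu};
}
```

Note the totals still only reflect what the tracker has seen at the moment of the call, matching the earlier caveat that this is point-in-time tracked memory, not a full memory profile.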
Add RuntimeClient Python bindings to report allocated CPU and GPU memory usage.