Commit

Added some cautionary comments about not implementing an LruVariableSlabCache.
LTLA committed Sep 27, 2024
1 parent 3642863 commit 5440086
Showing 1 changed file with 16 additions and 0 deletions.
16 changes: 16 additions & 0 deletions include/tatami_chunked/LruSlabCache.hpp
@@ -131,6 +131,22 @@ class LruSlabCache {
}
};

// COMMENT:
// As tempting as it is to implement an LruVariableSlabCache, this doesn't work out well in practice.
// This is because the Slab_ objects are re-used, and in the worst case, each object would have to be large enough to fit the largest slab.
// At this point, we have several options:
//
// - Pre-allocate each Slab_ instance to have enough memory to fit the largest slab, in which case the slabs are not variable.
// - Allow Slab_ instances to grow/shrink their memory allocations according to the sizes of their assigned slabs.
// This reduces efficiency due to repeated reallocations, and memory usage might end up exceeding the nominal limit anyway due to fragmentation.
// - Share a single memory pool across slabs, and manually handle defragmentation to free up enough contiguous memory for each new slab.
// Unlike the oracular case, we don't have the luxury of defragmenting once for multiple slabs.
// Instead, we might potentially need to defragment on every newly requested slab, which is computationally expensive.
//
// See also https://softwareengineering.stackexchange.com/questions/398503/is-lru-still-a-good-algorithm-for-a-cache-with-diferent-size-elements.
// This lists a few methods for dealing with fragmentation, but none of them are particularly clean.
// It's likely that just using the existing LruSlabCache with the maximum possible slab size is good enough for most applications.

}

#endif
