Feature: increase capacity according to the actual size returned by the allocator #489

Open
@morrisonlevi

Description


I have some similar allocators:

  1. One gives out whole pages only. On Linux this is commonly 4 KiB, and on Apple silicon MacBook Pros it is commonly 16 KiB.
  2. One has a single big static chunk. That's it, no subdivision (think no_std).

They are similar in that if I call HashMap::with_capacity_in (or another function) with a capacity of c:

  • Knowing that HashMap can and often will round c up to a bigger size.
  • Knowing that allocators in general are allowed to over-allocate, and often will.
  • Knowing that my allocators in particular will significantly over-allocate in bytes.

I would hope that HashMap would attempt to use this extra allocation space, but today it never inspects the size the allocator actually returned.
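For context, Rust's (unstable) Allocator trait already exposes this information: allocate returns a NonNull<[u8]>, and the slice's length is the actual size handed back, which may exceed Layout::size. A minimal stable-Rust sketch of how much room a page-granular allocator leaves on the table; PAGE_SIZE and the per-bucket size here are illustrative assumptions, not hashbrown's real numbers:

```rust
const PAGE_SIZE: usize = 4096;

/// Round a requested byte size up to whole pages, the way a
/// page-granular allocator would. (Hypothetical helper for illustration.)
fn rounded_allocation_size(requested: usize) -> usize {
    requested.div_ceil(PAGE_SIZE) * PAGE_SIZE
}

fn main() {
    // Suppose a table asks for 28 buckets at an assumed 40 bytes each...
    let requested = 28 * 40; // 1120 bytes
    let actual = rounded_allocation_size(requested);
    // ...the allocator hands back a whole 4 KiB page: enough for 102
    // buckets, but the extra space goes unused if nobody checks `actual`.
    println!(
        "requested {requested} B, got {actual} B ({} buckets' worth)",
        actual / 40
    );
}
```

If the table read the returned size, it could grow its capacity to fill the page instead of treating the request as exact.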

Are there any reservations about doing this? It would add a small cost on every allocation, but that's about the only downside I can think of. The benefit is that some allocators would utilize their allocations better, and for some allocators this would be an extreme improvement. With that said, I am not a hash table expert in general, and certainly a noob in hashbrown's internals, so there may be things I am unaware of.
