
Commit 4a5d2ad
increase allocated model architecture size to 64MB
PiperOrigin-RevId: 693390872
zichuan-wei authored and copybara-github committed Nov 5, 2024
1 parent 7de9eb7 · commit 4a5d2ad
Showing 1 changed file with 2 additions and 1 deletion.
ai_edge_quantizer/model_modifier.py
@@ -70,7 +70,8 @@ def modify_model(
         instructions, quantized_model
     )
     constant_buffer_size = self._process_constant_map(quantized_model)
-    if constant_buffer_size > 2**31 - 2**20:
+    # we leave 64MB for the model architecture.
+    if constant_buffer_size > 2**31 - 2**26:
       return self._serialize_large_model(quantized_model)
     else:
       return self._serialize_small_model(quantized_model)
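The check above gates which serialization path is used: the total size of the quantized model's constant buffers is compared against 2**31 bytes minus a reserve for the model architecture, and this commit grows that reserve from 2**20 bytes (1 MB) to 2**26 bytes (64 MB). Below is a minimal sketch of the threshold arithmetic; the constant names and the choose_serialization_path helper are hypothetical illustrations, not part of model_modifier.py, and reading 2**31 as the 2 GB flatbuffer ceiling is an assumption drawn from the diff.

    # Hypothetical sketch of the size check changed in this commit; constant
    # names and helper are illustrative only, not the library's API.

    FLATBUFFER_SIZE_LIMIT = 2**31      # Assumed 2 GB serialized-model ceiling.
    ARCHITECTURE_RESERVE = 2**26       # 64 MB kept free for the architecture.
    _OLD_ARCHITECTURE_RESERVE = 2**20  # Previous reserve: 1 MB.


    def choose_serialization_path(constant_buffer_size: int) -> str:
      """Mirrors the branch in modify_model: 'large' vs. 'small' serialization."""
      if constant_buffer_size > FLATBUFFER_SIZE_LIMIT - ARCHITECTURE_RESERVE:
        return 'large'
      return 'small'


    # 2 GB minus 65 MB of constants still fits under the new threshold ...
    assert choose_serialization_path(2**31 - 65 * 2**20) == 'small'
    # ... while 2 GB minus 63 MB no longer does, since 64 MB is now reserved
    # (it would have passed under the old 1 MB reserve).
    assert choose_serialization_path(2**31 - 63 * 2**20) == 'large'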
