I want to profile the memory usage of every op during training. Below is my profiling code, but the resulting profile only records 1000 snapshots of memory allocations/deallocations. How can I make the profiler capture more than 1000 `MemoryProfileSnapshot`s?
```python
import time

import tensorflow as tf
from tqdm import tqdm

# Assumed to be defined elsewhere: model, batch_size, input_shape,
# logdir, num_iterations.

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        y_pred = model(x, training=True)
        loss = loss_fn(y, y_pred)
    gradients = tape.gradient(loss, model.trainable_weights)
    return gradients

# Dummy training data.
x = tf.random.normal((batch_size, input_shape[0], input_shape[1], input_shape[2]))
y = tf.ones((batch_size,))

print("Warmup...")
for k in tqdm(range(1)):
    train_step(x, y)

t0 = time.time()
print("Profiling the model...")
tf.profiler.experimental.start(logdir)
for k in range(num_iterations):
    with tf.profiler.experimental.Trace('train', step_num=k):
        train_step(x, y)
tf.profiler.experimental.stop()
```
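For reference, the profiler can also be started with explicit options, but none of the documented `ProfilerOptions` fields appears to control the memory-snapshot cap (the 1000-snapshot limit seems to come from the trace-conversion step rather than from a start-time option). A minimal sketch, assuming TF >= 2.3 where `tf.profiler.experimental.ProfilerOptions` is available:

```python
# A minimal sketch, assuming TF >= 2.3. These are the documented
# ProfilerOptions fields; as far as I can tell, none of them raises
# the 1000-snapshot memory-profile limit.
options = tf.profiler.experimental.ProfilerOptions(
    host_tracer_level=3,    # most verbose host-side (CPU) tracing
    python_tracer_level=1,  # also trace Python function calls
    device_tracer_level=1,  # default device (GPU) tracing
)
tf.profiler.experimental.start(logdir, options=options)
for k in range(num_iterations):
    with tf.profiler.experimental.Trace('train', step_num=k):
        train_step(x, y)
tf.profiler.experimental.stop()
```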