I'm a bit confused about why powered_effect is not calculated by StudentsTTest but is provided by ZTest.

The above is the data frame that I passed into both
stat_res_df = confidence.ZTest(
    stats_df,
    numerator_column='conversions',
    numerator_sum_squares_column=None,
    denominator_column='total',
    categorical_group_columns='variant_id',
    correction_method='bonferroni')
and
stat_res_df = confidence.StudentsTTest(
    stats_df,
    numerator_column='conversions',
    numerator_sum_squares_column=None,
    denominator_column='total',
    categorical_group_columns='variant_id',
    correction_method='bonferroni')
but when I called stat_res_df.difference(level_1='control', level_2='treatment'), I found that the z-test result includes the powered_effect column, as below

but it's missing from the t-test result. A second question: why is required_sample_size also missing? Is there a way to include a sample size estimate in the result as well? Thanks!