Congrats on the release!
Not an issue as such (I couldn't open a discussion), but it would be great to trial these models in CTranslate2 format using faster-whisper.
I'm particularly interested in running these low-resource models on CPU only; faster-whisper is still the quickest option for ASR on CPU. I have quite a few use cases for this and am always looking to squeeze a little more transcription quality out of limited resources.
I should have time to take a look at converting next week, but in case anyone's already done the work...
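For reference, here's roughly what I'd try, assuming the released checkpoints use the standard Hugging Face Whisper layout (the model id below is a placeholder, not the actual repo name):

```shell
# Install the converter and runtime dependencies.
pip install ctranslate2 transformers faster-whisper

# Convert the HF checkpoint to CTranslate2 format with int8 quantization
# for CPU inference. Replace org/released-model with the real model id.
ct2-transformers-converter \
  --model org/released-model \
  --output_dir released-model-ct2 \
  --copy_files tokenizer.json preprocessor_config.json \
  --quantization int8
```

The resulting directory should then load directly with `WhisperModel("released-model-ct2", device="cpu", compute_type="int8")` in faster-whisper, though whether conversion works out of the box will depend on how closely the released architecture matches standard Whisper.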