Hello,
In Section 3.2.2 of the paper that this CRAFT-Reimplementation is based on, the authors say:
"When a real image with word-level annotations is provided, the learned interim model predicts the character region score of the cropped word images to generate character-level bounding boxes"
As far as I understand, this model is used to generate character-level bounding boxes for word images that lack character-level annotations.
My questions are: what is the interim model's architecture, and how is it trained?
Is the interim model trained on the cropped words from the SynthText images?
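To make my understanding of the quoted step concrete, here is a minimal sketch of how character boxes could be derived from a predicted region-score map of a cropped word image. The threshold value and the use of plain connected components (rather than the paper's watershed-based splitting) are my own assumptions for illustration, not the authors' exact procedure:

```python
import numpy as np

def char_boxes_from_region_score(score, thresh=0.35):
    """Split a word-level region-score map into character boxes.

    Simplified sketch of the weak-supervision step: threshold the
    interim model's character region score predicted on a cropped
    word image, then take each connected component of the mask as
    one character-level bounding box (x0, y0, x1, y1).
    NOTE: thresh=0.35 and 4-connectivity are assumptions.
    """
    mask = score >= thresh
    visited = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    boxes = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not visited[y, x]:
                # Flood-fill one connected component.
                stack = [(y, x)]
                visited[y, x] = True
                ys, xs = [], []
                while stack:
                    cy, cx = stack.pop()
                    ys.append(cy)
                    xs.append(cx)
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                # Exclusive lower-right corner, like slice bounds.
                boxes.append((min(xs), min(ys), max(xs) + 1, max(ys) + 1))
    return sorted(boxes)

# Toy score map with two "character" blobs.
score = np.zeros((4, 10))
score[1:3, 1:3] = 0.9
score[1:3, 6:9] = 0.8
print(char_boxes_from_region_score(score))  # → [(1, 1, 3, 3), (6, 1, 9, 3)]
```

If this roughly matches what the interim model does, my remaining question is only about how that model itself is obtained (architecture and training data).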