
The test accuracy is lower than README.md mentioned #74

Open
Laeglaur opened this issue Dec 21, 2020 · 5 comments

Comments

@Laeglaur

Thanks for the reimplementation.
I used the released weights to run the model, but the test results are lower than the ones mentioned in README.md.

text_threshold=0.7 low_text=0.4 link_threshold=0.4
Syndata+IC13+IC17 test on icdar2013: "precision": 0.8733264675592173, "recall": 0.7744292237442922, "hmean": 0.8209099709583736
Syndata+IC15 test on icdar2015: "precision": 0.8037280701754386, "recall": 0.705825710158883, "hmean": 0.7516021532940271

Is there something wrong with the weights?
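As a quick sanity check on the numbers above: the `hmean` field in the ICDAR evaluation output is just the harmonic mean (F1) of precision and recall, so anyone reproducing these runs can verify the triples are internally consistent. A minimal sketch:

```python
# hmean in the ICDAR evaluation output is the harmonic mean (F1) of
# precision and recall.
def hmean(precision, recall):
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Check the Syndata+IC13+IC17 numbers reported above:
print(hmean(0.8733264675592173, 0.7744292237442922))  # ≈ 0.82091
```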

@madajie9

> Thanks for the reimplementation.
> I used the released weights to run the model, but the test results are lower than the ones mentioned in README.md.
>
> text_threshold=0.7 low_text=0.4 link_threshold=0.4
> Syndata+IC13+IC17 test on icdar2013: "precision": 0.8733264675592173, "recall": 0.7744292237442922, "hmean": 0.8209099709583736
> Syndata+IC15 test on icdar2015: "precision": 0.8037280701754386, "recall": 0.705825710158883, "hmean": 0.7516021532940271
>
> Is there something wrong with the weights?

I tested Syndata.pth on the ICDAR13 test set, and the score is lower than the one mentioned in README.md:
{"precision": 0.5976377952755906, "recall": 0.6931506849315069, "hmean": 0.641860465116279, "AP": 0}

I also tried training Syndata.pth myself and tested it on the ICDAR13 test set; the result is {"precision": 0.6335740072202166, "recall": 0.6410958904109589, "hmean": 0.6373127553336361, "AP": 0}

P.S. One epoch's log with 10 validation outputs is as follows:

2021/08/16 01:22:16 - main - INFO - 370 - rrc_evaluation_funcs - {"precision": 0.5422477440525021, "recall": 0.6036529680365297, "hmean": 0.571305099394987, "AP": 0}
2021/08/16 07:16:50 - main - INFO - 370 - rrc_evaluation_funcs - {"precision": 0.6194690265486725, "recall": 0.5753424657534246, "hmean": 0.5965909090909091, "AP": 0}
2021/08/16 13:40:54 - main - INFO - 370 - rrc_evaluation_funcs - {"precision": 0.5865470852017938, "recall": 0.5972602739726027, "hmean": 0.5918552036199095, "AP": 0}
2021/08/16 20:29:51 - main - INFO - 370 - rrc_evaluation_funcs - {"precision": 0.6377799415774099, "recall": 0.5981735159817352, "hmean": 0.6173421300659755, "AP": 0}
2021/08/17 03:35:45 - main - INFO - 370 - rrc_evaluation_funcs - {"precision": 0.6593291404612159, "recall": 0.5744292237442923, "hmean": 0.6139580283064909, "AP": 0}
2021/08/17 11:02:21 - main - INFO - 370 - rrc_evaluation_funcs - {"precision": 0.6335740072202166, "recall": 0.6410958904109589, "hmean": 0.6373127553336361, "AP": 0}
2021/08/17 18:36:44 - main - INFO - 370 - rrc_evaluation_funcs - {"precision": 0.5971760797342193, "recall": 0.6566210045662101, "hmean": 0.6254893431926924, "AP": 0}
2021/08/18 02:12:47 - main - INFO - 370 - rrc_evaluation_funcs - {"precision": 0.5579029733959311, "recall": 0.6511415525114155, "hmean": 0.6009270965023177, "AP": 0}
2021/08/18 09:55:10 - main - INFO - 370 - rrc_evaluation_funcs - {"precision": 0.594, "recall": 0.5424657534246575, "hmean": 0.5670644391408114, "AP": 0}
2021/08/18 17:39:25 - main - INFO - 370 - rrc_evaluation_funcs - {"precision": 0.5416963649322879, "recall": 0.6940639269406392, "hmean": 0.6084867894315452, "AP": 0}
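If it helps anyone tracking runs like this, here is a small sketch (assuming the `rrc_evaluation_funcs` log format shown above, where each line ends in a JSON object) that pulls the scores out of each line and picks the best hmean:

```python
import json
import re

# Two sample lines copied from the log above; in practice, read these
# from the training log file.
LOG_LINES = """\
2021/08/16 01:22:16 - main - INFO - 370 - rrc_evaluation_funcs - {"precision": 0.5422477440525021, "recall": 0.6036529680365297, "hmean": 0.571305099394987, "AP": 0}
2021/08/17 11:02:21 - main - INFO - 370 - rrc_evaluation_funcs - {"precision": 0.6335740072202166, "recall": 0.6410958904109589, "hmean": 0.6373127553336361, "AP": 0}
""".splitlines()

def parse_scores(lines):
    """Extract the trailing JSON score object from each evaluation log line."""
    scores = []
    for line in lines:
        match = re.search(r"\{.*\}$", line)
        if match:
            scores.append(json.loads(match.group(0)))
    return scores

scores = parse_scores(LOG_LINES)
best = max(scores, key=lambda s: s["hmean"])
print(best["hmean"])  # 0.6373127553336361
```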

@Lovegood-1

Hi,

I don't think this is the authors' fault. I used the pretrained model provided by the authors on SynthText and continued training on icdar2015. I got:

{"precision": 0.8537688442211055, "recall": 0.8180067404910929, "hmean": 0.8355052864519302, "AP": 0}

@madajie9

madajie9 commented Nov 2, 2021

> Hi,
>
> I don't think this is the authors' fault. I used the pretrained model provided by the authors on SynthText and continued training on icdar2015. I got:
>
> {"precision": 0.8537688442211055, "recall": 0.8180067404910929, "hmean": 0.8355052864519302, "AP": 0}

Thank you very much for replying!
Did you use the "new gaussian map method" option in the training script?

@Lovegood-1

> > Hi,
> > I don't think this is the authors' fault. I used the pretrained model provided by the authors on SynthText and continued training on icdar2015. I got:
> >
> > {"precision": 0.8537688442211055, "recall": 0.8180067404910929, "hmean": 0.8355052864519302, "AP": 0}
>
> Thank you very much for replying! Did you use the "new gaussian map method" option in the training script?

Can you show me the line in the code where the "new gaussian map method" is used? I just ran the official training code trainic15data.py without any changes, so maybe I did use that option if it is set as the default.

@wangbi0912

> Thanks for the reimplementation.
> I used the released weights to run the model, but the test results are lower than the ones mentioned in README.md.
> text_threshold=0.7 low_text=0.4 link_threshold=0.4
> Syndata+IC13+IC17 test on icdar2013: "precision": 0.8733264675592173, "recall": 0.7744292237442922, "hmean": 0.8209099709583736
> Syndata+IC15 test on icdar2015: "precision": 0.8037280701754386, "recall": 0.705825710158883, "hmean": 0.7516021532940271
> Is there something wrong with the weights?
>
> I tested Syndata.pth on the ICDAR13 test set, and the score is lower than the one mentioned in README.md: {"precision": 0.5976377952755906, "recall": 0.6931506849315069, "hmean": 0.641860465116279, "AP": 0}
>
> I also tried training Syndata.pth myself and tested it on the ICDAR13 test set; the result is {"precision": 0.6335740072202166, "recall": 0.6410958904109589, "hmean": 0.6373127553336361, "AP": 0}
>
> [one epoch's validation log, quoted in full in the earlier comment]

Hi, could you show me how to evaluate my model? I'm new to this and can't understand eval/script.py. Thank you very much!
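Not the maintainer, but roughly what eval/script.py (the ICDAR Robust Reading evaluation) does: match each detected box to a ground-truth box, count the matches, and compute precision/recall/hmean. Below is a toy sketch of that idea using simple IoU-based greedy matching on axis-aligned boxes; the real script handles quadrilaterals, "don't care" regions, and other protocol details, so treat this only as an illustration of the scoring logic:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def evaluate(detections, ground_truths, iou_thresh=0.5):
    """Greedy one-to-one matching; returns (precision, recall, hmean)."""
    matched_gt = set()
    tp = 0
    for det in detections:
        for gi, gt in enumerate(ground_truths):
            if gi not in matched_gt and iou(det, gt) >= iou_thresh:
                matched_gt.add(gi)
                tp += 1
                break
    precision = tp / len(detections) if detections else 0.0
    recall = tp / len(ground_truths) if ground_truths else 0.0
    hm = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, hm

# Two GT boxes, two detections: one good match, one false positive.
gts = [(0, 0, 10, 10), (20, 20, 30, 30)]
dets = [(1, 1, 10, 10), (50, 50, 60, 60)]
print(evaluate(dets, gts))  # (0.5, 0.5, 0.5)
```

In practice you would feed the script your per-image detection files and the ICDAR ground-truth zip; the numbers it logs (as in the comments above) come from exactly this kind of precision/recall aggregation.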
