
When fine-tuning with albert.base (English), setting --gradient_accumulation_steps greater than 1 goes straight into evaluating without training #24

Open
YuxiangLu opened this issue Nov 12, 2019 · 1 comment

Comments

@YuxiangLu

No description provided.

@lonePatient
Owner

@YuxiangLu Do you mean when step is less than gradient_accumulation_steps? I took a quick look; training and eval do end up running at the same time. It should be:

            if args.local_rank in [-1, 0] and args.logging_steps > 0 and global_step % args.logging_steps == 0:
                # Log metrics
                if args.local_rank == -1:  # Only evaluate on a single GPU, otherwise metrics may not average well
                    results = evaluate(args, model, tokenizer)

It shouldn't have much impact on the results; I'll fix it in a bit. Thanks.
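For context, here is a minimal, self-contained sketch (my own illustration, not the repository's actual training loop; the model, data, and hyperparameter values are placeholders) of the accumulation pattern the snippet above sits in, showing why evaluation can fire before any optimizer step when --gradient_accumulation_steps > 1:

    import torch
    import torch.nn as nn

    # Illustrative values; stand-ins for args.gradient_accumulation_steps
    # and args.logging_steps.
    gradient_accumulation_steps = 2
    logging_steps = 10

    model = nn.Linear(4, 2)  # stand-in for the ALBERT model
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    global_step = 0
    for step in range(100):  # stand-in for iterating over the dataloader
        x = torch.randn(8, 4)
        y = torch.randint(0, 2, (8,))
        loss = loss_fn(model(x), y) / gradient_accumulation_steps
        loss.backward()

        # The optimizer only steps (and global_step only advances) every
        # gradient_accumulation_steps batches. While global_step is still 0,
        # `global_step % logging_steps == 0` is True, so an unguarded
        # per-batch check would trigger evaluation before any training
        # has happened.
        if (step + 1) % gradient_accumulation_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
            global_step += 1

            if logging_steps > 0 and global_step % logging_steps == 0:
                print(f"would evaluate at global_step={global_step}")
                # e.g. results = evaluate(args, model, tokenizer)

Nesting the evaluation inside the accumulation branch (or equivalently adding a `global_step > 0` guard) ensures at least one optimizer step runs before any evaluation.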
