Pull requests: huggingface/transformers

Fix failing GGML test
#34871 opened Nov 22, 2024 by MekkCyber
Fix support for image processors modifications in modular
#34866 opened Nov 21, 2024 by yonigozlan
1 of 5 tasks
Rename OLMo November to OLMo2
#34864 opened Nov 21, 2024 by 2015aroras
3 of 5 tasks
Bitnet test fix to avoid using gated model
#34863 opened Nov 21, 2024 by MekkCyber
Fix import structure for Fast Image processors
#34859 opened Nov 21, 2024 by yonigozlan
smol improvements to support more flexible usage
#34857 opened Nov 21, 2024 by andimarafioti
1 of 4 tasks
Remove quantization related config from dequantized model
#34856 opened Nov 21, 2024 by konradkalita
3 of 5 tasks
Grounding DINO Processor standardization (labels: Processing, run-slow, Vision)
#34853 opened Nov 21, 2024 by qubvel
3 of 5 tasks
Gemma flex attention
#34851 opened Nov 21, 2024 by dame-cell Draft
BLIP: enable device map
#34850 opened Nov 21, 2024 by zucchini-nlp
Watermarking: fix order
#34849 opened Nov 21, 2024 by zucchini-nlp
Comments update for better reading
#34844 opened Nov 21, 2024 by JohannFaust666
5 tasks done
Update Mistral conversion script
#34829 opened Nov 20, 2024 by Cyrilvallez
Tiny typos in gemma2_modular.py after flex_attention introduction
#34828 opened Nov 20, 2024 by MekkCyber
1 of 5 tasks
[WIP] Add flex attention for qwen2
#34827 opened Nov 20, 2024 by MekkCyber