Title
Mass-Editing Memory with Attention in Transformers: A cross-lingual exploration of knowledge
Published Date
2024-08-01
Source
ACL
Head Name
Memory Head
Summary
Innovation: The paper introduces Mass-Editing Memory with Attention in Transformers (MEMAT), a method that significantly improves cross-lingual knowledge editing in transformer language models while modifying only a small number of parameters, chiefly by optimizing specific attention heads.
Tasks: The study evaluates MEMAT on cross-lingual datasets in English and Catalan, measuring factual knowledge editing through the efficacy, generalization, and specificity of edits under varied prompts and interventions (see the sketch after this summary).
Significant Result: MEMAT achieves more than a 10% improvement in magnitude metrics over previous methods, with better portability and computational efficiency in cross-lingual knowledge editing and no significant degradation of previously stored knowledge.
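As a rough illustration of the efficacy/generalization/specificity protocol mentioned in the Tasks entry (not the paper's actual evaluation harness), the Python sketch below scores an edited model on three prompt sets. The `predict` stub and the case format are assumptions introduced for the example:

```python
# Hedged sketch of the standard knowledge-editing metrics named in the
# summary (efficacy, generalization, specificity). The `predict` callable
# and the case format are illustrative assumptions, not the paper's code.
from typing import Callable, Dict, List

Case = Dict[str, str]  # {"prompt": ..., "target": ...}

def success_rate(cases: List[Case], predict: Callable[[str], str]) -> float:
    """Fraction of cases where the model completes the prompt with the target."""
    return sum(predict(c["prompt"]) == c["target"] for c in cases) / len(cases)

def evaluate_edits(
    edit_cases: List[Case],          # the edited prompts themselves
    paraphrase_cases: List[Case],    # reworded versions of the edited prompts
    neighborhood_cases: List[Case],  # related but unedited facts
    predict: Callable[[str], str],
) -> Dict[str, float]:
    return {
        "efficacy": success_rate(edit_cases, predict),              # edit took hold
        "generalization": success_rate(paraphrase_cases, predict),  # survives rewording
        "specificity": success_rate(neighborhood_cases, predict),   # no collateral damage
    }

if __name__ == "__main__":
    # Stub standing in for the edited transformer; in a cross-lingual setup,
    # the same cases would be prepared in both English and Catalan.
    predict = lambda prompt: "Dublin"
    cases = [{"prompt": "The capital of Ireland is", "target": "Dublin"}]
    print(evaluate_edits(cases, cases, cases, predict))
```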