
What conditions must a new model satisfy to be integrated? #44

Open
klxqlehua opened this issue Jun 17, 2024 · 2 comments
Comments

@klxqlehua

klxqlehua commented Jun 17, 2024

Hi, I have read the paper and the code implementation in detail. If I want to integrate a new model so that it supports InfLLM, what conditions must be met? My understanding:

  1. Positional encoding: the new model's attention must also use the RotaryEmbeddingESM encoding; otherwise the trained model and InfLLM inference would not be equivalent in terms of positional encoding.
  2. The new model's model.model.forward must implement exactly the same logic as InfLLM's model_forward.
  3. The new model's attention forward must accept the following signature, so that its arguments match InfLLM's hf_forward exactly:

     def forward(
         self,
         hidden_states: torch.Tensor,
         attention_mask: Optional[torch.Tensor] = None,
         position_ids: Optional[torch.LongTensor] = None,
         past_key_value: Optional[Cache] = None,
         output_attentions: bool = False,
         use_cache: bool = False,
         **kwargs,
     )

  That should be sufficient, right? Are there any other hard requirements?
  If that is indeed all, then integrating a new open-source model should be quite easy, right? Why have you only integrated these three: LlamaForCausalLM, MistralForCausalLM, and Qwen2ForCausalLM?
@klxqlehua klxqlehua changed the title from "Equivalence of positional encoding between model training and InfLLM inference" Jun 17, 2024
@klxqlehua klxqlehua changed the title to "What conditions must a new model satisfy to be integrated?" Jun 18, 2024
@guyan364
Collaborator

Your understanding is correct; basically, any model that currently uses RoPE can work with InfLLM.
We do not have much time to maintain this repository; it is now mainly for reproducing the results in the paper.
If you need to adapt other open-source models, you can follow the implementation in patch.py and add an attention forward replacement for those models.
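The replacement approach described above can be sketched roughly as follows. This is a minimal illustration of the monkey-patching pattern, not InfLLM's actual patch.py code; `DummyAttention` and `make_infllm_forward` are hypothetical names introduced here for the example.

```python
from typing import Any, Optional

class DummyAttention:
    """Stand-in for a Hugging Face model's attention module (hypothetical)."""
    def forward(self, hidden_states: Any, attention_mask: Optional[Any] = None,
                **kwargs):
        # Original behavior: identity, for illustration only.
        return hidden_states

def make_infllm_forward(original_forward):
    """Build a replacement forward whose signature matches the
    hf_forward-style argument list quoted in the issue."""
    def forward(self, hidden_states, attention_mask=None, position_ids=None,
                past_key_value=None, output_attentions=False, use_cache=False,
                **kwargs):
        # A real patch would route keys/values through InfLLM's block-level
        # context memory here; this placeholder simply delegates to the
        # original implementation.
        return original_forward(self, hidden_states,
                                attention_mask=attention_mask, **kwargs)
    return forward

# Replace the class-level forward so every instance of the model's
# attention module uses the patched version.
DummyAttention.forward = make_infllm_forward(DummyAttention.forward)
```

Because the replacement keeps the same argument list as the original forward, the rest of the model's forward pass can call it unchanged, which is the point of condition 3 above.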

@klxqlehua
Author

Okay, thanks for your reply.
