
Same generation for different N for K>1 #19

Open · UmbertoTomasini opened this issue Aug 13, 2024 · 1 comment

@UmbertoTomasini

I have a question about the role of N (the number of particles) and K (the factor) in the prompt intersection task. I am trying to replicate Fig. 4 in the workshop paper, with the same 2 prompts.

I am observing that, for N>1 and K=1, the obtained continuations are different for different N. As soon as K>1, however, they are all equal. I attach a couple of examples and my code for the model; a small duplicate-counting helper is sketched after the script.

Does the fact that I chose batch_size=1 matter? I stop generation after 20 tokens.

Example N=2, K=1
20 and has never spoken to me. (Though we may be in the same lecture
compression of time and the expansion of space. Are you aware of the work of John Arch

Example N=2, K=2
19th century English physicist James Clerk Maxwell. His work on elect
19th century English physicist James Clerk Maxwell. His work on elect

import asyncio
import os

from hfppl import CachedCausalLM
from hfppl import LMContext
from hfppl import Model
from hfppl import smc_steer



if "HF_AUTH_TOKEN" in os.environ:
    HF_AUTH_TOKEN = os.environ["HF_AUTH_TOKEN"]

# Load the language model.
# Mistral and Vicuna are open models; to use a model with restricted access, like LLaMA 2,
# pass your HuggingFace API key as the optional `auth_token` argument:
#LLM = CachedCausalLM.from_pretrained(
#    "meta-llama/Meta-Llama-3-8B", auth_token=HF_AUTH_TOKEN
#)
LLM = CachedCausalLM.from_pretrained("lmsys/vicuna-7b-v1.5")
# LLM = CachedCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
LLM.batch_size = 1


class PromptIntersection(Model):
    # Initialize
    def __init__(self, prompts, max_tokens):
        super().__init__()
        self.s = ""
        self.prompts = prompts
        self.x = [LMContext(LLM, p) for p in prompts]
        self.max_tokens = max_tokens

    # Generate
    async def step(self):
        w = await self.sample(self.x[0].next_token())

        # Reduce number of max tokens remaining
        self.max_tokens -= 1

        #(self.transformer(self.x[0]))
        for x in self.x[1:]:
            await self.observe(x.next_token(), w)

        if w == LLM.tokenizer.eos_token_id or self.max_tokens == 0:
            self.finish()
        else:
            self.s += w



prompts = ["My favorite physicist is probably ", "My favorite writer is probably "]


async def main():
    constraint_model = PromptIntersection(prompts, 20)
    # smc_steer(model, N, K): N particles, each extended K ways per step.
    particles = await smc_steer(constraint_model, 2, 3)
    for p in particles:
        print(f"{p.s}")


asyncio.run(main())
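
For reference, a small helper along these lines (not in the attached script; it assumes smc_steer(model, N, K) as called above, and that p.s is the generated string) makes it easy to check how many of the returned continuations are actually distinct:

async def check_duplicates(n_particles, factor):
    # Run the intersection model and report how many distinct strings come back.
    model = PromptIntersection(prompts, 20)
    particles = await smc_steer(model, n_particles, factor)
    unique = {str(p.s) for p in particles}
    print(f"N={n_particles}, K={factor}: {len(unique)} distinct out of {len(particles)}")

# Example: asyncio.run(check_duplicates(2, 2))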

@alex-lew (Contributor) commented Dec 8, 2024

Hi @UmbertoTomasini, sorry I missed this!

The without-replacement sampling guarantees that when shrinking NK options down to N options at each step, we will choose N distinct indices from [1, 2, ..., NK]. However, it can still be the case that some of those options contain the same exact string. In the N=2, K=2 case, you might end up with something like this:

State before extension:
Particle 1: 19th century English physicist James Clerk
Particle 2: 20 and has never spoken to

State after extension:
Particle 1a: 19th century English physicist James Clerk Maxwell
Particle 1b: 19th century English physicist James Clerk Maxwell
Particle 2a: 20 and has never spoken to me
Particle 2b: 20 and has never spoken to a

Chosen particles (2 out of 4): 1a, 1b

That is, during the extension step, the K=2 extensions both happen to sample the same next token. The without-replacement resampling scheme does not account for this--it only ensures that distinct particle indices (1a and 1b, rather than, say, 1a and 1a) will be chosen.
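
To make this concrete, here is a minimal, self-contained sketch (not the hfppl resampling code; the strings and weights are made up for illustration) of weighted without-replacement selection over the NK candidates above:

import numpy as np

# N=2 particles, each extended K=2 ways -> NK=4 candidate strings.
# Particle 1's two extensions happened to sample the same next token.
candidates = [
    "19th century English physicist James Clerk Maxwell",  # 1a
    "19th century English physicist James Clerk Maxwell",  # 1b
    "20 and has never spoken to me",                        # 2a
    "20 and has never spoken to a",                         # 2b
]
weights = np.array([0.45, 0.45, 0.05, 0.05])  # made-up weights favoring particle 1

rng = np.random.default_rng(0)
# Without-replacement selection guarantees distinct *indices*, not distinct *strings*.
chosen = rng.choice(len(candidates), size=2, replace=False, p=weights / weights.sum())
for i in chosen:
    print(i, candidates[i])
# With these weights, indices 0 and 1 are usually selected, so both surviving
# particles carry the identical "Maxwell" continuation.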
