
chore: wip #640

Closed
wants to merge 1 commit into from

Conversation

zeeshanlakhani
Contributor

No description provided.


codecov bot commented May 1, 2024

Codecov Report

Attention: Patch coverage is 0.82645%, with 120 lines in your changes missing coverage. Please review.

Project coverage is 69.19%. Comparing base (2d0cfe9) to head (bf786db).
Report is 1 commit behind head on main.

❗ Current head bf786db differs from the pull request's most recent head 528c69b. Consider uploading reports for commit 528c69b to get more accurate results.

Additional details and impacted files

Impacted file tree graph

@@            Coverage Diff             @@
##             main     #640      +/-   ##
==========================================
- Coverage   69.85%   69.19%   -0.67%     
==========================================
  Files          98       98              
  Lines       13394    13515     +121     
==========================================
- Hits         9357     9352       -5     
- Misses       4037     4163     +126     
| Files | Coverage Δ |
|---|---|
| homestar-wasm/src/wasmtime/world.rs | 87.94% <100.00%> (+0.05%) ⬆️ |
| homestar-functions/subtract/src/bindings.rs | 0.00% <0.00%> (ø) |
| homestar-wasm/src/wasmtime/host/helpers.rs | 9.86% <0.00%> (-31.80%) ⬇️ |

... and 2 files with indirect coverage changes

```sh
curl localhost:3000/run --json @map_reduce.json
```

To more applicably encode the [MapReduce][map-reduce] example from the [Crowdforge][crowdforge] paper, I implemented a `prompt_chain` Wasm/WASI function, registered on the Host, that takes in a system prompt (e.g. "You are a journalist writing about cities."), an input (e.g. an ongoing article), a map-step prompt with a `{{text}}` placeholder that is filled in, a reduce step that folds over (combines) the generated text(s) from the map step, and an optional LLaMA model stored as a [`gguf`][gguf] file. If the model path is not provided, the Host falls back to the default `Meta-Llama-3-8B-Instruct.Q4_0.gguf` model.
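A minimal sketch of that map/reduce chaining flow, with the LLM call mocked. The real Host dispatches to a `gguf`-backed LLaMA backend; here `generate`, its output format, and the chunked input are assumptions purely for illustration of the control flow:

```rust
// Hypothetical sketch of the map/reduce prompt chaining described above.
// `generate` stands in for a real LLaMA inference call; it is mocked so
// the chaining logic is runnable on its own.
fn generate(system: &str, prompt: &str) -> String {
    // Mock LLM: return a marker derived from the inputs instead of text.
    format!("[{}|{}]", system.len(), prompt.len())
}

/// Fill the `{{text}}` placeholder for each input chunk (map step),
/// then fold the generated pieces into one final generation (reduce step).
fn prompt_chain(
    system: &str,
    chunks: &[&str],
    map_prompt: &str,    // must contain a `{{text}}` placeholder
    reduce_prompt: &str, // must contain a `{{text}}` placeholder
) -> String {
    // Map step: one generation per input chunk.
    let mapped: Vec<String> = chunks
        .iter()
        .map(|chunk| generate(system, &map_prompt.replace("{{text}}", chunk)))
        .collect();
    // Reduce step: combine all map outputs and generate once over them.
    generate(system, &reduce_prompt.replace("{{text}}", &mapped.join("\n")))
}

fn main() {
    let out = prompt_chain(
        "You are a journalist writing about cities.",
        &["paragraph one", "paragraph two"],
        "Summarize: {{text}}",
        "Combine these summaries into one article: {{text}}",
    );
    println!("{out}");
}
```

The design choice worth noting is that the reduce step is itself just another prompt over the concatenated map outputs, so the same `generate` entry point serves both phases.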
Contributor
Suggested change
To more applicability encode the [MapReduce][map-reduce] example from the [Crowdforge][crowdforge] paper, I implemented a `prompt_chain` Wasm/WASI function registered on the Host that takes in a system prompt (e.g. "You are journalist writing about cities."), an input (e.g. an ongoing article), a map step prompt with a `{{text}}` placeholder that is filled in, a reduce step, which folds over (combines) the generated text(s) from the map step, and then the optional LLaMA model stored as a [`gguf`][gguf]. If the optional model path is not provided, the Host will fall back to the default `Meta-Llama-3-8B-Instruct.Q4_0.gguf` model.
To more applicably encode the [MapReduce][map-reduce] example from the [Crowdforge][crowdforge] paper, I implemented a `prompt_chain` Wasm/WASI function registered on the Host that takes in a system prompt (e.g. "You are journalist writing about cities."), an input (e.g. an ongoing article), a map step prompt with a `{{text}}` placeholder that is filled in, a reduce step, which folds over (combines) the generated text(s) from the map step, and then the optional LLaMA model stored as a [`gguf`][gguf]. If the optional model path is not provided, the Host will fall back to the default `Meta-Llama-3-8B-Instruct.Q4_0.gguf` model.
