
Actions: frost-beta/llm.js

Showing runs from all workflows
86 workflow runs


Allow kv cache to persist between steps
build #36: Commit 0853457 pushed by zcbenz
September 26, 2024 05:51 4m 6s main
Test llava with quantized weights
build #35: Commit c960a33 pushed by zcbenz
September 26, 2024 03:36 5m 11s main
Make step accept options
build #34: Commit 6ebc13c pushed by zcbenz
September 26, 2024 02:37 26m 58s main
Split utilities from llm.ts
build #33: Commit 73164d6 pushed by zcbenz
September 26, 2024 02:12 30m 4s main
Fix weights sanitizing
build #32: Commit 5e7b4ac pushed by zcbenz
September 25, 2024 08:26 1h 22m 12s main
Fix missing spaces when streaming output
build #31: Commit 366e120 pushed by zcbenz
September 25, 2024 08:15 1m 37s main
Fix quantization
build #30: Commit 09022b2 pushed by zcbenz
September 25, 2024 06:44 1m 20s main
Make llm-chat work with llava
build #29: Commit 15f6c16 pushed by zcbenz
September 25, 2024 06:35 1m 25s main
Initial support for llava
build #28: Commit a4418be pushed by zcbenz
September 25, 2024 02:26 4m 40s main
No more need to limit memory cache
build #27: Commit 73fe54e pushed by zcbenz
September 24, 2024 12:16 3m 54s v0.2.2
No more need to limit memory cache
build #26: Commit 73fe54e pushed by zcbenz
September 24, 2024 12:16 3m 46s main
Fix errors in converted code
build #25: Commit 0cdc1bb pushed by zcbenz
September 24, 2024 10:00 1m 40s v0.2.1
Do not print download progress in CI
build #24: Commit d68791d pushed by zcbenz
September 24, 2024 08:48 3m 15s main
Fix errors in converted code
build #23: Commit 0cdc1bb pushed by zcbenz
September 24, 2024 08:38 4m 8s main
Implement RotatingKVCache
build #22: Commit c05b95f pushed by zcbenz
September 24, 2024 06:48 38m 7s main
Expose APIs
build #21: Commit f606cab pushed by zcbenz
September 23, 2024 11:38 3m 42s v0.2.0
Expose APIs
build #20: Commit f606cab pushed by zcbenz
September 23, 2024 11:38 3m 28s main
Recognize null in config files
build #19: Commit 4d5dd43 pushed by zcbenz
September 23, 2024 11:11 4m 32s main
Make the tokenizer a class
build #18: Commit 0e1674a pushed by zcbenz
September 23, 2024 10:48 1m 28s main
Fix failures caused by the conversion
build #17: Commit 53efa67 pushed by zcbenz
September 23, 2024 10:24 1m 30s main
Convert code to TypeScript
build #16: Commit a83a1ab pushed by zcbenz
September 23, 2024 10:13 2m 2s main
Fix issues with tokenizer of llama3.1
build #15: Commit 7c21371 pushed by zcbenz
August 31, 2024 01:32 3m 27s v0.1.3
Fix issues with tokenizer of llama3.1
build #14: Commit 7c21371 pushed by zcbenz
August 31, 2024 01:32 3m 8s main
Add createAttentionMask helper
build #13: Commit 1a2a86e pushed by zcbenz
August 4, 2024 04:02 4m 15s v0.1.2
Add createAttentionMask helper
build #12: Commit 1a2a86e pushed by zcbenz
August 4, 2024 04:01 3m 29s main