SlickGPT uses gpt3-tokenizer, which is one of the few libs I found that "just runs" in the browser. It's close enough but not perfect, as the calculation in slickgpt/src/misc/openai.ts (line 69 in af5d5e7) shows.
Another, bigger problem is performance. gpt3-tokenizer has a huge payload and is pretty slow. Other solutions use advanced stuff like Node Buffer structures and are much faster, but they don't run easily in the browser without Node.
Any ideas how to calculate the tokens per Chat (context, prompts, completions) more accurately and faster?
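For what it's worth, a dependency-free fallback is a character-based heuristic (OpenAI's docs suggest roughly 4 characters per token for English text). A minimal sketch, assuming a chat is an array of `{ role, content }` messages; `estimateChatTokens` and the per-message overhead constant are illustrative, not part of SlickGPT:

```javascript
// Rough, dependency-free token estimate using the ~4 chars/token
// rule of thumb. Not exact, but fast and payload-free.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Sum an estimate over a whole chat (context + prompts + completions).
// The per-message overhead of 4 tokens mirrors OpenAI's chat message
// framing, but treat it as an assumption, not a spec.
function estimateChatTokens(messages) {
  return messages.reduce(
    (sum, m) => sum + estimateTokens(m.content) + 4,
    0
  );
}
```

This trades accuracy for zero bundle cost, so it could serve as an instant first guess while a real tokenizer loads in the background.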
Yeah, unfortunately this one requires a full Node environment to run. We COULD use Vercel functions for that, but I don't think it's worth it and would rather leave the token calculation to the client.
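If it stays client-side, one way to soften the payload problem (an idea, not something SlickGPT does today) is to put the tokenizer behind a memoized dynamic import, so the heavy module is fetched at most once and only when a count is actually needed. A sketch; the `import('gpt3-tokenizer')` usage below is hypothetical:

```javascript
// Cache an async loader so the expensive module is loaded at most
// once; repeat calls return the same in-flight or resolved promise.
// The loader is a parameter so the pattern stays generic.
function memoizeAsync(loader) {
  let cached;
  return () => (cached ??= loader());
}

// Hypothetical usage for this issue (import path is illustrative):
// const getTokenizer = memoizeAsync(async () => {
//   const { default: GPT3Tokenizer } = await import('gpt3-tokenizer');
//   return new GPT3Tokenizer({ type: 'gpt3' });
// });
```

Combined with a cheap heuristic for the initial render, the exact count could then be swapped in once the tokenizer chunk arrives, keeping the main bundle small.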