I've been using open interpreter ( https://github.com/KillianLucas/open-interpreter ) quite a bit. I have an engineering/development background, so I've been using it to:

- Troubleshoot system stability problems, often by scanning system logs
- Analyze application log files that are sometimes gigabytes in size
- Analyze data in spreadsheets
I've also been talking to people in non-engineering areas such as:
- Legal: analyzing contracts stored on a network share, and conversing with documents that are sometimes quite large.
- Sales & Marketing: analyzing market data.
- Services: analyzing and creating statements of work and project plans.
In these scenarios, I can ask open interpreter to analyze data and files. But because of the restricted context window, open-interpreter leans heavily on Python scripts: it uses Python to grep, search for keywords, and the like. Those approaches lack natural-language search and a viable long-term memory of conversations.
Being able to load documents ( log files, crash dumps, spreadsheets, contracts, market analyses, project plans ) into memory and then interpret them effectively holds huge promise. Right now I get exceptional results when I can fit my problem into GPT-4's 32k context window, but in the cases above those 32k tokens run out very quickly.
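To give a sense of how quickly 32k tokens run out, here is a rough back-of-the-envelope chunker. The ~4 characters-per-token figure is a common heuristic, not an exact tokenizer count, and the function itself is my sketch, not anything open-interpreter ships:

```python
def chunk_for_context(text, context_tokens=32_000, reserved=4_000, chars_per_token=4):
    """Split text into pieces that fit a model's context window,
    reserving some tokens for the prompt and the model's reply.

    Uses the rough heuristic of ~4 characters per token; a real
    tokenizer would give exact counts.
    """
    budget_chars = (context_tokens - reserved) * chars_per_token
    return [text[i:i + budget_chars] for i in range(0, len(text), budget_chars)]
```

Under that heuristic, a 1 GB log file is on the order of 250 million tokens, i.e. many thousands of 32k-token chunks. Re-summarizing chunk after chunk is no substitute for actually remembering the document, which is why long-term memory is so appealing here.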