Flowise: Powerful but Can Be Confusing - Topology Question #599
reconrad48 started this conversation in General
Replies: 1 comment
-
Hey, thanks for the detailed walkthrough! I think the missing piece in your screenshot is how to connect the chain to the agent; to do that, you can use ChainTool. However, I don't think it's possible yet to loop over the CSV file and, for each keyword, run this flow and collect the output. Perhaps you can use the API of this flow and combine it with other automation tools like Zapier to do this. Having said that, we do plan to implement new features to cater to this kind of workflow-based situation, as opposed to just creating a chatbot with the current flow.
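The "use the API of this flow" suggestion can be sketched as a small script that reads keywords from the first column of a CSV and POSTs each one to the flow's prediction endpoint. The base URL and chatflow ID below are placeholders for your own instance; only the `{"question": ...}` body shape follows Flowise's prediction API.

```python
import csv
import json
import urllib.request

# Placeholders -- substitute your own Flowise instance URL and chatflow ID.
FLOWISE_URL = "http://localhost:3000"
CHATFLOW_ID = "your-chatflow-id"

def build_prediction_request(keyword: str) -> tuple[str, bytes]:
    """Build the URL and JSON body for one prediction call to the flow."""
    url = f"{FLOWISE_URL}/api/v1/prediction/{CHATFLOW_ID}"
    body = json.dumps({"question": keyword}).encode("utf-8")
    return url, body

def run_flow_for_keywords(csv_path: str) -> list[str]:
    """POST each keyword from the CSV's first column and collect the replies."""
    results = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f):
            if not row or not row[0].strip():
                continue  # skip blank rows
            url, body = build_prediction_request(row[0].strip())
            req = urllib.request.Request(
                url, data=body, headers={"Content-Type": "application/json"}
            )
            with urllib.request.urlopen(req) as resp:
                results.append(resp.read().decode("utf-8"))
    return results
```

An automation tool like Zapier would play the same role as the `for` loop here: trigger once per spreadsheet row and forward the keyword to the same endpoint.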
-
I thrive on learning through practical examples. Unfortunately, there's a scarcity of in-depth YouTube tutorials that fully demonstrate the capabilities of Flowise. Currently, I'm self-hosting Flowise and using it as a personal tool, with the primary objective of designing a workflow for article/content creation. The envisioned workflow would be triggered by the insertion of a keyword, question, or entity, leading it to execute a series of subsequent prompts. The end result would not only be presented in a conversation agent but also saved locally as a .txt file.
I've made several attempts to map out this procedure:
Firstly, the process involves reading an .xlsx file that contains a list of keywords in the first column. The workflow is designed to select one keyword at a time, complete the operation for the chosen keyword, then return to pick the next keyword, and so on.
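This "one keyword at a time" step can be sketched as a generator. The sketch assumes the spreadsheet has been exported to CSV (reading .xlsx directly would need a third-party library such as openpyxl); the file path is a placeholder.

```python
import csv
from typing import Iterator

def iter_keywords(csv_path: str) -> Iterator[str]:
    """Yield one keyword at a time from the first column, skipping blanks,
    so the downstream flow can finish one keyword before the next is picked."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f):
            if row and row[0].strip():
                yield row[0].strip()
```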
The second step, which is my current configuration (yet nonfunctional, as shown in the attached screenshot), consists of ChatOpenAI and a Prompt Template layered over each other. The Template carries instructions along with a command, where the Prompt Value is formatted as "objective": "{{question}}". These two are then plugged into the LLM Chain node, with the LLM Chain output set to "output prediction". The instructions, for instance, could be, "As an expert copywriter, upon receiving a keyword, produce an article title no more than 80 characters long", and so forth.
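What the Prompt Value mapping does can be illustrated with a minimal sketch: the incoming chat input ({{question}}) is bound to the template variable (here called `objective`), and the filled template is what the LLM Chain hands to ChatOpenAI. The template text below is just the example instruction from this step.

```python
# Sketch of the Prompt Template node's substitution step: the Prompt Value
# {"objective": "{{question}}"} binds the chat input to the template variable.
TEMPLATE = (
    "As an expert copywriter, upon receiving a keyword, produce an article "
    "title no more than 80 characters long.\n"
    "Keyword: {objective}"
)

def render_prompt(question: str) -> str:
    """Fill the template the way the LLM Chain would before the model call."""
    return TEMPLATE.format(objective=question)
```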
The third step is essentially the same configuration as the second one - ChatOpenAI layered over a Prompt Template, both plugged into the LLM Chain, followed by the Conversation Agent for testing purposes. The goal here is to take the newly formed article title and create eight corresponding subheadings/outlines. The final output, once fully elaborated, will be directed to the conversation agent and also saved as a local .txt file. Naturally, there would be additional prompts in the chain; for example, each of the eight subheadings would have its own prompt to prevent token exhaustion. A typical prompt could be something like, "Given Subheading 1, compose 2 to 5 paragraphs about the subheading" along with further instructions. When subheading 1 is finished, the workflow moves on to subheading 2, and so on.
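The whole chain - title, then eight subheadings, then one section per subheading, then a local .txt file - can be sketched as plain Python under one assumption: `llm` stands in for whatever actually answers each prompt (an LLM Chain reached via the flow's API, for instance). Function names and prompt wording are illustrative, not Flowise APIs.

```python
from typing import Callable

def write_article(keyword: str, llm: Callable[[str], str], out_path: str) -> str:
    """Run the prompts in sequence, keeping each intermediate output so the
    next prompt can build on it, then save the assembled article as .txt."""
    # Step 2: keyword -> article title.
    title = llm(
        f"As an expert copywriter, write an article title "
        f"no more than 80 characters long for the keyword: {keyword}"
    )
    # Step 3: title -> eight subheadings (one per line in this sketch).
    outline = llm(f"Create eight subheadings for the article titled: {title}")
    subheadings = [line.strip() for line in outline.splitlines() if line.strip()]
    # One prompt per subheading, to avoid exhausting the token limit.
    sections = [
        llm(f"Given the subheading '{sub}', compose 2 to 5 paragraphs about it.")
        for sub in subheadings
    ]
    article = title + "\n\n" + "\n\n".join(
        f"{sub}\n{body}" for sub, body in zip(subheadings, sections)
    )
    with open(out_path, "w", encoding="utf-8") as f:
        f.write(article)
    return article
```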
I presume that each output generated by the individual node sets will be stored in memory before being passed on to the succeeding node set. This process would continue until the final output is consolidated in the conversation agent, then downloaded and saved locally as a .txt file.
I ask for your assistance in understanding the topology and suitable nodes/settings needed to achieve this workflow. Alternatively, I would greatly appreciate being directed towards a tutorial or documentation that details this specific process.
Thank you for your time and consideration.