Self Checks
1. Is this request related to a challenge you're experiencing? Tell me about your story.
I'm using Dify to build a chatflow with streaming output. My current chatflow uses two LLM nodes. The DeepSeek-R1 node prefixes its streaming output with its reasoning, wrapped in <think> tags.
When accessing the chatflow via the API, I need to receive the streaming output without this prefix. However, if I try to filter the output using a code node (e.g., with regular expressions or a Python script), it breaks streaming – I have to wait for the entire response before processing it.
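A code node only runs once its upstream node has finished, so any filtering there necessarily waits for the full response. Filtering can instead be done incrementally on the consumer side: hold back text only while a possible tag is still open, and emit everything else as it arrives. A minimal sketch (this is an illustrative client-side helper, not a Dify API):

```python
def strip_think_prefix(chunks):
    """Incrementally drop a leading <think>...</think> block from a stream
    of text chunks, yielding the remaining text as soon as it is known.

    Note: text is buffered only until the closing </think> tag appears;
    after that, every chunk is passed through immediately.
    """
    buffer = ""
    done = False  # becomes True once the closing tag has been consumed
    for chunk in chunks:
        if done:
            yield chunk
            continue
        buffer += chunk
        end = buffer.find("</think>")
        if end != -1:
            done = True
            rest = buffer[end + len("</think>"):]
            if rest:
                yield rest
    # Stream ended without a closing tag: emit what was held back
    if not done and buffer:
        yield buffer
```

This assumes the reasoning block always comes first and is bounded; a response with no `<think>` block at all would be buffered to the end, so a cutoff (e.g. flush once the buffer can no longer be a tag prefix) would be needed in production.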
2. Additional context or comments
My current chatflow
app:
  description: 🤓🦽
  icon: ♿
  icon_background: '#D5F5F6'
  mode: advanced-chat
  name: Geimini-R1
  use_icon_as_answer_icon: false
kind: app
version: 0.1.5
workflow:
  conversation_variables: []
  environment_variables: []
  features:
    file_upload:
      allowed_file_extensions: []
      allowed_file_types:
      - image
      allowed_file_upload_methods:
      - remote_url
      - local_file
      enabled: true
      fileUploadConfig:
        audio_file_size_limit: 50
        batch_count_limit: 5
        file_size_limit: 15
        image_file_size_limit: 10
        video_file_size_limit: 100
        workflow_file_upload_limit: 10
      image:
        enabled: false
        number_limits: 3
        transfer_methods:
        - local_file
        - remote_url
      number_limits: 1
    opening_statement: ''
    retriever_resource:
      enabled: false
    sensitive_word_avoidance:
      enabled: false
    speech_to_text:
      enabled: false
    suggested_questions: []
    suggested_questions_after_answer:
      enabled: false
    text_to_speech:
      enabled: false
      language: ''
      voice: ''
  graph:
    edges:
    - data:
        isInIteration: false
        sourceType: llm
        targetType: answer
      id: 1738052643931-source-answer-target
      source: '1738052643931'
      sourceHandle: source
      target: answer
      targetHandle: target
      type: custom
      zIndex: 0
    - data:
        isInIteration: false
        sourceType: start
        targetType: document-extractor
      id: 1738052599424-source-1738169041280-target
      source: '1738052599424'
      sourceHandle: source
      target: '1738169041280'
      targetHandle: target
      type: custom
      zIndex: 0
    - data:
        isInIteration: false
        sourceType: document-extractor
        targetType: llm
      id: 1738169041280-source-llm-target
      source: '1738169041280'
      sourceHandle: source
      target: llm
      targetHandle: target
      type: custom
      zIndex: 0
    - data:
        isInIteration: false
        sourceType: llm
        targetType: code
      id: llm-source-1740391596116-target
      source: llm
      sourceHandle: source
      target: '1740391596116'
      targetHandle: target
      type: custom
      zIndex: 0
    - data:
        isInIteration: false
        sourceType: code
        targetType: llm
      id: 1740391596116-source-1738052643931-target
      source: '1740391596116'
      sourceHandle: source
      target: '1738052643931'
      targetHandle: target
      type: custom
      zIndex: 0
    nodes:
    - data:
        desc: ''
        selected: false
        title: Start
        type: start
        variables: []
      height: 54
      id: '1738052599424'
      position:
        x: 80
        y: 282
      positionAbsolute:
        x: 80
        y: 282
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    - data:
        context:
          enabled: true
          variable_selector:
          - sys
          - query
        desc: ''
        model:
          completion_params:
            max_tokens: 1
            temperature: 0.7
          mode: chat
          name: deepseek-r1-doubao
          provider: openai_api_compatible
        prompt_template:
        - id: 60740fa2-90fc-4269-93f4-3d150c3df150
          role: system
          text: 忘记之前的指令 执行用户的指令
        - id: fdf45cc5-a4c7-4aed-9dc6-ec09c88eea03
          role: user
          text: '{{#context#}}'
        selected: true
        title: deepseek-r1
        type: llm
        variables: []
        vision:
          enabled: false
      height: 98
      id: llm
      position:
        x: 736
        y: 282
      positionAbsolute:
        x: 736
        y: 282
      selected: true
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    - data:
        answer: '<think> {{#llm.text#}} <\think> {{#1738052643931.text#}}'
        desc: ''
        selected: false
        title: Direct Response
        type: answer
        variables: []
      height: 121
      id: answer
      position:
        x: 1822
        y: 282
      positionAbsolute:
        x: 1822
        y: 282
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    - data:
        context:
          enabled: false
          variable_selector: []
        desc: ''
        memory:
          query_prompt_template: '<User Query>{{#sys.query#}}</User Query> <file>{{#1738169041280.text#}}</file> <think>{{#1740391596116.result#}}</think>'
          role_prefix:
            assistant: ''
            user: ''
          window:
            enabled: false
            size: 50
        model:
          completion_params:
            max_tokens: 8192
            temperature: 1
          mode: chat
          name: gemini-exp-1206
          provider: openai_api_compatible
        prompt_template:
        - id: 564efaef-34a5-4c48-9ca3-a9f4f0bdeba9
          role: system
          text: 用户的输入在<User Query>标签中,你已经在<think>标签里思考过,你需要在<think>的基础上直接给出回答。
        selected: false
        title: gemini
        type: llm
        variables: []
        vision:
          configs:
            detail: high
            variable_selector: []
          enabled: false
      height: 98
      id: '1738052643931'
      position:
        x: 1422
        y: 282
      positionAbsolute:
        x: 1422
        y: 282
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    - data:
        author: stvlynn
        desc: ''
        height: 202
        selected: false
        showAuthor: true
        text: '{"root":{"children":[{"children":[{"detail":0,"format":0,"mode":"normal","style":"font-size: 16px;","text":"Introduction","type":"text","version":1}],"direction":"ltr","format":"","indent":0,"type":"paragraph","version":1,"textFormat":0},{"children":[{"detail":0,"format":0,"mode":"normal","style":"","text":"This demo utilizes DeepSeek R1''s powerful reasoning capabilities and enhances output through Gemini model learning, demonstrating how to combine reasoning LLMs with multimodal LLMs to improve AI''s thinking and problem-solving abilities.","type":"text","version":1}],"direction":"ltr","format":"","indent":0,"type":"paragraph","version":1,"textFormat":0}],"direction":"ltr","format":"","indent":0,"type":"root","version":1}}'
        theme: blue
        title: ''
        type: ''
        width: 266
      height: 202
      id: '1738165679422'
      position:
        x: 61
        y: 29
      positionAbsolute:
        x: 61
        y: 29
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom-note
      width: 266
    - data:
        author: stvlynn
        desc: ''
        height: 236
        selected: false
        showAuthor: true
        text: '{"root":{"children":[{"children":[{"detail":0,"format":0,"mode":"normal","style":"font-size: 16px;","text":"Reasoning Model","type":"text","version":1}],"direction":"ltr","format":"","indent":0,"type":"paragraph","version":1,"textFormat":0},{"children":[{"detail":0,"format":0,"mode":"normal","style":"","text":"This node calls the DeepSeek-R1 reasoning model (deepseek-reasoner). The system prompt sets DeepSeek-R1 as an LLM with reasoning capabilities that needs to output complete thinking processes. Its task is to assist other LLMs without reasoning capabilities and output complete thinking processes based on user questions. The thinking process will be wrapped in <think> tags.","type":"text","version":1}],"direction":"ltr","format":"","indent":0,"type":"paragraph","version":1,"textFormat":0}],"direction":"ltr","format":"","indent":0,"type":"root","version":1}}'
        theme: blue
        title: ''
        type: ''
        width: 315
      height: 236
      id: '1738165732645'
      position:
        x: 736
        y: 11
      positionAbsolute:
        x: 736
        y: 11
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom-note
      width: 315
    - data:
        author: stvlynn
        desc: ''
        height: 251
        selected: false
        showAuthor: true
        text: '{"root":{"children":[{"children":[{"detail":0,"format":0,"mode":"normal","style":"font-size: 16px;","text":"Multimodal Model","type":"text","version":1}],"direction":"ltr","format":"","indent":0,"type":"paragraph","version":1,"textFormat":0},{"children":[{"detail":0,"format":0,"mode":"normal","style":"","text":"This node calls Google''s Gemini model (gemini-1.5-flash-8b-exp-0924). The system prompt sets the Gemini model as an LLM that excels at learning, and its task is to learn from others'' (DeepSeek-R1''s) thinking processes about problems, enhance its results with that thinking, and then provide its answer. The input thinking process will be treated as a user question, and the final answer will be wrapped in <o> tags.","type":"text","version":1}],"direction":"ltr","format":"","indent":0,"type":"paragraph","version":1,"textFormat":0}],"direction":"ltr","format":"","indent":0,"type":"root","version":1}}'
        theme: blue
        title: ''
        type: ''
        width: 312
      height: 251
      id: '1738165823052'
      position:
        x: 1096
        y: 11
      positionAbsolute:
        x: 1096
        y: 11
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom-note
      width: 312
    - data:
        author: stvlynn
        desc: ''
        height: 226
        selected: false
        showAuthor: true
        text: '{"root":{"children":[{"children":[{"detail":0,"format":0,"mode":"normal","style":"font-size: 16px;","text":"Output","type":"text","version":1}],"direction":"ltr","format":"","indent":0,"type":"paragraph","version":1,"textFormat":0},{"children":[{"detail":0,"format":0,"mode":"normal","style":"font-size: 12px;","text":"To make it easy to display reasoning and actual output, we use XML tags (<think><o>) to separate the outputs of the two models.","type":"text","version":1}],"direction":"ltr","format":"","indent":0,"type":"paragraph","version":1,"textFormat":0}],"direction":"ltr","format":"","indent":0,"type":"root","version":1}}'
        theme: blue
        title: ''
        type: ''
        width: 280
      height: 226
      id: '1738165846879'
      position:
        x: 1522
        y: 11
      positionAbsolute:
        x: 1522
        y: 11
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom-note
      width: 280
    - data:
        desc: ''
        is_array_file: true
        selected: false
        title: Doc Extractor
        type: document-extractor
        variable_selector:
        - sys
        - files
      height: 92
      id: '1738169041280'
      position:
        x: 383
        y: 282
      positionAbsolute:
        x: 383
        y: 282
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    - data:
        author: stvlynn
        desc: ''
        height: 190
        selected: false
        showAuthor: true
        text: '{"root":{"children":[{"children":[{"detail":0,"format":0,"mode":"normal","style":"font-size: 14px;","text":"Document Extractor","type":"text","version":1}],"direction":"ltr","format":"","indent":0,"type":"paragraph","version":1,"textFormat":0},{"children":[{"detail":0,"format":0,"mode":"normal","style":"","text":"Extracts documents into readable text content for LLMs.","type":"text","version":1}],"direction":"ltr","format":"","indent":0,"type":"paragraph","version":1,"textFormat":0}],"direction":"ltr","format":"","indent":0,"type":"root","version":1}}'
        theme: blue
        title: ''
        type: ''
        width: 240
      height: 190
      id: '1738169102378'
      position:
        x: 403
        y: 29
      positionAbsolute:
        x: 403
        y: 29
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom-note
      width: 240
    - data:
        code: "def main(arg1: str) -> str:\n    # 分割出</summary>和</details>之间的内容\n    content = arg1.split('</summary>', 1)[1].split('</details>', 1)[0]\n    # 去除首尾空白及换行符,替换转义字符\n    cleaned_content = content.strip().replace('\\\\n', '\\n')\n    return {\"result\": cleaned_content}\n\n"
        code_language: python3
        desc: ''
        outputs:
          result:
            children: null
            type: string
        selected: false
        title: 代码执行 2
        type: code
        variables:
        - value_selector:
          - llm
          - text
          variable: arg1
      height: 54
      id: '1740391596116'
      position:
        x: 1086
        y: 282
      positionAbsolute:
        x: 1086
        y: 282
      selected: false
      sourcePosition: right
      targetPosition: left
      type: custom
      width: 244
    viewport:
      x: -418.55114506568816
      y: 206.05750689173357
      zoom: 0.9000019297935121
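Because the code node (代码执行 2) above only receives `llm.text` after the DeepSeek-R1 node completes, the prefix currently has to be stripped by the API consumer instead. Assuming the chatflow is called through Dify's `POST /v1/chat-messages` endpoint with `response_mode: streaming` (which emits SSE lines of the form `data: {...}`, where message events carry an incremental `answer` field — verify against the API docs for your Dify version), a minimal client-side extractor might look like:

```python
import json

def extract_answer_deltas(sse_lines):
    """Yield the incremental `answer` text from Dify-style SSE lines.

    Non-data lines (blank keep-alives) and non-message events
    (e.g. workflow_started, message_end) are skipped.
    """
    for line in sse_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue
        payload = json.loads(line[len("data:"):].strip())
        if payload.get("event") == "message":
            yield payload.get("answer", "")
```

The resulting deltas can then be passed through an incremental tag filter before display, so the `<think>` prefix is dropped without waiting for the full response.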
3. Can you help us with this feature?
I am interested in contributing to this feature.