Nodes Browser

About_us
AmapRegeoTool
AmapWeatherTool
Browser_display
CLIPTextEncode_party
Combine_Videos_party
Dingding
Dingding_tool
EasyOCR_advance
EasyOCR_choose
FeishuDownloadAudio
FeishuDownloadImage
FeishuGetHistory
FeishuSendMsg
FileOnlineDelete_gitee
FileOnlineStorage_gitee
FilePathExists
FolderCleaner
GGUFLoader
GeocodeTool
Image2Video_party
Images2Image
KG_csv_toolkit_developer
KG_csv_toolkit_user
KG_json_toolkit_developer
KG_json_toolkit_user
KG_neo_toolkit_developer
KG_neo_toolkit_user
KSampler_party
LLM
LLM_api_loader
LLM_local
LLM_local_loader
LLavaLoader
LorapathLoader
Lorebook
Mcp_tool
RSS_loader
RSS_tool
SpeedChange
URL2IMG
VAEDecode_party
accuweather_tool
advance_ebd_tool
aisuite_loader
any2str
any_switcher
api_function
api_tool
arxiv_tool
bing_loader
bing_tool
bool_logic
browser_use_tool
check_text
check_web_tool
classify_function
classify_function_plus
classify_persona
classify_persona_plus
clear_file
clear_model
custom_persona
custom_string_format
dall_e_tool
discord_bot
discord_file_monitor
discord_send
duckduckgo_loader
duckduckgo_tool
easy_GGUFLoader
easy_LLM_api_loader
easy_LLM_local_loader
easy_LLavaLoader
easy_load_llm_lora
easy_vlmLoader
ebd_tool
embeddings_function
end_anything
end_dialog
end_workflow
extra_parameters
feishu
feishu_tool
file_combine
file_combine_plus
file_path_iterator
files_read_tool
fish_tts
fish_whisper
flux_persona
genai_api_loader
get_string
github_tool
google_loader
google_tool
got_ocr
gpt_sovits
graph_md_to_html
html2img_function
ic_lora_persona
image_iterator
img2path
img_hosting
interpreter_function
interpreter_tool
interrupt_loop
json2text
json_extractor
json_get_value
json_iterator
json_parser
json_writing
keyword_tool
list_append
list_append_plus
list_extend
list_extend_plus
listen_audio
load_SQL_memo
load_bool
load_ebd
load_excel
load_file
load_file_folder
load_float
load_img_path
load_int
load_keyword
load_llm_lora
load_memo
load_name
load_openai_ebd
load_persona
load_redis_memo
load_url
load_wikipedia
md_to_excel
md_to_html
mini_error_correction
mini_flux_prompt
mini_flux_tag
mini_intent_recognition
mini_ocr
mini_party
mini_sd_prompt
mini_sd_tag
mini_story
mini_summary
mini_translate
none2false
omost_decode
omost_json2py
omost_setting
open_url_function
open_url_tool
openai_dall_e
openai_ebd_tool
openai_tts
openai_whisper
parameter_combine
parameter_combine_plus
parameter_function
path2img_tool
red_book_text_persona
replace_string
save_SQL_memo
save_ebd_database
save_memo
save_openai_ebd
save_redis_memo
savepersona
searxng_tool
send_to_wechat_official
show_text_party
sql_tool
srt2txt
start_anything
start_dialog
start_workflow
story_json_tool
str2float
str2int
string_combine
string_combine_plus
string_logic
substring
svg2html
svg2img_function
text2json
text2parameters
text_iterator
text_writing
time_sleep
time_tool
tool_combine
tool_combine_plus
translate_persona
txt2srt
url2img_tool
vlmLoader
weekday_tool
whisper_local
wikipedia_tool
work_wechat
work_wechat_tool
workflow_tool
workflow_transfer
workflow_transfer_v2

ComfyDeploy: How does comfyui_LLM_party work in ComfyUI?

What is comfyui_LLM_party?

A set of block-based LLM agent node libraries designed for ComfyUI. This project aims to develop a complete set of nodes for LLM workflow construction based on ComfyUI, allowing users to quickly and conveniently build their own LLM workflows and easily integrate them into their existing SD workflows.

How to install it in ComfyDeploy?

Head over to the machine page

  1. Click on the "Create a new machine" button
  2. Select the Edit build steps
  3. Add a new step -> Custom Node
  4. Search for comfyui_LLM_party and select it
  5. Close the build step dialog and then click on the "Save" button to rebuild the machine


<div align="center"> <a href="https://space.bilibili.com/26978344">bilibili</a> · <a href="https://www.youtube.com/@comfyui-LLM-party">youtube</a> · <a href="https://github.com/heshengtao/Let-LLM-party">text tutorial</a> · <a href="https://pan.quark.cn/s/190b41f3bbdb">Cloud disk address</a> · <a href="img/Q群.jpg">QQ group</a> · <a href="https://discord.gg/f2dsAKKr2V">discord</a> · <a href="https://dcnsxxvm4zeq.feishu.cn/wiki/IyUowXNj9iH0vzk68cpcLnZXnYf">About us</a> </div>

<div align="center"> <a href="./README_ZH.md"><img src="https://img.shields.io/badge/简体中文-d9d9d9"></a> <a href="./README.md"><img src="https://img.shields.io/badge/English-d9d9d9"></a> <a href="./README_RU.md"><img src="https://img.shields.io/badge/Русский-d9d9d9"></a> <a href="./README_FR.md"><img src="https://img.shields.io/badge/Français-d9d9d9"></a> <a href="./README_DE.md"><img src="https://img.shields.io/badge/Deutsch-d9d9d9"></a> <a href="./README_JA.md"><img src="https://img.shields.io/badge/日本語-d9d9d9"></a> <a href="./README_KO.md"><img src="https://img.shields.io/badge/한국어-d9d9d9"></a> <a href="./README_AR.md"><img src="https://img.shields.io/badge/العربية-d9d9d9"></a> <a href="./README_ES.md"><img src="https://img.shields.io/badge/Español-d9d9d9"></a> <a href="./README_PT.md"><img src="https://img.shields.io/badge/Português-d9d9d9"></a> </div>

Comfyui_llm_party aims to develop a complete set of nodes for LLM workflow construction, using ComfyUI as the front end. It allows users to quickly and conveniently build their own LLM workflows and easily integrate them into their existing image workflows.

Effect display

https://github.com/user-attachments/assets/945493c0-92b3-4244-ba8f-0c4b2ad4eba6

Project Overview

ComfyUI LLM Party covers a wide range of use cases: from basic LLM multi-tool calling and role setting for quickly building your own AI assistant, to industry-specific word-vector RAG and GraphRAG for managing domain knowledge bases locally; from single-agent pipelines to complex radial and ring agent-to-agent interaction patterns; from the social-app integrations (QQ, Feishu, Discord) that individual users need, to the one-stop LLM + TTS + ComfyUI workflows that streaming-media creators need; from a simple first LLM application for students, to the parameter-debugging interfaces and model adaptation that researchers rely on. All of this, you can find in ComfyUI LLM Party.

Quick Start

  1. If you have never used ComfyUI and run into dependency issues while installing the LLM party, please click here to download the Windows portable package that includes the LLM party. Please note that this portable package contains only the party and manager plugins and is exclusively compatible with the Windows operating system. (If you need to install the LLM party into an existing ComfyUI, this step can be skipped.)
  2. Drag the following workflows into your comfyui, then use comfyui-Manager to install the missing nodes.
  3. If you are using an API, fill in your base_url (it can be a relay API; make sure it ends with /v1/), for example https://api.openai.com/v1/, and your api_key in the API LLM loader node.
  4. If you are using ollama, turn on the is_ollama option in the API LLM loader node; there is no need to fill in base_url or api_key.
  5. If you are using a local model, fill in your model path in the local model loader node, for example: E:\model\Llama-3.2-1B-Instruct. You can also fill in a Hugging Face model repo id instead, for example: lllyasviel/omost-llama-3-8b-4bits.
  6. Because this project has a high usage threshold, even if you choose the quick start, I hope you will patiently read through the project homepage.
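The API settings above boil down to an OpenAI-format endpoint. As a minimal sketch (the base_url and model name below are placeholder assumptions; substitute your own), this is roughly the request an API LLM loader ends up sending:

```python
import json

# Placeholder assumptions -- substitute your own endpoint and model name.
base_url = "https://api.openai.com/v1/"  # note: must end with /v1/
model_name = "gpt-4o-mini"

# The chat-completions endpoint is the base URL plus this fixed path.
endpoint = base_url.rstrip("/") + "/chat/completions"

# Minimal OpenAI-format request body.
payload = {
    "model": model_name,
    "messages": [{"role": "user", "content": "Hello, party!"}],
}

print(endpoint)          # https://api.openai.com/v1/chat/completions
print(json.dumps(payload))
```

Any relay API that accepts this request shape should work with the loader node.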

Latest update

  1. The VLM local loader node now supports deepseek-ai/Janus-Pro, with an example workflow: Janus-Pro.
  2. The VLM local loader node now supports Qwen/Qwen2.5-VL-3B-Instruct, but you will need to update transformers to the development version from GitHub (pip install git+https://github.com/huggingface/transformers); example workflow: qwen-vl.
  3. A brand new image hosting node has been added, currently supporting the image hosting services at https://sm.ms (with the regional domain for China being https://smms.app) and https://imgbb.com. More image hosting services will be supported in the future. Sample workflow: Image Hosting
  4. ~~The imgbb image hosting service, which is compatible by default with the party, has been updated to the domain imgbb. The previous image hosting service was replaced due to its unfriendliness towards users in mainland China.~~ I sincerely apologize, as it seems that the API service for the image hosting at https://imgbb.io has been discontinued. Therefore, the code has reverted to the original https://imgbb.com. Thank you for your understanding. In the future, I will update a node that supports more image hosting services.
  5. The MCP tool has been updated. You can modify the configuration in the 'mcp_config.json' file located in the party project folder to connect to your desired MCP server. You can find various MCP server configuration parameters that you may want to add here: modelcontextprotocol/servers. The default configuration for this project is the Everything server, which serves as a testing MCP server to verify its functionality. Reference workflow: start_with_MCP. Developer note: The MCP tool node can connect to the MCP server you have configured and convert the tools from the server into tools that can be directly used by LLMs. By configuring different local or cloud servers, you can experience all LLM tools available in the world.
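For orientation, an mcp_config.json entry for the Everything test server typically looks like the following sketch. The exact keys in your installed version may differ; check the mcp_config.json bundled in the party project folder and the modelcontextprotocol/servers repository before editing:

```json
{
  "mcpServers": {
    "everything": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-everything"]
    }
  }
}
```

Adding further entries under "mcpServers" connects the MCP tool node to additional local or cloud servers.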

User Guide

  1. For instructions on using the nodes, please refer to: how to use nodes

  2. If there are any issues with the plugin or you have other questions, feel free to join the QQ group (931057213) or the Discord: discord.

  3. For more workflows, please refer to the workflow folder.

Video tutorial

<a href="https://space.bilibili.com/26978344"> <img src="img/B.png" width="100" height="100" style="border-radius: 80%; overflow: hidden;" alt="octocat"/> </a> <a href="https://www.youtube.com/@comfyui-LLM-party"> <img src="img/YT.png" width="100" height="100" style="border-radius: 80%; overflow: hidden;" alt="octocat"/> </a>

Model support

  1. Supports all API calls in OpenAI format (combined with oneapi, it can call almost all LLM APIs; relay APIs are also supported). For base_url options, refer to config.ini.example. Tested so far:
  2. Supports all API calls compatible with aisuite:
  3. Compatible with most local models in the transformers library (the model type on the local LLM model chain node has been changed to LLM, VLM-GGUF, and LLM-GGUF, corresponding to directly loading LLM models, loading VLM models, and loading GGUF-format LLM models). If your VLM or GGUF-format LLM model reports an error, please download the latest version of llama-cpp-python. Currently tested models include:
  4. Model download

Download

  • You can configure the language in config.ini; currently only Chinese (zh_CN) and English (en_US) are available, and the default is your system language.
  • Install using one of the following methods:

Method 1:

  1. Search for comfyui_LLM_party in the comfyui manager and install it with one click.
  2. Restart comfyui.

Method 2:

  1. Navigate to the custom_nodes subfolder under the ComfyUI root folder.
  2. Clone this repository with git clone https://github.com/heshengtao/comfyui_LLM_party.git.

Method 3:

  1. Click CODE in the upper right corner.
  2. Click download zip.
  3. Unzip the downloaded package into the custom_nodes subfolder under the ComfyUI root folder.

Environment Deployment

  1. Navigate to the comfyui_LLM_party project folder.
  2. Enter pip install -r requirements.txt in the terminal to deploy the third-party libraries required by the project into the comfyui environment. Please ensure you are installing within the comfyui environment and pay attention to any pip errors in the terminal.
  3. If you are using the comfyui launcher, you need to enter path_in_launcher_configuration\python_embeded\python.exe -m pip install -r requirements.txt in the terminal to install. The python_embeded folder is usually at the same level as your ComfyUI folder.
  4. If you run into environment configuration problems, you can try using the dependencies in requirements_fixed.txt instead.

Configuration

The API key can be configured using one of the following methods:

Method 1:

  1. Open the config.ini file in the project folder of the comfyui_LLM_party.
  2. Enter your openai_api_key, base_url in config.ini.
  3. If you are using an ollama model, fill in http://127.0.0.1:11434/v1/ in base_url, ollama in openai_api_key, and your model name in model_name, for example: llama3.
  4. If you want to use Google search or Bing search tools, enter your google_api_key, cse_id or bing_api_key in config.ini.
  5. If you want to use an LLM with image input, it is recommended to use the imgbb image host and enter your imgbb_api in config.ini.
  6. Each model can be configured separately in the config.ini file, which can be filled in by referring to the config.ini.example file. After you configure it, just enter model_name on the node.
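As an illustration of the ollama settings from step 3, the sketch below parses a hypothetical config.ini fragment with Python's standard configparser. The section and key names here are placeholders; copy the real ones from config.ini.example in the project folder:

```python
import configparser

# Hypothetical config.ini fragment -- section/key names are placeholders;
# the authoritative names are in config.ini.example.
sample = """
[API]
openai_api_key = ollama
base_url = http://127.0.0.1:11434/v1/
model_name = llama3
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)

print(cfg["API"]["base_url"])    # http://127.0.0.1:11434/v1/
print(cfg["API"]["model_name"])  # llama3
```

Note that for ollama the api_key value is the literal string "ollama" and the base_url points at ollama's local OpenAI-compatible endpoint.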

Method 2:

  1. Open the comfyui interface.
  2. Create a Large Language Model (LLM) node and enter your openai_api_key and base_url directly in the node.
  3. If you use an ollama model, use the LLM_api node: fill in http://127.0.0.1:11434/v1/ as base_url, ollama as api_key, and your model name in model_name, for example: llama3.
  4. If you want to use an LLM with image input, it is recommended to use the imgbb image host and enter your imgbb_api_key on the node.

Changelog

Click here

Next Steps Plan:

  1. More model adaptations;
  2. More ways to build agents;
  3. More automation features;
  4. More knowledge base management features;
  5. More tools, more personas.

Disclaimer:

This open-source project and its contents (hereinafter referred to as "Project") are provided for reference purposes only and do not imply any form of warranty, either expressed or implied. The contributors of the Project shall not be held responsible for the completeness, accuracy, reliability, or suitability of the Project. Any reliance you place on the Project is strictly at your own risk. In no event shall the contributors of the Project be liable for any indirect, special, or consequential damages or any damages whatsoever resulting from the use of the Project.

Special thanks:

<a href="https://github.com/bigcat88"> <img src="https://avatars.githubusercontent.com/u/13381981?v=4" width="50" height="50" style="border-radius: 50%; overflow: hidden;" alt="octocat"/> </a> <a href="https://github.com/guobalove"> <img src="https://avatars.githubusercontent.com/u/171540731?v=4" width="50" height="50" style="border-radius: 50%; overflow: hidden;" alt="octocat"/> </a> <a href="https://github.com/SpenserCai"> <img src="https://avatars.githubusercontent.com/u/25168945?v=4" width="50" height="50" style="border-radius: 50%; overflow: hidden;" alt="octocat"/> </a>

Borrowed code

Some of the nodes in this project have borrowed from the following projects. Thank you for your contributions to the open-source community!

  1. pythongosssss/ComfyUI-Custom-Scripts
  2. lllyasviel/Omost

Support:

Join the community

If there is a problem with the plugin or you have any other questions, please join our community.

  1. Discord: discord link
  2. QQ group: 931057213
<div style="display: flex; justify-content: center;"> <img src="img/Q群.jpg" style="width: 48%;" /> </div>
  3. WeChat group: we_glm (join the group after adding the assistant's WeChat)

Follow us

  1. If you want to continue to pay attention to the latest features of this project, please follow the Bilibili account: 派酱
  2. YouTube: @comfyui-LLM-party

Donation support

If my work has brought value to your day, consider fueling it with a coffee! Your support not only energizes the project but also warms the heart of the creator. ☕💖 Every cup makes a difference!

<div style="display:flex; justify-content:space-between;"> <img src="img/zhifubao.jpg" style="width: 48%;" /> <img src="img/wechat.jpg" style="width: 48%;" /> </div>

Star History

Star History Chart