ComfyDeploy: How does ComfyUI Upscaler TensorRT work in ComfyUI?
What is ComfyUI Upscaler TensorRT?
This project provides a TensorRT implementation for fast image upscaling inside ComfyUI (3-4x faster).
How to install it in ComfyDeploy?
- Head over to the machine page
- Click on the "Create a new machine" button
- Select the "Edit build steps" option
- Add a new step -> Custom Node
- Search for ComfyUI Upscaler TensorRT and select it
- Close the build step dialog and then click on the "Save" button to rebuild the machine
ComfyUI Upscaler TensorRT ⚡
This project provides a TensorRT implementation for fast image upscaling inside ComfyUI (2-4x faster).
This project is licensed under CC BY-NC-SA; everyone is free to access, use, modify, and redistribute it under the same license.
For commercial purposes, please contact me directly at [email protected]
If you like the project, please give me a star! ⭐
⏱️ Performance
Note: The following results were benchmarked on FP16 engines inside ComfyUI, using 100 identical frames
| Device  | Model         | Input Resolution (WxH) | Output Resolution (WxH) | FPS  |
| :-----: | :-----------: | :--------------------: | :---------------------: | :--: |
| RTX5090 | 4x-UltraSharp | 512 x 512              | 2048 x 2048             | 12.7 |
| RTX5090 | 4x-UltraSharp | 1280 x 1280            | 5120 x 5120             | 2.0  |
| RTX4090 | 4x-UltraSharp | 512 x 512              | 2048 x 2048             | 6.7  |
| RTX4090 | 4x-UltraSharp | 1280 x 1280            | 5120 x 5120             | 1.1  |
| RTX3060 | 4x-UltraSharp | 512 x 512              | 2048 x 2048             | 2.2  |
| RTX3060 | 4x-UltraSharp | 1280 x 1280            | 5120 x 5120             | 0.35 |
🚀 Installation
- Install via the manager
- Or, navigate to the /ComfyUI/custom_nodes directory and run:
git clone https://github.com/yuvraj108c/ComfyUI-Upscaler-Tensorrt.git
cd ./ComfyUI-Upscaler-Tensorrt
pip install -r requirements.txt
🛠️ Supported Models
- These upscaler models have been tested to work with TensorRT. The ONNX models are available here
- The exported TensorRT models support dynamic image resolutions from 256x256 to 1280x1280 px (e.g. 960x540, 512x512, 1280x720, etc.); the sketch below illustrates how such a dynamic range is expressed when an engine is built
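The dynamic resolution range works because a TensorRT engine can be built with an optimization profile spanning a minimum, optimum, and maximum input shape. The node builds its engines automatically from the workflow, so none of this is needed in normal use; the snippet below is only a rough sketch of the idea using the TensorRT Python API, and the tensor name ("input") and file names are assumptions rather than the node's actual values.

```python
import tensorrt as trt

# Rough sketch only: the node builds engines automatically inside ComfyUI.
# Tensor and file names here are placeholders, not the node's actual values.
logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(0)  # explicit batch is the default in TensorRT 10
parser = trt.OnnxParser(network, logger)

with open("4x-UltraSharp.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError("Failed to parse the ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # the benchmarks above use FP16 engines

# One profile covering the supported 256x256 - 1280x1280 input range
profile = builder.create_optimization_profile()
profile.set_shape("input", (1, 3, 256, 256), (1, 3, 512, 512), (1, 3, 1280, 1280))
config.add_optimization_profile(profile)

engine = builder.build_serialized_network(network, config)
with open("4x-UltraSharp.engine", "wb") as f:
    f.write(engine)
```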
☀️ Usage
- Load example workflow
- Choose the appropriate model from the dropdown
- The tensorrt engine will be built automatically
- Load an image of resolution between 256-1280px
- Set resize_to to resize the upscaled images to fixed resolutions (the short example below shows the output-size arithmetic)
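For orientation, the output size before any resizing is simply the input size multiplied by the model's scale factor (4x for the models above); resize_to can then bring that result to a fixed resolution. A tiny, purely illustrative Python helper (not part of the node):

```python
# Illustrative only: relates input size to upscaled size for a 4x model.
def upscaled_size(width: int, height: int, scale: int = 4) -> tuple[int, int]:
    """Size produced by a scale-x upscaler before any resize_to step."""
    return width * scale, height * scale

print(upscaled_size(512, 512))   # (2048, 2048), matching the benchmark table
print(upscaled_size(1280, 720))  # (5120, 2880)
```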
🔧 Custom Models
- To export other ESRGAN models, you'll have to build the onnx model first, using export_onnx.py (a rough sketch of such an export is shown after this list)
- Place the onnx model in /ComfyUI/models/onnx/YOUR_MODEL.onnx
- Then, add your model to this list as shown: https://github.com/yuvraj108c/ComfyUI-Upscaler-Tensorrt/blob/8f7ef5d1f713af3b4a74a64fa13a65ee5c404cd4/init.py#L77
- Finally, run the same workflow and choose your model
- If you've tested another working tensorrt model, let me know to add it officially to this node
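As a reference point, a minimal export along those lines could look like the sketch below. It assumes the .pth checkpoint can be loaded with spandrel (the loader ComfyUI uses for upscale models) and that the model takes a 1x3xHxW float input; the actual export_onnx.py in the repository may differ, and YOUR_MODEL is a placeholder.

```python
# Minimal sketch of a .pth -> .onnx export for an ESRGAN-style upscaler.
# Assumption: the checkpoint loads via spandrel; export_onnx.py may differ.
import torch
from spandrel import ModelLoader

model = ModelLoader().load_from_file("YOUR_MODEL.pth").model.eval().cuda()
dummy = torch.randn(1, 3, 512, 512, device="cuda")

torch.onnx.export(
    model,
    dummy,
    "YOUR_MODEL.onnx",
    input_names=["input"],
    output_names=["output"],
    # dynamic height/width so the engine can cover the 256-1280 px range
    dynamic_axes={
        "input": {0: "batch", 2: "height", 3: "width"},
        "output": {0: "batch", 2: "height", 3: "width"},
    },
    opset_version=17,
)
```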
🚨 Updates
4 March 2025 (breaking)
- TensorRT engines are now built automatically from the workflow itself, to simplify the process for non-technical users
- Separate model loading and tensorrt processing into different nodes
- Optimise post processing
- Update onnx export script
⚠️ Known issues
- If you upgrade the TensorRT version, you'll have to rebuild the engines (see the version check below)
- Only models with ESRGAN architecture are currently working
- High RAM usage when exporting .pth to .onnx
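A quick way to check which TensorRT version is currently installed, and therefore whether existing engines were built against an older one:

```python
# Engines built with a different TensorRT version than the one installed
# will need to be rebuilt.
import tensorrt
print(tensorrt.__version__)
```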
🤖 Environment tested
- Ubuntu 22.04 LTS, Cuda 12.4, Tensorrt 10.8, Python 3.10, H100 GPU
- Windows 11
👏 Credits
License
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)