Nodes Browser

ComfyDeploy: How does ComfyUI Upscaler TensorRT work in ComfyUI?

What is ComfyUI Upscaler TensorRT?

This project provides a TensorRT implementation for fast image upscaling inside ComfyUI (3-4x faster).

How to install it in ComfyDeploy?

Head over to the machine page

  1. Click on the "Create a new machine" button
  2. Select "Edit build steps"
  3. Add a new step -> Custom Node
  4. Search for ComfyUI Upscaler TensorRT and select it
  5. Close the build step dialog, then click the "Save" button to rebuild the machine
ComfyUI Upscaler TensorRT ⚡

(Node screenshot: assets/node_v3.png)

This project provides a TensorRT implementation for fast image upscaling inside ComfyUI (2-4x faster).

This project is licensed under CC BY-NC-SA; everyone is free to access, use, modify, and redistribute it under the same license.

For commercial purposes, please contact me directly at [email protected]

If you like the project, please give me a star! ⭐


⏱️ Performance

Note: The following results were benchmarked on FP16 engines inside ComfyUI, using 100 identical frames

| Device  | Model         | Input Resolution (WxH) | Output Resolution (WxH) | FPS  |
| :-----: | :-----------: | :--------------------: | :---------------------: | :--: |
| RTX5090 | 4x-UltraSharp | 512 x 512              | 2048 x 2048              | 12.7 |
| RTX5090 | 4x-UltraSharp | 1280 x 1280            | 5120 x 5120              | 2.0  |
| RTX4090 | 4x-UltraSharp | 512 x 512              | 2048 x 2048              | 6.7  |
| RTX4090 | 4x-UltraSharp | 1280 x 1280            | 5120 x 5120              | 1.1  |
| RTX3060 | 4x-UltraSharp | 512 x 512              | 2048 x 2048              | 2.2  |
| RTX3060 | 4x-UltraSharp | 1280 x 1280            | 5120 x 5120              | 0.35 |
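
Each FPS figure is simply the frame count divided by wall-clock time for 100 passes of the same frame. A minimal sketch of how such a measurement could be reproduced (hypothetical harness, not part of this repo; `upscale_fn` stands in for whatever callable runs the upscale):

```python
import time

def measure_fps(upscale_fn, frame, n_frames: int = 100) -> float:
    """Return frames per second for n_frames passes of `frame` through `upscale_fn`."""
    upscale_fn(frame)  # warm-up pass so engine build / CUDA context init is excluded
    start = time.perf_counter()
    for _ in range(n_frames):
        upscale_fn(frame)
    elapsed = time.perf_counter() - start
    return n_frames / elapsed
```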

🚀 Installation

  • Install via the ComfyUI Manager
  • Or, navigate to the /ComfyUI/custom_nodes directory and run:
git clone https://github.com/yuvraj108c/ComfyUI-Upscaler-Tensorrt.git
cd ./ComfyUI-Upscaler-Tensorrt
pip install -r requirements.txt
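
After installing the requirements, a quick way to confirm the TensorRT Python bindings are available (assuming `requirements.txt` pulled in the `tensorrt` package):

```python
# Print the TensorRT version the node will use. Engines are tied to this
# version, so rebuild them after any upgrade (see Known issues below).
import tensorrt as trt

print(trt.__version__)
```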

🛠️ Supported Models

☀️ Usage

  • Load the example workflow
  • Choose the appropriate model from the dropdown
  • The TensorRT engine will be built automatically
  • Load an image with a resolution between 256 and 1280 px (see the resize sketch after this list)
  • Set resize_to to resize the upscaled images to fixed resolutions
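
If an input falls outside the 256-1280 px range, it can be pre-resized before reaching the node. A rough sketch with Pillow (hypothetical helper, not part of the node):

```python
from PIL import Image

def clamp_resolution(img: Image.Image, lo: int = 256, hi: int = 1280) -> Image.Image:
    """Downscale if the long side exceeds `hi`, upscale if the short side is under `lo`."""
    w, h = img.size
    scale = 1.0
    if max(w, h) > hi:
        scale = hi / max(w, h)
    elif min(w, h) < lo:
        scale = lo / min(w, h)
    if scale != 1.0:
        img = img.resize((round(w * scale), round(h * scale)), Image.LANCZOS)
    return img
```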

🔧 Custom Models

🚨 Updates

4 March 2025 (breaking)

  • TensorRT engines are now built automatically from the workflow itself, to simplify the process for non-technical users
  • Separated model loading and TensorRT processing into different nodes
  • Optimised post-processing
  • Updated the ONNX export script (a rough sketch of such an export follows this list)
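
The repo ships its own export script; purely as an illustration of the general approach, a .pth ESRGAN model could be exported to ONNX with dynamic height/width like this (tensor names, input size, and opset are assumptions, not the script's actual settings):

```python
import torch

def export_esrgan_to_onnx(model: torch.nn.Module, onnx_path: str) -> None:
    """Export an ESRGAN-style upscaler to ONNX with dynamic height/width axes."""
    model.eval()
    dummy = torch.randn(1, 3, 512, 512)  # N, C, H, W placeholder input
    torch.onnx.export(
        model,
        dummy,
        onnx_path,
        input_names=["input"],
        output_names=["output"],
        dynamic_axes={
            "input": {0: "batch", 2: "height", 3: "width"},
            "output": {0: "batch", 2: "height", 3: "width"},
        },
        opset_version=17,
    )
```

Exporting large checkpoints is where the high RAM usage mentioned under Known issues tends to show up.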

⚠️ Known issues

  • If you upgrade the TensorRT version, you'll have to rebuild the engines (a manual rebuild sketch follows this list)
  • Only models with the ESRGAN architecture currently work
  • High RAM usage when exporting .pth to .onnx
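
The node normally rebuilds engines for you; if you ever need to rebuild one by hand after a TensorRT upgrade, a minimal sketch with the TensorRT 10.x Python API could look like this (the "input" tensor name, 256-1280 px shape range, and FP16 flag mirror the notes above and are assumptions, not the node's actual code):

```python
import tensorrt as trt

def build_engine(onnx_path: str, engine_path: str) -> None:
    """Parse an ONNX upscaler and serialize a TensorRT engine to disk."""
    logger = trt.Logger(trt.Logger.INFO)
    builder = trt.Builder(logger)
    network = builder.create_network()  # TensorRT 10.x networks are explicit-batch by default
    parser = trt.OnnxParser(network, logger)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)  # the benchmarks above were run on FP16 engines

    # A dynamic-shape ONNX needs an optimization profile covering the supported input range.
    profile = builder.create_optimization_profile()
    profile.set_shape("input", (1, 3, 256, 256), (1, 3, 512, 512), (1, 3, 1280, 1280))
    config.add_optimization_profile(profile)

    serialized = builder.build_serialized_network(network, config)
    with open(engine_path, "wb") as f:
        f.write(bytearray(serialized))
```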

🤖 Environment tested

  • Ubuntu 22.04 LTS, CUDA 12.4, TensorRT 10.8, Python 3.10, H100 GPU
  • Windows 11

👏 Credits

License

Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)