ComfyUI upscale model downloads: a Reddit roundup.

Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art. Please keep posted images SFW.

I think simply typing the file name into the search panel of ComfyUI Manager will get you the file. If you have ComfyUI Manager you can directly download all the models from it. It's also the best way to install ControlNet, because when I tried doing it manually it didn't work out.

Simply save and then drag and drop the relevant image into your ComfyUI interface window (with the ControlNet Tile model installed), load the image you want to upscale/edit (if applicable), modify some prompts, press "Queue Prompt", and wait for the generation to complete.

Search the sub for what you need and download the .json, or drag and drop the workflow image into the UI (I think the image has to not be from Reddit; Reddit removes metadata, I believe).

Hi, is there a tutorial on how to do a workflow with face restoration in ComfyUI? I downloaded the Impact Pack, but I really don't know how to go from… There are also "face detailer" workflows for faces specifically.

For comparison, in A1111 I drop the ReActor output image into the img2img tab, keep the same latent size, use a tile ControlNet model, choose the Ultimate SD Upscale script, and scale it by e.g. 2; with a denoise setting of 0.25 I get a good blending of the face without changing the image too much.

PS: If someone has access to Magnific AI, please can you upscale and post the result for 256x384 (5 jpg quality) and 256x384 (0 jpg quality)? No attempts to fix jpg artifacts, etc.

This is just a simple node build off what's given and some of the newer nodes that have come out; it's nothing spectacular, but it gives good, consistent results.

* If you are going for fine details, don't upscale in 1024x1024 tiles on an SD15 model, unless the model is specifically trained on such large sizes.

Still working on the whole thing, but I've got the idea down. I believe it should work with 8GB VRAM, provided your SDXL model and upscale model are not super huge (e.g. use a 2X upscaler model).

I rarely use upscale by model on its own because of the odd artifacts you can get. It's a lot faster than tiling, but the outputs aren't as detailed. From what I've generated so far, the model upscale edges slightly better than the Ultimate Upscale. Will be interesting to see LDSR ported to ComfyUI, or any other powerful upscaler.

- Image upscale is less detailed, but more faithful to the image you upscale.
- Latent upscale looks much more detailed, but gets rid of the detail of the original image.

That's because latent upscale turns the base image into noise (blur). "Upscaling with model" is an operation on normal images, and we can operate with a corresponding model, such as 4x_NMKD-Siax_200k.pth or 4x_foolhardy_Remacri.pth. "Latent upscale" is an operation in latent space, and I don't know any way to use the models mentioned above in latent space. There are also other upscale methods that can upscale latents with less distortion; the standard ones are bicubic, bilinear, and bislerp. Attach a "latent_image" to the sampler; in this case it's the upscaled latent.
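To make the image-vs-latent distinction above concrete, here is a minimal sketch of the two operations (my own illustration, not code from any of the posts; it assumes torch and Pillow are installed, and the filenames are hypothetical):

```python
import torch
import torch.nn.functional as F
from PIL import Image

# Latent upscale: interpolate the 4-channel latent tensor directly.
# The enlarged latent is effectively blurred noise that the sampler must
# re-interpret, so detail gets re-invented (more detail, less fidelity).
latent = torch.randn(1, 4, 64, 64)                # stand-in for a 512x512 image's latent
latent_up = F.interpolate(latent, scale_factor=2.0, mode="bicubic")
print(latent_up.shape)                            # torch.Size([1, 4, 128, 128])

# Image upscale: resize the decoded pixels (or run an ESRGAN-style model on
# them). Nothing is re-sampled, so the result stays faithful to the source,
# but no new detail appears.
img = Image.open("decoded.png")                   # hypothetical decoded image
img_up = img.resize((img.width * 2, img.height * 2), Image.BICUBIC)
img_up.save("decoded_2x.png")
```

Either way, an upscaled latent still has to be diffused again, which is why the posts here pair latent upscales with a second sampling pass.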
Tried the llite custom nodes with lllite models and was impressed. Good for depth and open pose; so far so good. I found a tile model but could not figure it out, as lllite seems to require the input image to match the output, so I'm unsure how it works for scaling with tile. Hope someone can advise.

Hello, I did some testing of KSampler schedulers used during an upscale pass in ComfyUI. I wanted to know what difference they make, and they do! Credit to Sytan's SDXL workflow, which I reverse engineered, mostly because I'm new to ComfyUI and wanted to figure it all out.

I haven't been able to replicate this in Comfy.

Hey folks, lately I have been getting into the whole ComfyUI thing and trying different things out. Now I have made a workflow that has an upscaler in it and it works fine; the only thing is that it upscales everything, and that is not worth the wait with most outputs. So my question is: is there a way to upscale an already existing image in Comfy, or do I need to do that in A1111?

That's exactly what I ended up planning. I'm a newbie to ComfyUI, so I set up Searge's workflow, then copied the official ComfyUI i2v workflow into it, and pass into the node whatever image I like. In the Load Video node, click on "choose video to upload" and select the video you want. Then output everything to Video Combine.

We are just using Ultimate SD upscales with a few ControlNets and tile sizes of ~1024px. Upscaling on larger tiles will be less detailed / more blurry, and you will need more denoise, which in turn will start altering the result too much.

The realistic model that worked best for me is JuggernautXL; even the base 1024x1024 images were coming out nicely. The aspect ratio of 16:9 is the same from the empty latent and anywhere else that image sizes are used.

This is an img2img method where I use the BLIP Model Loader from WAS to set the positive caption. Thanks.

SDXL most definitely doesn't work with the old ControlNet. I have a custom image resizer that ensures the input image matches the output dimensions.

And when purely upscaling, the best upscaler is called LDSR. The downside is that it takes a very long time. Though, from what someone else stated, it comes down to use case.

Thanks for the tips on Comfy! I'm enjoying it a lot so far.

I was looking for tools that could help me set up ComfyUI workflows automatically and also let me use it as a backend, but couldn't find any, so I made one! Right now it installs the nodes through ComfyUI Manager and has a list of about 2000 models (checkpoints, LoRAs, embeddings, etc.). You can also provide your custom link for a node or model.

2 - Custom models/LoRAs: tried a lot from CivitAI (epicrealism, cyberrealistic, absolutereality, realistic vision 5.1 and 6, etc.).

In the CR Upscale Image node, select the upscale_model and set the rescale_factor.

If the workflow is not loaded, drag and drop the image you downloaded earlier.

From the ComfyUI_examples, there are two different 2-pass ("Hires fix") methods: one is latent scaling, the other is non-latent scaling. One does an image upscale and the other a latent upscale.

In other UIs, one can upscale by any model (say, 4xSharp), and there is an additional control on how much that model will multiply (often a slider from 1 to 4 or more). So in those other UIs I can use my favorite upscaler (like NMKD's 4xSuperscalers), but I'm not forced to have them only multiply by 4x.

You just have to use the "upscale by" node with the bicubic method and a fractional value (0.5 if you want to divide by 2) after upscaling by a model. For example, if you start with a 512x512 empty latent image, then apply a 4x model, apply "upscale by" 0.5 to get a 1024x1024 final image (512 * 4 * 0.5 = 1024). You can also run a regular AI upscale, then a downscale (4x * 0.5).
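A minimal sketch of that 512 * 4 * 0.5 arithmetic in plain Pillow (the filenames are hypothetical; inside ComfyUI the equivalent is an "Upscale Image By" node set to bicubic with scale 0.5, placed after the model upscale):

```python
from PIL import Image

# Hypothetical 2048x2048 output of a 4x model applied to a 512x512 image.
img = Image.open("esrgan_4x_output.png")

scale = 0.5                                       # fractional "upscale by" value
target = (int(img.width * scale), int(img.height * scale))

# 512 * 4 * 0.5 = 1024 per side: a net 2x upscale using a 4x model.
final = img.resize(target, Image.BICUBIC)
final.save("final_1024.png")
```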
Edit: I changed models a couple of times, restarted Comfy a couple of times… and it started working again… OP: So, this morning, when I left for…

Even with ControlNets, if you simply upscale and then de-noise latents, you'll get weird artifacts, like the face in the bottom right instead of a teddy bear. Also, both have a denoise value that drastically changes the result.

Do you have ComfyUI Manager?

Hi everyone, I've been using SD / ComfyUI for a few weeks now and I find myself overwhelmed with the number of ways to do upscaling. I am curious both about which nodes are the best for this, and which models.

The first is to use a model upscaler, which will work off your image node; you can download those from a website that has dozens of models listed, and a popular one is ESRGAN 4x. Plus, you want to upscale in latent space if possible. There's "latent upscale by", but I don't want to upscale the latent image.

There are plenty of ready-made workflows you can find.

From ChatGPT: Guide to Enhancing Illustration Details with Noise and Texture in StableDiffusion (based on 御月望未's tutorial). This guide explores a technique for significantly enhancing the detail and color in illustrations using noise and texture in StableDiffusion.

I'm using a workflow that is, in short, SDXL >> ImageUpscaleWithModel (using a 1.5 model) >> FaceDetailer, with an ESRGAN model.

I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a 2-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned.

I've been using Stability Matrix and also installed ComfyUI portable. However, I'm facing an issue with sharing the model folder: all the models are located in M:\AI_Tools\StabilityMatrix-win-x64\Data\Models, while the animatediff_models and clip_vision folders are placed in M:\AI_Tools\StabilityMatrix-win-x64\Data\Packages\ComfyUI\models.

ComfyUI Weekly Update: DAT upscale model support and more T2I adapters.

I want to upscale my image with a model and then select its final size. For example, I can load an image, select a model (4x-UltraSharp, for example), and select the final resolution (from 1024 to 1500, for example).

Learn how to upscale images using ComfyUI and the 4x-UltraSharp model for crystal-clear enhancements: a step-by-step guide to mastering image quality.

Edit: I am sorry, I didn't see that you were looking for the SDXL clip file; I thought you wanted the Cascade clip file.

For SD 1.5 I'd go for Photon, RealisticVision, or epiCRealism; as well, Juggernaut XL and other XL models.

Here is an example of how to use upscale models like ESRGAN: put them in the models/upscale_models folder, then use the UpscaleModelLoader node to load them and the ImageUpscaleWithModel node to use them. So from VAE Decode you need an "Upscale Image (using model)" node, plus another node under loaders, the "Load Upscale Model" node. Connect the Load Upscale Model output to the Upscale Image (using model) node together with the image from VAE Decode, then send the result to your preview/save image node.
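If you prefer scripting that graph rather than wiring it in the UI, here is a sketch of the same chain submitted through ComfyUI's HTTP API. It assumes a default local install listening on 127.0.0.1:8188, an example.png already in ComfyUI's input folder, and the Siax model present in models/upscale_models; treat the node ids as placeholders:

```python
import json
import urllib.request

# API-format graph: LoadImage -> UpscaleModelLoader -> ImageUpscaleWithModel -> SaveImage.
# A value like ["2", 0] means "output 0 of node 2".
workflow = {
    "1": {"class_type": "LoadImage",
          "inputs": {"image": "example.png"}},
    "2": {"class_type": "UpscaleModelLoader",
          "inputs": {"model_name": "4x_NMKD-Siax_200k.pth"}},
    "3": {"class_type": "ImageUpscaleWithModel",
          "inputs": {"upscale_model": ["2", 0], "image": ["1", 0]}},
    "4": {"class_type": "SaveImage",
          "inputs": {"images": ["3", 0], "filename_prefix": "upscaled"}},
}

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read())         # the server replies with a queued prompt id
```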
My guess is you downloaded a workflow from somewhere, but the person who created that workflow has changed the filename of the upscale model, and that's why your ComfyUI can't find it. Solution: click the node that calls the upscale model and pick one.

I don't bother going over 4K usually though; you get diminishing returns on render times with only 8GB VRAM ;P

Where a 2x upscale at 30 steps took me ~2 minutes, a 4x upscale took 15, and this is with tiling, so my VRAM usage was moderate in all cases.

Does anyone have any suggestions? Would it be better to do an iterative upscale, and how about my choice of upscale model? I have almost 20 different upscale models, and I really have no idea which might be best. For some context, I am trying to upscale images of an anime village, something like Ghibli style.

But for the other stuff, super small models and good results. Same as SwinIR, which adds a lot of detail to the image.

Generates an SD1.5 image and upscales it to 4x the original resolution (512x512 to 2048x2048) using Upscale with Model, Tile ControlNet, Tiled KSampler, Tiled VAE Decode, and colour matching.

In the ComfyUI Manager, select "Install Models" and scroll down to the ControlNet models; download the second ControlNet tile model (it specifically says in the description that you need this for tile upscale).

So I was looking through the ComfyUI nodes today and noticed that there is a new one, called SD_4XUpscale_Conditioning, which adds support for x4-upscaler-ema.safetensors (the SD 4X upscale model). I decided to pit the two head to head; here are the results, workflow pasted below (I did not bind it to the image metadata because I am using a very custom, weird workflow). Look at this workflow: these comparisons are done using ComfyUI with default node settings and fixed seeds. The Stable Diffusion model used in this demonstration is Lyriel.

I love to go with an SDXL model for the initial image and with a good 1.5 model for the diffusion after scaling.

If you don't want the distortion, decode the latent, use "upscale image by", then encode it for whatever you want to do next; the image upscale is pretty much the only distortion-"free" way to do it.

So I made an upscale test workflow that uses the exact same latent input and destination size. All of this can be done in Comfy with a few nodes.

Step 1: Download the SDXL Turbo checkpoint.
Step 2: Download this sample image.
Step 3: Update ComfyUI.
Step 4: Launch ComfyUI and enable Auto Queue (under Extra Options).
Step 5: Drag and drop the sample image into ComfyUI.
Step 6: The FUN begins! If the queue didn't start automatically, press Queue Prompt.

That workflow consists of video frames at 15fps into VAE encode and CNs, a few LoRAs, AnimateDiff v3, lineart and scribble-sparsectrl CNs, a basic KSampler with low cfg, a small upscale, AD detailer to fix the face (with lineart and depth CNs in segs, the same LoRAs, and AnimateDiff), upscale with model, interpolate, and combine to 30fps. Thanks.

The Ultimate SD Upscale is one of the nicest things in Auto1111: it first upscales your image using a GAN or any other old-school upscaler, then cuts it into tiles small enough to be digestible by SD, typically 512x512; the pieces overlap each other and can be bigger.
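The tiling idea described above, reduced to a toy Python sketch (this is not the extension's actual code: the img2img pass is stubbed out, the filenames are hypothetical, and real implementations also feather/blend the overlapping seams rather than pasting hard edges):

```python
from PIL import Image

TILE, OVERLAP = 512, 64                            # typical tile size, with some overlap

def diffuse_tile(tile: Image.Image) -> Image.Image:
    """Placeholder for the SD img2img pass run on a single tile."""
    return tile

src = Image.open("gan_upscaled.png")               # hypothetical pre-upscaled image
out = src.copy()
step = TILE - OVERLAP
for y in range(0, src.height, step):
    for x in range(0, src.width, step):
        # Crop an overlapping tile, re-diffuse it, paste it back in place.
        box = (x, y, min(x + TILE, src.width), min(y + TILE, src.height))
        out.paste(diffuse_tile(src.crop(box)), box[:2])
out.save("tiled_result.png")
```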
It turns out lovely results, but I'm finding that when I get to the upscale stage, the face changes to something very similar every time. Like, I can understand that using the Ultimate Upscale one could add more details through adding steps/noise or whatever you'd like to tweak on the node.

After generating my images I usually do Hires.fix, but since I'm using XL I skip that and go straight to img2img and do an SD Upscale by 2x. This way it replicates the SD Upscale / Ultimate SD Upscale scripts from A1111.

This is done after the refined image is upscaled and encoded into a latent.

I tried all the possible upscalers in ComfyUI (LDSR, Latent Upscale, several models such as the NMKD ones, the Ultimate SD Upscale node, "hires fix" (yuck!), the Iterative Latent Upscale via pixel space node (mouthful)), and even bought a license from Topaz to compare the results with FastStone (which is great, btw, for this type of work).

It upscales the second image up to 4096x4096 (4x-UltraSharp) by default for simplicity, but that can be changed to whatever. Adding in the Iterative Mixing KSampler from the early work on DemoFusion produces far more spatially consistent results, as shown in the second image.

The workflow is kept very simple for this test: Load Image > Upscale > Save Image.

I believe the problem comes from the interaction between the way Comfy's memory management loads checkpoint models (note that this issue still happens if smart memory is disabled) and Ultimate Upscale bypassing torch's garbage collection, because it's basically a janky wrapper for an Auto1111 extension.

The restore functionality, which adds detail, doesn't work well with lightning/turbo models.

Upscale to 2x and 4x in multi-steps, both with and without a sampler (all images are saved). Multiple LoRAs can be added and easily turned on/off (currently configured for up to three LoRAs, but it can easily take more). You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well.

Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard.

I get good results using stepped upscalers, the Ultimate SD Upscaler, and such. If you use Iterative Upscale, it might be better to approach it by adding noise using techniques like noise injection or an unsampler hook. For the best results, diffuse again with a low denoise, tiled or via Ultimate Upscale (without scaling!).

To enable higher-quality previews with TAESD, download taesd_decoder.pth, taesdxl_decoder.pth, taesd3_decoder.pth, and taef1_decoder.pth, and place them in the models/vae_approx folder. Once they're installed, restart ComfyUI and launch it with --preview-method taesd to enable high-quality previews.
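A small helper for fetching those decoders, if you'd rather script the download; the source location is my assumption (the madebyollin/taesd GitHub repository, which is where ComfyUI's documentation has pointed), so verify it before relying on it:

```python
import pathlib
import urllib.request

BASE = "https://github.com/madebyollin/taesd/raw/main/"   # assumed download location
DEST = pathlib.Path("ComfyUI/models/vae_approx")          # adjust to your install path
DEST.mkdir(parents=True, exist_ok=True)

# Fetch each TAESD preview decoder into models/vae_approx.
for name in ("taesd_decoder.pth", "taesdxl_decoder.pth",
             "taesd3_decoder.pth", "taef1_decoder.pth"):
    urllib.request.urlretrieve(BASE + name, DEST / name)
    print("downloaded", name)
```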
Curious if anyone knows the most modern, best ComfyUI solutions for these problems? Detailing/Refiner: keeping the same resolution, but re-rendering it with a neural network to get a sharper, clearer image. Upscaling: increasing the resolution and sharpness at the same time.

Always wanted to integrate one myself.

Cause I run SDXL-based models from the start and through 3 Ultimate Upscale nodes. The last one takes time, I must admit, but it runs well and allows me to generate good-quality images (I managed to get a seams-fix settings config that works well for the last one, hence the long processing).

I'm using the mm_sd_v15_v2.ckpt motion model with Kosinkadink's AnimateDiff-Evolved.

You can also do latent upscales. Usually I use two of my workflows. Upscale x1.5 ~ x2: no need for a model, it can be a cheap latent upscale; sample again at denoise=0.5, you don't need that many steps. From there you can use a 4x upscale model and run sampling again at low denoise if you want higher resolution.
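Expressed as a fragment of a ComfyUI API-format graph, that cheap two-pass recipe might look like this (a hedged sketch: the node ids are arbitrary, and the upstream nodes "3" (first-pass latent), "4" (checkpoint), and "6"/"7" (prompts) are assumed to exist elsewhere in the graph):

```python
# Second pass of a two-pass "hires fix": cheap 1.5x latent upscale, then
# re-sample at denoise 0.5 with relatively few steps.
second_pass = {
    "10": {"class_type": "LatentUpscaleBy",
           "inputs": {"samples": ["3", 0],         # latent from the first KSampler
                      "upscale_method": "bislerp",
                      "scale_by": 1.5}},
    "11": {"class_type": "KSampler",
           "inputs": {"model": ["4", 0],
                      "positive": ["6", 0],
                      "negative": ["7", 0],
                      "latent_image": ["10", 0],
                      "seed": 42,
                      "steps": 12,                 # "don't need that many steps"
                      "cfg": 7.0,
                      "sampler_name": "euler",
                      "scheduler": "normal",
                      "denoise": 0.5}},
}
```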