ComfyUI workflow examples from Reddit

150 workflow examples of things I created with ComfyUI and AI models from Civitai. Link: https://comfyworkflows.com/profile/d8351c2d-7d14-4801-84f4

Animate your still images with this AutoCinemagraph ComfyUI workflow.

Hey, got your workflow running last night, and this is why I liked it so much as well! I wish moving the masked image to composite over the other image were easier, or that there were a live preview instead of queuing a generation, cancelling, moving it a bit more, and so on.

Is there a workflow with all features and options combined together that I can simply load and use?

You can encode then decode back to a normal KSampler with a 1.5 model using LCM, 4 steps and 0.2 denoise to fix the blur and soft details. You can also just use the latent without decoding and encoding to make it much faster, but that causes problems with anything less than 1.0 denoise because of the VAE; maybe there is an obvious solution, but I don't know it.

For your all-in-one workflow, use the Generate tab.

This particular workflow is NOT open source.

I'll do you one better and send you a PNG you can directly load into Comfy.

An All-in-One FluxDev workflow in ComfyUI that combines various techniques for generating images with the FluxDev model, including img-to-img and text-to-img. The generated workflows can also be used in the web UI.

AP Workflow v3.0 for ComfyUI (SDXL Base+Refiner, XY Plot, ControlNet XL w/ OpenPose, Control-LoRAs, Detailer, Upscaler, Prompt Builder). I published a new version of my workflow, which should fix the issues that arose this week after some major changes in some of the custom nodes I use.

It's not meant to overwhelm anyone with complex, cutting-edge tech, but rather to show the power of building modules/groups as blocks and merging them into a workflow through muting.

Inside the workflow you will find a box with a note containing instructions and settings tips to get the most out of it.

Attention couple example workflow, or IPAdapter with an attention mask.

Look for the example that uses ControlNet lineart.

I improved on my previous expressions workflow for ComfyUI by replacing the attention couple nodes with area composition ones.

Download one of the dozens of finished workflows from Sytan/Searge/the official ComfyUI examples. There are sections (for example LoRAs and FaceDetailer) that I can easily bypass/un-bypass with Ctrl-select and Ctrl-B.

I've installed requirements.txt both in my root Python interpreter and in the ComfyUI venv, and I've also tried running the script with both. Any suggestions?

Outpainting: works great, but is basically a rerun.

Hard to say without examples. It's the kind of thing that's a bit fiddly to use, so using someone else's workflow might be of limited use to you.

To download the workflow, go to the website linked at the top, save the image of the workflow, and drag it into ComfyUI. You can see it's a bit chaotic in this case, but it works.

Going to python_embedded and using python -m pip install compel got the nodes working.
For example, I had very good results using Resolve and multiple layers that were AI generated, and did the rest in standard VFX.

ComfyUI workflow with 50 nodes and 10 models? Share it with ComfyFlowApp in two steps.

Re #1: I'm sure, but my suggestion was to avoid making people deal with code, and then to think about what happens when you upgrade via ComfyUI, how to address a non-technical audience, etc.

I tried to find either of those two examples, but I have so many damn images I couldn't find them. For example, I just glance at my workflows, pick the one that I want, drag and drop it into ComfyUI, and I'm ready to go.

I'm glad to hear the workflow is useful. If you see a few red boxes, be sure to read the Questions section on the page.

All the images in this repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image.

I did some experiments with ComfyUI and SDXL 1.0 and came up with a reasonably simple, yet pretty flexible and powerful workflow I use myself. My primary goal was to fully utilise the two-stage architecture of SDXL, so I have the base and refiner models working as stages in latent space.

Same workflow as the image I posted, but with the first image being different.

This is an interesting implementation of that idea, with a lot of potential.

They do overlap.

I'm not sure where nodes even is.

Has anyone gotten a good, simple ComfyUI workflow for a 1.x install?

Starting workflow.
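Since so many of these comments come back to "drag the PNG into ComfyUI", here is a minimal sketch of how you could inspect that embedded data yourself. It assumes Pillow is installed and that the image was saved by the stock ComfyUI frontend, which stores the node graph and the executed prompt as `workflow` and `prompt` PNG text chunks; the file name below is just a placeholder.

```python
import json
from PIL import Image  # pip install pillow

def read_comfy_metadata(path: str) -> dict:
    """Return the workflow/prompt JSON embedded in a ComfyUI-generated PNG."""
    info = Image.open(path).info  # PNG text chunks show up in .info
    out = {}
    for key in ("workflow", "prompt"):  # keys the stock ComfyUI frontend writes
        if key in info:
            out[key] = json.loads(info[key])
    return out

if __name__ == "__main__":
    meta = read_comfy_metadata("ComfyUI_00001_.png")  # placeholder file name
    print("embedded keys:", list(meta))
    # meta["workflow"] is what the Load button / drag-and-drop reads back in.
```

This is also why images re-encoded by Reddit or Facebook stop working for drag-and-drop: the text chunks get stripped, so there is nothing left for ComfyUI to read.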
Img2Img ComfyUI workflow. Installing ComfyUI.

[Load VAE] and [Load LoRA] are not plugged in in this config for DreamShaper. Image saving and post-processing need was-node-suite-comfyui to be installed.

I noticed that the log shows which prompts are added and most of the parameters used, which I can then bring over to ComfyUI.

I call it "The Ultimate ComfyUI Workflow": easily switch from txt2img to img2img, with a built-in refiner and LoRA support.

No one said it was greedy. Life happens, shit happens; $5.00 a month doesn't sound like much, and it's not really, but if you don't have it, or you are already supporting others, that doesn't equal "I think they are greedy".

The workflow posted here relies heavily on useless third-party nodes from unknown extensions.

The key things I'm trying to achieve: I tried to find a good inpaint workflow and just found a bunch of wild workflows that wanted a million nodes and had a bunch of different functions.

I recently started to learn ComfyUI and found this workflow from Olivio, and I'm looking for something that does a similar thing but can start from an SD or real image as input.

Has anyone managed to implement Krea.AI or Magnific AI in ComfyUI? I've seen the web source code for Krea AI, and I've seen that they use SD 1.5 for converting an anime image of a character into a photograph of the same character while preserving the features. I don't know what Magnific AI uses. I am struggling like hell; just telling me some good ControlNet strength and image-denoising values would already help a lot!

ControlNet and T2I-Adapter ComfyUI workflow examples. Note that in these examples the raw image is passed directly to the ControlNet/T2I adapter.

Those things I use less often I've saved as templates, and if I need, say, IPAdapter, I can add it to my workflow with a few clicks.

Merging 2 Images ComfyUI examples. You can load these images in ComfyUI to get the full workflow.

Open-sourced the nodes and example workflow in this GitHub repo, and my colleague Polina made a video walkthrough to help explain how they work!

For example, it would be very cool if one could place the node numbers on a grid.

I understand your frustration, but that's not really how ComfyUI is supposed to work.

From ChatGPT: Guide to Enhancing Illustration Details with Noise and Texture in Stable Diffusion (based on 御月望未's tutorial).

I've been especially digging the detail in the clothing more than anything else.

It's a bit messy, but if you want to use it as a reference, it might help you.

I created my first workflow for ComfyUI and decided to share it with you, since I found it quite helpful for me.

I want a ComfyUI workflow that's compatible with SDXL with the base model, refiner model, hi-res fix, and one LoRA all in one go.

If you are using a PC with limited computational power, this workflow for using Stable Diffusion with the ComfyUI interface is designed specifically for you.

I'm not going to spend two and a half grand on high-end computer equipment, then cheap out by paying £50 on some crappy SATA SSD.

ComfyUI's inpainting and masking aren't perfect.

These include Stable Diffusion and other platforms like Flux, AuraFlow, PixArt, etc.

This guide is about how to set up ComfyUI on your Windows computer to run Flux. Let's break down the main parts of this workflow so that you can understand it better.

ComfyUI - Ultimate Starter Workflow + Tutorial. Heya, I've been working on this workflow for like a month and it's finally ready, so I also made a tutorial on how to use it.

I've been playing with ComfyUI for a while now, and even though I only do it for fun, I think I managed to create a workflow that will be helpful for others.
You can also just load an image on the left side of the ControlNet section and use it that way. Edit: if you use the link above, you'll need to replace the …

A few months ago, I suggested the possibility of creating a frictionless mechanism to turn ComfyUI workflows (no matter how complex) into simple and customizable front-ends for end users.

I'd recommend starting from the default workflow and working your way up to more complex ones.

It is much more coherent and relies heavily on the IPAdapter source image, as you can see in the gallery.

You can find examples and workflows on his GitHub page, for example txt2img with latent upscale (partial denoise on the upscale), a 48-frame animation with a 16-frame window.

I made an open-source tool for running any ComfyUI workflow with ZERO setup.

I spent literally four hours today trying every possible combination of changes to my settings to get it to go away; it was so obvious in every picture I did that had a close-up of a person.

That's the one I'm referring to.

Using a preset or ready-made workflow might sound like an easy solution, but in the long run it will hamper your progress.

Creating such a workflow with the default core nodes of ComfyUI is not possible at the moment. It would require many specific image-manipulation nodes to cut an image region, pass it through the model, and paste it back.

One guess is that the workflow is looking for the Control-LoRA models in the cached directory (which is my directory on my computer).

I'm using ComfyUI portable and had to install it into the embedded Python install.

Fully supports SD 1.x and SD 2.x.

hr-fix-upscale: workflows utilizing hi-res fixes.

For some workflow examples, and to see what ComfyUI can do, you can check out the ComfyUI Examples page.

I'm sharing this workflow that demonstrates how to convert a Stable Diffusion creation into … Is there a way to convert vertex colors to UV mapping with, for example, Blender?

Just wanted to share that I have updated the comfy_api_simplified package; it can now be used to send images, run workflows and receive images from a running ComfyUI server.

I've done it before by using the websocket example inside ComfyUI. Can your ComfyUI-serverless be adapted to work if the ComfyUI workflow is hosted on RunPod, Kaggle, or Google Colab?

Has anyone done something similar, or can anyone offer tips on creating this workflow in ComfyUI? Any guidance or examples would be greatly appreciated! Thanks!

If you have any of those generated images as the original PNG, you can just drop them into ComfyUI and the workflow will load.

I can load ComfyUI through 192.168.x.1:8188, but when I try to load a flow through one of the …

And I'm going crazy looking through all my workflows, because I remember finding one where you give it an mp4 and a photo of a face and it pops out that same video but face-swapped; that is all, really, but I can't find it any more, neither online nor in my huge stack of workflow files.

This UI will let you design and execute advanced Stable Diffusion pipelines using a graph/nodes/flowchart-based interface.

Only the LCM Sampler extension is needed, as shown in this video.

The example pictures do load a workflow, but they don't have a label or text that indicates which version it is.

The idea is that it creates a tall canvas and renders four vertical sections separately, combining them as it goes.
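For the API questions above (the websocket example, comfy_api_simplified, serverless hosts), here is a bare-bones sketch of queuing a job against a running ComfyUI server over its HTTP API. It assumes the server is at 127.0.0.1:8188 and that you exported your graph with "Save (API Format)" (dev mode options enabled), so the JSON maps node ids to class_type/inputs; the file path is a placeholder.

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # assumed local server; change for RunPod/Colab tunnels

def queue_workflow(api_workflow_path: str) -> str:
    """POST an API-format workflow to /prompt and return the prompt_id."""
    with open(api_workflow_path) as f:
        workflow = json.load(f)
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(f"{COMFY_URL}/prompt", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["prompt_id"]

def get_history(prompt_id: str) -> dict:
    """Fetch the finished job's record (output filenames etc.) from /history."""
    with urllib.request.urlopen(f"{COMFY_URL}/history/{prompt_id}") as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    pid = queue_workflow("workflow_api.json")  # placeholder path
    print("queued:", pid)
```

Wrapper packages like comfy_api_simplified essentially build on these same endpoints, adding image upload and result download on top.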
Three different input methods, including img2img, prediffusion, and latent image; prompt setup for SDXL; sampler setup for SDXL; annotated; automated watermark.

Here to share my current workflow for switching between prompts.

This workflow can use LoRAs and ControlNets, enabling negative prompting with the KSampler, dynamic thresholding, inpainting, and more.

All the workflows I found create a totally new image.

Here are my findings: the neutral value for all FreeU options (b1, b2, s1 and s2) is 1.0. Not unexpected, but as these are not the default values in the node, I mention it here. b1 is responsible for the larger areas of the image, b2 for the smaller areas, s1 for the details in b2, and s2 for the details in b1.

Hello everyone, I have some exciting updates to share for One Button Prompt. Some very cool stuff! For those who don't know what One Button Prompt is, it is a feature-rich auto prompt generator, easy to use in A1111 and ComfyUI, meant to inspire and surprise. It now officially supports ComfyUI, and there is now a new Prompt Variant mode.

It provides a workflow for SDXL (base + refiner).

Grab the ComfyUI workflow JSON here.

I might open an issue in ComfyUI about that.

Upscaling ComfyUI workflow. The workflow has a separate upscale flow that can upscale up to 4x, and in my recent version I added a more complex flow that is meant to add details to a generated image. For a dozen days I've been working on a simple but efficient workflow for upscaling. Instead, I created a simplified 2048x2048 workflow.

Demofusion vs. Deepshrink comparison, tested at 2048x2048 on the same prompt.

They depend on complex pipelines and/or Mixture of Experts (MoE) approaches that enrich the prompt in many different ways.

Is this possible? This is the input image that will be used in this example. Here is an example using a first pass with AnythingV3 with the ControlNet, and a second pass without the ControlNet with AOM3A3 (Abyss Orange Mix 3), using their VAE.

The point of this workflow is to have all of it set up and ready to use at once.

In this guide, we are aiming to collect a list of 10 cool ComfyUI workflows that you can simply download and try out for yourself.

ComfyUI has a tidy and swift codebase that makes adjusting to fast-paced technology easier than most alternatives.

A lot of people are just discovering this technology and want to show off what they created.

Here's a quick example where the lines from the scribble actually overlap with the pose.

This is a really cool ComfyUI workflow that lets us brush over a part of an image, click generate, and out pops an mp4 with the brushed-over parts animated! This is super handy for a bunch of things like marketing flyers, because it can animate parts of an image while leaving other areas, like text, untouched.

Perhaps OP has nothing to give.
This is an arbitrary example of an image produced with a prompt that isn't just boilerplate.

It makes it really easy if you want to generate an image again with a small tweak, or just check how you generated something.

You can extend it in a much more fine-grained way.

Discover, share and run thousands of ComfyUI workflows on OpenArt.

Re #2: the content of the input you are ingesting, whichever that is.

How it works: download and drop any image from the …

And then the video in the post shows a rather simple layout that proves out the building blocks of a mute-based, context-building workflow.

This workflow requires quite a few custom nodes and models to run: PhotonLCM_v10.safetensors, sd15_lora_beta.safetensors, sd15_t2v_beta.ckpt.

But if you expect everything to work right away without learning how it applies to your own workflows, ComfyUI might not be the best fit for you. You need to understand how everything fits together.

When I run comfyui_to_python.py it is unable to find `from nodes import NODE_CLASS_MAPPINGS`.

ComfyUI is usually on the cutting edge of new stuff.

Breakdown of workflow content.

Example workflow: an example of integrating the nodes in an SDXL Turbo workflow.

Img2Img examples. These are examples demonstrating how to do img2img.

Users of ComfyUI, which premade workflows do you use? I read through the repo, but it has individual examples for each process we use: img2img, ControlNet, upscale, and so on. An all-in-one workflow would be awesome.

Nodes/graph/flowchart interface to experiment and create complex Stable Diffusion workflows without needing to code anything.

We will go through some basic workflow examples.

Upcoming tutorial: SDXL LoRA + using a 1.5 LoRA with SDXL, upscaling. Future tutorials planned: prompting practices, post-processing images, batch trickery, networking ComfyUI.

To improve sharpness, search for "was node suite comfyui workflow examples" on Google; it should take you to a GitHub page with various workflows, one of which runs a high-pass for sharpening. You can download the workflow and run it in your Comfy.

TLDR: check out DALL-E 3 or some of the other all-in-one workflows if minutiae isn't your thing.

Or try searching Reddit; the ComfyUI manual needs updating, IMO.

I am so sorry, but my video is outdated now because ComfyUI has officially implemented SVD natively: update ComfyUI, copy the previously downloaded models from the ComfyUI-SVD checkpoints to your Comfy models SVD folder, and just delete the custom nodes ComfyUI-SVD.

For Flux Schnell you can get the checkpoint here; put it in your ComfyUI/models/checkpoints/ directory.

They can create the impression of watching an animation when presented as an animated GIF or other video format.
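To make the "high-pass for sharpening" remark concrete: this is not the WAS node itself, just a plain-Python illustration of the idea that workflow applies, so you know what it is doing. A high-pass layer is the image minus its blur, and adding a fraction of it back sharpens edges; Pillow and NumPy are assumed, and the file names are placeholders.

```python
import numpy as np
from PIL import Image, ImageFilter  # pip install pillow numpy

def highpass_sharpen(path: str, radius: float = 3.0, amount: float = 0.6) -> Image.Image:
    """Sharpen by adding a high-pass layer (image minus its blur) back on top."""
    src = Image.open(path).convert("RGB")
    img = np.asarray(src, dtype=np.float32)
    blur = np.asarray(src.filter(ImageFilter.GaussianBlur(radius)), dtype=np.float32)
    highpass = img - blur                           # edges and fine detail only
    out = np.clip(img + amount * highpass, 0, 255)  # add a fraction of it back
    return Image.fromarray(out.astype(np.uint8))

if __name__ == "__main__":
    highpass_sharpen("render.png").save("render_sharp.png")  # placeholder names
```

Radius controls how coarse the detail layer is; amount controls the strength, much like an unsharp-mask filter.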
It'll add nodes as needed if you enable LoRAs or ControlNet, or want it refined at 2x scale, or whatever options you choose, and it can output your workflows as Comfy nodes if you ever want to export them.

It seems wasteful, like in the official ComfyUI SVD example, to keep generating text-to-image-to-video in one go. That's exactly what I ended up planning: I'm a newbie to ComfyUI, so I set up Searge's workflow, then copied the official ComfyUI i2v workflow into it and pass it into the node.

I can't see it, because I can't find the link for the workflow.

So, up until today, I figured the "default workflow" was still always the best thing to use.

It's a simpler setup than u/Ferniclestix uses, but I think he likes to generate and inpaint in one session, whereas I generate several images, then import them and inpaint later (like this).

Can you explain where you got the example-lora_1.safetensors file and 4x UltraSharp.pth?

Yup, all images generated in the main ComfyUI frontend have the workflow embedded in the image like that (right now anything that uses the ComfyUI API doesn't have that, though).

Good ways to start out.

Area Composition Examples | ComfyUI_examples (comfyanonymous.github.io). Also, it can be very difficult to get the position and the prompt right for the conditions.

TLDR of the video: in the first part he uses RevAnimated to generate an anime picture with Rev's styling, then passes this image/prompt/etc. to a second sampler, but this instead …

I have them stored in a text file at ComfyUI\custom_nodes\comfyui-dynamicprompts\nodes\wildcards\cameraView.txt.

Ok, I've got an issue and am not able to run the script.

More examples.
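Following up on that wildcard path: a wildcard file is just one option per line, and dynamic-prompts-style syntax swaps a __name__ token for a random line from the matching file. The snippet below is a plain-Python illustration of that substitution, not the extension's own API; the directory comes from the comment above, and the helper function and the example cameraView contents are assumptions.

```python
import random
import re
from pathlib import Path

# Directory mentioned in the comment above (forward slashes for portability).
WILDCARD_DIR = Path("ComfyUI/custom_nodes/comfyui-dynamicprompts/nodes/wildcards")

def expand_wildcards(prompt: str, rng: random.Random = random.Random()) -> str:
    """Replace each __name__ token with a random line from name.txt."""
    def pick(match: re.Match) -> str:
        options = (WILDCARD_DIR / f"{match.group(1)}.txt").read_text().splitlines()
        options = [line.strip() for line in options if line.strip()]
        return rng.choice(options)
    return re.sub(r"__(\w+)__", pick, prompt)

# cameraView.txt might contain lines such as "low angle", "overhead shot", "close-up".
print(expand_wildcards("portrait of a knight, __cameraView__, dramatic light"))
```

The dynamic-prompts custom nodes do this inside the graph, so the substituted prompt (not the template) is what ends up stored in the image metadata.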
The graphic style …

Hi everyone, I'm looking to deploy ComfyUI on Replicate, similar to how it is done in this example: fofr/any-comfyui-workflow.

ComfyUI-to-Python-Extension output can be written by hand, but it's a bit cumbersome, can't take advantage of the cache, and can only be run locally.

SECOND UPDATE, HOLY COW I LOVE COMFYUI EDITION: look at that beauty! Spaghetti no more.

This is done using WAS nodes. Nothing fancy.

I'm interested in using the UI to prototype, then productionising a fine-tuned or similar model.

2) Run the step 1 workflow ONCE; all you need to change is where the original frames are and the dimensions of the output that you wish to have. [If for some reason you want to run something that is less than 16 frames long, all you need is this part of the workflow.]

That's a bit presumptuous, considering you don't know my requirements.

Question | Help: First of all, sorry if this has been covered before; I did search and nothing came back.
Is there a way to load the workflow from an image within …

I am building this around [Coherent Facial Expressions].

An example of inpainting + ControlNet from the ControlNet paper.

If you look at the ComfyUI examples for area composition, you can see that they're just using the Conditioning (Set Mask / Set Area) nodes.

You can find the workflows and more image examples below: ComfyUI SUPIR Upscale Workflow.

I've been using ComfyUI for a few weeks now and really like the flexibility it offers.

SDXL Default ComfyUI workflow.

I couldn't decipher it either, but I think I found something that works.

MoonRide workflow v1.0.

The only issue is that it requires more VRAM, so many of us will probably be forced to decrease the resolution below 512x512.
For ComfyUI there should be license information for each node, in my opinion: "Commercial use: yes / no / needs license", and a workflow using a non-commercial node should show a warning in red. This could lead users to put more pressure on developers.

Nobody needs all that, LOL.

ComfyUI-stable-wildcards can be installed through the Comfy Manager. Example: the stable-wildcards node replaces the normal text-prompt node in a way that the actually used text prompt is stored in the workflow.

To quickly check the prompt in any generated image, you can hover over the node and the executed prompt will be displayed.

I tried it with masking nodes, but the results weren't what I was expecting; for example, the original masked image of the product was still processed and the text got all scrambled up.

The only thing that hangs me up on Ferniclestix's workflow is that I have to have the seed to … I used it in this workflow here; check the provided example images.

From my tests, IPAdapter (at least in its current form) will not take you there; the level of fidelity you can achieve is far, far from the level necessary in the professional field.

Yes, 8 GB card. The ComfyUI workflow loads both SDXL base and refiner models, a separate XL VAE, three XL LoRAs, plus Face Detailer with its SAM model and bbox detector model, and Ultimate SD Upscale with its ESRGAN model.

To those who already jumped on the ComfyUI boat: say I wished to install ComfyUI on a system where I'm already running A1111's sd-webui, could ComfyUI be configured to make use of the already present SD backend, or would it necessarily need its own?

Looks like she is standing in front of a pool window in an aquarium, for example, but then the shadow makes …

Nodes include: LoadOpenAIModel.

Do you all prefer separate workflows or one massive all-encompassing workflow? I think it was 3DS Max.

ComfyUI Fooocus Inpaint with Segmentation workflow.

Does anyone have a workflow I can use to run LoRAs with SDXL? All the workflows I have found don't have the option of adding LoRAs.

It's simple and straight to the point. So, I just made this workflow in ComfyUI.

That being said, even for making apps, I believe using ComfyScript is better than directly modifying JSON, especially if the workflow is complex.

You should try to click on each one of those model names in the ControlNet stacker node.

In this case he also uses the ModelSamplingDiscrete node from the WAS node suite, supposedly for chained LoRAs; however, in my tests that node made no difference.

These are preliminary tests done by someone who has not fully tested this yet, so don't take any of this as gospel.

From a clean start (as in nothing loaded or cached), a full generation takes me about 46 seconds from button press: model loading, encoding, sampling, upscaling, the works.

Hey everyone, I'm looking to set up a ComfyUI workflow to colorize, animate, and upscale manga pages, but I'm running into some challenges, since I'm a noob.

Img2Img works by loading an image …

This guide, inspired by 御月望未's tutorial, explores a technique for significantly enhancing the detail and color in illustrations using noise and texture in Stable Diffusion.

Example workflow: many things are taking place here. Note how only the area around the mask is sampled (40x faster than sampling the whole image); it's upscaled before sampling, then downscaled before stitching; the mask is blurred before sampling; and the sampled image is blended seamlessly back into the original image. A good place to start if you have no idea how any of this works.
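The masked-area example described just above (sample only a padded crop around the mask, then blend the result back with a blurred mask) can be illustrated outside ComfyUI too. This is only a sketch of the crop-and-stitch idea, with a stand-in where the real workflow would upscale, sample and downscale the latent; it assumes Pillow and NumPy and a non-empty mask image.

```python
import numpy as np
from PIL import Image, ImageFilter

def inpaint_region(image: Image.Image, mask: Image.Image, pad: int = 64) -> Image.Image:
    """Crop a padded box around the mask, 'process' it, and blend it back in."""
    m = np.array(mask.convert("L")) > 127          # assumes the mask has white pixels
    ys, xs = np.nonzero(m)
    x0, x1 = max(int(xs.min()) - pad, 0), min(int(xs.max()) + pad, image.width)
    y0, y1 = max(int(ys.min()) - pad, 0), min(int(ys.max()) + pad, image.height)

    crop = image.crop((x0, y0, x1, y1))
    processed = crop  # stand-in: the workflow upscales, samples and downscales here

    # Blur the mask so the paste blends seamlessly instead of leaving a hard seam.
    soft = mask.convert("L").filter(ImageFilter.GaussianBlur(8)).crop((x0, y0, x1, y1))
    out = image.copy()
    out.paste(Image.composite(processed, crop, soft), (x0, y0))
    return out
```

Working on the small crop instead of the full canvas is where the "40x faster" figure in the comment comes from; the blurred mask is what hides the seam when the region is pasted back.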
Notice how we didn't even need to add any node for all this to work! But of course, the point of working in ComfyUI is the ability to modify the workflow.

Comparisons and discussions across different platforms are encouraged.

Thanks for the responses, though; I was unaware that the metadata of the generated files contains the entire workflow.

I had to place the image into a zip, because people have told me that Reddit strips PNGs of their metadata.

Reddit strips all metadata from uploaded images, for example, as does Facebook IIRC, so you'll never be able to just drop images from here into Comfy.

SDXL Turbo is an SDXL model that can generate consistent images in a single step.

Prompt: Two warriors.

Prompt: A couple in a church.

Here you can download my ComfyUI workflow with 4 inputs.

I've got 3 tutorials that can teach you how to set up a decent ComfyUI inpaint workflow.

Exactly this: don't try to learn ComfyUI by building a workflow from scratch. After studying some essential ones, you will start to understand how to make your own. The key is understanding how each node works and how they interact to create an image.

While I was kicking around in LtDrData's documentation today, I noticed the ComfyUI Workflow Component, which allowed me to move all the mask-logic nodes behind the scenes.

EZ way: just download this one and run it like another checkpoint ;) https://civitai.com/models/628682/flux-1-checkpoint

A higher clip skip (in A1111 terms; lower, i.e. more negative, in ComfyUI's terms) equates to LESS detail from CLIP (not to be confused with detail in the image).

I wanted a very simple but efficient and flexible workflow.

These have all been run on a 3080 with 64 GB of DDR5-6000 and a 12600K.

Each ControlNet/T2I adapter needs the image that is passed to it to be in a specific format, like depth maps, canny maps and so on, depending on the specific model, if you want good results.

Mixing ControlNets.

Is it possible to create, with nodes, a sort of "prompt template" for each model and have it selectable via a switch in the workflow? For example: 1) enable Model SDXL BASE, and this would auto-populate my starting positive and negative prompts and the sampler settings that work best with that model.

The new version uses two ControlNet inputs: a 9x9 grid of OpenPose faces, and a single OpenPose face.

Hey everyone, we got a lot of interest in the documentation we did of 1600+ ComfyUI nodes and wanted to share the workflow + nodes we used to do so using GPT-4.

Multiple characters from separate LoRAs interacting with each other.
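For the clip-skip remark above, the usual mapping is that A1111's "Clip skip: 2" corresponds to stop_at_clip_layer = -2 on ComfyUI's CLIP Set Last Layer node (more negative means more layers skipped). A minimal API-format fragment, with made-up node ids, might look like this; treat it as an illustration rather than a ready-made graph.

```python
# Hypothetical API-format snippet: node "3" is a CheckpointLoaderSimple, node "4"
# clips the text encoder at the second-to-last layer (A1111 "Clip skip: 2").
clip_skip_nodes = {
    "4": {
        "class_type": "CLIPSetLastLayer",
        "inputs": {
            "clip": ["3", 1],          # CLIP output (index 1) of the checkpoint loader
            "stop_at_clip_layer": -2,  # -1 = no skip, -2 = skip one layer, ...
        },
    },
}
```

As the comment says, some models simply are not trained for a skipped layer, so changing this can make results worse rather than more detailed.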
Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.

Civitai has a few workflows as well.

My long-term goal is to use ComfyUI to create multi-modal pipelines that can reach results as good as the ones from the AI systems mentioned above, without human intervention; this is the first step in that direction.

Just load your image and prompt, and go.

ControlNet, IP-Adapter, AnimateDiff, …

ComfyUI workflow with vid2vid AnimateDiff to create alien-like girls. Workflow included.

It is a simple workflow for Flux AI on ComfyUI.

SDXL Turbo examples.

Its modular nature lets you mix and match components in a …

Go to GitHub repos for the example workflows.

But it separates the LoRA into another workflow (and it's not based on SDXL either). Just my two cents.

Then there's a full render of the image with a prompt that describes the whole thing.

This is the workflow I use in ComfyUI to render 4K pictures with the DreamShaper XL model.

I have an image that I want to do a simple zoom-out on.

Workflows: SDXL default workflow, a great starting point.

Here is an example of three characters, each with its own pose, outfit, features, and expression. Left: woman wearing full armor, ginger hair, braided hair, hands on hips, serious. Middle: girl, princess dress, blonde hair, tiara, jewels, sitting on a throne, blushing. …

That's where I'd gotten my second workflow I posted from, which got me going.

I am hoping to find a ComfyUI workflow that allows me to use Tiled Diffusion + ControlNet Tile for upscaling images; can anyone point me to one?

I can load workflows from the example images through localhost:8188; this seems to work fine. But I can't load workflows from the example images using a second computer.
You can't change clip skip and get anything useful from some models (SD 2.0 and Pony, for example; for Pony I think it always needs 2) because of how their CLIP is encoded.

A video snapshot is a variant on this theme.

Edit: as other people have suggested, Civitai is great; just be aware that if you turn … Reddit removes the ComfyUI metadata when you upload your pic.

You can then load or drag the following …

Also, if this is new and exciting to you, … Here is one I've been working on that uses ControlNet, combining depth, blurred HED and noise as a second pass; it has been coming out with some pretty nice variations of the originally generated images.

New tutorial: how to rent 1-8x GPUs and install ComfyUI in the cloud (+ Manager, custom nodes, models, etc.), plus a quick run-through of an example ControlNet workflow.

Yesterday I released TripoSR custom nodes for ComfyUI.

Can you share the IP-Adapter workflow, please?

I then recommend enabling Extra Options -> Auto Queue in the interface.

This is your workflow copied, but with a second sampler and an NNLatentUpscale (1.25x) before it.

…if the workflow uses img2img as the source image and IP-Adapter as reference only.

I recently discovered the existence of the GLIGEN nodes in ComfyUI and thought I would share some of the images I made using them (more in the Civitai post link). Has anyone else messed around with GLIGEN much? This is just a slightly modified ComfyUI workflow from an example provided in the examples repo.

From what I see in the ControlNet and T2I-Adapter examples, this allows me to set both a character pose and the position in the composition.

I looked into the code, and when you save your workflow you are actually "downloading" the JSON file, so it goes to your default browser download folder.

This workflow by Antzu is a good example of prompt scheduling, which is working well in Comfy thanks to Fitzdorf's great work. This one by Nathan Shipley didn't use this exact workflow, but it is a great example of how powerful and beautiful prompt scheduling can be.

I'm not very experienced with ComfyUI, so any ideas on how I can set up a robust workstation utilizing common tools like img2img, txt2img, refiner, model merging, LoRAs, etc. would be welcome.

There are many regional conditioning solutions available, but as soon as you try to add LoRA data to the conditioning channels, the LoRA data seems to overrun the whole generation. This is why the Object Swapper only leverages ControlNet.

It seems they used Monster Labs' QR ControlNet, which should work in Comfy, as it sounds like the proof of concept was done in Comfy.

Is this really such a good workflow, or did I set up my workflow wrong and I'm not getting the quality I should, and that's why the generations are so fast at such a large size? Give it to me straight; please tell me what I'm doing wrong, as I don't want to just keep using this workflow only to find out that my images aren't generating at the quality they could be!

Img2img + inpaint workflow; ControlNet + img2img workflow; inpaint + ControlNet workflow; img2img + inpaint + ControlNet workflow. Does anyone have knowledge on how to achieve this? I want the output to incorporate these workflows in harmony, rather than simply layering them.

Depending on what you want, it might be useful to run a ControlNet group in the workflow, with canny edge detection or similar in addition to the denoise, to keep certain elements during the resample and keep the reimagining on track.

It's just not intended as an upscale from the resolution used in the base-model stage.

I've created an All-in-One FluxDev workflow in ComfyUI that combines various techniques for generating images with the FluxDev model, including img-to-img and text-to-img (Ling-APE/ComfyUI-All-in-One-FluxDev).

It was one of the earliest to add support for Turbo, for example.

This probably isn't the completely recommended workflow, though, as it has POS_G and NEG_G prompt windows but none for POS_L, POS_R, NEG_L, and NEG_R, which are part of SDXL's trained prompting format.
I know I can go to Hugging Face and use Illusion Diffusion, but I would like to use other plugins along with it.

Image Processing: a group that allows the user to perform a multitude of blends between image sources, as well as add custom effects to images.

I built a free website where you can share and discover thousands of ComfyUI workflows: https://comfyworkflows.com/

I would like to include those images in a ComfyUI workflow and experiment with different backgrounds: mist, light rays, abstract colorful stuff behind and in front of the product subject.

But standard A1111 inpaint works mostly the same as the ComfyUI example you provided.

Image Realistic Composite & Refine ComfyUI workflow.

What is the best workflow that people have used with the most capability without using custom nodes? Flux Schnell.

Now you can condition your prompts as easily as applying a ControlNet!

Flux.1 ComfyUI install guidance, workflow and example.

If you have the SDXL 0.9 leaked repo, you can read the README.md file yourself and see that the refiner is in fact intended as img2img, basically as you see being done in the ComfyUI example workflow someone posted.

I'm not sure what's wrong here, because I don't use the portable version of ComfyUI.

I also had issues with this workflow with unusually sized images.

It also allows better experimentation for those who dare.

I originally wanted to release 9.0 with support for the new Stable Diffusion 3, but it was way too optimistic. While waiting for it, as always, the amount of new features and changes snowballed to the point that I must release it as is.

I tried to use it in the Object Swapper function in my AP Workflow before releasing version 6.

So, messing around to make some stuff, I ended up with a workflow I think is fairly decent and has some nifty features.

SD will re-imagine your image; that's what it's doing.

I understand how outpainting is supposed to work in ComfyUI (workflow …).

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs, with fast generation. Below are some example generations I have run through my workflow.

I learned this from Sytan's workflow; I like the result.
You can use folders too, e.g. cascade/clip_model.safetensors vs 1.5/clip_model_somemodel.safetensors and 1.5/clip_some_other_model.safetensors; it makes it easier to remember which one to choose.

Why is everything to do with this fucking program like knocking on a random door out of curiosity and then having your fucking teeth ripped out with no explanation?

Save this image, then load it or drag it onto ComfyUI to get the workflow.

(For 12 GB of VRAM, the max is about 720p resolution.)

So, we will learn how to do things in ComfyUI using the simplest text-to-image workflow.

Thanks, I already have that, but I ran into the same issue I had earlier where the Load Image node is missing the Upload button. I fixed it earlier by doing Update All in the Manager and then running the ComfyUI and Python dependencies batch files, but that hasn't worked this time, so I'm only going to be able to do prompts from text for now.

I'm pretty happy with my workflow, but when I swap faces I lose so much detail (like a brushed face), so I'm thinking: is it possible to add a LoRA (Realora, for example) after the face swap to enhance realism on the face?

Oh, and if you would like to try out the workflow, check out the comments! I couldn't put it in the description, as my account awaits verification.

This is something I have been chasing for a while.
