Why is ComfyUI faster? A roundup of Reddit comments on ComfyUI vs. A1111 and Forge.
I tried ComfyUI early on, but I wasn't mentally prepared: it was too different. I also recently tried Fooocus and found it lacked customisation personally, but I appreciate the awesome inpainting they have, and their Midjourney-inspired prompt stuff is really cool. Now I've been on ComfyUI for a few months and I won't turn on A1111 anymore.

This is why I and many others are unable to generate at all on A1111, or only in like 4 minutes, whereas in ComfyUI it's just 30 seconds. To verify if I'm full of shit, go generate something and check the console for your iterations per second.

I only go back to Forge, which is the same as Automatic1111 but with some improvements, for inpainting and X/Y/Z plot, because I think it's faster and more convenient. So, as long as you don't expect ComfyUI not to break occasionally, sure, give it a go.

A few weeks ago I did a "spring-cleaning" on my PC and completely wiped my Anaconda environments, packages, etc.

…is more optimised out of the box, and so can run faster on less VRAM. Comfy: more of a backend than the other options, meaning a much steeper learning curve; updates faster and gives you access to the bleeding edge of SD…

I think instead of using ComfyUI, it would be much faster to actually create a proper program.

Recently started playing with ComfyUI and I found it is a bit faster than A1111, and I get the following results:

Global Step: 840000
model_type EPS
adm 0
making attention of type 'vanilla' with 512 in_channels

After trying everything, I can finally use ComfyUI (on my computer it is faster than A1111, for XL in particular). Before 1.6 I couldn't run SDXL in A1111, so I was using ComfyUI.

Better to generate a large quantity of images, but for editing this is not really efficient.

For my own purposes, ComfyUI does everything I already used, and it is easy to get running. Even if there's an issue with my installation or the implementation of the refiner in SD.next (still experimental), ComfyUI's performance is significantly faster than what you are reporting.

It's the first UI that's actually been able to challenge A1111's dominance in any kind of way; the other UIs mostly fly under the radar. It also is much, much faster than Automatic1111.

You should be able to drop images into ComfyUI as well, and it will load up the workflow.

I used ComfyUI for a while, but on Linux on my AMD card I found I was constantly getting OOM driver freezes and graphical glitches. Forge's memory management is sublime, on the other hand.

For example, you can do side-by-side comparisons of workflows: one with only the base model and one with base + LoRA, and see the difference.

I need help (I just want to install normal SD, not SDXL).

ComfyUI authors are trying to confuse and mislead people into trusting this.

I started on the A1111.

So I've been running ComfyUI inside of a Python venv and getting OK speeds for my ancient 6GB GPU.

Hey everyone! I'm excited to share the latest update to my free workflow for ComfyUI, "Fast Creator v1.4".

So I'm getting issues with my ComfyUI and loading this custom SDXL Turbo model into ComfyUI.

#ComfyUI #UltimateUpscale: a faster upscale, same quality.

ComfyUI (comfyanonymous.github.io) is also trivial to extend with custom nodes; a minimal node is sketched below.
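On that last point: a ComfyUI custom node is just a Python class dropped into the custom_nodes folder, registered through the standard NODE_CLASS_MAPPINGS convention. A minimal sketch; the node itself (a brightness multiply) is a made-up example, not an existing node:

```python
# Minimal ComfyUI custom node sketch; save under ComfyUI/custom_nodes/.
class ImageBrightness:
    """Hypothetical demo node that scales image brightness by a factor."""

    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "image": ("IMAGE",),
                "factor": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 4.0, "step": 0.05}),
            }
        }

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "apply"
    CATEGORY = "image/postprocessing"

    def apply(self, image, factor):
        # ComfyUI images are torch tensors shaped [batch, height, width, channels]
        # with values in 0..1, so a clamped multiply is all this demo needs.
        return ((image * factor).clamp(0.0, 1.0),)


NODE_CLASS_MAPPINGS = {"ImageBrightness": ImageBrightness}
NODE_DISPLAY_NAME_MAPPINGS = {"ImageBrightness": "Image Brightness (demo)"}
```

Restart ComfyUI (or reload custom nodes) and the node shows up in the add-node menu under the category string above.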
Just check your VRAM and be sure optimizations like xformers are set up correctly, because other UIs like ComfyUI already enable those, so you don't really feel the higher VRAM usage of SDXL.

ComfyUI is really good for more "professional" use and allows you to do much more if you know what you are doing, but it's harder to navigate through each setting if you want to tweak: you have to move around the screen a lot, zoom in, zoom out, etc.

I recall when Vlad's fork was said to run much faster than Automatic1111. In the end, it turned out Vlad enabled by default some optimization that wasn't enabled by default in Automatic1111.

I use that to simplify my workflow.

It's still 30 seconds slower than ComfyUI at the same 1366x768 resolution and 105 steps.

I've been scheduling prompts on hundreds of images for AnimateDiff for a long time, with giant batches of 1000+ frames.

Easier to install and run, but tends…

Comfy is faster than A1111 though, and you have a lot of creative freedom to play around with latents, mix and match models, and do other crazy stuff in a workflow that can be built and re-used.

This update includes new features and improvements to make your image creation process faster and more efficient.

And it's quite a bit faster than ADetailer (like so many other things in Comfy). I agree about the LoRAs though; that is definitely a pain if you've got a lot.

I like the web UI more, but ComfyUI just gets things done quicker, and I can't figure out why; it's breaking my brain.

No spaghetti, no figuring out why this latent needs these 4 nodes and why one of them hasn't worked since the last update.

Is it faster?

As you get comfortable with ComfyUI, you can experiment and try editing a workflow.

I tried --lowvram --no-half-vae, but it was the same problem.

I don't do a lot of just prompt-and-run work, which ComfyUI is just as good as, if not much better at.

Why is everyone saying Automatic1111 is really slow with SDXL? I have it, and it even runs 1-2 seconds faster than my custom 1.5 models.

Some of the ones with 16GB of VRAM are pretty cheap now.

Very nice, working well, way faster than the previous method I was using.

I was using Automatic1111 before; now I use ComfyUI as my main image generator.

No idea why, but I get like 7.13s/it on ComfyUI, and on WebUI I get like 173s/it.
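Those per-iteration numbers are easy to misread: tqdm-style consoles print it/s when a step takes under a second and flip to s/it when it takes longer, which is also what the "if you meant s/it" reply below is about. A standalone sketch of measuring both; the conv loop is a stand-in for a sampler step, not ComfyUI or A1111 code:

```python
# Rough it/s vs s/it measurement around a fake "denoising" loop.
import time
import torch
import torch.nn.functional as F
from tqdm import trange

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(1, 4, 128, 128, device=device)  # SDXL-sized latent stand-in
w = torch.randn(4, 4, 3, 3, device=device)

steps = 20
start = time.perf_counter()
for _ in trange(steps):          # tqdm prints its own it/s (or s/it) here
    for _ in range(100):         # stand-in workload per sampler step
        x = F.conv2d(x, w, padding=1)
    if device == "cuda":
        torch.cuda.synchronize()  # don't time queued-but-unfinished GPU work
elapsed = time.perf_counter() - start
print(f"{steps / elapsed:.2f} it/s ({elapsed / steps:.2f} s/it)")
```

Comparing the same number from two UIs only means something if the resolution, step count, sampler, and batch size match.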
What I can say is that I (RTX 2060 6 GB, 32 GB RAM, Windows 11) get vastly better performance on SD Forge with Flux Dev compared to Comfy (using the recommended…)

Why are there such big speed differences when generating between ComfyUI, Automatic1111, and other solutions? And why is it so different for each GPU? A friend of mine, for example, is doing this on a GTX 960 (what a…)

ComfyUI's unique workflow is very attractive, but the speed on Mac M1 is frustrating.

If it allowed more control, then more people would be interested, but it just replaces dropdown menus and windows with nodes.

I'm starting to make my way towards ComfyUI from A1111.

ComfyUI is much better suited for studio use than the other GUIs available now.

UPDATE 2: I suggest, if you meant s/it, that you edit your comment, even though it will leave me looking…

Sorry to say that it won't be much faster, even if you overclock the CPU.

But you can achieve this faster in A1111, considering the workflow of ComfyUI.

Workflows are much more easily reproducible and versionable.

Forge is built on top of the A1111 web UI, as you said.

I am running ComfyUI on a machine with 2x RTX 4090 and am trying to use the ComfyUI_NetDist custom node to run multiple copies of the ComfyUI server, each using a separate GPU, to speed up batch generation.

All of that is easier and faster than rebuilding workflows, or reusing templates, or rebuilding templates when needed.

It also seems like ComfyUI is way too intense on using heavier weights on (words:1.2) and just gives weird results.

On my machine, Comfy is only marginally faster than 1111. Comfy is basically a backend with a very light frontend, while A1111 is a very heavy frontend.

The shots were all ComfyUI, with editing in Premiere.

The weights are also interpreted differently.

Is this more or less accurate? While obviously it seems like ComfyUI has a big learning curve, my goal is to actually make pretty decent stuff, so if I have to put the time investment into Comfy, that's fine with me.

The best part about it though >.< Nodes! eeeee! Because you can move these around and connect them however you want, you can also tell it to save out an image at any point along the way, which is great, because I often forget that stuff.

Don't know if inpainting works with SDXL, but ComfyUI inpainting works with SD 1.5 and 2.1.

I switched from Forge to Comfy yesterday, and I already love it.

SD.next is faster, but the results with the refiners are worse looking.

Sampling method on ComfyUI: LCM. CFG scale: from 1 to 2. Sampling steps: 4.

ComfyUI uses the CPU for seeding; A1111 uses the GPU.

Shouldn't you be able to reach the same-ish result faster if you just upscale with a 2x upscaler?

I switched to ComfyUI from A1111 last year and haven't looked back; in fact, I can't remember the last time I used A1111.

I did try using SDXL 1.0 on my RTX 2060 laptop (6GB VRAM) on both A1111 and ComfyUI. A1111 took forever to generate an image without the refiner, the UI was very laggy, and I removed all the extensions but nothing really changed, so the image always got stuck at 98%. I don't know why.

I wanted to get two environments running, b/c my workflow tends towards lots of isolated… (see the venv sketch below)
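For the isolated-environments crowd (and the "ComfyUI inside a Python venv" comment earlier): one venv per UI keeps their often-conflicting torch/xformers pins from stomping on each other. A sketch using only the standard library; the paths and app list are hypothetical and assume you have already cloned each app:

```python
# Create one isolated venv per UI and install its own requirements into it.
import subprocess
import venv
from pathlib import Path

for app in ("ComfyUI", "stable-diffusion-webui"):  # hypothetical checkout dirs
    env_dir = Path.home() / "sd-envs" / app
    venv.create(env_dir, with_pip=True)   # stdlib venv, nothing shared
    pip = env_dir / "bin" / "pip"         # use Scripts/pip.exe on Windows
    subprocess.run(
        [str(pip), "install", "-r", f"{app}/requirements.txt"],
        check=True,
    )
```

Each app is then launched with its own env's interpreter, so an update that breaks one UI's dependencies can't take the other down with it.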
When I first saw ComfyUI I was scared by how many options there are to set.

…but there are many anecdotes on this subreddit that ComfyUI is much faster than A1111, without much info to back it up.

The main difference between ComfyUI and Automatic1111 is that Comfy uses a non-destructive workflow.

Asked Reddit WTF is going on; everyone blindly copy-pasted the same thing over and over.

TBH, I am more interested in why the LoRA is so different.

This is why WSL performance on the virtualised ext file system is dramatically better than on the NTFS file system for some apps.

UPDATE: In Automatic1111, my 3060 (12GB) can generate a 20 base-step, 10 refiner-step 1024x1024 Euler a image in just a few seconds over a minute.

It's actually meant to be watched on a phone screen…

From there I can alter things one at a time, or again regional-prompt, switching checkpoints, samplers, apps, upscalers, and detailers, all in one tab.

Many here do not seem to be aware that ComfyUI uses massively lower VRAM compared to A1111. The speed difference is far more noticeable on lower-VRAM setups, as ComfyUI is way more efficient when it comes to using RAM and VRAM. If someone needs more context, please do ask.

As to how to get metadata from previous generations: I found a custom node that lets you save an image with the metadata ingrained in it, and there's another custom node to load an image, and it shows prompts, seed, etc.

Then go disable Hyperthreading in the UEFI.

SSD-1B: a distilled SDXL model that is 50% smaller and 60% faster without much quality loss!

I'm mainly using ComfyUI on my home computer for generating images.

I *don't use* the --cpu option, and these are the results I got using the default ComfyUI workflow and the v1-5-pruned-emaonly.ckpt model.

So it might be counterintuitive, but it's only getting as much loud negative attention as it is because it's getting just as much or more quiet love.

The node-based environment means it's…

"flux1-dev-bnb-nf4" is a new Flux model that is nearly 4 times faster than the Flux Dev version and 3 times faster than the Flux Schnell version.

Want to use latent space? Again, one button.

Plus, Comfy is faster, and with the ready-made workflows a lot of things can be simplified, and I'm learning what works and how from them. However, when SDXL was released it was most usable in ComfyUI, so I forced myself to use it, and I've never looked back.

The only thing is that I need to disable the --no-half option, and then it generates images as fast as in ComfyUI.

I keep hearing that A1111 uses the GPU to feed the noise-creation part, and ComfyUI uses the CPU.
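That seeding detail is easy to demonstrate outside either UI: PyTorch's CPU and CUDA random streams produce different numbers for the same seed, so the initial latent noise, and therefore the image, differs between the two approaches. A minimal sketch:

```python
# Same seed, different device, different noise -> different image.
import torch

seed = 42
shape = (1, 4, 64, 64)  # latent for a 512x512 SD 1.5 image

cpu_noise = torch.randn(shape, generator=torch.Generator("cpu").manual_seed(seed))

if torch.cuda.is_available():
    gpu_noise = torch.randn(
        shape,
        device="cuda",
        generator=torch.Generator("cuda").manual_seed(seed),
    )
    # The RNG streams don't match across devices, so this prints False.
    print(torch.allclose(cpu_noise, gpu_noise.cpu()))
```

CPU seeding has the side benefit that the same seed reproduces the same noise regardless of which GPU (or CPU) you run on.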
But then I realized: shouldn't it be possible (and faster) to link the output from one into the next instead? In this example I have the first 3 samplers executing once each, showing sample stages 1, 2, and 3, and the last one is unlinked, just doing a standard 3-step sample.

They've been trying to unify the community around ComfyUI for a while, and in that respect, lllyasviel trying to distance his project from ComfyUI while using their code must be like using their own hand to slap their face, since it undermines…

But I'm getting better results, based on my abilities / lack thereof…

Introducing "Fast Creator v1.4", a free workflow for ComfyUI.

I have no idea why that is, but it just is.

ComfyUI is amazing.

Then I tested my previous LoRAs with ComfyUI; they sucked as well. Is there anyone in the same situation as me?

Yes, I don't have a Mac to test on, so the speed is not optimized.

ComfyUI also uses xformers by default, which is non-deterministic. I think the noise is also generated differently, where A1111 uses the GPU by default and ComfyUI uses the CPU. So from that aspect, they'll never give the same results unless you set A1111 to use the CPU for the seed.

So before abandoning SDXL completely, consider first trying out ComfyUI! I recommend downloading rgthree's custom nodes; there's a feature that gives you a button to bypass or mute groups faster. Most of my work is inpainting, upscaling, Krita editing, rerunning, and model swapping to get very exact results.

Learn ComfyUI faster. Question: how can I proceed? I watched some videos and managed to install ComfyUI, but when I try to load workflows I found on the web, or install custom nodes, I get errors about missing nodes, and I can't install them from the manager addon.

It's just the nature of how the GPU works that makes it so much faster.

When I upload them, the prompts get automatically detected and displayed, but not the resources used.

A non-destructive workflow is a workflow where you can reverse and redo something earlier in the pipeline after working on later steps.

ComfyUI Weekly Update: Faster VAE, speed increases, early inpaint models, and more.

I just dropped in the Marigold depth, as it now has an LCM model that is much faster; if you don't want to deal with this, you can use any of the other depth estimators, or ideally render a depth pass from your 3D software and use that. Notably faster.

ComfyUI already has a detailer in the Impact Pack plugin.

I see a lot of stuff for running it in Automatic1111, but can it be used with ComfyUI?

I can't thank you enough.

Switching models takes forever in Forge compared to Comfy. How can I fix that? I'm on an 8GB RTX 2070 Super card.

What I see in tutorials and shared workflows over and over again is that people will first upscale their image with a 4x upscaler and then downscale it 0.5x.
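The usual answer to the earlier "why not just a 2x upscaler?" question is that, as commenters describe it, 4x models tend to hallucinate more detail, and the 0.5x resize keeps most of that detail while landing on a net 2x. The arithmetic, with a stand-in callable instead of a real model runner:

```python
# Net-2x upscale via a 4x model followed by a 0.5x resize; upscale_4x is a
# placeholder for whatever actually runs your ESRGAN-style model.
from PIL import Image

def upscale_2x_via_4x(img: Image.Image, upscale_4x) -> Image.Image:
    up = upscale_4x(img)  # hypothetical 4x model call
    return up.resize((up.width // 2, up.height // 2), Image.LANCZOS)

# Stand-in "model": plain Lanczos 4x, only to make the sketch executable.
fake_model = lambda im: im.resize((im.width * 4, im.height * 4), Image.LANCZOS)
src = Image.new("RGB", (512, 512))
print(upscale_2x_via_4x(src, fake_model).size)  # (1024, 1024)
```

With a real 4x model in place of the stand-in, the downscale step also slightly sharpens the result, which is the point of the trick.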
The only cool thing is you can repeat the same task from the…

ComfyUI takes 1:30s…

Comfy is maybe 10-15% faster, without having measured precisely.

I don't like ComfyUI, because IMO user-friendly software is more important for regular use.

Save up for an Nvidia card, and it doesn't have to be the 4090 one.

A "fork" of A1111 would mean taking a copy of it and modifying the copy with the intent of providing an alternative that can replace the original.

When ComfyUI just starts, the first image generation will always be fast (1 minute at best), but the second generation (with no changes to settings and parameters) and so on will always be slower, almost 1 minute slower. If I restart the app, it will be fast again, but once more the second generation and onwards will be slower.

I've tried everything: reinstalled drivers, reinstalled the app, and I still can't get WebUI to run quicker.

I started with A1111, switched for a few weeks to SD.next, then went back.

I have a question: how hard is it to learn ComfyUI? I started with Easy Diffusion and then moved to Automatic1111, but recently I installed ComfyUI and drag-and-dropped a workflow from Google (Sytan's workflow), and it is…

You can try launching it with:…

At the moment there are 3 ways ComfyUI is distributed: 1. Standalone: everything is contained in the zip; you could use it on a brand-new system.

Comfy does launch faster than Auto1111, though the UI will start to freeze if you do a batch or have multiple generations going at the same time.

On my rig it's about 50% faster, so I tend to mass-generate images on ComfyUI, then bring any images I need to fine-tune over to A1111 for inpainting and the like.

"(Composition) will be different between ComfyUI and A1111 due to various reasons."

Maybe I am doing it wrong, but ComfyUI inpainting is a bit awkward to use: you have to draw a mask, save the image with the mask, then upload it to the UI again to inpaint.

ComfyUI wasn't designed for AnimateDiff and long batches, yet it's the best platform for them, thanks to the community.

Even though I keep hearing people focus the discussion on the time it takes to generate the image (and yes, ComfyUI is faster; I have a 3060), I would like people to discuss whether the image quality is better in either.

At the end of the day, I'm faster with A1111: better UI shortcuts, a better inpaint tool, better copy/paste with the clipboard when you want to use Photoshop.

VFX artists are also typically very familiar with node-based tools.

I heard that ComfyUI generates faster.

I have tried it (a) with one copy of SDXL running on each GPU and (b) with two copies of SDXL running per GPU.
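Related to that 2x RTX 4090 / ComfyUI_NetDist experiment: even without a custom node, you can start one ComfyUI server per GPU and fan batches out to them over the HTTP API. A sketch assuming ComfyUI's stock main.py flags (--port and --cuda-device) and a hypothetical checkout path:

```python
# Launch one ComfyUI server per GPU: ports 8188 and 8189, devices 0 and 1.
import subprocess

procs = [
    subprocess.Popen(
        ["python", "main.py", "--port", str(8188 + gpu), "--cuda-device", str(gpu)],
        cwd="ComfyUI",  # hypothetical path to your ComfyUI checkout
    )
    for gpu in (0, 1)
]
for p in procs:
    p.wait()
```

A batch script can then round-robin POSTs to http://127.0.0.1:8188 and :8189, which is effectively what the multi-server custom nodes automate.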
I don't really care about getting the same image from both of them, but if you check closely, the Automatic1111 one is almost perfect (you don't have to know the model; it is almost real), while the ComfyUI one is as if I had reduced the LoRA weight or something.

Is there any website or YouTube video where I can get a full guide to its interface and workflow: how to create workflows for inpainting, ControlNet, and so on?

For those of you familiar with FL Studio, and specifically with Patcher, you might know what I'm about to describe. Basically, in Patcher you can string plugins together in much the same way as in ComfyUI. But one of the really cool things it has is a separate tab for a "Control Surface".

I had previously used ComfyUI with SDXL 0.9, and it was quite fast on my 8GB VRAM GPU (RTX 3070 Laptop).

What normal setting are you curious about?

I expect it will be faster.

I spent many hours learning ComfyUI, and I still don't really see the benefits.

Finally, drop that picture you generated back into ComfyUI and press generate again while checking the iterations per second.

I have an M1 MacBook Air with 8 GPUs (vs. the standard 7).

Question - Help: Hi, I am upscaling a long sequence (batch / batch count) of images…

Having used ComfyUI quite a bit, I got to try Forge yesterday, and it is great! It has been noticeably faster, unless I want to use SDXL + Refiner.

With my 8GB RX 6600, which was only able to run SDXL with SD.next (out of memory after 1-2 runs, and that on the default 1024x1024), I was able to use this in ComfyUI, BUT only at 512x512 or 768x512 / 512x768 (memory errors even…)

Turbo SDXL-LoRA: Stable Diffusion XL faster than light. My civitai page: https… A few seconds = 1 image. Tested on ComfyUI: workflow…

So from what I can tell, ComfyUI seems to be vastly more powerful than even Draw Things (which has a lot of configuration settings).

[Please Help] Why is a bigger image faster to generate? This is a workflow I made yesterday, and I've noticed that the second KSampler is about 7x faster, even…

What ComfyUI devs say and what people do with custom nodes are different things.

From what I gather, only A1111 and its derivatives can correctly append metadata like prompts, CFG scale, used checkpoints/LoRAs, and so on, while ComfyUI cannot, at least not the resources.
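One nuance to that last complaint: ComfyUI does embed its full graph as JSON in PNG text chunks, which is what makes drag-and-drop workflow loading possible; it just isn't the A1111-style "parameters" string that sites parse for used resources. A sketch of reading those chunks with Pillow; the filename is hypothetical:

```python
# Inspect the workflow ComfyUI embeds in its output PNGs.
import json
from PIL import Image

img = Image.open("output.png")  # hypothetical ComfyUI output file
meta = img.info                 # PNG text chunks land here via Pillow

if "workflow" in meta:          # full editor graph, as used by drag-and-drop
    workflow = json.loads(meta["workflow"])
    print(f"{len(workflow.get('nodes', []))} nodes in embedded workflow")

if "prompt" in meta:            # execution graph: node-id -> inputs mapping
    prompt_graph = json.loads(meta["prompt"])
    print(sorted(prompt_graph.keys())[:5])
```

So the seed, prompts, and model names are recoverable from the graph; it's the flat A1111 format that third-party sites expect that's missing.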
On the one hand, EXT is much faster for some operations; on the other, file corruption on NTFS is basically non-existent and has been for decades.

I always hated these node-based programming substitutes, because they just take sooo much longer to accomplish the same thing.

While the Kohya samples were very good, the ComfyUI tests were awful. I tested the failed LoRAs with A1111, and they were great.

I think ComfyUI remains far more efficient in loading when it comes to the model/refiner, so it can pump things out faster.

Hi :) I am using AnimateDiff in ComfyUI to output videos, but the speed feels very slow.

Do I have to use another workflow, or why are the images not rendered instantly, and why do I have these image issues? I provide here a link to the model from the civitai site, plus the result image and my ComfyUI workflow in a screenshot.

I switched to ComfyUI after Automatic1111 broke yet again for me after the SDXL update.

Also, if this is new and exciting to you, feel free to…

ComfyUI always says that its workflow describes how SD works, but it simply does not: it is how ComfyUI works, not how SD works.

I guess the GPU would be faster; I have no evidence, just a guess.

My system is more powerful than yours, but not enough to justify this enormous…

I have yet to find anything that I could do in A1111 that I can't do in ComfyUI, including X/Y/Z plots.

If it's 2x faster with hyperthreading enabled, I'll eat my keyboard.

When you build on top of software made by someone else, there are many ways to do it.

(word:1.1) in ComfyUI is much stronger than (word:1.1) in A1111.
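A toy illustration of that emphasis difference, assuming the commonly described behaviors: both UIs scale the weighted token's embedding, but A1111 then rescales the whole conditioning back toward its original mean magnitude, which softens the push, while ComfyUI leaves the multiply as-is. This is illustrative only, not either codebase's actual implementation:

```python
# Toy model of prompt-weight handling; numbers are fake CLIP embeddings.
import torch

tokens = torch.randn(77, 768)   # 77 tokens x 768 dims, like a CLIP sequence
weights = torch.ones(77)
weights[5] = 1.1                # "(word:1.1)" applied to token 5

comfy_style = tokens * weights[:, None]  # plain multiply, no renormalization

a1111_style = tokens * weights[:, None]
# restore the conditioning's original mean magnitude, diluting the emphasis
a1111_style *= tokens.abs().mean() / a1111_style.abs().mean()

print((comfy_style[5] - tokens[5]).norm(), (a1111_style[5] - tokens[5]).norm())
```

The un-renormalized version moves the weighted token further from its original embedding, which matches the observation that the same (word:1.1) reads as a stronger emphasis in ComfyUI.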