Stable Diffusion and AI stuff

Support, Discussion, Reviews
User avatar
Winnow
Super Poster!
Super Poster!
Posts: 27809
Joined: July 5, 2002, 1:56 pm
Location: A Special Place in Hell

Re: Stable Diffusion and AI stuff

Post by Winnow »



AI is still progressing at breakneck speed. At least five new models came out recently, ranging from video and image to audio and text models. All nice advancements. Huge progress in AI voice/emotion for local AI.

Just a little while ago I was gushing over OpenAI's image generator, which let you edit images using text commands while keeping the image mostly the same.

Fast forward to yesterday, and Black Forest Labs released FLUX Kontext.

Basically, it allows you to do all sorts of things with an image while keeping the image's context. The video above is worth a watch. Right now it's available only in the Pro/Ultimate models, but a local FLUX Kontext DEV model (like FLUX DEV before it) will be released, allowing this to be done locally.

It can modify a scene, extract things from an image, or change the text in an image while keeping the same style but with new text. If you check out the video, it can do things like "Take the pattern off the store window and make it a tattoo on a man's back."

It also allows you to expand and fill an image (inpainting/outpainting).

According to the charts near the end of the video, even the local FLUX Kontext DEV version is better than the commercial competition like ChatGPT in some areas. It's also fast: it looks like around 10 seconds for image2image using the Kontext DEV version and less than 3 seconds for the Pro version (vs. almost a minute for gpt-image).

The local version won't be the "best" version, but what matters is that it can run locally.
User avatar
Winnow
Super Poster!
Super Poster!
Posts: 27809
Joined: July 5, 2002, 1:56 pm
Location: A Special Place in Hell

Re: Stable Diffusion and AI stuff

Post by Winnow »

Quick AI update before going traveling.

AI is amazing and hasn't stopped progressing at all. Capabilities keep increasing at the same rate or faster.

As for video, the developments have come in the way of making the same videos faster with less VRAM. I can now make quality 7-second videos in about 90 seconds, which means those with less VRAM can make them too, just with slower processing times. Image to video is almost like magic. It still amazes me how the models figure out the movement of everything: hair, cloth, wind-blown stuff, etc. The key here is the model's ability to retain the facial features from a single still image and create a consistent video from it... and it's getting really good at that.

I won't post any guides. It still takes some effort, and it's not worth trying to show how to do it here for now.

I've been focusing on audio AI. Chatterbox is amazing. This extended version in particular:

https://github.com/petermg/Chatterbox-TTS-Extended

You can use any 8-20 seconds of any voice and it will reproduce that voice exactly, along with a setting for how much emotion you want.

It's really fucking good. This is "zero-shot" meaning you don't need to train it. Just give it the 8-20 second audio sample and you're off to the races.
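If you'd rather script it than use the UI, here's a minimal sketch using the upstream Resemble AI Chatterbox Python package (the extended repo above wraps roughly this in a web UI). The argument names follow the upstream README as best I recall, and the file paths are made up, so double-check against the repo:

Code: Select all

# Minimal zero-shot cloning sketch with the upstream Chatterbox package.
# The reference clip path and output name are placeholders; the extended
# repo adds batching, Whisper checks, and segment stitching on top of this.
import torchaudio
from chatterbox.tts import ChatterboxTTS

model = ChatterboxTTS.from_pretrained(device="cuda")

wav = model.generate(
    "Quick AI update before going traveling.",
    audio_prompt_path="reference_8_to_20_seconds.wav",  # your 8-20 second voice sample
    exaggeration=0.5,   # how much emotion/intensity to push
    cfg_weight=0.5,     # lower values slow the pacing down a bit
)
torchaudio.save("cloned_output.wav", wav, model.sr)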

Here are some examples of the same voice and then the zero-shot cloned voices at various levels of exaggeration:

https://resemble-ai.github.io/chatterbox_demopage/

The extended version (more capabilities) allows you to stitch segments together. I've been making roughly 15-minute stories, but I think you can go to around 30 minutes without issue.
------------------------------

Another really cool feature of the extended version is voice conversion. It allows you to take an audio file and convert the voice to the sample voice you provide. It changes the voice but not the accent, etc.

For example, I took the first chapter of the first Harry Potter audiobook. I provided a sample of a Japanese anime actress. I converted the entire first chapter in about 90 seconds (RTX 4090). The result was the actress's voice, but she sounded British and used the exact tone/inflection of the original male narrator. Same high quality as the "professional" narrator.

What this means is people like Spang can use their favorite trans voice actor and convert any audio he hears into that special trans actor's voice that he holds dear. For the rest of us, it means that if you have an audiobook and can't stand the voice of that "professional" voice actor, you can convert the entire book into a voice you do like, with zero loss in quality or voice emphasis/emotion.

If you also don't like their tone/emotion, you could regenerate it completely from just the text, setting the temperature/CFG, etc. for the voice of your choice, and create an entirely new narration.

For quality control, you can have it generate each sentence multiple times, then have Whisper check each line for accuracy; it will regenerate the line if it doesn't sound right. That takes a lot more time, but if you really want really high quality, it's capable of it.
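For the curious, the retry logic is roughly this shape (the extended repo handles it for you; synth_line below is a hypothetical stand-in for whatever TTS call you use, and it assumes the openai-whisper package is installed):

Code: Select all

# Sketch of generate-then-verify: transcribe each take with Whisper and
# regenerate if it drifts too far from the intended text.
from difflib import SequenceMatcher
import whisper

asr = whisper.load_model("base")

def synth_line(text: str, attempt: int) -> str:
    """Hypothetical stand-in: generate one line of TTS audio, return its file path."""
    raise NotImplementedError("call your TTS (e.g. Chatterbox) here")

def close_enough(expected: str, heard: str, threshold: float = 0.90) -> bool:
    return SequenceMatcher(None, expected.lower(), heard.lower()).ratio() >= threshold

def generate_checked(text: str, max_attempts: int = 3) -> str:
    wav_path = ""
    for attempt in range(max_attempts):
        wav_path = synth_line(text, attempt)
        heard = asr.transcribe(wav_path)["text"]
        if close_enough(text, heard):
            break  # this take is accurate enough, keep it
    return wav_path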
---------------------

This is a huge step. Voice AI is now able to reproduce emotion, and it's on the cusp of making it easy to break down a book/script into the individual voices and have the AI use as many voices as you want for the book... way better than those poor, soon-to-be-out-of-work voice actors can do. How many times have you (except Spang/trans stuff) been disgusted by a voice actor trying to emulate the opposite sex of what they are? No more. You will very soon be able to have high-quality, emotion-controllable voices for the narration and every character in the book. Badass. The tech is already there, but it will be another month or two before it's as easy to do as Chatterbox above. They're working on the model already.

I attached a zip file with an MP3 of this post in Graham Hancock's voice (can't attach MP3s directly). It's always more interesting if Graham Hancock is saying it!

Note: I upped the exaggeration just a tad, so Graham is speaking a little fast : )

It's worth a listen! Listen to it as you read along so you can evaluate its quality. I did change "Badass" to "It's badass, bitches!" in the audio version, and I also left in the sentence "It still takes some effort and not worth trying to show how to do it here for now," so that's on me, not the AI, for repeating what I said.
You do not have the required permissions to view the files attached to this post.
User avatar
Winnow
Super Poster!
Super Poster!
Posts: 27809
Joined: July 5, 2002, 1:56 pm
Location: A Special Place in Hell

Re: Stable Diffusion and AI stuff

Post by Winnow »

I've been messing around with the latest local video generator, WAN2.2. It's really good.

Spang photobombs EQ Cover Lady
eqcover.png
From a starting image of this, I used Kontext to make it realistic, then used WAN2.2 to create these. Note: I didn't directly use the first image. I could have kept her pose and clothes more accurate. Her side stance actually looks kinda strange to me. Sony probably didn't pay for a good artist!
eqphotob2.gif
click image to see gif animation
eqphotob1.gif
Spang in costume
A sorceress. A man leans in from the side and makes the peace sign. The sorceress hits the man with her staff knocking him out of the scene
Unfortunately these are high-quality videos converted to .gif and then optimized to be able to post on this forum. The original quality is really good.

It does text. I didn't put effort into this, so I didn't bother correcting Spang's T-shirt.
----------------------
Time:

Less than 60 seconds to make the EQ Cover Lady photorealistic and remove the red tail at her feet using FLUX Kontext.

Less than 2 minutes to create each video.
----------------------------
These are bad examples of how good WAN2.2 is, but they were quick tests and they made me laugh.

The key thing with WAN2.2 vs WAN2.1 is that it's a dual-model system: the first model sets up the scene and motion, and the second model refines it. This allows for higher-quality videos without requiring more VRAM. I still have to use all the tricks available to make it process quickly with 24GB of VRAM, but if you're willing to wait around 30 minutes, you can generate on as little as 8GB of VRAM. 2 minutes = fun, but if you really had something you wanted to animate or make a video of, at least you can with lower VRAM and some hoop jumping.
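Conceptually the hand-off looks something like the sketch below; the 4/3 step counts mirror the KSampler settings I describe later in this thread, and in practice ComfyUI manages the switch for you, this is just to show the idea:

Code: Select all

# Toy illustration of the WAN2.2 dual-model split: the high-noise model
# handles the early denoising steps (layout and motion), then the low-noise
# model takes over to refine detail.
TOTAL_STEPS = 7
HIGH_NOISE_STEPS = 4  # matches the 4-step HIGH / 3-step LOW setup later in the thread

def model_for_step(step: int) -> str:
    return "wan2.2_high_noise" if step < HIGH_NOISE_STEPS else "wan2.2_low_noise"

for step in range(TOTAL_STEPS):
    print(step, model_for_step(step))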

WAN2.2 is getting really good at following prompts. It also makes very few clipping errors (like the one in the second gif where her staff shows through her cloak).

TL;DR the EQ Cover lady doesn't like to be photobombed
You do not have the required permissions to view the files attached to this post.
User avatar
Winnow
Super Poster!
Super Poster!
Posts: 27809
Joined: July 5, 2002, 1:56 pm
Location: A Special Place in Hell

Re: Stable Diffusion and AI stuff

Post by Winnow »

After a week or so of WAN2.2 being refined in workflows etc, it's safe to say this model is amazing.

I understand our forum is working off World War II or early-2000s tech, so I can't upload short MP4 videos as examples; I'll just say what makes it great.

WAN2.2 is incredibly good at understanding real world physics. When you start a video with a still image, it understands shadows, wind physics, body physics extremely well.

It can also figure out how to make first-frame/last-frame videos flow (you provide both frames).

For example, the best example is a photo shoot because it's easy to imagine. You have a subject doing various poses. If you take one picture of a pose and another of a different pose, even if it's extremely different (standing/sitting/holding a prop, etc.), WAN2.2 will smoothly transition the video between the two images to make it seamless.

It understands context. Say a woman's hair is down in one image but in a ponytail in the next: WAN2.2 figures all this out and has the woman pull her hair up into a ponytail, and it all looks natural.

A more extreme example: say her hair is dry in one picture and wet in the next. WAN2.2 will figure out a way for that hair to get wet, maybe a random person pours a bucket of water on her head, etc. (unless you actually prompt how you want it to get wet, WAN2.2 will make something up).

Keeping with the photo shoot (for smut's sake, imagine it's a Playboy centerfold pictorial): say there are 10 photos. You can set first/last frame continuously between those 10 photos in, say, 6-second video steps. Run your workflow and you end up with a roughly 60-second, seamless video of that photo shoot with the model smoothly moving around and posing through all 10 positions.

If the first/last frame change is too extreme (say, on a beach in the first image and on a snow-covered mountaintop in the second), WAN2.2 will still make a smooth fade transition between the two, but of course it can't accomplish that kind of change in 6 seconds.

That was a first/last-frame example. You can also create an endless chain where you string together multiple steps with different prompts.

Example:

Prompt 1: A women wearing a Viking cosplay outfit waves at the viewer

Prompt 2: The woman walks over to a bar and picks up a drink

Prompt 3: The woman rips off her clothes and a donkey mounts her from behind (lora required)

Prompt 4: Zoom in on donkey's face smoking a cigarette

So, in the above scenario, you don't need to say "wearing a Viking cosplay outfit" in the second prompt, because the last frame of the first segment is used to start the second one. In the third prompt, WAN2.2 doesn't understand the concept of a donkey fucking a woman, so you would add a LoRA to that step so WAN2.2 can accomplish the task.

The above scenario would be 24 seconds long. Say you wanted to make the donkey scene longer: you would insert a prompt like "a donkey is having sex with a woman" after prompt 3, and that scene would then play out for 12 seconds. You wouldn't repeat prompt 3 itself, because her clothes are already ripped off and she's already mounted.

Obviously the above is hypothetical since there's no giraffe involved but you get the point.
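If you ever want to automate that chaining instead of doing it by hand in ComfyUI, the logic is just this; generate_i2v_segment is a hypothetical stand-in for your actual WAN2.2 run, and the prompts here are neutral placeholders rather than the ones above:

Code: Select all

# Hypothetical sketch of chaining prompted segments: the last frame of each
# rendered clip becomes the start image of the next, which is why later
# prompts don't need to restate the outfit or scene.
# Needs imageio[ffmpeg] (or pyav) installed to read frames from MP4s.
import imageio.v3 as iio

def generate_i2v_segment(start_image: str, prompt: str, num_frames: int = 101) -> str:
    """Stand-in for your WAN2.2 image-to-video run (ComfyUI workflow, script, etc.).
    Should return the path of the rendered MP4."""
    raise NotImplementedError("plug your own workflow in here")

prompts = [
    "A woman in a Viking cosplay outfit waves at the viewer",
    "The woman walks over to a bar and picks up a drink",
    "The woman raises the drink in a toast",
]

start_image = "start_frame.png"
segments = []
for i, prompt in enumerate(prompts):
    clip = generate_i2v_segment(start_image, prompt)  # ~6 seconds at 16 fps for 101 frames
    segments.append(clip)
    last_frame = list(iio.imiter(clip))[-1]           # grab the final frame of the clip
    start_image = f"segment_{i}_last.png"
    iio.imwrite(start_image, last_frame)

# Then join the segments (MP4Joiner, ffmpeg, etc.) into one continuous video.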

On a more fun side of things, you can take frames from movies and change the scene.



These aren't the best examples (the Star Wars one is pretty funny), but they give you an idea of how good WAN2.2 looks.

Since WAN2.2 is two models (high noise/low noise), it takes a little longer to make good videos. That 24-second video would take me about 8 minutes with 24GB of VRAM, and way longer with less VRAM. Do not buy anything at this point with less than 24GB of VRAM: guaranteed regret. At the very worst, 24GB will eventually be the low point that solutions are made for. The 5090 with 32GB is iffy. I'm still on the fence. I will probably wait for a 48GB VRAM option, but if the 5090 drops to within $200 or so of retail I might buy it.



Another amateur example of animating still images. From the comments: 16GB VRAM, Q5 GGUF model, each 5-second clip takes around 9-10 minutes.

It's not a continuous loop example like I described above so there are cuts. Plus it's slow motion which is kind of annoying.

On a 4090 that would take about 2 minutes per 5-second clip. WAN2.2 will keep the style (realistic or whatever art style you throw at it) without needing a LoRA.



Example of text to video with no initial image.

WAN2.2 is impressive. We are getting closer. You can now start to mess around with scenes of movies etc.
User avatar
Aslanna
Super Poster!
Super Poster!
Posts: 12547
Joined: July 3, 2002, 12:57 pm

Re: Stable Diffusion and AI stuff

Post by Aslanna »

Note: I didn't directly use the first image. I could have kept her pose and clothes more accurate. Her side stance actually looks kinda strange to me. Sony probably didn't pay for a good artist!
You could have spent 5 seconds to check because now you look like a dumbo. It was done by Keith Parkinson. Most would consider him to be a "good artist."

Interesting trivia from that wikipedia article was that Brad McQuaid gave Keith's eulogy.
Have You Hugged An Iksar Today?

--
User avatar
Winnow
Super Poster!
Super Poster!
Posts: 27809
Joined: July 5, 2002, 1:56 pm
Location: A Special Place in Hell

Re: Stable Diffusion and AI stuff

Post by Winnow »

Aslanna wrote: August 16, 2025, 8:58 pm
Note: I didn't directly use the first image. I could have kept her pose and clothes more accurate. Her side stance actually looks kinda strange to me. Sony probably didn't pay for a good artist!
You could have spent 5 seconds to check because now you look like a dumbo. It was done by Keith Parkinson. Most would consider him to be a "good artist."

Interesting trivia from that wikipedia article was that Brad McQuaid gave Keith's eulogy.
Not to besmirch the name of the artist, because everything art-related comes down to personal preference, but the EQ cover is not good art (to me). It's not horrible, but not good. The hands! If it were AI you'd be complaining about the hands, but because it's a "human" I guess they get a pass, along with many, many other bad-hand artists. (Refer to my post about comic book artists and hands, with plenty of examples.)
User avatar
Aslanna
Super Poster!
Super Poster!
Posts: 12547
Joined: July 3, 2002, 12:57 pm

Re: Stable Diffusion and AI stuff

Post by Aslanna »

The comment you made, and what I was responding to, was about the artist ("Sony probably didn't pay for a good artist!") and not the art. I wasn't commenting on that specific piece of art since art appreciation, as you state, is subjective. You may like it.. You may hate it.. But that doesn't necessarily make the artist bad. It's possible to separate the two.
Have You Hugged An Iksar Today?

--
User avatar
Winnow
Super Poster!
Super Poster!
Posts: 27809
Joined: July 5, 2002, 1:56 pm
Location: A Special Place in Hell

Re: Stable Diffusion and AI stuff

Post by Winnow »

I still liked the original loading screen for EQ. You can certainly pay for art and have the result be bad. I agree you can separate Sony being cheap from making bad decisions.

Sony (whatever division owned EQ) wasn't good at advertising. They got jack-stomped by WoW when it came out, even though EQ was the better game. Well, it was before some of the later expansions. Part of that was whiny people not liking more challenging games. EQ probably should have adapted, though, and figured out why they were losing people to the easier game. I liked the raids but did fall asleep multiple times due to their length.
User avatar
Winnow
Super Poster!
Super Poster!
Posts: 27809
Joined: July 5, 2002, 1:56 pm
Location: A Special Place in Hell

Re: Stable Diffusion and AI stuff

Post by Winnow »

EQ1.png
OCR LLMs are getting really good. I used qwen3-vl-4b for this.

I loaded the model in LM Studio (the Q4_K_M quant fits in under 6GB of VRAM).

It took about 5 seconds to analyze the image and provide the description. While the larger LLMs get better and better, so do the smaller ones.

That's quite a detailed description for a tiny LLM. It called her a warrior, but it described her actual appearance, staff, etc. correctly, and it guessed right on the dragon tail.

It knew it was the official cover art for EverQuest.
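If you'd rather do this from a script than from the LM Studio chat window, LM Studio can also expose an OpenAI-compatible local server (default port 1234), and a vision model loaded there will accept an image roughly like this; the model id is an assumption, so copy whatever id LM Studio actually shows for your download:

Code: Select all

# Sketch: ask a vision model loaded in LM Studio to describe an image via the
# local OpenAI-compatible server (Developer tab -> start server, port 1234).
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

with open("EQ1.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="qwen3-vl-4b",  # assumed id; use the exact one LM Studio lists
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in detail."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)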
You do not have the required permissions to view the files attached to this post.
User avatar
Winnow
Super Poster!
Super Poster!
Posts: 27809
Joined: July 5, 2002, 1:56 pm
Location: A Special Place in Hell

Re: Stable Diffusion and AI stuff

Post by Winnow »



I'm all about local AI but sometimes it's worth mentioning how ridiculously good the large cloud image generators are getting.

Nano Banana Pro was released a few days ago. You can actually try it for free at the lower resolutions.

I would encourage you to check out the linked video to see what image generators can do now. It's 34 minutes long but heavily bookmarked, so you can scroll across the timeline/thumbnails to see various examples.

I'm actually surprised Google allows real celebrities to be generated. I guess we have Elon Musk and Grok to thank for that. Fuck that guy for forcing Google not to generate black WWII Nazis and Native American US presidents. Haters gonna hate.

Specifically for real people, check the 5:34-9:17 timestamps.

It doesn't just generate images, it analyzes them. You can upload a chart and say "change X numbers," and it will not only correctly adjust the numbers but also the chart itself, which was just an image to begin with.

"Disassemble Gundam into model parts" was impressive as well.

Give it a geolocation and it will generate an image based on what's at that location

Give it a floorplan and it will create a photo based on the blueprints.

Colorize and translate pages of anime/comics etc.

A couple of things to note: two years ago we couldn't even do video, and images were basic and couldn't be edited for consistency, bad hands, etc. Bad hands aren't an issue anymore; you basically never see bad hands in new image models and videos.

This guy points out the remaining issues in some areas, so Spang can march down the street with his buddies declaring that Nano Banana Pro still can't take the entire ingredient list and fine print from a tube of toothpaste and integrate it into an image! Take that, AI!

Pro or con AI, it's worth checking out some of the frequent updates on what AI can do. Separate from this image-editing part, Gemini 3's text model can do crazy shit fast, like "Create a clone of Windows 11 including working versions of MS Paint, Word, and a browser, and make it HTML." It can do that in seconds from scratch: code everything and present you with what you asked for.

I already use apps that people created entirely with AI. Small ones, like an HTML-based video viewer where I can drag however many videos I want to the window and it will sync them so I can compare between generations. AI (now if you're more savvy, or later if you're a grandma) will make it so you don't need to look for apps that do what you want. You can ask AI to create exactly what you need, customized as you see fit.

But Spang doesn't like that. He wants you to be stuck doing what only certain individuals are trained for doing. If you are 70 and want to explore nature but can't fucking walk, Spang says too fucking bad, that's only for healthy people. No way an AI robot should be able to assist an elderly person to walk and see nature. And this is where Spang will start spouting out exceptions to the rule, just like all the crazy religions have done for centuries as technology advances: what they used to kill you for, they integrate now and say it's "god's will".

Man it's fun to rant!
User avatar
Winnow
Super Poster!
Super Poster!
Posts: 27809
Joined: July 5, 2002, 1:56 pm
Location: A Special Place in Hell

Re: Stable Diffusion and AI stuff

Post by Winnow »

gemini-Blackpink-BTS-01.png
"Create a selfie image of the members of Blackpink and BTS all posing together for the photo. "
Not bad. If you are familiar with these groups, it's easy to recognize all of the members. It's only 1K because it's free and I'm not paying for 4K.

They forgot Suga in the image! Hehe. I don't know if RM would be using an Apple phone either; pretty sure both Blackpink and BTS are sponsored by Samsung. We'll go ahead and assume he took the phone from a Make-A-Wish kid and took the picture as the kid's wish.

Also, gotta love AI. BTS and Blackpink can't mix: two different studios, and they have draconian rules about mingling with other groups. Although only a handful of people might want this (which is why AI is great), while these groups can't even talk to each other IRL, with AI you can, or soon will be able to, create a cameo song featuring both groups. Spang will lose his shit and then eat it, but really, who cares if someone wants to do that for their own entertainment?

It's hit and miss with celebrities. I asked for this:
Create an image of 1970's era Debbie Harry from Blondie posing for a picture with present day Peter Dinklage. They are dressed in futuristic science fiction costumes on set with cameras and crew in the background.
I won't post that one. It wasn't even close : )
You do not have the required permissions to view the files attached to this post.
User avatar
Winnow
Super Poster!
Super Poster!
Posts: 27809
Joined: July 5, 2002, 1:56 pm
Location: A Special Place in Hell

Re: Stable Diffusion and AI stuff

Post by Winnow »

Oh man, models never stop coming

Flux 2 released yesterday. It's a large, slow model. Noteworthy because it's Flux, but there are some better ones coming out in the next few days.

Qwen is releasing a 6B-parameter model along with their latest Qwen-Edit 2511.

6B is half the parameters of the original Flux, and the new Flux is 30B.

This 6B model looks really good and consistent for its size. On a 40xx/50xx you'll be making images almost instantly, under 5 seconds.

Qwen-Edit will be the primary go-to image editing model for local use, but their 6B model will be the go-to generator for most people because of its size.

Exciting times for AI. I made a 3 minute video last night using Illustrious/Qwen-Edit/Wan2.2

Illustrious (when I say Illustrious, I don't mean the base model; you use one of the thousands of fine-tuned checkpoints on CivitAI that use Illustrious as the base).

Illustrious is awesome for adherence to poses but the least realistic. Illustrious isn't mandatory, it just helps sometimes.

I take the Illustrious image and then use Anime2Real Lora using Qwen-Edit to convert that image to a realistic version keeping all the details/pose etc.

I then use first/last frame in Wan2.2 to string together my edited images for a smooth long video.

Once you know what you're doing and have things set up, AI becomes more of a creativity tool than a hassle... but that initial learning of ComfyUI and understanding of the bazillion settings takes some time. That said, there are always basic workflows available on CivitAI that you can use until you want to improve, and it keeps getting easier to do, so that's a good combo.
User avatar
Winnow
Super Poster!
Super Poster!
Posts: 27809
Joined: July 5, 2002, 1:56 pm
Location: A Special Place in Hell

Re: Stable Diffusion and AI stuff

Post by Winnow »

char1.png
cook1.png
street1.png
Some examples of anime2real

That's with no prompting, just using the lora and clicking the button. You can help the process by describing certain things.

I chose poor-quality images on purpose because it demonstrates that it doesn't matter: AI will figure most stuff out. The cooking one in particular is very low-res. In the street one you'll notice it ignored the watermark (although you could say "remove text" as part of the prompt if it did show up).


Not bad for a 20-second, one-click conversion. In the cooking one, you'll notice my image dimensions are different; AI will attempt to fill in the missing parts. Her left hand is behind the bowl; I could prompt "left hand behind bowl" if I didn't like the AI changing it to her holding the bowl. Look how funky her hand is in the anime cooking image; AI still did a good job.

It's a nice tool. After you do that, you can use Qwen-Edit (which this is) to change things like "Remove bowl. The woman is carving a turkey. Leave rest of image unchanged." Qwen-Edit will change her to carving a turkey. Have her look at the camera, change the background, add text to her apron ("World's Best Cook"), etc.

In the top image you could add a background, change her hair color, make her a man, make her black, whatever. Change the resolution to widescreen and say "woman is on stage, audience in background," set whatever time of day, change her pose, and so on.

Powerful tools to get exactly what you want.

You can also go in reverse, of course, and make a photo into a sketch, anime, or the style of any of thousands of artists.
You do not have the required permissions to view the files attached to this post.
User avatar
Spang
Way too much time!
Way too much time!
Posts: 4892
Joined: September 23, 2003, 10:34 am
Gender: Male
Location: Tennessee

Re: Stable Diffusion and AI stuff

Post by Spang »

The images on the right aren’t real. It’s just more slop.
For the oppressed, peace is the absence of oppression, but for the oppressor, peace is the absence of resistance.
User avatar
Winnow
Super Poster!
Super Poster!
Posts: 27809
Joined: July 5, 2002, 1:56 pm
Location: A Special Place in Hell

Re: Stable Diffusion and AI stuff

Post by Winnow »

This is the new Qwen model Z-Image Turbo:

https://comfyanonymous.github.io/ComfyU ... s/z_image/

All you need is the three models linked on that page; then download the PNG image on that page and drag it into ComfyUI (updated today) and you can use it locally.

It shows as using 21+ GB of VRAM for me, but I think that's because I have the CLIP (text encoder) loaded into VRAM as well, which is around 7.5GB, so figure around 13.5GB if that's loaded into system RAM instead. That's with the FP16 model. I'm sure an FP8 version will be available soon.

You can try it here:

https://huggingface.co/spaces/Tongyi-MAI/Z-Image-Turbo

That demo is on Hugging Face, so while images take 5 seconds locally, it depends on how many people are using the demo page.

Just type in a prompt and it will generate. Very impressive model for its size. It's only 6B parameters. I think it's amazingly good for that size plus it's uncensored.

This is a good image generator to start with if you're getting into ComfyUI: a brain-dead simple workflow with no special nodes needed. Qwen (Alibaba) is killing it. This 6B model will be the go-to for most people due to its size. A new Qwen-Edit 2511 will be released in the next few days, and Qwen-Edit is already the go-to for higher-end image generation and image editing. Wan2.2 is also Alibaba. The West comes out with a 32B-parameter Flux 2, and then Qwen humiliates them with a model more than four times smaller the next day. The best photorealistic image generator is probably Wan2.2: while Wan2.2 is a video model, its single images are amazing.
trmpbid2.jpg
a photograph of a rope bridge with Donald Trump and Joe Biden crossing it
trmpbid.jpg
a photograph of a movie theater with Donald Trump and Joe Biden sitting in seats. They are sharing a bucket of popcorn. dim lighting.
A couple I made to test censorship
You do not have the required permissions to view the files attached to this post.
User avatar
Winnow
Super Poster!
Super Poster!
Posts: 27809
Joined: July 5, 2002, 1:56 pm
Location: A Special Place in Hell

Re: Stable Diffusion and AI stuff

Post by Winnow »

wolv-spang.png
Closeup. Harley Quinn standing at a comic book convention. Her shredded T-shirt says "Daddy's lil Monster". Deadpool is standing on her right. Hugh Jackman as Wolverine is standing on her left. golden hour lighting. posing for selfie
Closeup. Harley Quinn standing at a comic book convention. Her shredded T-shirt says "Spang's my Daddy". Deadpool is standing on her right. Hugh Jackman as Wolverine is standing on her left. golden hour lighting. posing for selfie
Some spanking material for Spang.

I used Hugh Jackman's name for Wolverine; I didn't use an actress name for Harley, but she looks like Margot Robbie.

Not bad likenesses for no added LoRAs.
aunt-elden.jpg
Aunt Jemima, obese African woman, her apron says "Aunt Jemima". she is holding a bottle of syrup. She is pouring syrup from a bottle onto a white man's tongue. The syrup drips off the man's tongue onto the pancakes. Syrup covering tongue. The skinny white man is sitting at a table. A plate of pancakes is in front of the man. The man's dirty T-shirt says "I beat Elden Ring 1000 times"
These are on the lowest-quality/fastest setting. The model is going to be great. It's uncensored, and they are releasing the non-distilled base model, so LoRAs and fine-tunes are going to be outstanding.
strain.jpg
You do not have the required permissions to view the files attached to this post.
User avatar
Winnow
Super Poster!
Super Poster!
Posts: 27809
Joined: July 5, 2002, 1:56 pm
Location: A Special Place in Hell

Re: Stable Diffusion and AI stuff

Post by Winnow »



This guy made a video on how to make Z-Image LoRAs. He's the same guy that made the adapter, so he knows what he's talking about.

As with most AI stuff, once you make the first one the rest are easy. You just clone the previous job and change a few things. The biggest task is preparing your dataset (the images and captions).
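On the dataset side, most trainers just want a folder of images with a matching .txt caption per image. Something like this trivial helper (the folder name and trigger word are made up) stubs out the caption files for you to fill in or replace with auto-captions:

Code: Select all

# Create placeholder .txt captions next to each training image.
from pathlib import Path

dataset_dir = Path("datasets/my_character")   # assumed layout
trigger_word = "mycharacter"                  # hypothetical trigger token

for img in sorted(dataset_dir.iterdir()):
    if img.suffix.lower() not in {".png", ".jpg", ".jpeg", ".webp"}:
        continue
    caption = img.with_suffix(".txt")
    if not caption.exists():
        caption.write_text(f"{trigger_word}, ")  # finish the caption yourself
        print(f"created {caption.name}")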

Z-Image works well with Loras.
User avatar
Winnow
Super Poster!
Super Poster!
Posts: 27809
Joined: July 5, 2002, 1:56 pm
Location: A Special Place in Hell

Re: Stable Diffusion and AI stuff

Post by Winnow »

I actually learned something from that video.

1. Don't quantize the model if you have 24GB or more of VRAM.
2. Train at multiple sizes: 512/768/1024 (the trainer resizes them for you; you just need to add the initial quality images along with their .txt captions).

I was training every step at 1024 because I never make images smaller than that, and it adds more than an hour to the training. It looks like I'll be able to train in about 1 hour 15 minutes instead of 2 hours 40 minutes.

I'm also trying his suggested differential guidance. We'll see how that goes.

An hour and 15 minutes is much more manageable for trying out some LoRA styles/characters, etc.

Anyway, good video, made by the creator.
User avatar
Aslanna
Super Poster!
Super Poster!
Posts: 12547
Joined: July 3, 2002, 12:57 pm

Re: Stable Diffusion and AI stuff

Post by Aslanna »

I did do some AI stuff to do some stress testing on the 5090. Normal text-to-image generation didn't seem that stressful, but honestly I forgot how to do most things, and the ComfyUI UI changed since the last time I used it. I only did a ZIT (terrible acronym) thing. Also annoying is that when you load a workflow and the resources aren't there, Comfy will tell you what they are and where they need to be, and provide a download button, but afterwards you have to move things manually from your download directory. If it already has both pieces of information, why can't it just download them and put them where they need to be instead of going through the browser's download functionality and making you move them yourself later? Maybe there is the ability to do that, but I couldn't find it.

However, then I tried image to video (WAN2.2) and almost fried my GPU. I don't currently have curves set up on my fans, so I've just had them manually set at 35%. After a few minutes I smelt something that wasn't quite burning, but it was enough to let me know that things weren't as they should be. I quickly bumped all the fans to 100% and it was still running in the 50C range. Checking HWiNFO shows that my max power used on the 12VHPWR power connector hit 533 watts. I felt all the connectors and wires and nothing seemed overly hot. I'd check the connector on the internal side, but I hear it's not good to keep plugging and unplugging those, and things are running fine. I'd also want to think that if it got too hot it would have shut down on its own, but who knows!

Anyway, I didn't know any better and tried a 1-minute video, and I ran out of VRAM, so that was sad. HWiNFO also shows I ran out of physical+page file memory at some point, so that extra 96GB I have sitting outside the computer would have come in handy. I did do a shorter video, which worked, and it was interesting how it knew (or guessed correctly) what to animate and in a seemingly realistic way. Not quite to the point I would call it magic... but more like dark sorcery. The 'textures' did look plasticky though and didn't match the original image as much as I would have wanted. So... interesting, but not really something I'll spend much time doing until things become faster and more efficient. Maybe if I want to heat the house up I'll start it up.
Have You Hugged An Iksar Today?

--
User avatar
Winnow
Super Poster!
Super Poster!
Posts: 27809
Joined: July 5, 2002, 1:56 pm
Location: A Special Place in Hell

Re: Stable Diffusion and AI stuff

Post by Winnow »

Aslanna wrote: December 15, 2025, 6:29 pm I did do some AI stuff to do some stress testing on the 5090. Normal text-to-image generation didn't seem that stressful, but honestly I forgot how to do most things, and the ComfyUI UI changed since the last time I used it. I only did a ZIT (terrible acronym) thing. Also annoying is that when you load a workflow and the resources aren't there, Comfy will tell you what they are and where they need to be, and provide a download button, but afterwards you have to move things manually from your download directory. If it already has both pieces of information, why can't it just download them and put them where they need to be instead of going through the browser's download functionality and making you move them yourself later? Maybe there is the ability to do that, but I couldn't find it.

However, then I tried image to video (WAN2.2) and almost fried my GPU. I don't currently have curves set up on my fans, so I've just had them manually set at 35%. After a few minutes I smelt something that wasn't quite burning, but it was enough to let me know that things weren't as they should be. I quickly bumped all the fans to 100% and it was still running in the 50C range. Checking HWiNFO shows that my max power used on the 12VHPWR power connector hit 533 watts. I felt all the connectors and wires and nothing seemed overly hot. I'd check the connector on the internal side, but I hear it's not good to keep plugging and unplugging those, and things are running fine. I'd also want to think that if it got too hot it would have shut down on its own, but who knows!

Anyway, I didn't know any better and tried a 1-minute video, and I ran out of VRAM, so that was sad. HWiNFO also shows I ran out of physical+page file memory at some point, so that extra 96GB I have sitting outside the computer would have come in handy. I did do a shorter video, which worked, and it was interesting how it knew (or guessed correctly) what to animate and in a seemingly realistic way. Not quite to the point I would call it magic... but more like dark sorcery. The 'textures' did look plasticky though and didn't match the original image as much as I would have wanted. So... interesting, but not really something I'll spend much time doing until things become faster and more efficient. Maybe if I want to heat the house up I'll start it up.
nodes.png
Use 960x608 (WxH or HxW) and 81 frames for the video length (5 seconds). 101 frames is what I usually use, but start with 81 to test.

Use these HIGH/LOW models (they have built-in speed LoRAs): 4 steps HIGH, 3 steps LOW.

I'm trying to find a non-NSFW checkpoint, but I can't, so ignore that these are NSFW and use them (despite the samples, they can do normal SFW videos):

https://civitai.com/models/2053259?mode ... Id=2477539 (HIGH) 13.31 GB

https://civitai.com/models/2053259?mode ... Id=2477548 (LOW) 13.31 GB

Put the models in the models/checkpoints folder in ComfyUI.

I believe this model is both T2V and I2V, so you can use the same models for both. The linked ones should be the FP8 versions (you don't want the GGUF).

They are checkpoints as opposed to diffusion models, so you need to use the "Load Checkpoint" node instead of the Load Diffusion Model/UNet node. Only attach the model output; use separate nodes for CLIP and VAE, as shown in my screenshot. Double-click in empty space on the ComfyUI workflow and type "Load Checkpoint" to get the node. If you already have a workflow that works, you may only need to swap out the checkpoints and checkpoint node and change the KSampler settings.

Use my linked picture to see the checkpoints and my KSampler settings. It's 7 total steps; you'll see in the KSamplers that I have HIGH set to end at 4 steps, and then LOW takes over. Note: I'm using the Omega checkpoint in the image, but that's not available anymore; the ones I linked are very similar. Again, ignore the NSFW part for now. CFG 1 is correct in the KSamplers; you don't use negative prompts, and only positive prompts will be acted on.

This is really fast: it will be like 2 minutes or less per new video once you have the initial model loaded.

I highly recommend you try image to video. That's pretty much all I use. I create the starting image in Qwen or Z-Image, and then Wan2.2 has a nice starting point to begin the video.

In the image I posted above you'll see two places to input an image; that's first/last frame. If you look at the node to the left of the woman reading the flamevault, you can see that only the starting image is connected. If I wanted to do first/last frame, I would just connect that second Load Image node to "end_image" and it would force the video to end with that second image. Try something basic, like an image of someone standing and a second image of them sitting, with the prompt "The man sits in the chair." Wan2.2 will do the rest and move your character from standing to sitting, ending exactly at that second image as the last frame.

Note: for the VAE, wan_2.1 is correct. Even though Wan2.2 is the model, it still uses the 2.1 VAE.

I'll try to find a SFW model later, and maybe I can find a simple workflow (mine are all convoluted messes I created that make sense to me but aren't organized). Your 5090 should breeze through making 81- or 101-frame videos with this model and these settings.

Once you finally get the initial workflow working, making videos is fun and easy. To extend videos, just take the last frame and plug it into the first Load Image node, then place a new end image. You don't need a last image to extend, though; you can let it do what it wants starting from the last frame of the previous video.
nodes2.png
If you want to automatically save the last image of a video, you can use these nodes; that way you can just cut and paste the last frame back into the Load Image node to start the next segment.

Use MP4 joiner to combine the segments easily in seconds.

https://www.mp4joiner.org/en/

Make sure you're saving your Wan videos in h264 MP4 format (don't use gif or webp, etc.).
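If you'd rather script the joining step, ffmpeg's concat demuxer does the same thing as MP4Joiner; this assumes ffmpeg is on your PATH and that all segments came out of the same workflow (same codec settings), so the streams can be copied without re-encoding:

Code: Select all

# Join rendered Wan segments into one MP4 without re-encoding.
import subprocess
from pathlib import Path

segments = ["seg_01.mp4", "seg_02.mp4", "seg_03.mp4"]  # your clips, in order

list_file = Path("segments.txt")
list_file.write_text("".join(f"file '{s}'\n" for s in segments))

subprocess.run(
    ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
     "-i", str(list_file), "-c", "copy", "joined.mp4"],
    check=True,
)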
You do not have the required permissions to view the files attached to this post.
Last edited by Winnow on December 15, 2025, 9:18 pm, edited 4 times in total.
User avatar
Winnow
Super Poster!
Super Poster!
Posts: 27809
Joined: July 5, 2002, 1:56 pm
Location: A Special Place in Hell

Re: Stable Diffusion and AI stuff

Post by Winnow »

Another short note.

nvidia/nemotron-3-nano

It just came out today. It's really fast; the Q4 is about 24GB, I think.

If you haven't already, just download LM Studio:

https://lmstudio.ai/

Change it to the white theme in the upper right corner; I hate dark theme, although that's personal preference.

Then search for nvidia/nemotron-3-nano inside LM Studio, download it, and you're ready to go (it's probably already at the top of the "Staff Picks" list since it came out today).

It's big but also fast, and it will make good use of the 5090. It will handle large context sizes, but that's dependent on VRAM. LM Studio defaults to a 4096 context (in the model settings), but you can try double or quadruple that. Basically, check your VRAM in Task Manager (Performance > GPU); as long as you aren't going over 32GB you can increase the context size more. (It just makes the model remember more of the conversation/chat/inquiries you are having with it for reference, so it doesn't need to be maxed or anything.)
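If you'd rather watch VRAM from a terminal than from Task Manager while you experiment with context sizes, something like this works (it assumes the NVIDIA driver's nvidia-smi tool is on your PATH):

Code: Select all

# Quick VRAM check while tuning context size.
import subprocess

out = subprocess.run(
    ["nvidia-smi", "--query-gpu=memory.used,memory.total", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())  # e.g. "21345 MiB, 32768 MiB"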

Also make sure that all layers are being offloaded to the GPU in the model settings (54/54, I think, for this particular model). You don't want anything in system RAM.
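Once a model is loaded, LM Studio can also serve it over an OpenAI-compatible local API (same setup as the vision example earlier in the thread), so you can hit it from code; the model id below is an assumption, use whatever id LM Studio lists:

Code: Select all

# Sketch: chat with the locally loaded model through LM Studio's local server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="nvidia/nemotron-3-nano",  # assumed id; copy the exact one from LM Studio
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize what a KSampler does in ComfyUI."},
    ],
    temperature=0.7,
)
print(response.choices[0].message.content)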
User avatar
Winnow
Super Poster!
Super Poster!
Posts: 27809
Joined: July 5, 2002, 1:56 pm
Location: A Special Place in Hell

Re: Stable Diffusion and AI stuff

Post by Winnow »

nodes3.png
Also, if you haven't already, set preview method to: Auto

This allows the KSamplers to show previews of images or videos being generated. It's super helpful, as you can cancel a generating image or video in the middle of it if it doesn't look like it's doing what you want.

Those big black areas below my two ksamplers in the first image will show the video preview as it progresses.
You do not have the required permissions to view the files attached to this post.
User avatar
Aslanna
Super Poster!
Super Poster!
Posts: 12547
Joined: July 3, 2002, 12:57 pm

Re: Stable Diffusion and AI stuff

Post by Aslanna »

Winnow wrote: December 15, 2025, 10:12 pm
nodes3.png
Also, if you haven't already, set preview method to: Auto
Yeah, that's the UI I was familiar with. I am using ComfyUI 4.0 (well, technically v0.4.0). 3.37 was a long time ago! I don't have that Preview Method option. It's all topsy-turvy! In a way it almost seems less functional than it was previously, but I'm sure that's just my inexperience with it.

The video I was generating was 1024x1024, because that was the size of the original image, so that may have had something to do with running out of memory. It was ok for 30 seconds though. I did resize the image to 512x512 and try again but the quality just wasn't there when compared to the previous one.

It's not high on my list at the moment. I'll probably work on getting all 4 sticks of RAM stable first.
Have You Hugged An Iksar Today?

--
User avatar
Winnow
Super Poster!
Super Poster!
Posts: 27809
Joined: July 5, 2002, 1:56 pm
Location: A Special Place in Hell

Re: Stable Diffusion and AI stuff

Post by Winnow »

Aslanna wrote: December 15, 2025, 11:05 pm
Winnow wrote: December 15, 2025, 10:12 pm nodes3.png

Also, if you haven't already, set preview method to: Auto
Yeah, that's the UI I was familiar with. I am using ComfyUI 4.0 (well, technically v0.4.0). 3.37 was a long time ago! I don't have that Preview Method option. It's all topsy-turvy! In a way it almost seems less functional than it was previously, but I'm sure that's just my inexperience with it.

The video I was generating was 1024x1024, because that was the size of the original image, so that may have had something to do with running out of memory. It was ok for 30 seconds though. I did resize the image to 512x512 and try again but the quality just wasn't there when compared to the previous one.

It's not high on my list at the moment. I'll probably work on getting all 4 sticks of RAM stable first.

Yeah, you don't want to make your videos at that high a resolution. 960x608 (or 784x784 square) was the sweet spot I found. For the videos I post on CivitAI, I upscale 2x in Topaz Video AI (1920x1216), make them 60fps, and apply whichever is the weakest AI enhancement option (Proteus, I think). It only takes a few seconds to upscale a 6-second video (101 frames at 16 FPS) and less than a minute for a longer first/last-frame one of 24 seconds or so. I tend to keep my videos under 30 seconds because any longer and CivitAI doesn't autoplay them.

You don't want to make your video much longer than 6 seconds (101 frames), as things start to repeat; Wan2.2 was trained on 5-8 second clips. It's much better to use first/last frame and stitch multiple 5-6 second videos together. It also gives you more control, as you can direct each segment, use different LoRAs, change the prompt, etc.

Go much larger than some combination around 960x608 (which is about half of 1920x1216) and your generation times start getting a lot longer. It's way easier to upscale later if needed.
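For reference, the arithmetic behind those numbers (Wan2.2 renders at 16 fps, and the Topaz pass is a straight 2x upscale):

Code: Select all

# Segment length and upscale target used above.
FPS = 16
for frames in (81, 101):
    print(f"{frames} frames ~ {frames / FPS:.1f} seconds")  # 5.1 s and 6.3 s

w, h = 960, 608
print(f"2x upscale: {2 * w}x{2 * h}")  # 1920x1216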

I think I made it as high as #12 on CivitAI for videos (based on the thumbs-up rankings they have) in the humor category. I've dropped off since then, though, as they track the votes over a rotating window, maybe 30 days or even weekly.

It sounds like you're using the ComfyUI desktop app or something. They've been messing with the UI even for the portable version lately, making some bad UI decisions. I'm sure 3.37 is old. I don't use Manager to install nodes. I download them from GitHub myself, then update them manually via

Code: Select all

python_embeded\python.exe -m pip install -r ComfyUI\custom_nodes\ComfyUI-whatever the node is you're updating\requirements.txt
Manager doesn't work with a portable version of ComfyUI, but I wanted to keep ComfyUI separate so that its Python/CUDA etc. stay separate from my system versions. I also have triton and sage-attention installed, both a bitch to mess with, which increase speed. With a 5090 your speeds should already be good (relative to the masses).

Definitely try to figure out how to turn on KSampler previews, as it's very useful to see what's going on and to save time by cancelling videos that aren't looking good or aren't what you were expecting, so you can either regenerate or change the prompt a bit.

If the version you're using has the .bat file, this might work: edit the "run_nvidia_gpu.bat" file and add "--preview-method auto" to the end of the launch line.

It will show the steps in the KSampler panel, at the bottom.
User avatar
Winnow
Super Poster!
Super Poster!
Posts: 27809
Joined: July 5, 2002, 1:56 pm
Location: A Special Place in Hell

Re: Stable Diffusion and AI stuff

Post by Winnow »

nodes-front.png
This is what I'm currently running for ComfyUI. 0.3.76 is close to the latest, if not the latest; the latest frontend is 1.36.2.

That's the standalone build. The frontend updates every couple of days. Since they are messing around with it so much, I wait a bit to update. I update the backend even less, only when I have to in order to support a new model (like Qwen-Edit 2511, coming out in the next few days, which will require me to update).

Don't use Python 3.13; it has issues with a lot of things. Python 3.10 to 3.12 is best.
PyTorch 2.8.0+cu128 or 2.9.0+cu129 are good for 50xx cards. I have cu128 and I'm not touching it unless I have to, as I'd need to recompile triton and sage-attention.

When it comes to ComfyUI, if it's working, don't mess with it unless you have to. Sometimes you do, as new stuff might require a newer version of PyTorch, etc.
You do not have the required permissions to view the files attached to this post.
User avatar
Aslanna
Super Poster!
Super Poster!
Posts: 12547
Joined: July 3, 2002, 12:57 pm

Re: Stable Diffusion and AI stuff

Post by Aslanna »

All those words you are saying.. That's all beyond me at the moment. I use Stability Matrix (Windows version) and used that to install the ComfyUI package. I haven't installed WSL on this computer yet. So this was the easiest path to get things installed again.

I did get around to setting up my fan curves so they (cpu cooler, rear fan, gpu fans) are all 35% at idle and ramp up based on the GPU temp. Right now they are all going to 100%, which is a bit loud, but I'll do some monitoring and see if I can drop that some.

https://github.com/comfyanonymous/Comfy ... tag/v0.4.0

I don't know how to post screenshots, but my UI is 0.4.0 and the frontend is 1.33.13. I don't have Manager or EasyUse.

Python is 3.12.11 and Pytorch is 2.9.1+cu128. Whatever that means!
Have You Hugged An Iksar Today?

--
Post Reply