AI [AI Art] - Show Us Your AI Skill *NO TEENS*

4.80 star(s) 6 Votes

Elefy

Member
Jan 4, 2022
241
947
I lock it to that face and generate a bunch of images from different angles
Try inpainting only the body a number of times, choose one, and send it to D-ID; make a video and grab the good frames. Done!

After you've made the first LoRA, trained on 2-3 (or some other small number of) images, you load it in txt2img, generate 100+ images, and pick the good ones for a second round of training.
View attachment friendly-sky.mp4
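The "generate 100+ and pick the good ones" step above can be scripted against the AUTOMATIC1111 web UI API rather than clicked through by hand. A minimal sketch, assuming a local instance started with `--api`; the LoRA name `Alice123`, the prompt, and the 0.8 weight are placeholders, not anything from the post:

```python
import json

# Default local address of the AUTOMATIC1111 web UI (an assumption).
API_URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"

def build_txt2img_payload(prompt, lora_name, batch_count=100, seed=-1):
    """Build a txt2img request that loads the first-round LoRA via the
    <lora:name:weight> prompt syntax, so 100+ candidates can be generated
    and the best hand-picked for the second training round."""
    return {
        "prompt": f"{prompt} <lora:{lora_name}:0.8>",
        "negative_prompt": "blurry, deformed",
        "steps": 25,
        "n_iter": batch_count,  # how many images to generate
        "batch_size": 1,
        "seed": seed,           # -1 = new random seed per image
    }

payload = build_txt2img_payload("portrait of a woman, studio light", "Alice123")
print(json.dumps(payload, indent=2))

# To actually generate (needs the web UI running with its API enabled):
# import requests
# r = requests.post(API_URL, json=payload, timeout=600)
# images = r.json()["images"]  # base64-encoded PNGs to sort through
```

The picking itself stays manual, which is the point of the workflow: the second training set should only contain the renders where the face held.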
 

atheran

Member
Feb 3, 2020
355
2,759
Try inpainting only the body a number of times, choose one, and send it to D-ID; make a video and grab the good frames. Done!
Good idea, but WAY too mechanical. I suppose I could do it with inpaint, but that's too much work :D

EDIT: Oh! Now that's a good idea. Multiple trainings on an ever-growing library of renders. Somehow I thought I'd train it only once.
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,611
3,863
Can't you do this with Live Pose in ControlNet?
I'd argue we've never seen a complete "fly around" of a character, and hence I firmly believe this cannot be done.

Here is the context. I bet in DAZ/Blender/HS2 you can easily make a video showing a spinning character while you change the environments and clothes. I trust this is what everyone wants to get from AI/SD and this is the big goal. This is truly the distilled ask.

And while I watch the ControlNet videos, it feels like their end results fall way short of this, although the tech is promising.

But it will probably take SD 3.0 or somesuch, together with the next crop of ControlNet-like tech, before we are there.
 

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,808
I'd argue we've never seen a complete "fly around" of a character, and hence I firmly believe this cannot be done.

Here is the context. I bet in DAZ/Blender/HS2 you can easily make a video showing a spinning character while you change the environments and clothes. I trust this is what everyone wants to get from AI/SD and this is the big goal. This is truly the distilled ask.

And while I watch the ControlNet videos, it feels like their end results fall way short of this, although the tech is promising.

But it will probably take SD 3.0 or somesuch, together with the next crop of ControlNet-like tech, before we are there.
What about charturner?
 
  • Like
Reactions: Sepheyer

Sepheyer

Well-Known Member
Dec 21, 2020
1,611
3,863
What about charturner?
Indeed, applies to CharTurner too. Once I see a video of a finished product: a chara spinning as outfits and environments change I'll be a believer. Right now, based on my understanding of how SD works, the character is materially affected by changing the environment and their outfit.

We will probably arrive at a hybrid first: you make a model in, say, DAZ; she becomes your seed, i.e. Alice123; you spin her and generate images; then you train a model and bring her back in via the tag [Alice123].
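The hybrid idea above mostly comes down to binding every DAZ render to the character's trigger tag before training. A sketch of that dataset-prep step, assuming the common sidecar-caption convention used by kohya-style LoRA trainers (one `.txt` caption per image); the folder name and the extra tags are made up for illustration:

```python
from pathlib import Path

TRIGGER = "Alice123"  # the seed name that later recalls the character

def write_captions(render_dir: str, extra_tags: str = "woman, full body") -> int:
    """For every .png render, write a sidecar .txt caption that starts
    with the trigger token, binding the whole concept to one tag."""
    count = 0
    for img in Path(render_dir).glob("*.png"):
        img.with_suffix(".txt").write_text(f"{TRIGGER}, {extra_tags}")
        count += 1
    return count

# n = write_captions("renders/alice_turnaround")
# Afterwards you prompt the trained model with: "Alice123, <any setting/outfit>"
```

Since the DAZ renders can cover the full turnaround from every angle, the trained tag should survive environment and outfit changes better than a purely SD-grown character.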
 
Last edited:
  • Like
Reactions: Jimwalrus

Mr-Fox

Well-Known Member
Jan 24, 2020
1,401
3,808
Indeed, applies to CharTurner too. Once I see a video of a finished product: a chara spinning as outfits and environments change I'll be a believer. Right now, based on my understanding of how SD works, the character is materially affected by changing the environment and their outfit.
I see. (y)
 

Sepheyer

Well-Known Member
Dec 21, 2020
1,611
3,863
And of course, the moment I say something like "XYZ is still far, far away," the "3D mesh" generators start becoming a thing:

 

baloneysammich

Active Member
Jun 3, 2017
998
1,542
Has anyone played around with using 3D posing to help direct generation?
 