Latent Vision
  • Videos: 29
  • Views: 845,464
Advanced Style transfer with the Mad Scientist node
We are talking about advanced style transfer, the Mad Scientist node, and Img2Img with CosXL-edit. Upgrade the IPAdapter extension to be able to use all the new features. Workflows are available in the examples directory.
Discord server: discord.com/invite/W2DhHkcjgn
Github sponsorship: github.com/sponsors/cubiq
Support with paypal: www.paypal.me/matt3o
Twitter: cubiq
00:00 Intro
00:23 Style Transfer Precise
02:03 Mad Scientist Node
05:35 Advanced Blocks Tweaking
07:27 CosXL Edit
Views: 10,258

Videos

Dissecting SD3
Views: 14K · 16 hours ago
How does SD3 work? Is it any good? No drama, no politics, only the technical side of things. The SD3 Negative node is part of the Comfy Essentials: github.com/cubiq/ComfyUI_essentials Free SD3 generations at OpenArt: openart.ai/create?ai_model=stable-diffusion-3-sd3 Discord server: discord.com/invite/W2DhHkcjgn Github sponsorship: github.com/sponsors/cubiq Support with paypal: www.paypal.me/mat...
Higher quality images by prompting individual UNet blocks
Views: 14K · 14 days ago
This time we are going to do some R&D and I will need your help to reverse engineer the UNet. Basically prompting each block of the UNet separately with a dedicated prompt we are able to get higher quality generations. Extension repository: github.com/cubiq/prompt_injection Discord server: discord.com/invite/W2DhHkcjgn Github sponsorship: github.com/sponsors/cubiq Support with paypal: www.paypa...
About AI, Art, Ethics and the environment
Views: 7K · 21 days ago
Not a tutorial, but this is something I've wanted to talk about for a while: the ethics of using AI, the environmental costs, and is AI art? The subtitles are hand edited and corrected. What do you think? Discord server: discord.com/invite/W2DhHkcjgn Github sponsorship: github.com/sponsors/cubiq Support with paypal: www.paypal.me/matt3o Twitter: cubiq TED Talk by Sasha Luccioni: www.ted.co...
How to use Face Analysis to improve your workflows
Views: 11K · 1 month ago
I often use Face Analysis in my workflows but we never actually talked about how it works. Here's all you need to know. Remember to upgrade the extensions, these are all new features! Check my Discord for the workflows, they are all free for everybody to use. Discord server: discord.com/invite/W2DhHkcjgn Github sponsorship: github.com/sponsors/cubiq Support with paypal: www.paypal.me/mat...
How to use PuLID in ComfyUI
Views: 25K · 1 month ago
In this video I'm going through some basic PuLID usage and also comparing it to other face models. If you already have it installed remember to upgrade the extension! PuLID ComfyUI extension: github.com/cubiq/PuLID_ComfyUI Face Analysis node: github.com/cubiq/ComfyUI_FaceAnalysis Github sponsorship: github.com/sponsors/cubiq Support with paypal: www.paypal.me/matt3o Twitter: cubiq M...
Animation with weight scheduling and IPAdapter
Views: 27K · 1 month ago
About time we talked about animations again! I just released new nodes for IPAdapter and the Essentials that make scheduling IPAdapter, prompts and ControlNet very easy and efficient. Workflows: f.latent.vision/download/scheduled_weights.zip Github sponsorship: github.com/sponsors/cubiq Support with paypal: www.paypal.me/matt3o Twitter: cubiq My Discord server: discord.com/invite/W2DhHkcj...
All new Attention Masking nodes
Views: 21K · 2 months ago
I just pushed an update to simplify attention masking and regional prompting with IPAdapter. Be sure to upgrade the IPAdapter and the ComfyUI Essentials to get access to all the new features. The Essentials can be found here: github.com/cubiq/ComfyUI_essentials Download the workflow: f.latent.vision/download/new_attention_masking.zip Github sponsorship: github.com/sponsors/cubiq Support with pay...
Become a Style Transfer Master with ComfyUI and IPAdapter
Views: 24K · 2 months ago
This time we are going to: - Play with coloring books - Turn a tiger into ice - Apply a different style to an existing image Github sponsorship: github.com/sponsors/cubiq Support with paypal: www.paypal.me/matt3o Discord server: discord.com/invite/W2DhHkcjgn All the workflows can be downloaded here no strings attached: f.latent.vision/download/style_transfer.zip The SDXL lineart controlnet: hug...
Style and Composition with IPAdapter and ComfyUI
Views: 27K · 2 months ago
IPAdapter Extension: github.com/cubiq/ComfyUI_IPAdapter_plus Github sponsorship: github.com/sponsors/cubiq Paypal: www.paypal.me/matt3o Discord server: discord.com/invite/W2DhHkcjgn 00:00 Intro 00:26 Style Transfer 03:05 Composition Transfer 04:56 Style and Composition 07:42 Improve the composition 08:40 Outro
IPAdapter v2: all the new features!
Views: 67K · 3 months ago
I updated the IPAdapter extension for ComfyUI. It's a complete code rewrite so unfortunately the old workflows are not compatible anymore and need to be rebuilt. Sorry about that but I don't have time to maintain old code. IPAdapter Extension: github.com/cubiq/ComfyUI_IPAdapter_plus Sponsor the development of my extensions: www.paypal.me/matt3o Discord server: discord.com/invite/W2DhHkcjgn 00:0...
Build Your Own ComfyUI APP!
Views: 18K · 3 months ago
This time we are getting our hands dirty with code! I wanted to show you how easy it is to build custom web applications with ComfyUI and absolutely no knowledge of Python. Let me know if you'd like more of this kind of content! Comfy Dungeon: github.com/cubiq/Comfy_Dungeon The FastGen extension can be downloaded from here: f.latent.vision/download/fastgen.zip Discord server: discord.com/invite...
Variations with noise injection KSampler (in pills)
Views: 8K · 3 months ago
This is a kind of experiment I'm doing... I try to pack my videos with a lot of information and sometimes it might feel overwhelming. I was thinking maybe you'd also appreciate shorter videos dedicated to just one concept or even just one node. Please let me know what you think and if you'd like more videos of this kind. Discord server: discord.com/invite/W2DhHkcjgn Complete video about imag...
InstantID: Everything you need to know
Views: 52K · 4 months ago
InstantID is a style transfer tool targeted at portraits. It's incredibly easy to create a composition in a specific style. In this video I'm showing you how to improve the likeness, how to make a scene with multiple people, and much more! InstantID Extension: github.com/cubiq/ComfyUI_InstantID Face Analysis Extension: github.com/cubiq/ComfyUI_FaceAnalysis Generic Workflows: github.com/cubiq/Com...
ComfyUI: Advanced understanding Part 2
Views: 29K · 4 months ago
This is Part 2 of my basics series. Last time we learned how to set the conditioning to the whole scene, time to see how to make localized changes. I'm also talking about LCM, Math Nodes and other big and small tricks! As always do let me know what you think and if I should keep releasing "basics" tutorials or you prefer more advanced stuff. Discord server: discord.com/invite/W2DhHkcjgn 00:00 I...
Making Trading Cards with ComfyUI
Views: 11K · 4 months ago
Throwing data to your face (models)!
Views: 17K · 5 months ago
ComfyUI: Advanced Understanding (Part 1)
Views: 67K · 5 months ago
FaceID Take 2! Even more face models! (IPAdapter+ComfyUI)
Views: 37K · 5 months ago
FaceID: new IPAdapter model
Views: 49K · 6 months ago
Jellyfish Ballerina Animation with AnimateDiff
Views: 15K · 6 months ago
Image stability and repeatability (ComfyUI + IPAdapter)
Views: 56K · 6 months ago
Animations with IPAdapter and ComfyUI
Views: 33K · 6 months ago
Infinite Variations with ComfyUI
Views: 16K · 7 months ago
Attention Masking with IPAdapter and ComfyUI
Views: 41K · 7 months ago
Upscale from pixels to real life
Views: 12K · 7 months ago
From real to anime (with IPAdapter and ComfyUI)
Views: 15K · 7 months ago
ComfyUI IPAdapter Advanced Features
Views: 31K · 8 months ago
How to use IPAdapter models in ComfyUI
Views: 93K · 8 months ago

Comments

  • @SouthbayCreations
    @SouthbayCreations 7 hours ago

    I keep getting this error "Prompt outputs failed validation", any ideas? Tried searching your Discord but no luck.

  • @brunoribeirorodrigues5946
    @brunoribeirorodrigues5946 9 hours ago

    Why doesn't the IPAdapter Advanced node work in my case? Without CLIP Vision it fails, and when I plug in a CLIP Vision model I get: Error(s) in loading state_dict for Resampler: size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1664]).
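A likely cause (my assumption; the thread doesn't confirm it): the IPAdapter checkpoint was trained against CLIP ViT-H image embeddings (1280-dim), while the Resampler was built for a ViT-bigG CLIP Vision model (1664-dim), so the proj_in shapes disagree. A minimal sketch of the kind of shape check that produces such a message; `check_shapes` is an illustrative helper, not ComfyUI code:

```python
# Illustrative sketch (not ComfyUI code) of the state_dict shape check
# behind the "size mismatch" error. The dims are real CLIP embedding
# sizes: ViT-H -> 1280, ViT-bigG -> 1664.

def check_shapes(model_shapes, checkpoint_shapes):
    """Collect size-mismatch messages the way load_state_dict reports them."""
    errors = []
    for name, ckpt_shape in checkpoint_shapes.items():
        model_shape = model_shapes.get(name)
        if model_shape != ckpt_shape:
            errors.append(
                f"size mismatch for {name}: copying a param with shape "
                f"{ckpt_shape} from checkpoint, the shape in current model "
                f"is {model_shape}."
            )
    return errors

# Resampler built for a ViT-bigG CLIP Vision encoder (1664-dim input)...
model = {"proj_in.weight": (768, 1664)}
# ...but the IPAdapter checkpoint expects ViT-H embeddings (1280-dim input).
ckpt = {"proj_in.weight": (768, 1280)}

for msg in check_shapes(model, ckpt):
    print(msg)
```

If this reading is right, pairing the IPAdapter model with the CLIP Vision encoder it was trained for (ViT-H for most of them) should make the shapes line up.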

  • @farey1
    @farey1 9 hours ago

    Any clue why I am not able to run the CosXL workflow, please? I keep getting this error every time (traceback abridged):

    Error occurred when executing SamplerCustomAdvanced: tuple index out of range
      File "F:\AI\stable-diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\execution.py", line 151, in recursive_execute
      File "F:\AI\stable-diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 557, in sample
      File "F:\AI\stable-diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\comfy\samplers.py", line 684, in sample
      File "F:\AI\stable-diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\comfy\k_diffusion\sampling.py", line 599, in sample_dpmpp_2m
      File "F:\AI\stable-diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\comfy_extras\nodes_custom_sampler.py", line 469, in predict_noise
      File "F:\AI\stable-diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\comfy\samplers.py", line 226, in calc_cond_batch
      File "F:\AI\stable-diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\comfy\ldm\modules\attention.py", line 618, in forward
      File "F:\AI\stable-diffusion-webui\extensions\sd-webui-comfyui\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\CrossAttentionPatch.py", line 139, in ipadapter_attention
        ip_k = torch.cat([(k_cond, k_uncond)[i] for i in cond_or_uncond], dim=0)

    Queue size: 0 · Extra options: ipadapter_cosxl_edit
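One plausible reading of the last frame of that traceback (my assumption; not confirmed in the thread): CosXL-edit sampling batches three conditionings (negative, middle/image, positive), so `cond_or_uncond` can contain the index 2, while the attention patch only prepares a two-element `(k_cond, k_uncond)` tuple. A toy reproduction of just that failure mode:

```python
# Hypothetical sketch of the "tuple index out of range" failure.
# Names mirror the traceback; the three-conditioning scenario is an assumption.

def gather_keys(k_cond, k_uncond, cond_or_uncond):
    # The patch assumes every entry is 0 (cond) or 1 (uncond).
    return [(k_cond, k_uncond)[i] for i in cond_or_uncond]

# Normal CFG batch (cond + uncond) works fine.
print(gather_keys("k_cond", "k_uncond", [0, 1]))  # ['k_cond', 'k_uncond']

# If a third conditioning is batched, an index 2 can appear and the
# two-element tuple lookup raises IndexError.
try:
    gather_keys("k_cond", "k_uncond", [0, 1, 2])
except IndexError as e:
    print("IndexError:", e)  # tuple index out of range
```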

  • @nerdbg1782
    @nerdbg1782 10 hours ago

    This builds on your previous experimental node where you asked for some help from the community. Glad to see they helped you decipher the layers

  • @DarkGrayFantasy
    @DarkGrayFantasy 12 hours ago

    As always amazing work Matt3o! For those interested in the Crossattention codes this is what they target: 1) General Structure 2) Color Scheme 3) Composition 4) Lighting and Shadow 5) Texture and Detail 6) Style 7) Depth and Perspective 8) Background and Environment 9) Object Features 10) Motion and Dynamics 11) Emotions and Expressions 12) Contextual Consistency

  • @StudioOCOMATimelapse
    @StudioOCOMATimelapse 14 hours ago

    Very good as always, Matteo. Can you explain all the indexes, please? I've noticed only 3: 3 = reference image, 5 = composition, 6 = style.

  • @MichaelLochlann
    @MichaelLochlann 15 hours ago

    Maybe you can do a "pill" on the image negative, because it's still not super clear how it works.

  • @denisquarte7177
    @denisquarte7177 20 hours ago

    "We fail to understand what we already have" - cries in GLIGEN conditioning

  • @user-ir4km6dz1n
    @user-ir4km6dz1n 21 hours ago

    CosXL-edit does not work if the source image is large (mine is 3840*2160)

  • @manojkchauhan
    @manojkchauhan 22 hours ago

    Hey Matteo, Just finished your ComfyUI tutorial - seriously impressive stuff! 👍❤ Your breakdown of advanced features with practical examples is super motivating. I'm excited to put these into action and unlock the full potential of ComfyUI. Thanks for sharing your knowledge!

  • @kenwinne
    @kenwinne 22 hours ago

    Matteo, thank you for bringing us IPAdapter, which gives us solid ground to combat the uncertainty generated by large models. I personally like your explanations of basic theory. Although your course is less than 10 minutes, I have studied it repeatedly for several hours. If you have time, please explain in detail the specific functions and applications of the 12 cross-attention layers. Thank you very much for your efforts!

  • @peterr6595
    @peterr6595 22 hours ago

    IPAdapterApply fails because my only IPAdapterApply node is IPAdapterApply (SEGS). I tried installing IPAdapter_plus again; it does not work. What am I doing wrong?

  • @chriscodling6573
    @chriscodling6573 23 hours ago

    Wasn't going to download SD3, but this video definitely changed my mind, so I'll give it a try.

  • @baseerfarooqui5897
    @baseerfarooqui5897 23 hours ago

    Hi, thanks for this great tutorial. I'm getting an error while executing: "'IPAdapter' object has no attribute 'apply_ipadapter'". I tried SD1.5 checkpoints as well as SDXL but get the same error.

    • @latentvision
      @latentvision 15 hours ago

      maybe it's an older version, an old workflow, or simply browser cache

  • @bobgalka
    @bobgalka 1 day ago

    I just have to laugh... I wanted to use some of the ideas from this workflow, started a new flow, and almost immediately got stuck on the pos and neg nodes... It took me a while to figure out that the nodes are called PrimitiveNode... so I added that, but it looked nothing like yours. I tried different things... then I thought to just copy-paste the node into my new flow... nope, no text area to type in... How did you create those PrimitiveNode nodes with a string output and a multiline text area? BTW I am totally enjoying myself watching and learning from your videos. ;O)

  • @zheshi9809
    @zheshi9809 1 day ago

    6666

  • @user-uv4vv4mk4j
    @user-uv4vv4mk4j 1 day ago

    When I type 3:2.5,6:1 it always gives an error (traceback abridged):

    Error occurred when executing IPAdapterMS: not enough values to unpack (expected 2, got 1)
      File "E:\sd\comfyui\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
      File "E:\sd\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 763, in apply_ipadapter
      File "E:\sd\comfyui\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 253, in ipadapter_execute
        weight = { int(k): float(v)*weight for k, v in [x.split(":") for x in layer_weights.split(",")] }
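For reference, the comprehension quoted in that traceback parses the layer_weights string as comma-separated index:weight pairs. A standalone sketch of that parsing; the failing inputs are my guesses at what could trigger the error (e.g. a trailing comma or a chunk without a colon):

```python
# Sketch of the layer_weights parsing quoted in the traceback above.
# The failure-mode inputs below are assumptions, not taken from the video.

def parse_layer_weights(layer_weights, weight=1.0):
    # "3:2.5,6:1" -> {3: 2.5, 6: 1.0}; each chunk must contain exactly one ":".
    return {int(k): float(v) * weight
            for k, v in (x.split(":") for x in layer_weights.split(","))}

print(parse_layer_weights("3:2.5,6:1"))  # {3: 2.5, 6: 1.0}

# A trailing comma, or a chunk missing its ":", leaves a piece whose
# split(":") yields a single value, so the 2-tuple unpack fails:
for bad in ("3:2.5,6:1,", "3,6:1"):
    try:
        parse_layer_weights(bad)
    except ValueError as e:
        print(type(e).__name__, e)
```

If something like that is the cause, checking the string for stray separators (or invisible characters pasted from elsewhere) would be the first thing to try.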

  • @miguelitohacks
    @miguelitohacks 1 day ago

    HOLY SHIT, this is powerful!

  • @DanielVagg
    @DanielVagg 1 day ago

    Great video. Top notch content, as always

  • @ryanontheinside
    @ryanontheinside 1 day ago

    this is awesome thank you

  • @bgtubber
    @bgtubber 1 day ago

    I tried this with a few images. I'm getting back a similar image, but not the same as the original: mostly the background is different, while the subject stays more or less the same (some little differences in attire). What am I doing wrong?

    • @latentvision
      @latentvision 1 day ago

      hard to say, it was an "old" workflow so it might just be a matter of updated checkpoints or a different version of some library

    • @bgtubber
      @bgtubber 23 hours ago

      @@latentvision Ah, I see. No worries. I'll keep trying. Hopefully I'll figure it out. :)

  • @adelechelmany
    @adelechelmany 1 day ago

    🫡👏👏

  • @ElevatedKitten-sr6yi
    @ElevatedKitten-sr6yi 1 day ago

    🤯

  • @morenofranco9235
    @morenofranco9235 1 day ago

    This is incredible. I will have to watch it two or three more times to get a real understanding. Thanks for the lesson.

  • @sephia4583
    @sephia4583 1 day ago

    Is there any similar way to apply a LoRA style to only a specific layer? Maybe we could apply a negative weight to the composition layer (e.g. layer 3) and a positive weight to the style layer (e.g. layer 6)?

  • @angry_moose94
    @angry_moose94 1 day ago

    I can't find the style transfer precise on the list. Is it the same as "strong style transfer"?

    • @angry_moose94
      @angry_moose94 1 day ago

      nevermind, just had to update the custom nodes!

  • @ceegeevibes1335
    @ceegeevibes1335 1 day ago

    love love love this, going MAD!!!!

  • @calvinherbst304
    @calvinherbst304 1 day ago

    dying to know what the other index blocks are!

  • @johnsondigitalmedia
    @johnsondigitalmedia 1 day ago

    Awesome work! Do you have the info on the other 10 control index points?

  • @kinai_4414
    @kinai_4414 1 day ago

    Damn, that's impressive. Could the same logic be applied to a LoRA node in the future?

  • @courtneyb6154
    @courtneyb6154 1 day ago

    If anyone could prove that some ai artists should have copyright protection over their image generations, it would be you. Definitely no "Do It" button here. Amazing stuff and thank you for taking the time to grind through all the possibilities and then breaking it down for the rest of us dummies ;-)

  • @SouthbayCreations
    @SouthbayCreations 1 day ago

    Great video, thank you! Where can we find this node?

  • @BubbleVolcano
    @BubbleVolcano 1 day ago

    Nice work! ❤It's awesome to see real progress on the U-net layer. But having too many parameters can make it tough to get started, even for someone like me who's been at it for over a year. It's just too challenging for ordinary people. If we change the filling parameter to four simple options like ABCD, it might be easier to promote. Ordinary people aren't into the process; they're all about the end result.

  • @HasanBudi-uo1oe
    @HasanBudi-uo1oe 1 day ago

    Hi Matteo, I use an SDXL checkpoint from CyberRealistic and I have set IPAdapter to all-SDXL settings, but it generates a Pixar-like face on a real person's body. I've been adding more prompts to get it realistic. Any idea how to fix my workflow?

  • @mengwang-io7fw
    @mengwang-io7fw 1 day ago

    Collaboration invitation: may I get your e-mail?

  • @glassmarble996
    @glassmarble996 1 day ago

    you have so many secrets matteo :D

  • @alxleiva
    @alxleiva 1 day ago

    You named that node after yourself, right? You're truly a mad scientist bringing us the best discoveries! Thank you Matteo

  • @HiProfileAI
    @HiProfileAI 1 day ago

    I love the idea of target conditioning various layers and being able to direct the layer with this kind of control in the cross attention. Thank you Matteo for you continued work and expertise. You give us a lot to play with and work with. The implications of the kind of control we can have in image creation and manipulation will last for years. Continued blessing to and appreciation to you good sir. 🙏🏾👍🏾

  • @isaactut2520
    @isaactut2520 1 day ago

    I will say this again, you are simply amazing Matteo! "Shut Up and Take My Money!"💰

  • @sarkarneelratan_gmail
    @sarkarneelratan_gmail 1 day ago

    Like your video… I don't know anything and I don't have a GPU… but I came to know about getsalt ai… Can you make this video with getsalt ai?

  • @TheD4rkR00m
    @TheD4rkR00m 1 day ago

    Hi Matteo, I couldn't find the link to the workflow so I rebuilt it following your steps, but SamplerCustomAdvanced, in the CosXL Edit section, gives me an error. I updated IPAdapter and all the nodes in ComfyUI and repeatedly checked your settings against the video, without success. Thanks for the precious contributions, and go Florence! ♥

  • @aidiffuser
    @aidiffuser 1 day ago

    Hello man, thanks for sharing this amazing improvement on control! Did something change between the style transfer and composition from 2 days ago and this release? I can't seem to reproduce the same results :( Or is there a way to reproduce the exact layer weights of that previous release within the Mad Scientist node?

    • @latentvision
      @latentvision 1 day ago

      no, style and composition should be the same. if you have issues please open an issue on the official repository, with before/after images if possible

  • @urbanthem
    @urbanthem 1 day ago

    Thanks a thousand, Matteo. Your last statement is something I say time and time again: we use only a little of the potential of what's already out there. Brilliantly proving that point.

  • @YING180
    @YING180 1 day ago

    so cool and you are our mad scientist

  • @majic_snap
    @majic_snap 1 day ago

    I have a question: when the weight type is changed to style transfer precise, is the weight of the third layer (composition) negative? By the way, Matteo, your work is as exciting as ever.

  • @baheth3elmy16
    @baheth3elmy16 1 day ago

    Thank you very much for the video!

  • @mssuxmyass
    @mssuxmyass 1 day ago

    One more... I don't know if it helps more than a single comment... But you are an artist; watching that video with you setting weights was watching an artist at work. Thank you again!

  • @fukong
    @fukong 1 day ago

    God of IPAdapter