ADOBE FIREFLY AI: The TERRIFYING New Reality for Artists!

You get AI, you get AI, you get AI. Everybody gets AI. That’s right, everybody, the AI hype train is not stopping, and that is because a few hours ago, Adobe released their brand new suite of AI tools for creative professionals called Firefly. And I’m gonna tell you right now, it will have some very interesting consequences. So let’s talk about that.

Hello, humans! As you saw from the title of this video, today we're going to be talking about Adobe Firefly, which is basically Adobe's response to the current AI race. And I guess they hope that by releasing Firefly, they'll be able to take a little piece of that cake. I mean, not gonna lie, I want a piece of that cake too. So in this video, how about we take a look at what Firefly is, what exactly we'll be able to do with it, what kind of results you can expect, and also see what kind of images people are generating with it right now. But we're also gonna be talking about some very funny consequences of this release, because yes, Adobe Firefly is actually pretty special, but we'll talk about that later. So for now, how about we watch the trailer presentation and see what Firefly is all about? Okay, let's go.

[Music]

Okay, so that was the presentation for Adobe Firefly, but I gotta say, it was very, very fast. So let's actually analyze, bit by bit, what exactly they showed here. Okay, so the first thing they introduced is, of course, text-to-image, where you can input a prompt and then generate images based on that prompt. Now, if you've used something like Stable Diffusion or Midjourney before, this is not really impressive; you've already seen all of that. But what's actually interesting is what we can see right here on the side, because in that little section, you have different parameters. You have the aspect ratio, where you'll be able to select and generate images in different aspect ratios. Then you have the content type, which will allow you to generate an image in a different form, like a realistic photo, an illustration, or a graphic. Then you have the style section, which will allow you to select a certain style and then generate images in that particular style. As we see in the example, the person here selected an ink sketch, and indeed, it has generated images in that ink sketch style.

Now again, this is nice and all, but technically it's not very impressive if you've already used a text-to-image AI before, because this is really just the same as writing "ink sketch" in your prompt box yourself. So again, none of that is really revolutionary, but it is definitely a little more interesting and a little easier to use for complete beginners, because the UI looks so clean that it makes the overall image generation process a little more enjoyable for people who might not know how to prompt correctly, which, in a way, is a pretty good idea.
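To make that point concrete, here's a minimal sketch of what that style selector boils down to if you do it manually with Hugging Face's diffusers library. The model name, prompt, and settings are just my assumptions for illustration, not anything Adobe actually uses:

```python
# A minimal sketch, assuming the diffusers library, torch, a CUDA GPU,
# and the runwayml/stable-diffusion-v1-5 weights (illustrative choices).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Firefly's "style" dropdown is effectively this: extra words in the prompt.
prompt = "a lighthouse on a cliff at sunset, ink sketch"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("lighthouse_ink_sketch.png")
```

Swapping "ink sketch" for "watercolor" or "3D render" is all those selection boxes are really doing for you.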

So then they show the extend image feature, which is basically outpainting in Stable Diffusion. Then they show inpainting, exactly like in Stable Diffusion, where you can select part of an image and replace it with something else. Then they showed Smart Portrait, a feature that is already present in Photoshop, where you can basically change the expression of a face using different sliders, which again might not be super impressive, but it looks like the quality has been improved. Then they showed the depth-to-image option, which is again something that we've been doing in Stable Diffusion with ControlNet. Then they showed 3D-to-image, where they basically take a 3D scene and run it through the generator to create a brand-new 2D image, which again is something that you can do inside Stable Diffusion with ControlNet.
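For reference, here's roughly what that depth-to-image workflow looks like in Stable Diffusion with ControlNet through diffusers. The weights, filenames, and prompt are assumptions for illustration, and the depth map is assumed to be precomputed:

```python
# A sketch of depth-to-image with ControlNet, assuming the
# lllyasviel/sd-controlnet-depth weights and a precomputed depth map on disk.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The depth map pins down the 3D layout; the prompt restyles what sits on it.
depth_map = Image.open("room_depth.png")
image = pipe("a cozy cabin interior, warm lighting", image=depth_map).images[0]
image.save("restyled_room.png")
```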

So then they show text-to-template, where you can basically input a prompt and it will create a template for a card or a poster, and they also give you a lot of customization options that are all generated automatically depending on what you want to accomplish. Then they showed conversational editing, which will basically allow you to talk with an AI, exactly like ChatGPT, and ask it to perform certain tasks, like modifying the style of an image, which again, spoiler alert, is something that you can do inside Stable Diffusion already. Then they showed the text-to-vector option, which will allow you to generate an image, but this time as a vector image, so it will be very useful for illustrators and maybe logo creators. So that's actually pretty cool.
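As a point of comparison for that conversational restyling, the closest open equivalent I know of in the Stable Diffusion ecosystem is InstructPix2Pix, where you edit an image with a plain-language instruction. Here's a minimal sketch with diffusers; the filenames and instruction are illustrative:

```python
# A sketch of instruction-based editing, assuming the
# timbrooks/instruct-pix2pix weights and an input image on disk.
import torch
from PIL import Image
from diffusers import StableDiffusionInstructPix2PixPipeline

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

original = Image.open("portrait.png").convert("RGB")
# The "conversation" is just a natural-language edit instruction.
edited = pipe("make it look like a watercolor painting", image=original).images[0]
edited.save("portrait_watercolor.png")
```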

So then they showed the combine photos feature, exactly like you have inside Midjourney, where you basically take several images and then combine them into one, which is a feature that I actually really enjoy inside Midjourney and that I really hope we can get inside Stable Diffusion. Then they showed color-conditioned image generation, where you basically input an image and it will generate a brand-new image based on the colors of your input image, which is something that I think you can do inside ControlNet using the color model. Now, I could be wrong; I haven't used it that much, but I'm pretty sure this is something that you can do in Stable Diffusion right now. And then, finally, they finished the presentation with an upscaling option that, spoiler alert, we've had inside Stable Diffusion all along.
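And for reference, here's roughly what that upscaling step has looked like in the Stable Diffusion world, using the Stability AI x4 upscaler through diffusers; the filenames and prompt are illustrative:

```python
# A sketch of diffusion upscaling, assuming the
# stabilityai/stable-diffusion-x4-upscaler weights from Hugging Face.
import torch
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

# Keep the input small: the output is 4x larger in each dimension.
low_res = Image.open("generation.png").convert("RGB").resize((256, 256))
# The prompt guides the upscaler toward the right details.
upscaled = pipe(prompt="a detailed photo of a lighthouse", image=low_res).images[0]
upscaled.save("generation_1024.png")
```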

Now, in that short presentation, they haven't really shown everything, especially when it comes to the text-to-image option. So here is a little bit more info on that. If we look at this other presentation, we see something very similar to what we saw previously, but now we see it a little bit more in action: how exactly it works and how fast it is to actually generate those images. And as you can see, the person behind the demo is able to choose different techniques, effects, themes, and materials, and then very quickly see the effect of all of those concepts applied to the image generation. Now again, this is very cool and all. It looks very clean, it looks very fast, it seems to work very, very well, and the model they managed to train looks actually very powerful. However, in a way, this is not something that we haven't seen before, because all you're really doing is replacing words that you could type yourself manually with some selection boxes in a taskbar on the right. Which, again, is absolutely fine; it is definitely way better and way easier to use if you're a complete beginner who has never used a text-to-image generator before. But I think that for the people who've already been using Midjourney or Stable Diffusion, this is definitely not very impressive. This is not something that we haven't seen before.

Now, one thing that is at least pretty cool is their new text effects option, which basically allows you to type some text and then describe in a prompt how you want this text to be rendered. And look at these results; I mean, this looks pretty good, because if you really wanted to create something like this inside Stable Diffusion, for example, you would have to use ControlNet with a lot of inpainting and a lot of tries to be able to get something like this. And even in the end, I'm not sure it would be that good. So yeah, that text effects option is definitely very powerful.
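To give you an idea of the hoops I'm talking about, here's a rough sketch of one ControlNet route for text effects: you render the word as a high-contrast control image yourself, then let a scribble ControlNet fill in the material. Everything here (font path, weights, prompt) is an assumption for illustration:

```python
# A rough sketch of DIY "text effects": the letters become the control image,
# the prompt describes the material. Assumes lllyasviel/sd-controlnet-scribble
# weights and some bold TTF font available on the system.
import torch
from PIL import Image, ImageDraw, ImageFont
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Step 1: draw the word in white on black so the letters act as the "scribble".
canvas = Image.new("RGB", (512, 512), "black")
draw = ImageDraw.Draw(canvas)
font = ImageFont.truetype("DejaVuSans-Bold.ttf", 180)  # hypothetical font path
draw.text((40, 170), "FIRE", font=font, fill="white")

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Step 2: the letters constrain the shape, the prompt supplies the effect.
image = pipe("letters made of glowing lava, dark background", image=canvas).images[0]
image.save("lava_text.png")
```

And even then, you'd likely still need several seeds and some inpainting passes to get close to what Firefly shows in one click.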

Okay, so basically, if I were to sum it up, Firefly, this brand-new set of AI tools for creative professionals, will allow you to generate images from text. It will allow you to generate text with different effects. It will allow you to create variations of your own artwork from a text prompt. It will allow you to use inpainting to add, remove, or replace objects and then generate something new on top. It will allow you to generate images based on your own photos, exactly like DreamBooth, LoRA, or Textual Inversion. It will allow you to create vector images from prompts that you can then modify inside Illustrator. It will allow you to outpaint an image to extend its borders. It will allow you to generate 2D images from 3D elements. It will allow you to generate seamless styles and patterns from a text prompt. It will allow you to generate brushes for Photoshop and Fresco from a simple text prompt and then paint with that brush inside Photoshop or Fresco. It will turn your simple sketch drawing into full-color images, just like the Scribble ControlNet model in Stable Diffusion. And then finally, it will also allow you to generate editable templates from a text prompt.

Okay, well, so all of that looks very, very good, I'm not gonna lie. It looks like the way Adobe is planning on implementing all of these tools together is very, very smart. But in a way, as someone who is an AI art enthusiast, someone who has been using Stable Diffusion and Midjourney for several months, I see very little progress, very few differences compared to the tools that we already have. Now, don't get me wrong, I'm sure Adobe will be able to implement all these tools together in a pretty coherent and efficient manner. But in a way, I'm not really that impressed, because most of the features that we see right here are already present, and have been present inside Stable Diffusion for months. And actually, the reason why is pretty simple: Adobe Firefly is built on the same diffusion-model technology that powers Stable Diffusion. So yeah, obviously, a lot of the tools they showed here will be similar to what we already know inside the Stable Diffusion community. We've already been using all of these tools for months now.

But I think what Adobe is trying to do here is create a set of tools that everybody can use very easily, with a very beautiful and easy-to-use UI that even non-tech-savvy people will be comfortable with. So yeah, in a way, I'm not mad. They definitely know what they're doing. But if you are tech-savvy and you've already used some of these tools before, you might not be as impressed as some beginners are.

Now, if you go and look at the gallery at the images already generated by some people in the community using Adobe Firefly, you can see that they actually look really, really good. I mean, this is some really quality stuff. You have lots of different styles: some ultra-realistic images, some very artsy and colorful, some that look like vector images. And the model they're using right now is definitely very, very powerful. I mean, try to generate something like this karate image inside one of your usual Stable Diffusion models. This is impressive stuff.

Now, personally, as of right now, I'm still on the waiting list, so I'm hoping to have access to it soon. And when I do, I'll be sure to make a video comparing Stable Diffusion models, the latest Midjourney V5, and Adobe Firefly. I think it will be very, very interesting.

Now, here is the interesting part. Remember how I said at the beginning of the video that Firefly is very unique? Well, the reason Adobe Firefly is so unique is that the AI model powering all of those features you see on the screen was actually trained on legally obtained images: Adobe Stock images, openly licensed content, and public domain works with expired copyrights. That's right.

So if you remember my previous video from a few months ago about anti-AI-art sentiment and the people who are against AI tools, or if you've just been following the online debate about the ethics of using AI tools for image generation, you will know that a lot of artists are against these tools because they claim the tools were created illegally, since the companies that built them used images found on the internet without the artists' consent.

Now, I think you all know my opinion on this. Personally, I believe that if you put your image online, the image is available for everyone to see and use. Now, we're not talking about plagiarism here, where someone just copies and pastes your image and passes it off as their own creation. We're talking about taking images and using them for inspiration or training, which, in my personal opinion (and you don't have to agree with me), is fine.

But I can also understand why you might feel that tools trained on images whose artists did not give their consent are illegal and should not be available. Well, guess what? Now that Adobe has trained Firefly on legally obtained images, those who were against AI tools due to ethical concerns have absolutely no arguments left as to why these AI tools should not be used. With Firefly, Adobe has removed the so-called devilish aspect of AI-generated art and has given artists a way to create amazing images without any guilt.

Moreover, as of right now, Adobe is even developing a compensation model for Adobe Stock contributors, where people might get paid if their images are used in the training data. So now, if you didn't want to use these AI tools, not only do you have absolutely no arguments left, but, and this is the fun or sad part, you'll be able to use the exact same tools that the Stable Diffusion community has been using for months, except now you'll have the joy of paying for them. So congratulations, you played yourself.

This is exactly what I predicted a few months ago, when I explained that you can easily create a model trained on copyright-free images and still have an AI art tool that generates beautiful images. Except that now, this is in the hands of big corporations. So instead of having a free, open-source tool that everybody can contribute to and use for absolutely nothing, you'll be able to do the exact same thing, but now you'll have to pay a big corporation to do so. Yeah, that's great, congratulations, well done!

Now again, I say it like that because I'm roasting the anti-AI artists for their nonsensical arguments against AI art, but this is something that would have happened anyway, even without their complaints. I mean, when you have an amazing technology that big corporations can make a lot of money from, or that can save them a lot of money, of course they're gonna use it. Now, Adobe is not at fault here, don't get me wrong. They actually did everything pretty well. They did everything in a very, very smart way, and they're really good at creating powerful art tools for professionals; they've been doing it for decades.

And although it’s still a little bit sad to see this kind of technology in the hands of big corporations, at least in a way, I think that it will maybe push more and more people who might have thought that stable diffusion is a little too hard to use and maybe not a tool that is ethical to use, to maybe try out these new AI tools for themselves. And really, this is always good. I want artists to be able to use these tools. I want them to be in our community. I welcome them.

But the problem is that, personally, I've always hated their arguments against AI tools, because ethics are so relative that using them as the basis for arguments against an AI tool just does not make sense to me. Now, sure, they could always use the argument that everything done by an AI is a devilish invention, that if it's not made by a human, it should not exist. But this is really just nonsense. And now that these new tools, made by a reputable company like Adobe, will be available for everyone to use very, very easily, the artists who don't use this AI technology will be left in the dust by the artists who do.

So again, if you are an artist and you plan on staying an artist, I highly suggest you jump on this AI bandwagon like right now, because the longer you wait, the harder it will be for you to compete with other artists who use this technology. I can assure you, you will not win. It's only by learning this new technology and implementing it into your workflow that you'll be able to compete. It's as simple as that. So yeah, there you go: the future of creativity is here, and it's powered by AI.

Adobe has taken a huge step forward by bringing AI into their Creative Suite, making it super accessible and super user-friendly for artists and content creators. And although it is a big corporation, it's still good to have more tools for us to play with. That's the reality, folks. Thank you guys so much for watching. Don't forget to subscribe and smash the like button for the YouTube algorithm. Thank you also so much to my Patreon supporters for supporting my videos. You guys are absolutely awesome; you are the ones who make these videos possible. So thank you so much, and I'll see you guys next time. Bye.
