Author name: alexlatour@protonmail.com

Uncategorized

The AI Animator’s Dream (Achieving Scene Consistency And Continuity In Animation With OpenArt AI)

ai slave

It took me nearly two years, but I’m finally taking AI animation seriously. I held off all this time not because I wasn’t curious, but because the tools weren’t ready and neither was I. The tech was limiting, workflows were chaotic, and burning credits on 5-second clips that led nowhere felt wasteful. The second challenge? The process. Generating hundreds (sometimes thousands) of images just to get a handful that actually worked for a few seconds of animation sounded overwhelming, and not a great use of time. The third? Merging consistency with continuity. Even as my technical skills for consistent creation improved, that consistency didn’t transfer across multiple frames or scenes, and it wasn’t good enough for animation.

The turning point came when the right combination of tools and workflow finally emerged. The tools got smarter. I got better at using them. The process is now faster. The results are finally consistent and continuous, and if you know what you’re doing, achievable with minimal resources in less time. All thanks to some platforms that have really stepped up their game, especially OpenArt AI, which has been my go-to powerhouse for almost two years. I initially turned to it as a travel-friendly alternative to my locally run Automatic1111 workflow. Even back then, OpenArt already had everything I needed to keep rolling with my AI creations: Stable Diffusion, LoRA training, inpainting, face editing, and, most importantly, they always stayed up to date with the latest generation models. That included the Flux suite, especially the latest Flux Kontext, which, without my knowing it, would become the crucifix I needed to finally put an end to the monster of scene continuity in AI animation.

If you’ve been around long enough, you know the curse of consistent character design in AI generation: get one great image, try to recreate it from a different angle, and suddenly you’re dealing with a shapeshifting mess. Armor morphs. Faces change. Proportions betray you.

Flux Kontext: Design That Holds Up Across Angles

But with Flux Kontext, that nightmare finally ended. This model isn’t just about style; it’s about structural memory. It holds on to geometry, lighting, texture, and silhouette in a way I hadn’t seen before. I could take my main character, a Viking Berserker in this case, turn him 45°, then 90°, then 180°, and the model understood it was the same person: same horns, same armor plates, same axe, just from another angle. That alone saved me dozens of hours and spared me from endless retouching and regenerating.

The Chat Feature: Art Direction With a Dialogue Box

Then there’s the Chat feature, which I didn’t expect to love as much as I do. Instead of engineering 20 prompts for slight variations, I used Chat like an art director. I’d upload an image and say: “Give me a close-up view of the Viking.” “Change the axe to a sword.” “Change the axe to a gun.” (Not sure that one belongs in an RPG, but you never know, if I one day choose to add elements of time travel to the story.) The Chat feature delivered quickly and with high fidelity. It felt like working with a focused assistant who understood context, visual logic, and my style preferences.

A Sword That Cuts Through Art & Scene Design

For a long time I was a hardcore Stable Diffusion user, on a quest for consistency and control, until Black Forest Labs released their Flux Pro model and OpenArt integrated it into their platform. I started using Flux Pro a few months ago to illustrate a series of dark fables I’m writing. I ended up landing on a particular art style that immediately felt like my kind of world. It became a bit of a signature, and since then I’ve used it across multiple projects, including a few of my AI music albums produced with Suno. One of those albums is Skaalven Dovhkaar, a fantasy soundtrack inspired by my love for Skyrim and the genre as a whole.

This soundtrack, and the story behind it, set the tone for this animated clip. Using Flux Pro, I created snowy village and forest scenes in that same visual style. Then, using Flux Kontext, I composited the Viking into those scenes, making sure lighting, shadows, and environment matched. This separation of steps, first designing the character in a controlled space, then placing him into the environment, was the missing piece for building AI animations that feel intentional and directed rather than like a random, disjointed slideshow.

This is where things get kinetic, where still frames begin to breathe, using the video generation feature inside OpenArt itself, a full-fledged and loaded arsenal for the AI video creator. We’re talking Kling AI, Google’s Veo (Veo 2 & 3), my now personal go-to, PixVerse 4.5, and other popular models like MiniMax Hailuo 02 and the latest Seedance 1.0.

Kling AI: The Forgotten King

When I first jumped into AI video, Kling AI was my primary weapon of choice. It was new, exciting, and reasonably good at preserving character structure. For a while, it was also the cheapest option. It taught me a lot and helped me refine my process for bringing images to life with AI. And for the most part, it did the job.

Veo 3: The Dead King

Then came Veo 2. I tried it when it was on a limited-time offer at launch on OpenArt: great value, but still not ideal for my stylized work. Then Veo 3 arrived, at five times the cost of Veo 2, carried by Google, influencers, trailers, and headlines. The reality? For me, it was hype over substance. As someone with a background and deep expertise in SEO, I know Google’s ecosystem all too well. I’ve seen the same pattern in SEO: break rankings with algorithm “updates,” then sell visibility through ads. Veo 3 felt like the same marketing playbook: expensive, overhyped, and irrelevant to my use case. When limited on resources, use your brain.


OpenArt AI Affiliate Program Partner

I’ve officially become an affiliate partner of OpenArt AI. I don’t promote AI platforms lightly. But when something aligns with my creative process, delivers consistent results, and offers real value for money, I’m willing to share it. OpenArt AI has been instrumental in translating my inner worlds and shaping my journey with GenAI over the past two years.

This isn’t about monetization. It’s about giving direction to emerging AI creators, especially those on a budget. If you’re looking for a tool that offers more control and versatility than Midjourney, at a fraction of the price, this might be it. For me, it was. If you’ve seen my AI work, then you already know what this tool can do when you take the time to learn how to use it properly. If not, know that nearly every image on this website was made using OpenArt AI.

If you’d like to support my work and save a few bucks while creating your own, here’s my affiliate link: https://openart.ai/home?via=alexinseomnia or use code LCbec3U when signing up. And if you do sign up, message me. I’ll give you a free walkthrough of the platform to save you time and credits, and to help you get started creating with precision and intent, the same way I’ve been doing for years now.


Taking The Politics Out Of Melania Trump’s Hat

Melania Trump wore a hat at the presidential inauguration, and I loved it. Not the politics, just the hat. It reminded me of the Sombrero Cordobés, commonly known in English as the Cordovan hat, and I would not be surprised if the designer took inspiration from it. The Cordovan hat is a traditional piece, often worn by women during festivities and by men during equestrian events in Andalusia. Over time, it became an iconic symbol of flamenco dancers, a dance where gender, age, and politics fade into irrelevance. A dance for yourself, where you can be as bold and insolent as you desire. You know who else looks stunning in a hat? The ladies in my Flor de Luna collection.


Why AI Will Not Revolutionize Filmmaking Like Akira Kurosawa Did

Once upon a time, I embarked on a quest, one I’ve yet to fully complete: to watch all of Akira Kurosawa’s movies. I chose to do this not just because I was a fan of Japanese culture, bushido, and the samurai genre, or because I felt I had exhausted my options after devouring classics like Harakiri (my favorite in the genre), Ame Agaru, and Zatoichi. Kurosawa was different. His name was whispered in reverence, an undisputed master who left an indelible mark on Western cinema. When I discovered that Francis Ford Coppola (whose The Godfather is my favorite Western cinematic masterpiece) had supported Kurosawa’s later works, it felt like a necessity to immerse myself in his films.

So, I finally sat through the epic three-hour runtime of Seven Samurai, a movie I had long avoided because its story had already been spoiled for me multiple times by other adaptations, including Samurai 7, a favorite anime of mine and one of the rare adaptations approved by Kurosawa’s estate. From there, I moved on to Yojimbo, Rashomon, Ran, Throne of Blood, and The Men Who Tread on the Tiger’s Tail, among a few others.

Unlike formal students of film, I’ve never attended film school or been on a movie set. I wasn’t compelled by a curriculum but driven by passion. Whenever I had free time, I would choose one of Kurosawa’s films and let it consume me. I would often watch these films much as a student of filmmaking would, dissecting the long scenes, the pauses, the directing choices, and the sheer artistry that separates great cinema from mediocrity. The more I watched Kurosawa’s works, the more undeniable their influence on Western films became. Echoes of his storytelling and directing style are still felt today, a constant déjà vu in the language of cinema. Kurosawa didn’t just make films; he revolutionized the medium.

https://www.aislave.xyz/wp-content/uploads/2024/12/kurosawa.mp4

Revolutions in any field are rooted in a deep understanding of what is being revolutionized.
Kurosawa revolutionized filmmaking because he understood its essence. He drew from classical Japanese storytelling, theater, and culture while simultaneously embracing and influencing Western cinematic techniques. One of his greatest influences was William Shakespeare. Throne of Blood is a reimagining of Macbeth, set in feudal Japan, blending the Bard’s tragic themes with noh theater’s stark, haunting aesthetics. Similarly, Ran draws heavily from King Lear, transforming the tale of a king’s descent into chaos into a visually stunning exploration of loyalty, betrayal, and the fragility of human ambition. By merging Shakespeare’s timeless narratives with Japanese cultural elements, Kurosawa created films that resonate universally while remaining distinctly his own. His works are a bridge between worlds, timeless in their relevance and impact.

Today, with the emergence of AI technology, we’re witnessing a surge of short, flashy clips labeled as “films.” These clips, often no longer than a few seconds, are being worshiped as evidence of a filmmaking revolution. As someone active in the AI community and a lifelong admirer of cinema, I disagree. AI may revolutionize film production, making it cheaper and faster to create moving images, but it has yet to revolutionize filmmaking. And if we continue to approach it this way, it never will.

What we are seeing is a proliferation of motion, not emotion. AI has made it easier for anyone to turn still images into moving ones, but storytelling, narrative, and directing remain out of reach. These are the elements that separate a mere movie from a true film. A movie is a sequence of moving images; a film is a story that touches lives, evokes emotions, and leaves an impact. It’s the difference between disposable entertainment and timeless art. The problem with AI’s so-called filmmaking revolution is that many of its proponents lack a foundational understanding of what makes a film great.
Their benchmark is often the past decade of formulaic, CGI-laden Hollywood blockbusters, which have prioritized spectacle over substance. Without understanding the true essence of cinema, how can one hope to revolutionize it? Great films require more than technology; they require vision. The long, unbroken takes, the deliberate pacing, the unspoken emotions: Kurosawa’s mastery lay not in the tools he used but in how he wielded them to tell stories that resonated across cultures and generations.

My own life has been profoundly influenced by films. I started studying yoga, and later became a yoga teacher, after watching Darren Aronofsky’s The Fountain, which led me to travel the world for over a decade and shaped my outlook on life. Similarly, countless freedivers trace their passion back to Luc Besson’s The Big Blue. This is the power of true filmmaking: it doesn’t just entertain; it transforms.

AI-generated “films” so far have achieved mostly one thing: promoting the tools used to create them. Most of them still lack the soul, humanity, and vision that define great cinema. Yet AI does have potential. For indie creators, it offers accessibility, lowering production costs and democratizing some aspects of filmmaking. Tools that once required teams of professionals can now be accessed by a single individual. This technological advancement, however, is only a complement, not a substitute, for storytelling and artistic vision.

In the end, if nothing else changes, AI will likely give us more of the same formulaic nonsense Hollywood has been churning out with CGI, only faster and cheaper. It’s a revolution of production, not storytelling. To truly revolutionize filmmaking, AI creators must shift their focus to story and narrative. It’s not enough to marvel at a tool’s ability to turn an image into a video or control camera angles. Instead, they should study the masters, Kurosawa, Coppola, Kubrick, and learn what made their films timeless.
These directors understood that the heart of cinema lies in bringing to life stories that move people, narratives that resonate, and experiences that linger long after the credits roll. Without this, AI will remain a tool for production, not a force for artistic transformation. If you’re content with production over artistry, feel free to


MANGO AI Campaign

I’m sure by now you’ve come across posts about MANGO using AI-generated images for one of their collections. When I first heard about it, I got excited. Finally, I thought, we’d get to see it: cats in clothes, ruling the fashion industry like they’ve always been destined to. After all, human models have been imitating the feline grace of the “catwalk” for decades. But… nope. Turns out I got my hopes up for nothing.

Instead, what MANGO and their AI design team delivered was something far less exciting or progressive. They chose to use AI to generate images based on the same unhealthy beauty standards the fashion industry is infamous for. Images that, even though disclosed as AI-generated, probably would have gone unnoticed by most untrained eyes if left unmentioned. The result? Yet another reinforcement of a singular definition of beauty for young girls: tall, skinny, and flawless. And now, many blame the AI for this while few actually question the humans who prompted it to generate such images. It’s not the AI at fault here; it’s just the same old message repackaged in “innovative” AI wrapping paper: This is what the feminine ideal looks like. Anything else doesn’t belong, not even in our trained AI model.

Frankly, it’s disappointing (at least for me) and feels like a missed opportunity. AI has the potential to disrupt and reimagine industries, celebrating diversity, creativity, and inclusivity. Yet here, it’s just another tool to double down on stereotypes that perpetuate insecurity and enforce outdated norms. For a technology praised as revolutionary, this use case feels ironically stale. Imagine the possibilities if AI were used to embrace differences, challenge conventions, or, yes, bring cats in clothes to the forefront of fashion. Now that would be groundbreaking. Instead, we’re left with more of the same, a cycle that desperately needs to break.

Let’s hope the next iteration of AI-generated fashion, and the brands choosing to publicly use it, actually takes a step forward instead of reinforcing the past. Until then, I’ll stick to generating my own feline takeover of the fashion industry.
