ZBrush 2026 has been released, and the big change is that it now supports Python scripting. Coming soon is a total makeover of the horrible and infernal UI.
The MyClone blog is back online
The MyClone blog is back online. The WordPress “Pinterest RSS widget” had triggered a fatal error, and I couldn’t even get to the dashboard. I had to get into the wp_options table of the database via my web host’s hPanel, find active_plugins, and reset all the plugin states to ‘none’ (you paste in a:0:{}, an empty serialized array), which was the only way to get back into the WordPress dashboard.
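For reference, here is the same fix as a minimal Python sketch, should you prefer a script to hPanel. It assumes the default wp_ table prefix and the mysql-connector-python package; your database name and credentials will differ:

# Minimal sketch: reset WordPress's active_plugins option to an empty
# serialized PHP array (a:0:{}), so no plugins load on the next visit.
# Assumes the default 'wp_' table prefix; credentials are placeholders.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="db_user", password="db_pass", database="wp_db"
)
cur = conn.cursor()
cur.execute(
    "UPDATE wp_options SET option_value = 'a:0:{}' "
    "WHERE option_name = 'active_plugins'"
)
conn.commit()
conn.close()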
Anyway, the pretty sidebar, which called thumbnails from a Pinterest board dedicated to the best examples of Poser and Daz images… is now sadly gone. The plugin which called it has had to be deleted.
A little more progress in Poser to Stable Diffusion…
I’ve had time to get back to experimenting with my Poser renders to Stable Diffusion workflow. As you can see, I can now use SDXL and prompting for expressions is possible (even if not present in the Poser render). A Firefly lines-only render is used in the Controlnet, and a very basic Comic Book real-time render is used for the Img2Img. The next big problem to solve is the “stare at the camera” problem. The last thing you want in a comic is the characters looking at the reader.
Local Microsoft Azure AI voices
The original Msty (1.9.2, not the flaky new Studio version) is a fine free desktop host for running local offline LLMs (‘AIs’). But it has no offline text-to-speech. Your AIs can’t talk, unless you get an API key and are always online.
One offline solution is the freeware Simple TTS Reader 2.0, which reads whatever gets sent to the Windows clipboard. It’s very simple and just does the job: anything copied to the clipboard gets read aloud. Great, but sadly it can only use Microsoft Speech voices, which are rather robotic and limited. Microsoft appears to have moved on from these to its new and far better Azure TTS voices.
However, there’s a hack to get these Azure voices locally. There’s a handy NaturalVoiceSAPIAdapter from GitHub, with a straightforward Windows installer. This freeware makes Microsoft’s Azure natural (aka ‘neural’) TTS voices accessible locally to any SAPI5 compatible TTS desktop software. On Windows 11, these more advanced voices are otherwise locked to the Windows Narrator for local use, and no other software can use them (booo…). But now Simple TTS Reader can use them too.
As well as NaturalVoiceSAPIAdapter, also get your target Azure voice from this selection of free voice downloads. Unzip it as directed and put it somewhere sensible. When you install NaturalVoiceSAPIAdapter, you need to tell it where the voice is on your PC.
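Once the adapter and voice are installed, a couple of lines of Python can confirm that Windows actually sees the new voice (a quick check, assuming the pywin32 package is installed):

# List every SAPI5 voice registered on the system. The Azure voice
# installed via NaturalVoiceSAPIAdapter should appear in this list.
import win32com.client

speaker = win32com.client.Dispatch("SAPI.SpVoice")
for v in speaker.GetVoices():
    print(v.GetDescription())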
After that, restart Simple TTS Reader 2.0 and you have a good local Azure TTS voice for automatically reading whatever is sent to the clipboard. Now when you hit ‘Copy to Clipboard’ at the end of a Msty LLM response, the text will be read in a reasonably good AI voice.
Regrettably Msty can’t automatically ‘Copy to Clipboard’ at the end of each LLM response. It has to be done manually, by clicking the icon. Ideally, Msty would add a “copy each new sentence to the clipboard, on completion” option.
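In the meantime, because the adapter exposes the Azure voices through standard SAPI5, a do-it-yourself clipboard reader is only a few lines. A rough Python sketch of the same polling trick, assuming pywin32 and pyperclip are installed (this illustrates the idea, and is not Simple TTS Reader’s actual code):

# Rough sketch: read aloud whatever lands on the Windows clipboard,
# using the default SAPI5 voice (set your adapter voice as default).
import time
import pyperclip
import win32com.client

voice = win32com.client.Dispatch("SAPI.SpVoice")
last = pyperclip.paste()
while True:
    text = pyperclip.paste()
    if text and text != last:   # something new was copied
        last = text
        voice.Speak(text)       # blocks until finished speaking
    time.sleep(0.5)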
Having Azure voices locally on your PC may also interest animators who’d like to have such quality voices without going online. Of course, there are also dedicated TTS AIs now… but they can be very fiddly to set up, require many Gbs of downloads and disk space, and also a good graphics card to run them. The above fast Windows 11 solution requires a mere 80Mb in total and no graphics card.
Automatic Facefix
More progress in my learning ComfyUI for use with Poser renders. It’s not just about plugging in a character LoRA and hoping for the best, I’ve found. Automatic face masking and facefix + a character LoRA can be added to the SDXL workflow.
Here we see the ‘before’ and ‘after’ effect on a scene with H.P. Lovecraft in the middle-distance. Before, it’s vaguely like H.P. Lovecraft but also rather like William Burroughs; after, it’s much more recognizable as Lovecraft. The face in the picture is automatically identified and masked, then that area is regenerated at a higher resolution, so that the face can become more refined. And the LoRA tags along with that process, also making the character more recognizable.
Of course the problem is that it only works if the figure is alone. Otherwise every face in the scene gets a makeover. Another reason, I think, to make character cutouts and backgrounds separately and composite them in Photoshop. That said, I haven’t yet learned how to set up a way to manually paint in a face mask (update: this seems to be impossible while the workflow is running; easier just to output before/after images and Photoshop them together later).
New Poser comic from Brian Haberlin
Brian Haberlin has produced a Faster Than Light 3D Treasury Edition. 3D here means ‘red-blue glasses’ 3D. It’s actually two new stories from his Faster Than Light sci-fi world, not a collection of the old Faster Than Light series converted to stereoscopic 3D.
Presumably Brian’s sophisticated ‘Poser to comic-book’ method is used here as before, but now the Poser 3D models + scenes + a PoserPython script all give the ability to produce a stereoscopic (red and blue glasses) ‘3D view’. Aka ‘stereo anaglyph’.
Basically to get this effect you lower the camera dolly X value to something quite low (0.020) and then make a ‘right eye’ and ‘left eye’ render, then combine in software for making stereo anaglyphs. World of Paul has the details and the free PoserPython scripts.
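In PoserPython terms the idea is only a few lines. Here’s a minimal sketch; the “DollyX” parameter name and the ±0.020 offsets are my assumptions from the description above, and World of Paul’s free scripts remain the proper reference:

# Minimal PoserPython sketch of a left/right render pair for anaglyphs.
# "DollyX" is an assumed parameter name; adjust for your camera.
import poser

scene = poser.Scene()
cam = scene.CurrentCamera()
dolly_x = cam.Parameter("DollyX")
centre = dolly_x.Value()

# Render once per eye, offset either side of the original position.
for eye, offset in (("left", -0.020), ("right", 0.020)):
    dolly_x.SetValue(centre + offset)
    scene.Render()
    scene.SaveImage("png", "C:/renders/anaglyph_%s.png" % eye)

dolly_x.SetValue(centre)  # restore the camera

The red-cyan combine itself is then just a channel merge, which ordinary Python with Pillow can do outside Poser: take the red channel from the left-eye render, and the green and blue channels from the right-eye render…

# Combine the two renders into a red-cyan stereo anaglyph.
from PIL import Image

left = Image.open("C:/renders/anaglyph_left.png").convert("RGB")
right = Image.open("C:/renders/anaglyph_right.png").convert("RGB")
r, _, _ = left.split()
_, g, b = right.split()
Image.merge("RGB", (r, g, b)).save("C:/renders/anaglyph_3d.png")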
UK users should change their DeviantArt settings to ‘United States’
Just a thought. Given the fast-growing climate of censorship here in the UK, it’s probably best for a UK user to change their DeviantArt setting to read ‘United States’, ASAP. Ideally you’d do that once you have your VPN set up, and you appear to be surfing from the USA.
This is because DeviantArt are bound to be next in terms of having to block/remove all their UK users.
I find I can get to DeviantArt with the Mullvad VPN without any problems at all. Changing my region setting, while using VPN USA connection, didn’t trigger an email challenge.
AVFix fixed – now at Archive.org
AVFix is now at the Internet Archive, following the demise of the PoserLounge domain in the Netherlands. The little fix was a vital one for Poser 11 users, and loading it then allowed a number of older Python scripts to also load. I’ve also gone through the MyClone blog and fixed all the URLs that were pointing to the now domain-expired poserlounge.nl site.
Not needed for Poser 12 or 13.
New for Poser and DAZ – July 2025
Hurrah! Yes, another round-up of the new items for Bondware’s Poser or DAZ Studio. This time covering releases in July 2025.
As always these are ‘just my picks’, and I also try to warn of any pitfalls, e.g. people not knowing something is fan-art and thus not for commercial use. Also, at the end, I note the usual extras in terms of software, scripts, tutorials, and some Stable Diffusion image-generation items.
Science-fiction:
Cybotaur for G8 and G9. Would also suit steampunk, suitably re-textured.
Antwan for Genesis 9, a fairly convincing ‘insect alien’. I assume it’s not close to being some Star Wars fan-art.
SE Animal Korch, a strange alien swamp creature, and there’s also SE Animal Lityn for the skies of your alien planet.
Steampunk:
Steampunk Owl for DAZ Studio. I assume it’s not fan-art.
Time Traveller Props for DAZ Studio. Be aware that the Time Machine is effectively fan-art, closely derived from the George Pal movie of the famous novel The Time Machine, and should not be used commercially.
The Cerebral Orb, plus a simple but nicely-done control-cap.
For a more ornate control-cap for the Orb, how about the new Halo Headdress?
A free Aeolipile for DAZ and also in .OBJ format. Five parts.
Fantasy:
Medieval Forest Village for DAZ. There’s also a home interior and a forest shrine, available separately.
Free for Poser, ColorHue Layers for LF2/LH2/LF/LH, and also ColorHue Layers for Poser 11. Basically, give your faery La Femme a suitable skin shade.
Storybook:
Not much this month, just Ice Cream Bars for DAZ.
Toon:
Little creepy Bot for DAZ Studio.
Halloween:
DoubleD for DAZ Studio, a two-headed hell-dog with pose controls.
Dungeon Hound for DAZ. Probably looks better in deep shadow with rim-lighting.
Figures, accessories, and everyday scenes:
FG Poker Night, a complete modern-day room with table and all accessories. Possibly useful for making tutorial animations for learning the game?
Animals:
Goat by AM Bundle for DAZ. Doesn’t require a dependency, and you can age it, add curly horns etc.
Nature’s Wonders Snakes of the World, Vol. 2, for Poser and DAZ. Includes a Boa Constrictor.
Need to have your character fend off the giant snakes? There’s now a Fully Poseable Bull Whip.
Raw Capybara for DAZ Dog 8, with fur.
Landscapes:
Undergrowth for DAZ Studio, a sort of ‘Florida swamp’ miniature.
Butterfly Garden, plants for your Ken G. butterflies. For Poser and DAZ.
Historical:
Vanishing Point’s Renaissance Courtyard for Poser. Probably a bit basic by modern standards, but the geometry is good, and AI can now do wonders with even a basic render.
dforce Short Morphing Hair for Genesis 9. Early 1960s hair styles.
Soviet-era Helicopter for DAZ.
Japanese Backstreets for DAZ, 1990s otaku style.
Tutorials:
In Poser, a trick to get the brow material settings to render a black brow on figures. Potentially useful for filtering the render for comics, where you’d want a clear black brow.
For Poser on YouTube, useful tips on saving poses to the Library, with morphs.
For Poser on YouTube, why the group tool is better than parenting.
Also on YouTube, using DAZ Studio and DaVinci Resolve (the free powerful video-editor).
Scripts and Plugins:
For Poser, an update to one of the nodes in EZSkin, used to give older Poser figures the ability to be rendered in SuperFly. (Scroll down the forum thread to “Here is an update to the EZ_CycleSkin (PrincipledBSDF) plugin.”)
BJ Layers Plugin for DAZ. Can’t quite work out what it would do, but I guess seeing it working in a video might help.
Software:
Winxvideo AI 4.1, effectively a $50 poor man’s Topaz Video AI for the desktop PC. Now greatly improved as an AI-powered video upscaler in the new 4.1 version.
Ultimate TTS Studio for Windows / NVIDIA. A sort of ComfyUI, but for generating text-to-speech (TTS) rather than images. Though note that Comfy has audio capabilities too, if you install Torch Audio.
Blender 4.5 final LTS is now available, with updates that potentially offer a more robust and faster Poser/DAZ -> .FBX -> Blender -> ComfyUI rendering pipeline.
iClone now officially has robust ComfyUI integration, for rendering with AI. I see that Keyshot has also officially added AI editing, probably Kontext based.
The ComfyUI rival Invoke 6.0 is now available. They appear to be getting Blender-itis, in that the UI is constantly being changed and thus becomes a horrible ‘shifting target’ in terms of learning the software or making tutorials. Anyway, I’ve moved to ComfyUI now and have deleted Invoke, so it no longer affects me.
I see MediaChance’s Dynamic Auto Painter (DAP) image filter system and its excellent NovelForge AI for writers both have a $10 discount. Incidentally, I can report that NovelForge can easily connect to Msty-hosted local AIs (‘LLMs’). In NovelForge just select “LLM Studio” as your LLM host option, but input the Msty local URL of http://localhost:10000. Then press ‘get models’, choose a model to use, and ‘Save’ the settings.
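Incidentally, anything else that speaks the OpenAI-style API should be able to use that same local URL. A quick Python sketch, assuming Msty exposes the usual /v1/chat/completions route at that port (the model name here is a placeholder; use whatever ‘get models’ lists):

# Quick sketch: query a Msty-hosted local model over the same URL
# NovelForge uses above. Assumes an OpenAI-compatible endpoint.
import requests

resp = requests.post(
    "http://localhost:10000/v1/chat/completions",
    json={
        "model": "your-model-name",  # placeholder; list via 'get models'
        "messages": [{"role": "user", "content": "Say hello in one line."}],
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])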
Stable Diffusion learning:
My new tutorial on setting up Working OpenPose with face and hands, in ComfyUI. Should work with any Poser/DAZ render of a figure. I believe there’s also a four-legged Openpose model, and I assume my workflow will also work with that. So theoretically you could get horses etc into Openpose.
Ultimate Openpose editor, for when you need extreme fine-tuning.
Removing colour from comic-book lineart with Flux Kontext. Kontext can also remove hatch and dash shading (see naked Moebius!).
Endless-Nodes for ComfyUI had a major update in June 2025. It includes the useful Fontifier — change the fonts in your Comfy workflows. Useful for those who, as the author says, have a “4K monitor and old eyes”.
The important WAS Node Suite for ComfyUI is now ‘WAS Node Suite – Revised’ at a new GitHub location, and the old one should be replaced. The Suite is important partly because it’s so widely used in shared workflows. But also because it has a “Save Image” node which can force 300dpi final output — important for those generating images destined for print.
Several useful Stable Diffusion models are on torrents at the Internet Archive, possibly useful for those starting out, on slow connections, or blocked from CivitAI by censorship. The ‘Photon v1’ model is an excellent starter for Stable Diffusion 1.5 photography emulation, and also very useful as a base test for LoRAs, since it’s not going to interfere with them in terms of style. ‘RealVisXL v50 Lightning Baked VAE’ is your starter turbo/lightning fast SDXL model, and the vanilla ‘Realism Engine SDXL v3.0 VAE’ is for when your SDXL workflow needs wiggle-room that the fast version can’t offer.
Services:
Here in the increasingly-censored UK, I’m now using the always-running Mullvad VPN for all online work. A reasonable £48 for a year and a flat no-nonsense fee, paid for anonymously via a simple scratch-card sold and delivered by Amazon. It works lovely, and seems to be giving me faster Internet in some cases (YouTube, Archive.org and a few others)! I guess that’s because I’m now bypassing some ISP congestion in London, by hopping up to Manchester and then over to the east coast of the USA and thence to… free-speech and freedom! Something which seems increasingly rare here in the UK, and is likely to get scarcer. I recommend the service, but be warned that it doesn’t support huge streaming services (Netflix etc) and that Hostinger rented websites (such as this MyClone blog, regrettably) are unreachable until you turn off the VPN. It appears that Hostinger blocks all VPNs, which regrettably I wasn’t aware of when I purchased the web space from them. Here’s the workaround…
1) In Mullvad’s settings click on “split tunnelling”, where you can easily allow non-VPN Internet access for a secondary Web browser of your choice.
2) Install and set the freeware Browser Tamer to route / auto-switch any Hostinger-hosted URLs to your secondary browser, instead of the usual browser.
3) From the Firefox or Chrome Store install the free Browser Tamer extension / add-on in your main browser. Re-start.
4) Now you simply change your main Web browser’s ‘VPN un-reachable’ bookmark URL to read as follows: x-bt://https://ur_lovely_site.com/ (or whatever you want to reach). The x-bt:// is the prefix that tells the Browser Tamer extension to send the clicked bookmark URL to Browser Tamer, which now knows that it must launch your secondary browser to that URL. The Edge browser, with almost no extensions installed, is superfast to launch and thus may be a good secondary browser to use.
That’s it for now. More in August/September!
iClone goes full AI, with robust ComfyUI integration
Reallusion’s iClone embraces AI image rendering, with a new official ComfyUI integration. It looks like they’re going all-in for AI, regardless of the AI doom-moaners. Good for them.
We’re excited to launch the Open Beta of AI Render plugin, a powerful and completely free tool that bridges real-time 3D animation with AI-powered rendering, seamlessly integrated into the ComfyUI workflow. … AI Render uses custom nodes that connect Reallusion’s 3D environments directly into the ComfyUI ecosystem.
Now in Open Beta. Officially it supports only Stable Diffusion 1.5 and WAN 1.3b video, though node-wranglers may find information about other types of AI image-gen on the Reallusion forums. The new, relatively lightweight Wan 2.2 5B model will likely be of most interest there.
Seems to have a standard approach. The Comfy nodes appear to take Depth, Pose, Normals and Edge preprocessor data simultaneously from iClone’s real-time viewport, also marrying them with one-click style presets (it’s an IP-adapter) for refinement of output. The two comics presets are manga style, so there’s no western comics style as yet.
Interestingly also… “AI Video Generation models optimized for consistent frame-by-frame results”
How consistent? That would be the worry there. Still images are one thing but, even with four Controlnets working at once, are we still going to get slightly “wobbly” faces? One would also have to worry about shifting colours.
New DAZ to/from Blender plugin
A new DAZ to/from Blender plugin for Windows, and on the DAZ Store.
Import or export complete scenes with full material and lighting setups … automated import/export systems
Sounds good, especially now that Blender has ComfyUI integration for AI image generation. For DAZ Studio 4.23 or higher, Blender 4.0 or higher.
DAZ to Blender Bridge is free, though it turns out it’s still the older version. The new one is actually in a package called FAST Animation Studio Tools: DAZ to Blender Pipeline, which is $150(!), currently discounted.
The Blender to DAZ plugin is $50, but also currently discounted.
Keep in mind that Blender now has robust new .FBX import for free, so try that before paying $150.
Back to Lovecraft gazing at the stars
More fun with ‘Poser to Stable Diffusion’, now that I’ve moved to Windows 11 Superlite and have the AI stuff mostly set up.
This time I can use SDXL rather than SD 1.5. I think regular readers of this blog will recall the previous attempts with the same Poser test render, and will see quite a difference in the result.
To get this I made a ComfyUI workflow featuring an SDXL turbo model powering Img2Img, plus three LoRAs, and a lineart Controlnet. Not sure the latter is really needed (a relic of the old workflow), provided the colour stays steady from image to image and thus from panel-to-panel and page-to-page in a comic. Or I guess I could go all-in and try four different Controlnets working at once, and see how stable the results are compared to the Poser render.
But this is just a first experiment, and it’s encouraging to get this far immediately.
On the other hand, it’s inventing things like the suit pockets and a waistcoat. Which is annoying, since consistency is needed. The reason to use Poser is to have the results be consistent, not full of little differences that either take a lot of postwork to fix, or which are lazily left in and annoy the heck out of the reader. (Update: prompt for a “dark 2-piece suit” to get rid of the waistcoats.)
The result comes in at a healthy 1432px (in about 12 seconds), from a 768px starter Poser render. Meaning that cutout and de-fringing is easier in Photoshop. Here the result is cutout, defringed, and given a Stroke to firm the holding-line. The shadows have also been lifted a little, to give it a more graphic look.
Next step will be to get some more SDXL Controlnets, and output a variety of different Poser renders and then see what combination works the best with this workflow.
Working OpenPose with face and hands, from any image, in ComfyUI
Working OpenPose with face and hands, from any image, in ComfyUI. Works with quick real-time renders from Poser.
1. Use your existing ComfyUI or, if new, try the ComfyUI Windows portable. For the Portable, if ComfyUI fails to load all its UI elements on startup, cut out the entire custom_nodes folder and then place the more immediately useful individual node folders back one by one. I found that one of the custom_nodes was stopping the full UI from loading.
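If you’d rather not cut folders around by hand, a throwaway Python sketch can park them all at once. This assumes the ‘.disabled’ folder-name suffix convention that ComfyUI-Manager uses to skip a node pack on startup; re-enable a pack by removing the suffix:

# Throwaway sketch: disable every custom node pack in one go by
# renaming each folder with a ".disabled" suffix, then re-enable
# them one by one until the UI breaks again.
import os

NODES = r"C:\ComfyUI_Windows_portable\ComfyUI\custom_nodes"
for name in os.listdir(NODES):
    path = os.path.join(NODES, name)
    if os.path.isdir(path) and not name.endswith(".disabled"):
        os.rename(path, path + ".disabled")
        print("disabled", name)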
2. Into your ComfyUI, install the ComfyUI ControlNet Auxiliary Preprocessors package, which comes as one big bundle… and one of these preprocessors is for OpenPose.
3. Into your ComfyUI you also install DWPose as ComfyUI-Dwpose-Tensorrt to speed things up. You’re on Windows and NVIDIA, I assume.
4. Download two required Torch files dw-ll_ucoco_384_bs5.torchscript.pt and yolox_l.onnx. About 500Mb in total, and they’re open on HuggingFace. There are two folders to manually put these in, for me…
C:\ComfyUI_Windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts\yzd-v\DWPose\
C:\ComfyUI_Windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts\hr16\DWPose-TorchScript-BatchSize5\
I was told to put the files in the first yzd-v folder and then the workflow gave me a ‘not found, download from Huggingface’ error. However, I then also tried copying the same files to the second hr16 folder and… the Openpose workflow worked.
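Since the same two files need to land in both of those deep folders, a throwaway Python sketch saves some clicking. The paths are as given above; adjust the root for your own install, and the sketch assumes the downloaded files sit next to the script:

# Throwaway sketch: copy the two downloaded DWPose files into both
# ckpts folders listed above.
import os
import shutil

ROOT = r"C:\ComfyUI_Windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts"
FILES = ["dw-ll_ucoco_384_bs5.torchscript.pt", "yolox_l.onnx"]
DESTS = [
    os.path.join(ROOT, r"yzd-v\DWPose"),
    os.path.join(ROOT, r"hr16\DWPose-TorchScript-BatchSize5"),
]

for dest in DESTS:
    os.makedirs(dest, exist_ok=True)
    for name in FILES:
        shutil.copy2(name, dest)
        print("copied", name, "->", dest)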
5. My simple Openpose workflow for Comfy, working. Just drop an image in and ‘Run’. Should take about five seconds to produce an Openpose image.
As you can see, you can switch off face and/or hands if they’re not required.
You then save out this special image, and drop it into a Controlnet workflow which has an openpose model (here for SDXL models) linked to it…
Once downloaded I renamed this openpose model to openpose-sdxl-diffusion_pytorch_model.safetensors so that I know it’s for SDXL. It is copied into C:\ComfyUI_Windows_portable\ComfyUI\models\controlnet\SDXL_controlnet\
On more powerful PCs you can link these two workflows together in the same workflow. But with more basic PCs, it seems best to try to limit how much the workflow is being asked to load all at once.
The resulting suitably-prompted image then conforms to the input Openpose pose. Use a setting of 0.85 to give the Controlnet more wiggle-room than a strict 1.0 setting.
All free. There’s also a paid plugin on Renderosity, which does this for Poser 12 and 13.
I also tried to get depth (aka depthmap) Controlnet working, but with no success at all. I must have downloaded 20 workflows and countless models, custom_nodes and preprocessors, and not a damn one worked. Errors every time. I give up on depth in ComfyUI, and will just work with the working MistoLine lineart and OpenPose Controlnets.
Release: Blender 4.5 final LTS
The Blender 4.5 Long-Term Support (LTS) release is now available in its final version.
Yes, yet another UI makeover. Ugh.
But of special interest to Poser and DAZ users may be the new “robust” .FBX importer. Note, however, that the new faster importer is still “experimental”. Having moved away from Python, it apparently avoids many of the usual roadblocks hit when trying to move 3D models + textures from one software to another. It can import “very old” .FBX files which may have been saved a decade or more ago, but can also handle newer OpenPBR textures. It can even handle animations. It’s many times faster. Nice.
Note that the OBJ import-export also gets an overhaul.
Potentially then this could open up a robust and much faster Poser/DAZ -> .FBX -> Blender -> ComfyUI rendering pipeline.
Update: I now find that ComfyUI has native import of .OBJ and .FBX and nodes that can visualise them rotating in 3D space. There is a question-mark over size and robustness, though, for me. Seeing a demo of a simple lightweight mesh of a garden gnome is one thing, but I suspect that displaying and manipulating a La Femme .FBX export is quite another.
Update: Yes, I tested the 3D node in Comfy. It imports fine, but the basic window is obviously not meant for anything big and complex. Thus Blender + Comfy running as an in-UI plugin seems worth exploring. Ideally we could do this with Poser, but the Comfy integration would likely have to be done via a script calling a set of real-time renders, rather than an in-UI window.
The move to a new OS is complete, and these are the goodies that now become possible…
Having made the leap to Windows 11 Superlite, I now have it nailed down and the AI image generators I wanted. I’m working on ComfyUI Portable, which was updated to the latest version for Flux Kontext Dev. I suspect I won’t be going back to InvokeAI much, now that I have Comfy made… comfy. It’s not so daunting once you get the hang of it. Here’s what I now have…
* SDXL:
Can be made blisteringly fast with realvisxlV50_v50LightningBakedvae.safetensors or the amazingly fast/good-quality splashedMixDMD_v5.safetensors. From the latter, four seconds on a 3060 12Gb for this image at this 1280px size. No postwork…
Four seconds! Works with mistoline_rank256.safetensors as the single universal lineart controlnet (not used in the above image). There are two slight disadvantages to the otherwise awesome splashedMixDMD_v5 model: 1) you get no negative prompt, since you have to work at CFG 1.0 and thus the negative prompt is ignored; and 2) not all SDXL LoRAs appear to work with splashedMixDMD. Still, some nice ones do, such as the comics one you see in action above. I think I have a new favourite go-to for experimenting with style-changing Poser renders with Controlnet. Maybe also OpenPose Controlnet, since there’s at last a good one for SDXL.
Theoretically, since I also have the original vanilla SDXL base model, I could also now train up some LoRAs myself.
Also of note is the SDXLFaetastic_v24.safetensors which is dedicated to western fantasy artwork (painting, lineart, charcoal etc). Perhaps useful as a backup when a LoRA fails to work in a turbo model.
* Illustrious (SDXL):
Illustrious models are supposed to be ‘SDXL for illustration’ but appear to be overwhelmingly anime (ugh); at least that makes the good ones excellent at poses and action. I’m not hugely impressed by using the more interesting LoRAs with Nova Flat XL v3, the model that I was recommended to try for making ‘flat’ comics images. The model is indeed great for what it’s meant to do, but I didn’t get much from using it with LoRAs such as Ligne Claire (clear-line Eurocomics style) or the Moebius style LoRA. But maybe that’s because I haven’t played around with them long enough, or got them into a good Illustrious workflow with suitable prompts that shift it away from anime. Or maybe I need another Illustrious base model.
* Flux Kontext Dev:
Somewhat slow, but with the AurelleV2 LoRA it can take a Poser render and generate a very convincing watercolour + lineart which exactly aligns when laid over the top of the starting Poser render. And which keeps the base colours. Good for illustrating children’s storybooks then. It can also do its other ‘I am a Photoshop Wizard’ magic, albeit slowly — such as merging two images and re-posing, removing items including watermarks, removing or changing colour, re-lighting, placing a face into a new environment and position, etc. Useless for auto-colourising greyscale, compared to online services such as Palette and Kolorize.
* WAN 2.1 Text to Video / Single Image:
Yes, I even tried WAN on a humble 3060 12Gb card. Working, with two turbo LoRAs running in tandem. 80 seconds for a nice 832 x 480px single frame, with a workflow optimised for single images. Slow, but it can be done and the results are very cohesive and convincing as photography. This success suggests that a 16fps text-to-video at that size would take maybe 2 hours for five seconds (five seconds at 16fps is 80 frames, at roughly 80 seconds each), and making a single-image preview first would reassure one about the eventual results.
* WAN 2.1 Image to Video:
Working, with a turbo LoRA. 36 minutes for 5 seconds at 480 x 368px (81 frames at 16fps). Initial tests show it works well and looks good (spaceship entering planetfall from orbit), and it’s feasible in terms of time. So 832 x 480px, with more quality, might at a guess be three hours for six seconds at 16fps? That would be perfectly feasible to run overnight. After a week one would have some 40 seconds of video. And a hefty electric bill in due course, no doubt. Wan 2.2 is due soon, though, and will add a lightweight 5B model with a better understanding of camera shot-names and camera movement, and it may well also be quicker.
There’s a lot more to explore, such as tiled upscaling, facerestore, character adapters, normal map Controlnets etc. But for now I’m pleased I’ve made the leap to an OS where I can use more than SD 1.5 and SD 2.1 768. I’ll still go back to them in due course, especially now I can use them with turbo workflows. They can also be used in tandem with other types of model: for instance, Illustrious for coherent action scenes, then trying to get the result into photoreal + nice faces with SD 1.5. It’ll also be interesting to see what ‘SD 2.1 768 to Illustrious’ can do with a Syd Mead landscape.
And I got all the above just in time, since CivitAI is to be effectively banned here in the UK, from tomorrow!