Definitely getting there, in the quest to use Stable Diffusion 1.5 like a Photoshop filter. This is another follow-on from my two previous tutorials for Poser with SD.
The final result
To obtain this final result, for the Img2Img source I started again with exactly the same Poser scene, camera and lighting as before, but this time I dropped AS’s Hanyma Platform for Poser (now no longer sold) into the scene as a background prop. Poser’s Comic Book inking was applied to both the figure and the prop.
Raw scene in the Poser UI.
Then I made a quick Preview render of this scene at 768px (remembering to boost the texture sizes up from 512), and then in Photoshop used the Glitterato plugin to add a quick starry sky.
In InvokeAI this replaced the previous figure-only Img2Img image, but the Firefly lineart Controlnet image stayed the same, thus giving a fixed figure outline that matches that of the Poser render — possibly important for later consistent colouring.
In this experiment I also added a Moebius LoRA, which knows what our hero Lovecraft looks like. No additional prompting was needed to account for the backdrop, since the CFG is so low.
The final result at 1024px
It’s all going a bit ‘black on black’ (arrgh!) for this quick demo, but a background-free SD generation of the figure alone can then be used in Photoshop as a mask (Ctrl + click on the layer, then invert the selection) to fade or lighten the background a touch (as I believe Brian Haberlin does), so as to make the character stand out a little more. And ideally your comic script would try to avoid ‘black cat in a coal cellar’ settings, for this very reason.
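For the curious, that ‘select figure, invert, lighten the background’ step can be sketched outside Photoshop too. This is only a rough illustration, with synthetic images standing in for the actual renders and Pillow’s brightness enhancer standing in for a hand-tuned fade:

```python
from PIL import Image, ImageEnhance

# Synthetic stand-ins for the real renders (in practice you would
# Image.open() the composed frame, and a figure-only generation
# that still has its transparency).
frame = Image.new("RGBA", (64, 64), (40, 40, 40, 255))   # dark composed frame
figure = Image.new("RGBA", (64, 64), (0, 0, 0, 0))       # transparent canvas
for x in range(24, 40):                                  # opaque block stands in
    for y in range(16, 48):                              # for the character
        figure.putpixel((x, y), (10, 10, 10, 255))

# The figure's alpha channel plays the role of Ctrl + click on the layer.
figure_mask = figure.split()[3]

# Lighten the whole frame a touch (the 'fade the background' step)...
lightened = ImageEnhance.Brightness(frame).enhance(1.5)

# ...then paste the original frame back through the figure mask, so only
# the background outside the character ends up lightened.
result = lightened.copy()
result.paste(frame, (0, 0), figure_mask)
```

In practice the fade amount would be tuned by eye, and you would keep the faded background on its own layer for later tweaking.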
Simply using the new Img2Img image as the Controlnet source, replacing the Firefly outlines render? Nope, that doesn’t keep the detailing on the character, or keep him consistent across a quad of generations. The Canny Controlnet needs to be focused just on the character, in the same way the comic reader’s eye is. Going from 768px to 1024px in the Img2Img seems to give SD some creative wiggle-room, despite the low CFG. And since it’s a low CFG for the Img2Img, there’s not much shifting in the details of the background either. This seems to be the sweet spot: a good model; Img2Img with a low CFG but a slight upscale; and a Controlnet fed pure Poser Firefly lineart, to keep the figure stable and in lockstep with your Poser renders. Presumably all this would also work for two characters interacting.
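On the Controlnet side, what the Canny preprocessor effectively does to the Firefly lineart is reduce it to a bare edge map, so the net locks onto the figure’s outline and nothing else. A minimal illustration of the idea, hedged accordingly: Pillow’s FIND_EDGES filter is a crude stand-in for true Canny, and a synthetic black blob stands in for the actual lineart render.

```python
from PIL import Image, ImageFilter

# Synthetic stand-in for the figure-only Firefly lineart render
# (in practice: Image.open("firefly_lineart.png").convert("L")).
lineart = Image.new("L", (64, 64), 255)   # white page
for x in range(24, 40):                   # solid black block standing in
    for y in range(16, 48):               # for the inked figure
        lineart.putpixel((x, y), 0)

# An edge filter keeps only the figure's outline; a Canny Controlnet
# preprocessor does the same job, just with proper edge detection.
edges = lineart.filter(ImageFilter.FIND_EDGES)
```

The pipeline then pairs an edge map like this (as the Controlnet image) with the composed scene render (as the Img2Img source).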
By having both character and backdrop generated by SD, there’s some SD gloop and later a loss of flexibility when putting the frame together in Photoshop. But one also avoids the need to mask, extract, defringe, colour-balance etc. It may be possible to prompt for lighting and get it, but I haven’t tried that yet.
Obviously for a comic you’d also start breaking free of the stock camera lens and use foreshortening etc for a more dynamic look. Poser has a special camera dial that makes that very easy.



