I’ve now completed an automated Python script for the bulk of the Poser to Stable Diffusion workflow…
The one-click automated output is the set of .PNG files, plus the same .PNGs automatically stacked into a layered Photoshop .PSD and saved.
For later work in Photoshop, the passes are: a Firefly AO pass (Ambient Occlusion, adding subtle shadows if needed); a real-time Preview faux ‘clown pass’ (aka ToonID) for easy masking, though the Meshbox HP Lovecraft figure doesn’t have much to mask here; and a Preview colour render, for blending colours back in so they stay consistent from panel to panel in a comic, and for dropping in a backdrop. Their output folder is datestamped and also carries the name of the Poser scene file. Then the Poser scene is reverted to where it started, safe and sound.
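For anyone curious about the Poser side, here’s a minimal sketch of that render-and-save loop and the datestamped folder naming, written against PoserPython as it stands in Poser 11 (Python 2.7). It’s only an illustration of the idea, not the actual script: SCENE_PATH, the set_up_pass() stub and the exact pass list are placeholders, and the render-engine constants are the ones in the PoserPython methods manual, which may differ between Poser versions.

    # Rough sketch: one render per pass, saved as .PNG into a datestamped
    # folder named after the scene file, then the scene reloaded from disk
    # to revert it. Placeholders are marked as such.
    import os
    import datetime
    import poser

    SCENE_PATH = r"C:\PoserScenes\lovecraft_panel.pz3"   # placeholder: set by hand

    scene = poser.Scene()

    # Datestamped output folder that also carries the scene file's name.
    stamp = datetime.datetime.now().strftime("%Y-%m-%d_%H%M")
    scene_name = os.path.splitext(os.path.basename(SCENE_PATH))[0]
    out_dir = os.path.join(os.path.dirname(SCENE_PATH), stamp + "_" + scene_name)
    if not os.path.exists(out_dir):
        os.makedirs(out_dir)

    # Pass names follow the post; the real pass set-up (materials, lights,
    # display mode) is elided here.
    passes = [("Lineart", poser.kRenderEngineCodeFIREFLY),
              ("AO",      poser.kRenderEngineCodeFIREFLY),
              ("ToonID",  poser.kRenderEngineCodePREVIEW),
              ("Colour",  poser.kRenderEngineCodePREVIEW)]

    for pass_name, engine in passes:
        # set_up_pass(pass_name)  # hypothetical helper: configure the scene per pass
        scene.SetCurrentRenderEngine(engine)
        scene.Render()
        scene.SaveImage("png", os.path.join(out_dir, scene_name + "_" + pass_name + ".png"))

    # Revert the scene to where it started by reloading the saved file.
    poser.OpenDocument(SCENE_PATH)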
The Comic-Book Preview and Firefly lineart layers are then merged in Photoshop with an Action, and the result is dropped to the desktop and from there manually dragged over to SD to be used as the Img2Img source.
The SD result then gets saved and manually opened in Photoshop, where an automated Action takes over and restores the Poser Preview render colours. After that it’s optional to add a holding line, mask areas, add the very subtle 3D shading of the ligne claire (‘clear line’) comic style, or cut out the plain backdrop and add a new one.
Had to go back to Poser 11 for this script, because Poser 13 doesn’t appear to support options.BucketSize(256) in Python. Without Firefly set to a big bucket size, the three Firefly render passes are rather slow.
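As a minimal sketch, assuming the FireFly options object returned by CurrentFireFlyOptions(), the bucket-size call named above can be wrapped in a guard so the same script still runs (only more slowly) on a version that doesn’t accept it; what Poser 13 actually raises here is an assumption on my part.

    import poser

    options = poser.Scene().CurrentFireFlyOptions()
    try:
        # The call that works in Poser 11: big buckets keep the three
        # Firefly passes from crawling.
        options.BucketSize(256)
    except (AttributeError, TypeError):
        # Poser 13 doesn't appear to expose this to Python; fall back to
        # whatever bucket size is set in the render settings UI.
        pass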