More on devising my Poser to SD workflow. I was happy with the look I had got, but I took time to look into the ‘jaggies’ in my Poser to Stable Diffusion results. The prime suspect was the relatively small 768px input .PNG, used for both Img2Img and the ControlNet. The output is too jaggy along many lines, despite Poser’s real-time Comic Book Preview render being anti-aliased. How to fix it?
First I ran the source through Vector Magic at its medium detail level, which applies smoothing as part of the vectorisation.
Much better linework from SD with this as the source, and the dotted suit-lapels vanish. But it loses definition on the suit buttons, there’s a weird long fingernail, and the sleeve is a little rough. There are other glitches elsewhere.
The rough edges, such as that sleeve, are not really a problem: add a centred ‘holding-line’ outline stroke in Photoshop and the bobbliness is covered up. But shapes shifting and glitching is a real problem for replacing the colour via a full Poser Preview render, used as a blending layer in Photoshop. So vectorisation of the 768px source is perhaps not the way to go. It did at least seem to confirm that the problem is not some inherent limitation of the Img2Img or ControlNet process.
Nor did OLM Smoother, the free anti-aliasing filter for Photoshop from Japan, do anything useful here. Its effect is quite subtle and there’s very little difference in the end result.
What about the G’MIC Repair / Smooth options? Nope. Though there is a way to use a G’MIC custom filter on the SD output to get an anti-aliased ‘inked’ layer that might overlay the SD output lines in Photoshop. Nice, but it’s not consistent across images, and it moves things further away from replicability across the different images on a comic-book page.
A custom variant of FXEngrave.
In the end, after further tests, I appear to have discovered that… source DPI affects style in Stable Diffusion 1.5. Who knew? A 72dpi source and Img2Img = the roughly drawn comic book seen at the start of this post. Exactly the same settings with a 300dpi source give a far more refined and smoother-lined style. I guess the added DPI gives SD even more ‘wiggle room’ to add the style, in an Img2Img process that moves the source from 768px to 1024px?
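For anyone wanting to reproduce the test pair: the DPI figure is only a metadata tag in the PNG (the pHYs chunk), so the pixel data stays identical and Pillow can rewrite the tag on re-save. A minimal sketch, with placeholder file names:

```python
from PIL import Image

# The DPI is just a resolution tag in the PNG metadata;
# the 768px of pixel data is untouched by re-saving.
src = Image.open("poser_preview_768.png")
print(src.info.get("dpi"))  # e.g. (72, 72), or None if no tag was saved

# Same pixels, different resolution tag: one file per test.
src.save("source_72dpi.png", dpi=(72, 72))
src.save("source_300dpi.png", dpi=(300, 300))
```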
Next step, then, is to experiment with 120dpi and 150dpi to find a balance point. But even before that, a Python script to automate the testing and save time and gruntwork; a first sketch of it follows.
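A minimal sketch of such a script, assuming the AUTOMATIC1111 web UI is running locally with its API enabled (the --api flag). The endpoint and payload fields are from that project’s Img2Img API; the prompt, settings and file names are placeholders to be swapped for the real ones:

```python
import base64
import requests
from PIL import Image

API_URL = "http://127.0.0.1:7860/sdapi/v1/img2img"  # AUTOMATIC1111 webui, --api
SOURCE = "poser_preview_768.png"  # placeholder source file

for dpi in (72, 120, 150, 300):
    # Re-tag the same source with the DPI under test, then send it to Img2Img.
    tagged = f"test_{dpi}dpi.png"
    Image.open(SOURCE).save(tagged, dpi=(dpi, dpi))
    with open(tagged, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()

    payload = {
        "init_images": [b64],
        "prompt": "comic book inked art",  # placeholder prompt
        "denoising_strength": 0.5,         # placeholder setting
        "width": 1024,
        "height": 1024,
        "steps": 30,
    }
    r = requests.post(API_URL, json=payload, timeout=600)
    r.raise_for_status()

    # The API returns the generated images as base64 strings.
    with open(f"result_{dpi}dpi.png", "wb") as out:
        out.write(base64.b64decode(r.json()["images"][0]))
```

One run then produces a result per DPI setting with everything else held constant, which should make the 120dpi/150dpi balance point easy to spot side by side.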