{"id":23720,"date":"2025-07-26T20:18:33","date_gmt":"2025-07-26T20:18:33","guid":{"rendered":"https:\/\/jurn.link\/dazposer\/?p=23720"},"modified":"2025-08-14T17:00:51","modified_gmt":"2025-08-14T17:00:51","slug":"back-to-lovecraft-gazing-at-the-stars","status":"publish","type":"post","link":"https:\/\/jurn.link\/dazposer\/index.php\/2025\/07\/26\/back-to-lovecraft-gazing-at-the-stars\/","title":{"rendered":"Back to Lovecraft gazing at the stars"},"content":{"rendered":"<p>More fun with &#8216;Poser to Stable Diffusion&#8217;, now that I&#8217;ve moved to Windows 11 Superlite and have the AI stuff mostly set up.<\/p>\n<p><a href=\"https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/07\/poserdemo-sdxl.jpg\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/07\/poserdemo-sdxl-1024x607.jpg\" alt=\"\" width=\"640\" height=\"379\" class=\"aligncenter size-large wp-image-23721\" srcset=\"https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/07\/poserdemo-sdxl-1024x607.jpg 1024w, https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/07\/poserdemo-sdxl-300x178.jpg 300w, https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/07\/poserdemo-sdxl-768x456.jpg 768w, https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/07\/poserdemo-sdxl-1536x911.jpg 1536w, https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/07\/poserdemo-sdxl-2048x1215.jpg 2048w\" sizes=\"auto, (max-width: 640px) 100vw, 640px\" \/><\/a><\/p>\n<p>This time I can use SDXL rather than SD 1.5.  I think regular readers of this blog will recall the previous attempts with the same Poser source, and see quite a difference in the result. I&#8217;m using the same test render.<\/p>\n<p>To get this I made a ComfyUI workflow featuring an SDXL turbo model powering Img2Img, plus three LoRAs, and a lineart Controlnet. 
Not sure the latter is really needed (a relic of the old workflow), provided the colour stays steady from image to image and thus from panel-to-panel and page-to-page in a comic. Or I guess I could go all-in and try four different Controlnets working at once, and see how stable the results are compared to the Poser render.<\/p>\n<p>But this is just a first experiment, and it&#8217;s encouraging to get this far immediately.<\/p>\n<p>On the other hand, it&#8217;s inventing things like the suit pockets and a waistcoat, which is annoying since consistency is needed. The reason to use Poser is to have the results be consistent, not full of little differences that either take a lot of postwork to fix, or which are lazily left in and annoy the heck out of the reader. <em>(Update: prompt for a &#8220;dark 2-piece suit&#8221; to get rid of waistcoats)<\/em><\/p>\n<p>The result comes in at a healthy 1432px (in about 12 seconds), from a 768px starter Poser render, meaning that cutout and de-fringing are easier in Photoshop. Here the result has been cut out, de-fringed, and given a Stroke to firm up the holding-line. The shadows have also been lifted a little, to give it a more graphic look.<\/p>\n<p>Next step will be to get some more SDXL Controlnets, output a variety of different Poser renders, and then see which combination works best with this workflow.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>More fun with &#8216;Poser to Stable Diffusion&#8217;, now that I&#8217;ve moved to Windows 11 Superlite and have the AI stuff mostly set up. This time I can use SDXL rather than SD 1.5. 
I think regular readers of this blog will recall the previous attempts with the same Poser source, and see quite a difference [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[11,13,3,28],"tags":[],"class_list":["post-23720","post","type-post","status-publish","format-standard","hentry","category-comics","category-companion-software","category-poser","category-posertosd"],"_links":{"self":[{"href":"https:\/\/jurn.link\/dazposer\/index.php\/wp-json\/wp\/v2\/posts\/23720","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/jurn.link\/dazposer\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/jurn.link\/dazposer\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/jurn.link\/dazposer\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/jurn.link\/dazposer\/index.php\/wp-json\/wp\/v2\/comments?post=23720"}],"version-history":[{"count":14,"href":"https:\/\/jurn.link\/dazposer\/index.php\/wp-json\/wp\/v2\/posts\/23720\/revisions"}],"predecessor-version":[{"id":23824,"href":"https:\/\/jurn.link\/dazposer\/index.php\/wp-json\/wp\/v2\/posts\/23720\/revisions\/23824"}],"wp:attachment":[{"href":"https:\/\/jurn.link\/dazposer\/index.php\/wp-json\/wp\/v2\/media?parent=23720"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/jurn.link\/dazposer\/index.php\/wp-json\/wp\/v2\/categories?post=23720"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/jurn.link\/dazposer\/index.php\/wp-json\/wp\/v2\/tags?post=23720"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}