{"id":23082,"date":"2025-05-04T18:19:13","date_gmt":"2025-05-04T18:19:13","guid":{"rendered":"https:\/\/jurn.link\/dazposer\/?p=23082"},"modified":"2025-05-28T19:29:27","modified_gmt":"2025-05-28T19:29:27","slug":"successful-test-the-background","status":"publish","type":"post","link":"https:\/\/jurn.link\/dazposer\/index.php\/2025\/05\/04\/successful-test-the-background\/","title":{"rendered":"Successful test &#8211; the background"},"content":{"rendered":"<p>Definitely getting there, in the quest to use Stable Diffusion 1.5 like a Photoshop filter. This is another follow-on from my two previous tutorials for Poser with SD.<\/p>\n<p><a href=\"https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/05\/hpl-demo3.png\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/05\/hpl-demo3-300x300.png\" alt=\"\" width=\"300\" height=\"300\" class=\"aligncenter size-medium wp-image-23085\" srcset=\"https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/05\/hpl-demo3-300x300.png 300w, https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/05\/hpl-demo3-150x150.png 150w, https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/05\/hpl-demo3-768x768.png 768w, https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/05\/hpl-demo3.png 1024w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a><\/p>\n<p style=\"text-align:center\"><em>The final result<\/em><\/p>\n<p>To obtain this final result, for the Img2Img source I started again with exactly the same Poser scene, camera and light as before, but this time I dropped AS&#8217;s <a href=\"https:\/\/www.renderosity.com\/marketplace\/products\/72396\">Hanyma Platform<\/a> for Poser (now no longer sold) into the scene as a background prop. Poser&#8217;s Comic Book inking was applied to both figure and prop. 
<\/p>\n<p><a href=\"https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/05\/sceneinposer.jpg\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/05\/sceneinposer-1024x752.jpg\" alt=\"\" width=\"640\" height=\"470\" class=\"aligncenter size-large wp-image-23083\" srcset=\"https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/05\/sceneinposer-1024x752.jpg 1024w, https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/05\/sceneinposer-300x220.jpg 300w, https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/05\/sceneinposer-768x564.jpg 768w, https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/05\/sceneinposer.jpg 1327w\" sizes=\"auto, (max-width: 640px) 100vw, 640px\" \/><\/a><\/p>\n<p style=\"text-align:center\"><em>Raw scene in the Poser UI.<\/em><\/p>\n<p>Then I made a quick Preview render of this scene at 768px (remembering to boost the texture sizes from 512px), and then in Photoshop I used the Glitterato plugin to add a quick starry sky.<\/p>\n<p><a href=\"https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/05\/backdropexp.png\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/05\/backdropexp-300x300.png\" alt=\"\" width=\"300\" height=\"300\" class=\"aligncenter size-medium wp-image-23084\" srcset=\"https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/05\/backdropexp-300x300.png 300w, https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/05\/backdropexp-150x150.png 150w, https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/05\/backdropexp.png 768w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a><\/p>\n<p>In InvokeAI this replaced the previous figure-only Img2Img image, but the Firefly lineart Controlnet image stayed the same, thus giving a fixed figure outline that matches that of the Poser render &mdash; possibly important for later consistent colouring.<\/p>\n<p>In this experiment I also added a Moebius 
LoRA, which knows what our hero Lovecraft looks like. No additional prompting is needed to account for the backdrop, since the CFG is so low.<\/p>\n<p><a href=\"https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/05\/hpl-demo3.png\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/05\/hpl-demo3-300x300.png\" alt=\"\" width=\"300\" height=\"300\" class=\"aligncenter size-medium wp-image-23085\" srcset=\"https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/05\/hpl-demo3-300x300.png 300w, https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/05\/hpl-demo3-150x150.png 150w, https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/05\/hpl-demo3-768x768.png 768w, https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/05\/hpl-demo3.png 1024w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/a><\/p>\n<p style=\"text-align:center\"><em>The final result at 1024px<\/em><\/p>\n<p>It&#8217;s all going a bit &#8216;black on black&#8217; (arrgh!) for this quick demo, but a non-background SD generation of the figure alone can then be used in Photoshop to mask (Ctrl + click on the layer, then invert the selection) and then fade or lighten the background a touch (as I believe Brian Haberlin does), so as to make the character stand out a little more. And ideally your comic script would try to avoid &#8216;black cat in a coal cellar&#8217; settings, for this reason.<\/p>\n<p>Simply using the new Img2Img source as the Controlnet image, replacing the Firefly outlines render? Nope, that doesn&#8217;t keep the detailing on the character or keep him consistent across a quad of generations. The Canny Controlnet needs to be focused just on the character, in the same way the comic reader&#8217;s eye is. Going from 768px to 1024px in the Img2Img seems to give SD some creative wiggle-room, despite the low CFG. And since the Img2Img CFG is low, there&#8217;s not much shifting in the details of the background. 
This seems to be the sweet spot: a good model; Img2Img with a low CFG but a slight upscale; and a Controlnet fed pure Poser Firefly lineart, to keep the figure stable and in lockstep with your Poser renders. Presumably all this would also work for two characters interacting.<\/p>\n<p>With both character and backdrop generated by SD, there&#8217;s some SD gloop, and later a loss of flexibility when putting the frame together in Photoshop. But one also avoids the need to mask, extract, defringe, colour-balance and so on. It may be possible to prompt for lighting and get it, but I haven&#8217;t tried that yet. <\/p>\n<p>Obviously for a comic you&#8217;d also start breaking free of the stock camera lens and use foreshortening etc. for a more dynamic look. Poser has a special camera dial that makes that very easy.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Definitely getting there, in the quest to use Stable Diffusion 1.5 like a Photoshop filter. This is another follow-on from my two previous tutorials for Poser with SD. 
The final result To obtain this final result, for the Img2Img source I started again with exactly the same Poser scene, camera and light as before, but [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[11,13,3,28,12],"tags":[],"class_list":["post-23082","post","type-post","status-publish","format-standard","hentry","category-comics","category-companion-software","category-poser","category-posertosd","category-tutorials"],"_links":{"self":[{"href":"https:\/\/jurn.link\/dazposer\/index.php\/wp-json\/wp\/v2\/posts\/23082","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/jurn.link\/dazposer\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/jurn.link\/dazposer\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/jurn.link\/dazposer\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/jurn.link\/dazposer\/index.php\/wp-json\/wp\/v2\/comments?post=23082"}],"version-history":[{"count":22,"href":"https:\/\/jurn.link\/dazposer\/index.php\/wp-json\/wp\/v2\/posts\/23082\/revisions"}],"predecessor-version":[{"id":23113,"href":"https:\/\/jurn.link\/dazposer\/index.php\/wp-json\/wp\/v2\/posts\/23082\/revisions\/23113"}],"wp:attachment":[{"href":"https:\/\/jurn.link\/dazposer\/index.php\/wp-json\/wp\/v2\/media?parent=23082"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/jurn.link\/dazposer\/index.php\/wp-json\/wp\/v2\/categories?post=23082"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/jurn.link\/dazposer\/index.php\/wp-json\/wp\/v2\/tags?post=23082"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}