{"id":23699,"date":"2025-07-26T10:38:10","date_gmt":"2025-07-26T10:38:10","guid":{"rendered":"https:\/\/jurn.link\/dazposer\/?p=23699"},"modified":"2025-11-11T22:37:51","modified_gmt":"2025-11-11T22:37:51","slug":"working-openpose-with-face-and-hands-from-any-image-in-comfyui","status":"publish","type":"post","link":"https:\/\/jurn.link\/dazposer\/index.php\/2025\/07\/26\/working-openpose-with-face-and-hands-from-any-image-in-comfyui\/","title":{"rendered":"Working OpenPose with face and hands, from any image, in ComfyUI"},"content":{"rendered":"<p>Working OpenPose with face and hands, from any image, in ComfyUI. Works with quick real-time renders from Poser.<\/p>\n<p><a href=\"https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/07\/workingopen.jpg\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/07\/workingopen-1024x421.jpg\" alt=\"\" width=\"640\" height=\"263\" class=\"aligncenter size-large wp-image-23704\" srcset=\"https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/07\/workingopen-1024x421.jpg 1024w, https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/07\/workingopen-300x123.jpg 300w, https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/07\/workingopen-768x316.jpg 768w, https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/07\/workingopen-1536x631.jpg 1536w, https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/07\/workingopen.jpg 1727w\" sizes=\"auto, (max-width: 640px) 100vw, 640px\" \/><\/a><\/p>\n<p><strong>1.<\/strong> Use your existing ComfyUI or, if you&#8217;re new to it, try the <a href=\"https:\/\/github.com\/YanWenKun\/ComfyUI-Windows-Portable\">ComfyUI Windows portable<\/a>. For the Portable, if ComfyUI fails to load all its UI elements on startup, cut the entire <em>custom_nodes<\/em> folder out and place the more immediately useful individual <em>custom_nodes<\/em> folders back one-by-one. 
I found that one of the custom_nodes was stopping the full UI from loading.<\/p>\n<p><strong>2.<\/strong> Into your ComfyUI, install <a href=\"https:\/\/github.com\/Fannovel16\/comfyui_controlnet_aux\">ComfyUI&#8217;s ControlNet Auxiliary Preprocessors<\/a>, a package which comes as one big bundle&#8230; and one of these preprocessors handles OpenPose.<\/p>\n<p><strong>3.<\/strong> Also install DWPose, as <a href=\"https:\/\/github.com\/yuvraj108c\/ComfyUI-Dwpose-Tensorrt\">ComfyUI-Dwpose-Tensorrt<\/a>, to speed things up. This assumes you&#8217;re on Windows with an NVIDIA card.<\/p>\n<p><strong>4.<\/strong> Download the two required model files, <a href=\"https:\/\/huggingface.co\/hr16\/DWPose-TorchScript-BatchSize5\/blob\/main\/dw-ll_ucoco_384_bs5.torchscript.pt\">dw-ll_ucoco_384_bs5.torchscript.pt<\/a> and <a href=\"https:\/\/huggingface.co\/yzd-v\/DWPose\/blob\/main\/yolox_l.onnx\">yolox_l.onnx<\/a>. They&#8217;re about 500 MB in total, and openly available on HuggingFace. For me, there were two folders to manually put these in&#8230;<\/p>\n<p><em>C:\\ComfyUI_Windows_portable\\ComfyUI\\custom_nodes\\comfyui_controlnet_aux\\ckpts\\yzd-v\\DWPose\\<\/em><\/p>\n<p><em>C:\\ComfyUI_Windows_portable\\ComfyUI\\custom_nodes\\comfyui_controlnet_aux\\ckpts\\hr16\\DWPose-TorchScript-BatchSize5\\<\/em><\/p>\n<p>I was told to put the files in the first <em>yzd-v<\/em> folder, but the workflow then gave me a &#8216;not found, download from Huggingface&#8217; error. However, when I also copied the same files to the second <em>hr16<\/em> folder&#8230; the OpenPose workflow worked.<\/p>\n<p><strong>5.<\/strong> Here is my simple OpenPose workflow for ComfyUI, working. Just drop an image in and click &#8216;Run&#8217;. 
It should take about five seconds to produce an OpenPose image.<\/p>\n<p><a href=\"https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/07\/2025-07-26_111737.jpg\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/07\/2025-07-26_111737-1024x531.jpg\" alt=\"\" width=\"640\" height=\"332\" class=\"aligncenter size-large wp-image-23701\" srcset=\"https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/07\/2025-07-26_111737-1024x531.jpg 1024w, https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/07\/2025-07-26_111737-300x156.jpg 300w, https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/07\/2025-07-26_111737-768x398.jpg 768w, https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/07\/2025-07-26_111737-1536x796.jpg 1536w, https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/07\/2025-07-26_111737.jpg 1811w\" sizes=\"auto, (max-width: 640px) 100vw, 640px\" \/><\/a><\/p>\n<p>As you can see, you can switch off face and\/or hands if they&#8217;re not required.<\/p>\n<p>You then save out this special image, and drop it into a ControlNet workflow which has an OpenPose model (here <a href=\"https:\/\/huggingface.co\/xinsir\/controlnet-openpose-sdxl-1.0\">for SDXL<\/a> models) linked to it&#8230;<\/p>\n<p><a href=\"https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/07\/2025-07-26_113410.jpg\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/07\/2025-07-26_113410.jpg\" alt=\"\" width=\"946\" height=\"759\" class=\"aligncenter size-full wp-image-23702\" srcset=\"https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/07\/2025-07-26_113410.jpg 946w, https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/07\/2025-07-26_113410-300x241.jpg 300w, https:\/\/jurn.link\/dazposer\/wp-content\/uploads\/2025\/07\/2025-07-26_113410-768x616.jpg 768w\" sizes=\"auto, (max-width: 946px) 100vw, 946px\" \/><\/a><\/p>\n<p>Once downloaded, I renamed this 
OpenPose model to <em>openpose-sdxl-diffusion_pytorch_model.safetensors<\/em> so that I know it&#8217;s for SDXL. I copied it into <em>C:\\ComfyUI_Windows_portable\\ComfyUI\\models\\controlnet\\SDXL_controlnet\\<\/em><\/p>\n<p>On a more powerful PC you can link these two workflows together into a single workflow. But on a more basic PC, it seems best to limit how much the workflow is being asked to load all at once.<\/p>\n<p>The resulting suitably-prompted image then conforms to the input OpenPose pose. Use a strength setting of 0.85 to give the ControlNet more wiggle-room than a strict 1.0 setting would allow.<\/p>\n<p>All free. There&#8217;s also a paid plugin on Renderosity which does this for Poser 12 and 13.<\/p>\n<hr>\n<p>I also tried to get a depth (aka depth-map) ControlNet working, but with no success at all. I must have downloaded 20 workflows and countless models, custom_nodes and preprocessors, and not a damn one worked. Errors every time. I give up on depth in ComfyUI, and will just work with the MistoLine lineart and OpenPose ControlNets, which do work.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Working OpenPose with face and hands, from any image, in ComfyUI. Works with quick real-time renders from Poser. 1. Use your existing ComfyUI or, if new, then try the ComfyUI Windows portable. 
For the Portable, cut the entire custom_nodes folder and place the more immediately useful individual custom_nodes folders back one-by-one, if ComfyUI fails to [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[11,13,28,12],"tags":[],"class_list":["post-23699","post","type-post","status-publish","format-standard","hentry","category-comics","category-companion-software","category-posertosd","category-tutorials"],"_links":{"self":[{"href":"https:\/\/jurn.link\/dazposer\/index.php\/wp-json\/wp\/v2\/posts\/23699","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/jurn.link\/dazposer\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/jurn.link\/dazposer\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/jurn.link\/dazposer\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/jurn.link\/dazposer\/index.php\/wp-json\/wp\/v2\/comments?post=23699"}],"version-history":[{"count":24,"href":"https:\/\/jurn.link\/dazposer\/index.php\/wp-json\/wp\/v2\/posts\/23699\/revisions"}],"predecessor-version":[{"id":23987,"href":"https:\/\/jurn.link\/dazposer\/index.php\/wp-json\/wp\/v2\/posts\/23699\/revisions\/23987"}],"wp:attachment":[{"href":"https:\/\/jurn.link\/dazposer\/index.php\/wp-json\/wp\/v2\/media?parent=23699"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/jurn.link\/dazposer\/index.php\/wp-json\/wp\/v2\/categories?post=23699"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/jurn.link\/dazposer\/index.php\/wp-json\/wp\/v2\/tags?post=23699"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}