Coming in early/mid June 2022, a two-part webinar on working with material zones in DAZ Studio.
EZ Skin 3
My EZ Skin 3 post, with mini-tutorial and advice, has been updated. Including links. It’s also now available in a version for Poser 12.
Synthetik Studio Artist 5.5 and Poser 11
There are plenty of bits of software that will take this sort of lineart and filter it. This unfiltered example of source lineart is from Poser 11, with the real-time comic book option set to b&w and simple lighting.
Such lineart can be filtered by, for instance, the free G’MIC which has a big range of filtering options. The new G’MIC 3.1 will be out for Photoshop in a few days, and will add another comic-oriented filter. Then there’s Digital Auto-Painter (DAP), though only its graphic-novel preset is of real use for lineart — and with a bit of twiddling that can be emulated with the free G’MIC. A nice one I rate is Redfield’s Sketchmaster, especially if you want a kind of soft pastels look. Some people even work wonders with the native Photoshop filters, chaining them together in an Action. I tried AKVIS Charcoal some years ago, and though kind of nice it was slow. It may have improved since.
Topaz Clean 3 is also useful for cleaning off the bump-map and muddy-texture grunge, prior to any filtering. That can also be emulated with the free G’MIC. Though the sadly-discontinued Topaz Clean 3 is more than twice as fast, on what is a very slow process.
Now I’ve found another new way of filtering. I discovered that the maker of DAP had launched a new Style Animator 1.0 at $40. It vectorizes lineart, and can then apply a preset style. Kind of like SketchUp’s line styles, which many readers will be familiar with. I tried it, it’s nice, it works, but… it’s somewhat limited in its range.
Yet the idea of Style Animator 1.0 led me to discover software that’s been hiding in plain sight for the last 20 years. So much so that I don’t think it’s ever had a review. At least, I can’t find one. It’s Synthetik Studio Artist, which is from developer John Dalton and recently had a major update to 5.5.5. If the $40 Style Animator is a cute little furry Bush Baby, then the $200 Synthetik Studio Artist is a massive chest-beating 500lb Mountain Gorilla. And just as fearsome to approach, as it’s not easy software to learn despite the 560-page manual and a wealth of video tutorials. ‘Autopainter’ it may be… but it sure takes some getting used to. Yet recent intensive testing shows it has at least half a dozen great possibilities ‘out of the box’, when fed Poser lineart. When I say great I mean ‘looks relatively hand-made, without being cheesy’. The next edition of VisNews will have the details.
There’s a generous non-expiring free trial for it, and I’ve made two free preset actions for it which are on Dropbox.
1. Open File | New Source and Canvas (Ctrl + N), and select some Poser lineart.
2. Type 100 into the h Mult box on the import parameters, to get the Canvas the same size as the Source image you’re loading onto it. Sadly this step can’t be automated.
3. Run one of my preset actions. If you loaded a .PNG with an alpha mask, then run the main action. If you loaded a render from Poser’s Sketch (no alpha possible), then run the Sketch one.
4. Either should result in the output of a cleanly masked image, when you use “Save Canvas as…” to save a .PNG file.
The Poser inks after my preset
A Poser Sketch render after my preset
These are of course just starting points. The idea is you bring the output into Photoshop. Output should be the same size as the source, and so easy to composite. Here G’MIC has added a finishing touch, seen most clearly on the toes. Not one line of this was inked by hand…
Figure is ‘BioBot’ by AntFarm.
Script: apply a single shader to multiple surfaces in DAZ
I’ve finally found and hacked a working way to have a script apply a single shader across multiple surfaces in DAZ. Many hours of searching and testing finally surfaced Mcasual’s free mcjSelectTheseMats. This will do the job in DAZ 4.12.x, with a few custom adjustments. I’m amazed no-one has made such a script for this basic task, until now. Here is my working tweak to extend the ‘mcjSelectTheseMats’ script.
SelectTheseMats_ApplyShader.txt (download, rename to be a .DSA script)
This selects the named base figure in the scene, then makes sure he is really selected, then selects and highlights all the non-eye surfaces of the ‘Genesis 2 Male’. Adjust as desired for your likely target figure (e.g. ‘Genesis 8 Female’; there is also a naming convention which handles having two or more Genesis figures of the same base type in a scene).
The resulting script auto-selects all the required surfaces. This part works even if something else entirely is selected in the scene, when the script is run, e.g. a light. It also doesn’t matter if the target figure is named ‘Fluffy Bunny’ etc in your runtime, the DAZ scene only sees and knows him as a ‘Genesis 2 Male’.
Then at the end of the script, I added two lines to apply the chosen shader to the selected surfaces. Obviously you will need to adjust the path to your desired shader and file and then save the script.
With this script in hand you can then theoretically build it into a larger multi-pass ‘render, load and repeat’ script, by having the script make an automatic render after each new shader is applied. It would be ‘shader-based’ multi-pass, rather than ‘render-engine based’ multi-pass.
Obviously the script as it stands is not the ‘apply to all surfaces in a scene, except those identified as eyes’ I initially wanted, but it works and goes a long way toward it.
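The shape of such a multi-pass ‘render, load and repeat’ driver can be sketched in a few lines. This is only an outline (written in Python for brevity, though the real thing would be a DAZ .DSA script); apply_shader and render_to_file are hypothetical stand-ins for the real calls:

```python
# Outline of a 'shader-based' multi-pass driver. apply_shader and
# render_to_file are hypothetical stand-ins for the real DAZ Script
# calls (open a .duf shader preset, then render to disk).
render_log = []

def apply_shader(shader_path):
    # Real version: select the surfaces, then App.getContentMgr().openFile(...)
    render_log.append("apply " + shader_path)

def render_to_file(output_path):
    # Real version: trigger a render and save it out.
    render_log.append("render " + output_path)

# One pass per shader: apply the shader, render, repeat.
shader_passes = [
    ("lineart.duf", "pass_lineart.png"),
    ("flats.duf", "pass_flats.png"),
]
for shader_path, output_path in shader_passes:
    apply_shader(shader_path)
    render_to_file(output_path)
```

The point is only the loop structure: each pass swaps the shader and then renders, so the passes line up perfectly for layering in Photoshop.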
Also, this was the short non-working version that I wrestled with for a long time. Theoretically it should work, but it doesn’t select the surfaces; it only applies the shader to any already-selected surfaces…
var figure = Scene.findNodeByLabel( "Genesis 2 Male" );
figure.select( true );
var sPathToFile = "C://my_folder/my_shader.duf";
App.getContentMgr().openFile( sPathToFile, true );
It may save someone else the trouble in the future.
Staying on point
Vital advice from Warlord at Renderosity, “Making Video Tutorials – Staying on Point”. I suspect I know the recent two-hour slog-a-thon which may have triggered his article, as I saw that one too.
“Practice”, yes, as he stresses. And I’d add that timing also comes from practice. Don’t spend ages on introductions and ‘Computer Graphics classroom’ theory about isometroptical flange-widgets and how vertex pixel-wiggling happens (which no-one will remember or ever use), only to run out of time to present the actual useful knowledge for the software in question (i.e. ‘find X here, set Y here, then press this, not that’) that people are waiting for. In such situations you then have to cram it all into four garbled minutes while skipping lots of the juicy stuff. Practice and timing help you bypass such problems, because you know in advance if it will all fit in the time available.
Also vital advice from Warlord…
Show a clip at the beginning of the tutorial demonstrating the final result.
Good microphones and levels are also a ‘must have’.
I’d add that “can we keep questions to the end please” is also useful to prevent interruptions and sidetracking during a ‘live room’ presentation. A variant for webinars is “questions at the end of each segment, please” although such things can also be done in other ways.
DAZ-ling romance
“Writing Romance and Relationships for Visual Narratives”, a two-part webinar with Drew Spence on getting ‘the feel’ right for convincing DAZ-rendered relationship stories in comics, storybooks, slideshows, animation storyboards etc. Booking now.
Vue Solutions
A free Vue Solutions community webinar, 28th November 2021. Booking now.
Learn from top DAZ Store maker and vendor Esha
A new tutoring group, with top DAZ Store maker and vendor Esha, offering ‘in person’ a full up-to-date…
guide for texturing props and clothing, without the usual headaches and hurdles. … we’ve had repeated requests for Esha to teach this topic, so we are starting on Sunday the 28th of November 2021.
Small numbers in the group, for the personal coaching touch. Booking now.
If you can’t afford Substance Painter for this, I imagine that translating that part of the tutoring to the similar $20 tool ArmorPaint would not be too difficult.
Installing the Asus Xtion Pro in 2021
I bagged a nice “why-not” eBay bargain on an old Kinect-a-like motion-capture device. Yes, there are still a few real low-price auction bargains from real people to be had there, among the vast herds of re-listers and ‘gadgets from China’ sellers. The Asus Xtion Pro depth-sensing camera is arriving soon. Unlike the early Kinect it’s Windows-friendly and can do good face capture, and unlike a luxury iPhone it doesn’t require a small mortgage and a contract-shackle.
But until it arrives, after much search/research, here are some useful files for getting an old Asus Xtion Pro depth-sensing camera running on Windows. The links and notes may help others.
1. The official Asus drivers ISO are here (V1164_1202.zip), and also the firmware patch (FWUpdate_5_8_22.zip) to enable this camera to use OpenNI 2.0 or higher. (Your Web browser may need to turn off some blockers to see the javascript selection options).
2. Get the ISO unzipped and mounted. WinCDEmu is a good free driverless mounting utility, if your Windows baulks at driver-based ISO mounters. It’s important to note that once installed the firmware will require you also have the Primesense SDK Version 20.4.4.0 installed. The firmware-patched camera can only use OpenNI 2.x with this present. The installer from the ISO should get you this, as you’ll see by checking Uninstall…
Here there seems to have been some slight confusion that now needs clearing up. I’m pretty sure that this “Primesense SDK Version 20.4.4.0” is what the official drivers page slightly misleadingly calls the “SDK OPEN NI Package 20.4.2.20 or higher”, from the Asus Xtion Pro’s sensor maker Primesense. There does not actually appear to have been an “OPEN NI Package 20.4.2.20”… and I think OpenNI Windows x64 2.2.0.33 and Primesense SDK Version 20.4.4.0 were confused and conflated by the person writing the driver listing. Easily done.
3. After the ISO install, check in Windows Uninstall to see the above version number is correct. Then connect the camera to a USB 2.0 port. I’m not yet sure what order the following two steps are to be done in:
i) install the updated firmware. Your original model Asus Xtion Pro should now work with OpenNI 2.0 and most motion-capture / robotics / 3D scanning software that requires 2.x.
ii) install the OpenNi 2.x drivers, presumably from your new C:\Program Files\OpenNI2\Driver folder. Possibly Windows will auto-install a driver as soon as the camera is plugged in, in which case you may need to ‘Update driver’ later.
On removing and then plugging back in your camera, the Primesense drivers should then — judging by screenshots from an old Windows 7 install guide — become visible in Windows Device Manager. The device shows as a “Primesense Carmine 1.08” (branded at retail as Asus Xtion).
OpenNI Cookbook has three pages which may help with this part of the process.
4. But the ISO appears to only install OpenNI 1.5.5. Now then… why does the drivers page say it contains the 2.x version? For the moment I’m guessing that the answer is that the Primesense SDK Version 20.4.4.0 may actually contain OpenNI 2.2.0.x within it, or perhaps even 2.4.4.x. That would make sense for an SDK (software development kit).
But if not, then as I’ve done here, also install OpenNI 2.2 from OpenNI-Windows-x64-2.2.0.33.zip at Structure.io. This is a worthy community archive and, so far as I can tell, this appears to be the ‘last good’ version before evil megacorp Apple stepped in and snaffled all the patents for use with their luxury iPhone.
5. Ok, you may then have something that will enable the camera to work when plugged in. If you look under C:\Program Files you should see these new folders.
If you get conflicts between OpenNI1 and OpenNI2, I guess you just uninstall version 1.
Note that there is also a firmware patch to take the camera’s USB 2.0 to 3.0, though one firmware patch will be enough for me for now. There are also various 32-bit installer versions of the above, if your old software requires 32-bit.
Note also that the Asus Xtion Pro is not to be confused with the Asus Xtion Pro LIVE version, which came out a year later and added an RGB camera to what is otherwise 99.9% the same model. Some old software appears to require Asus Xtion Pro LIVE, and I’ll test if it can run from a ‘firmware-updated Asus Xtion Pro capable of OpenNI 2.x’.
I’ll keep readers informed about progress, and if all this works when the camera arrives and is plugged in.
The original Xtion Pro works with:
* Unity (via various plugins and projects, or DIY your own)
* iPi Recorder + iPi Mocap Studio (body only, round-trips .BVH from Poser and DAZ)
* Fastmocap Professional (export supposedly via .BVH targets for M4 and Poser 6 and 8, body only – but see this Oct 2014 review before buying).
* Visikord (motion-controlled music for VJs, art installations, haunted houses etc).
* UNREAL4MIRROR (Virtual fitting / dressing mirror plugin for Unreal Engine 4).
* “The Claw is an arcade machine with futuristic controls. We replaced the traditional joystick and push button” with the Asus Xtion Pro.
* iClone 5.1 + one of three MoCap Plugins then offered (the latter now deeply unavailable).
* Blender (human motions automatically laid along timeline, to control a water surface).
* Artec Studio (scan 3D objects to meshes).
* Should work with most other object-scanner software. No textures, as that would also require the slightly later Xtion Pro Live’s added RGB camera. But ArmorPaint would do the job on the mesh fairly easily.
* Nuitrack.
* Faceshift (facial mo-cap; defunct now, purchased and killed by Apple. Later versions required(?) the Xtion Live. But in 2015 they stated “All available cameras which produce good tracking quality are publicly supported by us or will be shortly”. In 2015 you could get “a perpetual licence for non-commercial use for $150”, and don’t you now wish you did?).
* Matlab (for science/data analysis).
* GV-3D People Counter (counts the number of people entering a space).
Can also save a capture to an .ONI (OpenNI) file that appears to be .BVH-like… in that it packs all the frames as prerecorded skeleton movement data. This can be loaded to Unity via OpenNIContext. In addition, the .ONI timestamps can be queried with code, it’s said.
Also used in various robotics, medical, science projects etc. It has even been used in farming, as a “3D cow scanner” to detect lameness.
Fitted to some drones by ambitious drone-ers.
Also natively “supports push, wave, and tap gestures” for control of Windows software, and at launch shipped with the Kylo Browser (gesture-based Web browsing). Make “simple rotation gestures to zoom-out and zoom-in”, which sounds like it could get interesting with large digital maps.
At launch in Spring 2012, the list of compatible games included…
* SEGA’s Virtua Tennis 4.
* EA’s Need for Speed: Hot Pursuit.
* Capcom’s Street Fighter 4.
* Rovio’s Angry Birds.
* Beatbooster was a slightly later flagship sci-fi racing/exercise game for the device. Judging by YouTube videos, not one for gamers who dislike motion-sickness. Seems to have vanished.
* Related to games, the TurboTuscany demo. World’s first VR headset with full-body tracking, which used the first Xtion.
Collapsing code in Visual Studio Code
The free desktop PC software Microsoft Visual Studio Code (‘VSC’) is a sort of super Notepad++. It’s what you now want in order to copy-paste coloured code into the Renderosity Python forum, since a recent back-end forum update. Notepad++ on its own can’t do that particular job.
Here’s a handy tip for editing a non-Python Poser file with VSC…
Ctrl + K.
Then hands off keyboard.
Then Ctrl + 3.
This collapses the zillion lines of nested code, as you can see here. Much more comprehensible now…
Then Mouseover the blank bit, to reveal the arrows that expand the hidden code block…
Not sure if this also works in Microsoft’s newly launched online version of Visual Studio Code, but it probably does.
Also, in the sidebar of this blog I’ve added links to a couple of free community-made editors for Poser file types (.CR2, .PZ2, etc).
Renderosity Forums URL changes
Renderosity has a new format for Forum URLs. Was…
https://www.renderosity.com/mod/forumpro/?forum_id=12589
or
https://www.renderosity.com//forums/?forum_id=10139
Now…
https://www.renderosity.com/forums/12589
The redirect is automatic, but a bit sticky and often fails. Thus manual fixing of bookmarked URLs is best.
Delete…
mod/forumpro/?
and replace
_id=
with
s/
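If you have a whole pile of bookmarked forum URLs, the same fix can be scripted. A minimal Python sketch, assuming only the old and new URL shapes shown above (fix_renderosity_url is my own hypothetical helper name):

```python
import re

# Hypothetical helper: pull the numeric forum id out of an old-style
# Renderosity forum URL, and rebuild it in the new short format.
def fix_renderosity_url(url):
    m = re.search(r"forum_id=(\d+)", url)
    if m:
        return "https://www.renderosity.com/forums/" + m.group(1)
    return url  # already in the new format, leave as-is

old = "https://www.renderosity.com/mod/forumpro/?forum_id=12589"
print(fix_renderosity_url(old))  # https://www.renderosity.com/forums/12589
```

Because it works from the forum_id number rather than the exact path, it handles both of the old URL variants at once.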
Sadly the change appears to have borked the Python forum. Hopefully a proper code-display module will be plugged into the new forum software, soon.
Update: Copy-and-paste from a proper code editor (not Notepad++) now works. Suggested code editors are Microsoft Visual Studio Code, and PyCharm. Digital Art Live magazine has a two-page guide on how to install and set up the free Visual Studio Code.
Fix the DNS ‘not found’ problem
I’m currently experiencing severe website-lookup problems, effectively blocking my access to about 20% of sites. Smaller sites especially, such as David Revoy, HiveWire Forum, GreasyFork, Blender Nation, eTools, Major Geeks, Stack Overflow, Nitter and many others, are completely ‘not found’, seemingly due to patchy DNS on all DNS servers. Some are complaining about being unable to register Corel software.
Users of commercial services such as Slack have been very annoyed about being locked out…
It appears to affect Chrome-based browsers only, and is continuing for many sites. So… installing a Firefox-based browser (I recommend Pale Moon) should at least get you to the missing sites. No amount of jiggering about with clearing DNS caches and adding new public-DNS addresses will cure the problem; I spent the whole of Saturday morning trying to fix it for the Chrome-based Opera browser.
Update: It’s due to dodgy free root SSL certificates on the sites. Firefox has its own store of these at my end, so it was unaffected. Fixed by installing fresh root certificates. (The Slack outage mentioned above actually appears to have been down to its own stupid DNS jiggering-about, rather than SSL certificates).
Matcap for Poser 11
Matcap for Poser 11. You’re welcome.
The material is de-grunged, but its overall intended colour is retained. A more suitably artistic texture (i.e. that doesn’t scream ‘3D speckly grunge’ to regular comics readers) can then be subtly added via plugging a hand-inked texture tile into the Alternate Diffuse. Ignore the hat, it’s raw and untouched as yet. And obviously there are some missing ink-lines, that would need hand-inking. And you’d want the inked lines on another layer in Photoshop, and then blend them in better. But you get the idea.
Not for use on eyes, of course, and skin may need a softer treatment. Thus we can’t just have a script blast the entire character.
Can’t think why I didn’t think of it earlier. A basic form of matcap by simply blurring out the existing textures, while keeping the intended colour. You can of course build on this base, and try to develop the more complex forms that the animation industry understands as ‘matcap’, which involve fixed shadows and suchlike.
### v.1.0 September 2021.
# A quick degrunger script for Poser 11.x. Runs on the current texture
# and removes grungy detail, while retaining its averaged base colour.
# Not good for skin and eyes, but good for dark grungy and speckly
# clothing etc. textures. Intended for use with the Poser 11 Comic Book
# Preview mode, to get colour flats to go underneath a separate lineart
# render from the same scene.
###
import poser

scene = poser.Scene()

# Test if we have a material even present.
mat = scene.CurrentMaterial()
if mat:
    tree = mat.ShaderTree()
    root = tree.Node(0)
    # Test if we have a Diffuse_Color node with something plugged into it.
    imgmap = root.InputByInternalName('Diffuse_Color').InNode()
    # Yes we do, so continue with the MATcap process.
    if imgmap:
        shaderTree = poser.Scene().CurrentMaterial().ShaderTree()
        root = shaderTree.Node(0)
        imgNode1 = root.InputByInternalName('Diffuse_Color').InNode()
        parameterU = imgNode1.InputByInternalName('U_Offset')
        uOffset = imgNode1.InputByInternalName('U_Offset')
        uOffset.SetFloat(12)
        texStrength = imgNode1.InputByInternalName('Texture_Strength')
        texStrength.SetFloat(1.3)
        # Create a Noise node, and set its parameters and position.
        noise1 = shaderTree.CreateNode(poser.kNodeTypeCodeNOISE)
        noise1.SetLocation(230, 460)
        noise1.Input(0).SetFloat(3.0)
        noise1.Input(1).SetFloat(2.0)
        noise1.Input(2).SetFloat(1.0)
        noise1.Input(3).SetFloat(0.0)
        noise1.Input(4).SetFloat(1.1)
        # Plug the Noise node into the right slot.
        shaderTree.AttachTreeNodes(imgNode1, parameterU.Name(), noise1)
        root = shaderTree.Node(0)
        imgNode2 = root.InputByInternalName('Diffuse_Color').InNode()
        parameterV = imgNode2.InputByInternalName('V_Offset')
        vOffset = imgNode2.InputByInternalName('V_Offset')
        vOffset.SetFloat(12)
        texStrength = imgNode2.InputByInternalName('Texture_Strength')
        texStrength.SetFloat(1.3)
        # Create another Noise node, and set its parameters and position.
        noise2 = shaderTree.CreateNode(poser.kNodeTypeCodeNOISE)
        noise2.SetLocation(230, 600)
        noise2.Input(0).SetFloat(3.0)
        noise2.Input(1).SetFloat(2.0)
        noise2.Input(2).SetFloat(1.0)
        noise2.Input(3).SetFloat(0.0)
        noise2.Input(4).SetFloat(1.1)
        # Plug the Noise node into the right slot.
        shaderTree.AttachTreeNodes(imgNode2, parameterV.Name(), noise2)
        shaderTree.UpdatePreview()
        scene.DrawAll()
    else:
        print 'The current selection is not driven by an Image Map.\n\nThis means it cannot be MATcap-ed.\n\nTry making a selection in the Material Room.'
else:
    print 'Please first select or set up a material in the Material Room.'
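The underlying idea, that blurring a texture hard enough leaves you with its averaged base colour, can be seen in miniature with plain Python outside of Poser. The tiny wrap-around ‘texture’ and its grey values here are invented purely for illustration:

```python
# The degrunge idea in miniature: blur a tiling 'texture' hard enough
# and every pixel converges on the texture's average colour. The six
# grey values below are invented for illustration.
def box_blur_tiled(pixels, passes):
    """Repeated 3-tap wrap-around box blur, as on a tiling texture."""
    n = len(pixels)
    for _ in range(passes):
        pixels = [
            (pixels[(i - 1) % n] + pixels[i] + pixels[(i + 1) % n]) / 3.0
            for i in range(n)
        ]
    return pixels

speckle = [10.0, 200.0, 30.0, 180.0, 20.0, 190.0]  # grungy speckle
mean = sum(speckle) / len(speckle)                 # the 'intended' colour: 105.0
blurred = box_blur_tiled(speckle, 50)              # heavy blur: every pixel near 105.0
```

Note that because the blur wraps around (as a tiling texture does), the average is preserved exactly; only the speckle is destroyed.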
A few G’MIC tips
Note that the free G’MIC will not launch in Photoshop if the layer being filtered has multiple areas of transparency, as you might get from a .PNG render of a 3D scene. In which case, right-click on the layer and ‘Convert to Smart Object’ first. Then G’MIC will launch for that layer. Apparently the mighty Photoshop still cannot handle more than one area of transparency in a layer, without such a conversion being done. Other similar software has no such problem.
Also, when filtering real-time Poser Comic Book renders for detail, such filtering is usually aided by having good quality (rather than muddy / low-res) textures loaded. Here’s how you do that with a Preview render…
Obviously if you’re instead rendering for a Colour Flats layer in your Photoshop layer-stack, then the low-res textures don’t matter so much. Because you’re going to scour off all that unwanted grunge and noise with Topaz Clean 3.1 or G’MIC’s Comic Book filter. Ready to lay the Lineart layer on top.
“We’re going to need a bigger bucket, Doreen!”
In Poser 11, you may have long relied on your trusty old CPU-rendering Firefly custom render-presets. But they may not have kept up with what your PC can do.
For instance I have a 12-core PC, and so in Edit / Preferences I tell Poser 11 I have 24 threads available for its use (12 cores = 24 threads) when CPU rendering with Firefly.
To speed Firefly up enormously in such a case, I switch the Bucket Size on an old Firefly render preset up to 512. Most likely you have some old presets hanging around too, a decade or more old, perhaps still running at a Bucket Size of 32. Or you followed some advice that was that old. 32 was a good safe setting for the days when most people had only 1 or 2 threads available for rendering. Think of it as the size of each bucket of paint that Poser ‘throws’ onto the render canvas, in order to render your picture. Small processing power = small buckets each time.
If your PC can’t handle a big fast Bucket size of 512, I’m guessing you could perhaps try 256 for a 6-core / 12-thread PC with enough RAM. 256 also works well for a modern post-2010 graphics-card render. But beware of setting 512 for a graphics card, as it may well crash your driver on older cards.
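If you like, that rule of thumb can be written down as a tiny helper. A sketch only; the thresholds are my own reading of the advice above, not official Poser numbers:

```python
# A hypothetical rule-of-thumb picker for Firefly Bucket Size: 512 for
# a 24-thread CPU monster, 256 for a 12-thread PC or a modern GPU, and
# the old safe 32 for anything weaker. My own guesses, not official.
def suggest_bucket_size(threads, gpu_render=False):
    if gpu_render:
        return 256  # 512 can crash the driver on older graphics cards
    if threads >= 24:
        return 512
    if threads >= 12:
        return 256
    return 32

print(suggest_bucket_size(24))  # 512, for a 12-core / 24-thread PC
```

You would still enter the number by hand in the render settings; the helper just makes the reasoning explicit.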
“But I use SuperFly now”, you may say. Ah, but can SuperFly do the old-school Firefly lines, or the Firefly ambient occlusion? Here are two presets made very fast on a 12-core PC. Obviously you’re going to choose CPU not GPU on your render settings panel, for these.
Lines:
Ambient Occlusion (turn AO on for scene lights, first):
“Fast” as in… two or three seconds at 1800 pixels. For lines or AO. Obviously if it’s a character with hair and complex bits and inefficient lighting, then it’s going to be a lot slower. But still quicker than otherwise.
The same probably applies to other software such as MojoWorld. Make the ancient default bucket-size bigger.
Update: Poser 12 is a bit different, with the move to Cycles 2. CPU users of “adaptive sampling” rendering with SuperFly will apparently paradoxically see better results from quite small bucket sizes.