Category Archives: Real-time animation
Release: Goo Blender
Goo Blender, also known as the “Goo Engine”, is a new non-photoreal toon Blender version for Windows (only)…
“our custom build of Blender that was made specifically to our team’s needs. Our team specializes in making 3D anime in Blender”
Goo Engine still seems to involve the usual head-banging wrangling of big node chains in order to get simple tooning done in the viewport.
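For anyone wondering what that node-wrangling looks like, the usual vanilla-Eevee toon set-up is roughly Diffuse BSDF to Shader-to-RGB to a ColorRamp with constant steps. Here's a minimal Blender-Python sketch of that chain (plain Blender 2.8x run from the Scripting tab, not Goo Engine's code):

```python
import bpy

# Build a hard-banded "toon" material: Diffuse -> Shader to RGB -> ColorRamp -> Output.
mat = bpy.data.materials.new("ToonSketch")
mat.use_nodes = True
nodes = mat.node_tree.nodes
links = mat.node_tree.links
nodes.clear()

diffuse = nodes.new("ShaderNodeBsdfDiffuse")
to_rgb = nodes.new("ShaderNodeShaderToRGB")     # Eevee-only node
ramp = nodes.new("ShaderNodeValToRGB")          # the ColorRamp node
ramp.color_ramp.interpolation = 'CONSTANT'      # hard shading bands
out = nodes.new("ShaderNodeOutputMaterial")

links.new(diffuse.outputs["BSDF"], to_rgb.inputs["Shader"])
links.new(to_rgb.outputs["Color"], ramp.inputs["Fac"])
links.new(ramp.outputs["Color"], out.inputs["Surface"])

# Assign to the active object, if there is one.
if bpy.context.object:
    bpy.context.object.active_material = mat
```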
I haven’t had time to look at the 30-minute intro tutorial, and am currently uncertain whether it’s real-time Eevee or Cycles rendered. Anyway, it’s available via a Patreon subscription if you want to download and try it. I assume it’s a build for the latest Blender, which now has certain graphics-card requirements before it will even let you install. Note also the Goo Engine GitHub, which presumably means you can get it free if you know how to ‘build’ Blender from a code repository.
This new release reminded me to take a look at the progress of the competitor BEER, the free NPR system plugin for regular Blender. The 1.0 engine was all done, and a user-friendly UI was then being made. A magnificent effort by all concerned, and they’re to be congratulated for getting so far with it. However I see there’s been no public progress with the user UI implementation in the last year and it’s stalled at UI Milestone #2 (November 2021). Possibly they could use a volunteer UI expert, to get it finished and polished?
Text-based AI for mo-cap
Human Motion Diffusion Model is a new text-based AI for generating mo-cap animation for a 3D figure. It’s still a science paper + source code at present.
But it can’t be long before you can type in a text description to generate a rigged and clothed 3D figure (plus some basic helmet-hair), and then also generate a set of motions to apply to the figure’s .FBX export file. Useful for games makers needing lots of cheaply-made NPCs, provided they can be game-ready.
But for Poser and DAZ users, the ideal would be to have reliable ‘text to mo-cap’ exist as a module within the software. Even better would be to have an AI build you a bespoke AI model by examining all the mo-cap in your runtime, thus gearing it precisely to the base figure type you intend to target.
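For the curious, the ‘diffusion’ part just means starting from noise and repeatedly denoising it into a motion clip, steered by the text prompt. Here's a toy numpy sketch of that idea; the denoiser is a stub standing in for the paper's trained network, so this is an illustration only, not the actual Human Motion Diffusion Model code:

```python
import numpy as np

FRAMES, JOINTS = 120, 22          # roughly 4 seconds at 30 fps, SMPL-like skeleton
STEPS = 50                        # reverse-diffusion steps

def denoise_step(noisy_motion, t, prompt):
    """Stub for the learned denoiser: nudge the sample toward a rest pose."""
    rest_pose = np.zeros_like(noisy_motion)
    blend = 1.0 - t / STEPS
    return (1.0 - 0.1 * blend) * noisy_motion + 0.1 * blend * rest_pose

def sample_motion(prompt):
    """Start from pure noise and repeatedly denoise it into a motion clip."""
    motion = np.random.randn(FRAMES, JOINTS, 3)   # per-frame joint data
    for t in reversed(range(STEPS)):
        motion = denoise_step(motion, t, prompt)
    return motion

clip = sample_motion("a person walks forward and waves")
print(clip.shape)   # (120, 22, 3) -- ready to retarget onto a rig / export as BVH or FBX
```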
Blender movies wanted
The Amsterdam Blender Conference, set for late October 2022, is calling for Blender-made films. No deadline that I can see, but… “The list of nominees will be published a few days before the Blender Conference.”
FurMark
An unusual bit of Windows freeware. FurMark will give your graphics-card a furry stress-test…
“FurMark OpenGL benchmark test will accurately measure the performance of the graphics card using fur rendering algorithms.”
First thoughts on MetaHuman
Hmmm… MetaHuman. First thoughts.
It’s obviously destined for semi-pro and small-studio videogames makers who want to shave a few days or weeks off a too-tight production schedule. It appears to output very good starting points for your hundreds of NPCs, which will later be optimised in-game for 60 frames per second. However, flip through the latest PC Gamer magazine for a second. Do you see any “uncanny valley” hyper-real characters that look like the standard AAA humans MetaHuman is pushing? Very few, these days. In fact, the new U.S. edition has a manga/anime girl on the cover. She runs real-time in the latest hit game.
That said, the head-and-shoulders MetaHuman demo shows obvious superiority to previous “quickie avatar/NPC makers” for games, of the sort that now litter the bankruptcy registers. As such there’s also obvious potential for real-time motion-capture movie-making use, if (and it’s a big if) the mo-cap and AI-aided tweaking in postwork can get the footage past the “uncanny valley”. The average screen entertainment viewer wants Marilyn Monroe, not the 2020s equivalent of a Thunderbirds puppet. But 98% of digital art hobbyists have no interest in making storytelling movies, nor in the full-body motion-capture rigs needed to make that happen.
The MetaHuman tech-demo is set to evolve into a free ongoing cloud service, if the early reports are correct. As such, I’d say they have two money-making options, which will dictate their add-ons…
1) make the exporter modules a paid item, if you’re not sending your figure to the Unreal engine. Exporters to push your 12GB of .FBX figure to Blender, Cinema 4D etc, maybe even to DAZ. But, most likely, never directly to their competitors such as Unity, NVIDIA Omniverse etc. Since their build-a-human service is in the cloud, the exporters cannot be pirated. That sounds steadily lucrative, and for not much ongoing effort.
2) or build a vast sprawling content eco-system on this, complete with ‘anatomically-correct’ figures, skimpy frillies, ankle-bracelets etc. Then promise not to look at the ‘megaboobs’ and other naughty figures that people make and download. But that would damage their brand, and also be a big hassle to admin and do PR for. Why bother, when you have the money coming in via option one? For that reason, I can’t see that DAZ or Renderosity will have a great deal to worry about. The ‘silent majority’ of their users will not want to use cloud services, and will be content with clothes-swopping, kit-bashing and morph-tweaking in privacy. Even if it means staying a step below the current state-of-the-art in hyper-realism. Much the same is true of those who want the wealth of creative science-fiction and fantasy content that DAZ and Poser now provide, royalty-free. Not to mention toon, animals and monsters.
It also seems to me that the average dedicated DAZ user will fairly soon just say… “I got a new PC and a 30-series NVIDIA card, so I run DAZ iRay in realtime now”. True, there’s still a damnable graphics-card drought but that surely can’t last forever. This means the “ooh… it works in real-time” thing is a bit of a red-herring. The only caveat there is the hair. Adding 3D hair has always caused a huge drop in scene pliability, and it’s just possible that MetaHuman has done more than create awesome-looking ‘helmet hair’. Real-time hairs that are true ‘stranded grooms’ and which can be easily re-styled… that would be quite something.
Finally, the ‘elephant in the room’ is AI. We’ve recently seen the Deep Nostalgia service very ably auto-animate a still 2D vintage photo with head-turns and eye-blinks. How much further will that go in the next few years? We may yet see Reallusion popping out a ‘CrazyTalk AI’, so don’t count them out yet either.
Release: NVIDIA Omniverse
NVIDIA Omniverse has been released in open beta. In its current form it appears to be an extensible virtual production studio, giving teams the ability to… “simultaneously work together on projects with real-time photorealistic rendering” but also to “work concurrently between different software applications” via Omniverse Connectors which bridge into “leading” content creation software. Most interestingly, there is a promised Connector bridge to the free Blender in the near future. Naturally, your studio’s creatives all need to be brewing their wizardry on fast n’ shiny NVIDIA graphics cards and Windows.
The Omniverse platform is only in open beta at present, but already has several working modules within it, including ‘Omniverse View’ for architects and ‘Omniverse Create’ for designers and creators. It seems to use the Pixar USD format for universal ‘in-out porting’ of 3D scenes, moving them around the various applications.
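If so, the hand-off would look something like this at the file level. A minimal sketch assuming the Pixar USD Python bindings (the usd-core package) are installed; one application writes the stage to disk, another opens the same file and walks it:

```python
from pxr import Usd, UsdGeom

# One app authors a stage and saves it out as .usda.
stage = Usd.Stage.CreateNew("handoff.usda")
root = UsdGeom.Xform.Define(stage, "/Scene")
prop = UsdGeom.Cube.Define(stage, "/Scene/Prop")
prop.GetSizeAttr().Set(2.0)
stage.SetDefaultPrim(root.GetPrim())
stage.GetRootLayer().Save()

# Elsewhere (a different application), re-open and walk the same scene.
incoming = Usd.Stage.Open("handoff.usda")
for prim in incoming.Traverse():
    print(prim.GetPath(), prim.GetTypeName())
```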
“Early next year” this virtual studio platform will see the release of…
“‘Omniverse Audio2Face’, AI-powered facial animation; and ‘Omniverse Machinima’ for GeForce RTX gamers”.
Machinima being the term for real-time WYSIWYG animation using a game-engine, and from the sound of it ‘Omniverse Machinima’ seems to be tilted toward Unreal Engine users and TV studios — rather than the hobbyist crowd that is currently using iClone.
The ‘Audio2Face’ module is more interesting, and will aim to have an AI… “generate expressive facial animation from just an audio source” without any need for expensive and fiddly camera-based mo-cap. That makes a lot of sense. Train an AI to match millions of audio vocalisations with visual expressions, then have it generate expressions purely from audio. In fact I’m a bit surprised such a thing doesn’t already exist in software, beyond the existing ‘vocal audio to mouth phonemes’ lip-sync automation. But animating a full face and escaping from ‘the uncanny valley’ in real-time may need a Cloud connection and a zillion back-end NVIDIA GPUs to work. My guess is that you would need a second AI to weed out the “ugh, no… uncanny valley” results.
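Stripped of the AI, the basic plumbing is just ‘audio frames in, blendshape weights out’. A toy numpy sketch of that idea, with a hand-made loudness-to-jaw mapping standing in for the trained network (nothing to do with NVIDIA’s actual implementation):

```python
import numpy as np

SAMPLE_RATE = 44100
FPS = 30
samples_per_frame = SAMPLE_RATE // FPS

audio = np.random.randn(SAMPLE_RATE * 2) * 0.1   # stand-in for 2 seconds of speech

weights = []
for i in range(0, len(audio) - samples_per_frame, samples_per_frame):
    frame = audio[i:i + samples_per_frame]
    loudness = np.sqrt(np.mean(frame ** 2))           # RMS energy of this video frame
    jaw_open = np.clip(loudness * 8.0, 0.0, 1.0)      # louder speech -> wider jaw
    lips_purse = np.clip(0.5 - jaw_open, 0.0, 1.0)    # crude complementary weight
    weights.append((jaw_open, lips_purse))

print(len(weights), "frames of blendshape weights")   # ~60 frames at 30 fps
```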
Anyway NVIDIA Omniverse looks good and may even be free(?), albeit after the entry-ticket price of a 30-series NVIDIA graphics card and (ugh) Windows 10. When it’s all polished up and hooked to a Blender bridge, that could make it very interesting for small indie animation studios. But what are the prospects for non-techie hobbyists? Well, DAZ is also an NVIDIA partner, so I guess if DAZ Studio implements a Pixar USD-format bridge then they could also enter the Omniverse?
Free ecosystem spawner for Unity
Take a big square landscape terrain into the free Unity game engine, and automatically cover the ground with vegetation based on slope, height etc. It’s not Vue’s ecosystems. But it is as free as Unity is, and is on the Unity Store now.
If Poser 12 does indeed support ‘export to Unity’ with decimation, this could be a quick “background generator” for an imported Poser scene with posed characters.
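The underlying slope/height rule such spawners use is simple enough to sketch. A small numpy illustration (not the Unity asset’s actual code): scatter candidate points over a heightmap, and keep only those on flat-enough, low-enough ground:

```python
import numpy as np

heightmap = np.random.rand(256, 256) * 50.0            # stand-in terrain, in metres
gy, gx = np.gradient(heightmap)
slope = np.degrees(np.arctan(np.hypot(gx, gy)))        # slope angle per cell

MAX_SLOPE = 25.0     # degrees: no trees on cliffs
MAX_HEIGHT = 35.0    # metres: above this is the tree-line

rng = np.random.default_rng(42)
candidates = rng.integers(0, 256, size=(5000, 2))      # random (row, col) scatter points
keep = [
    (r, c) for r, c in candidates
    if slope[r, c] < MAX_SLOPE and heightmap[r, c] < MAX_HEIGHT
]
print(f"{len(keep)} of {len(candidates)} points accepted for vegetation")
```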
FlowScape is back in the flow
The development of the $10 FlowScape landscape-creation software is back on track, now that Australia is recovering from lockdown and the kids are back in school again. There’s a preview video of the next version of this real-time tool. There’s lots of new stuff for makers of isometric dungeons (think of all those Diablo-like dungeon-crawler videogames), which, though fun, is presumably mostly for RPG makers. But later in the video we also see ‘flocking’ fish shoals underwater, plus fast auto-growing grass and ground-cover plants that automatically follow the terrain without any ‘painting’ being needed, with auto-culling on terrain slopes and hollows. Plus there’ll be a configurable “stick it where you want it” user interface.
Release: latest U-Render is now in the C4D viewport
The real-time engine U-Render can now run in the Cinema 4D viewport. Just released, U-Render 2020.07.6 is a “really real-time” WYSIWYG render engine for Cinema 4D and runs on OpenGL. It costs around $350.
With 2020.07.6… “it is no longer necessary to install the Windows-only standalone renderer” as everything is integrated into Cinema 4D as “a viewport renderer, making it possible to see a live render directly within Cinema 4D”.
This change apparently also makes U-Render accessible to Mac users, at least for now (Apple is widely said to be set to ditch OpenGL entirely from Macs). However, Windows users should also note a problem: U-Render only appears to work on the toxic tangle that is Windows 10, which means I can’t test it, even though there’s a demo available. But I wonder if the viewport integration has now solved this Windows problem for Windows 8.1 users?
All this is theoretically interesting to me, as there’s still a way to get Poser 11 scenes to Cinema 4D via the free PoserFusion plugin. That still works fine, for those who have the required bits to hand. One question is whether the latest U-Render will run in the older version of Cinema 4D required by PoserFusion (which was R19, when last heard about).
The other question is how much U-Render’s OpenGL real-time would actually add to an imported Poser scene. Who knows? By that time the materials would have been through two conversion processes: Poser to C4D (automatic) and then C4D to U-Render (automatic). It might then be more trouble and cost than it’s worth to fiddle about for hours fixing skin and eyes and the like.
Still, if we could get a direct Poser 12 to U-Render plugin, ideally at a sub-$100 price, that could be an interesting thing to have in terms of making Poser 12 work more like iClone. I haven’t heard anything about that happening; it’s just my complete guess. The less expensive route to WYSIWYG full-viewport real-time might be to plug some sort of stable ‘Eevee port’ from Blender into Poser 12. It would presumably be no use to ‘send Poser scene to Blender, for real-time display in Eevee’, since Blender is such a fast-moving target and its UI is still very daunting even now.
But judging by the recent call for Poser 12 beta-testers, it’s Unity that’ll be the real-time destination, not Blender. Unity is also free. It remains to be seen if Poser 12 users will be sent into a full “OMG, I have to learn to drive yet another nuclear submarine!” Unity UI, or into a “we’ve cunningly hidden all the complex bits” Poser-friendly Unity viewport.
It’s all gone Bendie!
If you haven’t been following Reallusion closely, you can catch up with a handy new 3-minute Reallusion 2019 video roundup. It briskly showcases all the new ‘big’ content made available for sale in 2019, in both 3D and 2D. Note that there will also have been content from smaller makers, and last time I looked they had a separate store for such items.
The show-reel usefully reminded me of Garry Pye Creations, and through it I discovered new work from him that I hadn’t seen before: his ‘The Bendies’ series.
The Bendies were first made for CrazyTalk Animator 3, then upgraded for Cartoon Animator 4 and its ‘360 head’ feature and new face puppeting. They also lack edge inking, which means webcomics artists could over-ink a bit to add their own look. They look excellent, and can be purchased individually at around $10-$12 each.
BodyPix 2.0
Google has released the free BodyPix 2.0. This offers automatic identification of people against a relatively noisy background, and then spots and tracks each person’s twenty-four body parts. It then segments, IDs and colours each body part. It can do this even while being fed around 20-25 frames per second, on fairly standard hardware such as an iPhone.
Version 2.0 adds “multi-person support and improved accuracy”.
They also offer the sister-software PoseNet, enabling a basic emulation of what a Kinect does but via standard Webcams…
both BodyPix and PoseNet can be used without installation and just a few lines of code. You don’t need any specialized lenses to use these models — they work with any basic webcam or mobile camera. And finally users can access these applications by just opening a url. Since all computing is done on device, the data stays private. For all these reasons, we think BodyPix is easily accessible as a tool for artists, creative coders, and those new to programming.
So… how to plug this stuff into a nice little DAZ/Poser-friendly Webcam utility? One that, at the flick of a drop-down menu, will happily real-time puppet and animate any stock figure from an Aiko 3 up to a G8 or La Femme?
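BodyPix and PoseNet are browser-side TensorFlow.js models, so as a rough stand-in here’s the same webcam-puppeting idea sketched in Python with MediaPipe Pose and OpenCV (a different library, but the landmarks it spits out are the sort of thing such a utility would map onto a figure’s bones):

```python
import cv2
import mediapipe as mp

pose = mp.solutions.pose.Pose()        # MediaPipe's pose-landmark model
cap = cv2.VideoCapture(0)              # default webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        # One of 33 tracked landmarks; a puppeting tool would drive bones from these.
        nose = results.pose_landmarks.landmark[mp.solutions.pose.PoseLandmark.NOSE]
        print(f"nose at ({nose.x:.2f}, {nose.y:.2f})")   # normalised image coordinates
    cv2.imshow("webcam", frame)
    if cv2.waitKey(1) & 0xFF == 27:    # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```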
DAZ Studio to Blender: the options
DAZ Studio to Blender 2.8 is a somewhat interesting possibility, re: getting “free real-time rendering” of a static scene in open source software. Without the $1,000 cost of an RTX graphics card + new PSU for real-time ray-tracing (now supported in the latest DAZ Studio). Or the faffing around and cost of converting figures for real-time in iClone.
Another reason you might want to do this is to get the real-time NPR comics-making possibilities of Blender’s Eevee. Although Eevee-with-toon-shaders is still very much a work-in-progress, and seems likely to remain so for a few years yet. I suspect that “really real-time” options like U-Render may yet overtake Eevee, using OpenGL real-time rendering that’s i) not shackled to a game-engine; and ii) graphics-card agnostic. While advanced OpenGL may never give iRay-quality results on DAZ characters, even with a good texture conversion-script, one imagines that the NPR tooning capabilities should be comparable to those of a mature Eevee.
DAZ to Blender:
Anyway, the best three DAZ to Blender bridge options I can find at the end of 2019 appear to be:
* The Japanese DAZtoBlender8 is $15 on GumRoad, and is also on Booth priced in Japanese Yen. Dated August 2019. It’s specifically designed to take Genesis 8 figures into Blender 2.8 and higher. It seems to be made by a very dedicated Japanese guy, looks like it works well, and can handle animations and geografts. There’s some basic English translation on the videos, and a PDF manual in English with screenshots.
* The free Diffeomorphic: Daz Importer version 1.4. Import static .DUF scene/character files into Blender, though some texture tweaking is to be expected after import. The 1.4 version is dated August 2019, and is said to work with Blender 2.8.
* The free mcjTeleBlenderFBX, dated 6th October 2019. But it’s not ideal. Note that the maker admits that… “By default the FBX import/export process messes the animation and the materials are poor”.
One could also do a simple OBJ export of a posed character. Then spend lots of time wrangling materials in Blender.
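For the record, that OBJ route starts out as something like this from Blender 2.8’s Scripting tab (a minimal sketch; the file path is a placeholder, and all the manual material wrangling still follows):

```python
import bpy

# Import the posed character exported from DAZ Studio as OBJ (placeholder path).
bpy.ops.import_scene.obj(filepath="/path/to/posed_character.obj")

# Switch the scene to the Eevee real-time renderer.
bpy.context.scene.render.engine = 'BLENDER_EEVEE'

# The OBJ importer leaves the new objects selected; make sure their materials
# use nodes before the hand-fixing of skin, eyes and so on begins.
for obj in bpy.context.selected_objects:
    for slot in obj.material_slots:
        if slot.material and not slot.material.use_nodes:
            slot.material.use_nodes = True
```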
FlowScape: deserts
The latest FlowScape has added “Desert Biomes”, after the oceanic deluge of the last update.
DAZ Studio 4.12 beta supports RTX
This week everybody seems to be announcing their software will support real-time ray-tracing for those with “Nvidia’s new RTX GPUs”, which at the consumer level means the GeForce RTX gaming graphics cards. Including DAZ Studio 4.12 (already enabled), Blender, and KeyShot (in the forthcoming version 9 in Autumn/Fall 2019).
In DAZ Studio 4.12, with “RTX On” according to DAZ you get…
RTX-accelerated ray tracing for both the interactive viewport and final renders … [with a claimed] 140 percent more performance in final frame rendering compared to previous generation GPUs, and an incredible 10.5x faster than CPU-only rendering.
Looks nice, but Amazon tells me that even a low-end budget gaming “GeForce RTX” is likely to cost me around £350, and that’s without a Power Supply Unit (PSU) upgrade to run it. An RTX version that’s going to last a few years starts at £700. Still, for a small commercial studio, that sort of amount could be written off against tax. Although I think I’d really need a new PC to go down that route. And that’s still some years away.