As good as dForce? “Helmet-hair” gone, forever? A new AI driven Hair Simulation on the GPU.
AI boffins re-invent Pandromeda’s Mojoworld…
Our method enables simulating long flights through 3D landscapes, while maintaining global scene consistency – for instance, returning to the starting point yields the same view of the scene.
I made a 65,000-word Dictionary of British Pronunciation for the TTS freeware Balabolka, with pre-made IPA pronunciation tags alongside each word. It’s in Balabolka’s .BXT file format, which Balabolka can load and which handles IPA phoneme symbols.
Possibly useful for those using TTS for making clearly-voiced English tutorials or animations, using the British IVONA 2 voices, and who’re stuck on the pronunciation of a word that they can’t easily substitute. With this you can write freely, knowing that it’s unlikely you’ll have to substitute a dozen or more words with simpler or different forms that don’t quite express what you want to convey.
You can load it in Balabolka and then keep it on a tab in the background, for easy consultation. A good test is getting Ivona 2 Brian to say “mature” in a sentence. It’s very difficult unless you use the IPA coded tag.
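As an aside, the mechanics of such a dictionary are simple to sketch. A minimal Python illustration, assuming a plain “word<TAB>IPA” table and a SAPI5-style `<pron>` tag; the real .BXT internals and the exact tag syntax each voice accepts will differ, so treat both as assumptions:

```python
# Sketch: substitute IPA pronunciations from a simple word -> IPA table.
# Assumptions (NOT the real .BXT format): one "word<TAB>ipa" pair per line,
# and a SAPI5-style <pron sym="..."/> tag. Check your voice's actual syntax.
import re

def load_dictionary(text):
    """Parse 'word<TAB>ipa' lines into a dict, skipping blanks and comments."""
    table = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        word, _, ipa = line.partition("\t")
        if ipa:
            table[word.lower()] = ipa
    return table

def tag_pronunciations(sentence, table):
    """Replace every dictionary word with a pronunciation tag; leave the rest."""
    def repl(match):
        ipa = table.get(match.group(0).lower())
        return f'<pron sym="{ipa}"/>' if ipa else match.group(0)
    return re.sub(r"[A-Za-z']+", repl, sentence)

table = load_dictionary("mature\tməˈtjʊə")
print(tag_pronunciations("A mature decision.", table))
```

The point is that the writer keeps typing normal English, and only the flagged words get swapped for coded tags at speak-time.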
For use with the abandonware British voices Ivona 2 Amy, Ivona 2 Ivy, Ivona 2 Emma and Brian. Neospeech Voiceware Bridget is also a very good ‘posh’ British voice, though after installation it will wrongly show up as ‘United States’ in the list of voice names. Most of the time these do a good job on their own, but sometimes you may need more precision — especially for short comedy animation — and the IPA tags give you that.
It seems rather odd to consider old-school text-to-speech software and SAPI5 voices, at a time when Poland’s ElevenLabs is doing such great things with AI-generated voices. But I’m always one to cherish old Windows freeware, and at present all the new AI voices are online and require a monthly or yearly subscription. So I was pleased to find an alternative freeware to Balabolka for desktop PC text-to-speech using SAPI5 voices. Many such voices are also now abandonware on Archive.org, the key companies having since been sold on several times.
Made in Italy, the DSpeech TTS freeware used to be fairly basic, but it has improved enormously since about 2016. That fact is not reflected in its rather dated 1990s-style download page, which you’ll have to overlook. The software is now at version 1.74 (spring 2022). It’s genuine one-man freeware, and is feature-comparable with Balabolka, though a bit rougher in its UI and in the English translation of its Help.
The DSpeech download link uses only a .GIF button, so if you have a .GIF blocker in your Web browser, you can instead right-click the page and choose ‘View Source’. You should then see a live working download link in the HTML…
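For those unfamiliar with the ‘View Source’ hunt, the same search can be scripted. A sketch using only the Python standard library; the page markup shown here is a stand-in for illustration, not DSpeech’s actual HTML or URL:

```python
# Sketch: pull candidate download links out of a saved page source,
# mirroring the manual 'View Source' hunt described above.
from html.parser import HTMLParser

class LinkFinder(HTMLParser):
    """Collect every href that looks like a .zip download."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value and value.lower().endswith(".zip"):
                    self.links.append(value)

# Stand-in for the real page source (the actual link will differ).
page = '<a href="dspeech.zip"><img src="download.gif"></a>'
finder = LinkFinder()
finder.feed(page)
print(finder.links)
```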
The English manual is included in the software. There’s no Windows installer, just unzip where you want and run it.
SCRIPTING: Beyond the usual control tags, DSpeech supports basic scripting, including voice-recognition and script loops, which is unusual. Apparently it can even read out VLC player movie subtitles in real time, in a chosen SAPI5 TTS voice and at a chosen speed.
TAGS: The tagging menus make switching voices easy. There’s better right-click support than in the latest Balabolka for adding tags, though that’s not saying much. When you highlight a word in DSpeech and add a tag, the word is not wrapped with an opening and closing tag; it’s deleted. Ugh! Having right-click is great, but the rest of the tag-insertion system is not good.
LOQUENDO: DSpeech is supposed to support Loquendo ‘voice expressions’ (laugh, sigh etc) via the Italian Loquendo 6 ‘Paola’ and ‘Luca’ TTS voices, combining words with special expressive tags such as \_Laugh and suchlike. The tag syntax later changed to \item=Laugh in Loquendo version 7 voices. But while these v6 voices work fine in any DSpeech, and v7 voices work fine in DSpeech v1.72.29 (December 2018, not the latest 1.74.x), their expressive cues no longer vocalise in DSpeech. You just hear silence.
Spanish Loquendo 7 voices (not 6) can however ‘express’ when used in Loquendo’s own Java-based TTS Director, which came with the Loquendo SDK. See YouTube for examples and useful links.
Regrettably, neither the Loquendo 6 nor 7 voices can even be played in the other TTS freeware, Balabolka, though they do show up on its voice menu. It thus seems that properly-working Loquendo voices are limited to…
* Loquendo 6 (any voice) on DSpeech 1.74.x or earlier. Loquendo 7 not supported on the latest DSpeech.
* Loquendo 7 (Spanish) on DSpeech 1.72.29 (or earlier?), or Loquendo 7 (Spanish) on Loquendo TTS Director with SDK and Spanish pack.
The Spanish version 7 voices do, however, have ‘expressives’ that work fine in Loquendo TTS Director 7, the Windows freeware that shipped with the developer SDK. This success at least showed me that the problem was not my PC or a 32-bit / 64-bit Windows clash, at least for the version 7 voices.
Yet it’s strange. Obviously DSpeech could, at one time, play the ‘expressives’ in the Loquendo 6 voices, but no longer, it seems. Switching back to the older DSpeech 1.72.29 didn’t cure that problem, though it did usefully fix playback of the Loquendo 7 voices. I suspect the Loquendo 6 voices now have a 32-bit / 64-bit problem on 64-bit Windows, despite both the player and the voices being 32-bit.
Loquendo TTS Director voices each have a complete list of expressives in a file such as C:\Program Files (x86)\Loquendo\LTTS7\data\voices\Soledad\SoledadGildedParalinguistics.sde (change the name for each voice). Open it in Notepad++ to see the list in plain text. For instance, Soledad has the following, and obviously you can also mix, match and tone-shift…
Easier to just paste these all in and cut out what you don’t want. Rather than wrestling with menu-based insertion.
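The ‘paste them all in’ step can itself be automated once you have the names out of the .sde file. A small Python sketch; the expressive names below are illustrative placeholders, and the \item= (v7) versus \_ (v6) tag syntaxes are as described above, but check them against your voice version:

```python
# Sketch: turn a list of expressive names into a ready-to-paste tag block.
# The names passed in are illustrative; pull the real ones from the voice's
# ...Paralinguistics.sde file as described above.
def make_tags(names, v7=True):
    """Emit one tag per expressive: v7 '\\item=Name' syntax by default,
    or the older v6 '\\_Name' syntax."""
    fmt = r"\item={}" if v7 else r"\_{}"
    return " ".join(fmt.format(name) for name in sorted(names))

print(make_tags(["Laugh", "Sigh", "Cough"]))        # v7 style
print(make_tags(["Laugh", "Sigh"], v7=False))       # v6 style
```

Paste the resulting block into the editor, audition it, and delete the tags you don’t want.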
VOICEWARE: DSpeech can read with a VoiceWare TTS voice, but does not support a vital aspect of such voices. The first version of the VoiceWare TTS voices (e.g. VW Bridget, British) inflected words differently if you added !, ?! or !? (again, see YouTube for a demo and useful links). This feature is not supported in DSpeech. It is supported in Balabolka. So this is another deal-breaker for DSpeech.
CONCLUSIONS: Despite what at first glance seems to be DSpeech’s more intuitive right-click tag-adding, Balabolka is on several counts the superior tool for longer-form editing. It properly wraps highlighted words in opening/closing tags, which is vital if you’re TTS-coding anything longer than a paragraph. It also supports VoiceWare’s !, ?! and !?, useful for one of the best British voices.
I thus suggest using the latest Balabolka for freeware TTS scripting and recording, and the old Loquendo TTS Director + its Spanish voices for creation of vocal FX, pitch and speed-shifted to match the voice being used in Balabolka. Then embed these vocal FX as audio clips in Balabolka. This is not as ideal as having Balabolka support Loquendo (it refuses to even read their voices), but it’s a viable workaround.
The ideal would be to have a standard SAPI5 voice that was ‘expressives only’, for use in Balabolka. A sort of audio FX bank, that could be reliably called with a simple tag (such as \_sneeze etc). But so far as I can see, that doesn’t exist, other than by chopping bits from my Dictionary of British Pronunciation for TTS.
Finally, note that TTS Director only ‘sees’ its own Loquendo voices, and is therefore no good as a general SAPI5 TTS script editor. TTS can be done in Adobe Captivate (used for super-PowerPoint ‘e-learning’ creation) and in CrazyTalk / Cartoon Animator, but the editing is not at all comparable to Balabolka.
Butaixianran has kindly created a free updated fork of the official DazToBlender (Daz to Blender Bridge)…
“I updated the official Daz To Blender Bridge, now Daz model can be exported from Blender with morphs and textures, so you can use Blender as a Daz Bridge to other 3D tools.”
It’s already had a number of bug-fixes, and animation import has been added. Normal maps can be saved to .JPG to reduce bloat. Also supports Genesis 8.1 and 9.
No texture or base-mesh resolution changes are involved in the conversion; the user is left to do that in Blender, or to just use Blender as a pass-through to other software. As always, geografts, complex geoshells and similar overlay items may not convert well.
Regrettably, Blender 3.1 or higher is required, so you need a PC powerful enough to pass Blender 3.x’s “install or not?” test and get Blender to install. Update: Blender 3.5.1 for Windows 7 (early May 2023). Needing no installer, it will now launch on Windows 7! Hurrah.
Blend.Stream is a new showcase and aggregator site for movies made with Blender. This means more than just the official open movies sponsored by the Blender Foundation; the site is open to all quality films made with the software. Also keep in mind that not all of them are under Creative Commons, though they are all free to view.
The short film Heroes of Bronze has a release date and teaser-trailer with the date.
Read a long interview with maker Martin Klekner in the recent “Warriors” themed issue of Digital Art Live magazine (#71, August 2022).
Dream Textures is a Stable Diffusion add-on that sends AI-generated textures direct to the Blender shader editor. It can be used with DreamStudio as a paid cloud generator.
Since Poser does Python, I don’t see why something similar couldn’t be done for Poser. Doubtless there will soon be AIs that can take a text prompt and pop out a finished PBR material. For example: “Make me a lava material that looks like glowing snake-skin”.
Blender 3.4 has some interesting new features, including storyboarding and PBR.
* A new storyboarding tool called Storypencil, said to be tested and production-ready. It works in tandem with the Video Sequence Editor, and is intended for making rough animatic sequences or saving out storyboard images. Multiple SVG files can also be imported.
Update: Storypencil was in the beta but appears to have been pulled from the final. To get it: i) get the 3.4 beta and the 3.4 final; ii) install both; iii) copy the Storypencil folder from Scripts | Addons_contrib in the beta to the same folder in the 3.4 final.
* Yet more Grease Pencil improvements. It now has some improved maths ‘under the hood’, working to auto-close gaps in line-art when using the Fill tool to colour.
* PBR support. Apparently this is wholly new, which if true is kind of amazing. Anyway, the .MTL material files that accompany .OBJs can now call the full range of PBR material sets, including Principled BSDF materials. Poser 11 and 12 now support Cycles BSDF, so there may be potential here for making PBR’d .OBJs in Blender for use in Poser.
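For the curious, here is roughly what a PBR’d .MTL entry looks like. This is a hypothetical material using the common but unofficial PBR extension keys to the .MTL format (Pm metallic, Pr roughness, plus map_ texture lines); the filenames are placeholders, and loader support for these keys varies between applications:

```
# Hypothetical material. The keys beyond Kd/Ke follow the widely-used
# (but unofficial) PBR extension to .MTL -- loader support varies.
newmtl brushed_metal
Kd 0.800 0.800 0.800
Ke 0.000 0.000 0.000
# metallic and roughness (PBR extension keys)
Pm 1.000
Pr 0.350
# texture maps -- filenames are placeholders
map_Kd base_colour.png
map_Pm metallic.png
map_Pr roughness.png
norm normal.png
```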
Amazon has open-sourced its Krakatoa VFX particle renderer and the associated shader system. It appears to be Maya-focused, so I guess they were/are using it for the Amazon TV VFX. The VFX world has many particle generators and renderers by now, but this one is said to be especially “fast”; that’s the only claim made for it, at least on the GitHub. Still, if you were looking to plug a fast particle system into Poser 13, there are lots of .PY scripts in the Krakatoa GitHub and it might be something to consider.
The worthy 2D Cartoon Animator 5 has been released (formerly CrazyTalk Animator), and there’s now an official Cartoon Animator 5 Demo Video released today.
* SVG support, templates, import. Round-trip to CorelDraw, InkScape etc.
* Spring dynamics and free-form deformation grid.
* Better library, integrated download of the free bits (scripts etc) that you could only get from the site.
* No cheap “Pro” version any more. The former division was Pro (i.e. Standard) and Pipeline (i.e. the proper Pro, expensive); there now appears to be just one version, currently a reasonable $129. This appears to include the After Effects scripts that were in Pipeline, so if you can now effectively get Pipeline for $129 that’s quite a bargain. Though, as always, beware that the paid add-on packs and plug-ins will considerably ramp up the overall price over time.
That said, there’s backwards compatibility for those with old character and prop libraries: it’s said you can still use characters all the way back to G1 in Cartoon Animator 5.
Not sure if it still supports conversion from .SWF to prop. It used to, because it had its own Flash module under the hood.
According to the Newsletter, Terragen 4.6 is about to be released. This being the first big update for the advanced 3D landscape desktop software in two years.
* Windows now has .VDB export (previously Linux only).
* Export clouds as .VDB for use in Blender etc.
* Better sRGB support.
* Better .FBX import, better .FBX export compatibility with Unreal Engine.
* Now with import and export of population caches as XML, as well as binary.
* Rendering speed improvements, faster Preview renders.
* Pro users get an experimental pipeline for RPC integration with other third-party tools.
* An open-source RPC Python module, so you can write Python scripts enabling other software to ‘talk’ to Terragen.
* Geolocation (aka “Georeferencing”) is said to become free in the Terragen 4.6 Free (aka Learning Edition, Non-Commercial). So far as I can tell, this is about aligning tiles side by side, rather than grabbing a DEM landscape tile from a user-friendly Google Earth style world-browser.
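On the XML population caches: the advantage over a binary cache is that scripts can read and write it. A toy Python sketch of the idea; the element and attribute names below are invented for illustration, not Terragen’s actual cache schema:

```python
# Toy sketch: why an XML population cache is handy -- it is trivially
# readable and editable by scripts, unlike an opaque binary cache.
# The element/attribute names are INVENTED, not Terragen's real schema.
import xml.etree.ElementTree as ET

def write_population(instances):
    """Serialise (x, y, z, scale) instance tuples to an XML string."""
    root = ET.Element("population")
    for x, y, z, scale in instances:
        ET.SubElement(root, "instance",
                      x=str(x), y=str(y), z=str(z), scale=str(scale))
    return ET.tostring(root, encoding="unicode")

xml_text = write_population([(0.0, 0.0, 0.0, 1.0), (12.5, 0.0, -4.0, 0.8)])
print(xml_text)
```

A script could, say, jitter every instance’s scale or cull instances inside a region, then hand the edited cache back.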
Still supports Windows 7+, and the update is free. Terragen 4 Free costs nothing, then currently Terragen 4 Creative is $299 and Pro is $599. A Mac version is coming soon, and a fun nodes-free ‘sky making’ Terragen Sky tool is also due in December 2022.
Planetside Software (the website has yet to update with the 4.6 details/downloads, but should soon), and see also the YouTube channel.
Reallusion’s Cartoon Animator 5 desktop cartoon production software is coming soon, and there are offers in the emails such as ‘buy an upgrade and get version 5 free when it appears’. It’s good software for making that kind of animation, especially for a small studio. Professional, fairly easy to use, very well documented and supported. Though be aware that you probably need to budget four times the initial ‘sticker price’ if you’re going to fully buy into the expensive Reallusion ecosystem, with its motion add-ons, expansion packs and suchlike.
Pixar’s RenderMan 25 will for the first time feature its in-house AI denoiser, and this is “temporally stable”. Translation: when run on animation frames, this denoiser is stable from frame to frame. When the frames are run as an animation, there’s no strange waviness, jitter, or edges popping from sharp to blurred and back.
The devs and artists at Pixar report that this feature reduces render times “two to four” times, and that it “has CPU and GPU implementations”.
But ‘what use is this to hobbyists’, you might ask. Ah, well… there will be a free non-commercial edition of RenderMan 25 by the end of 2022. The free version is reported to lack only RenderMan’s “XPU” feature — which is Pixar’s “new hybrid CPU + GPU rendering engine” that many are calling the future of high-end rendering.
Thus it sounds to me like hobbyists could have a pro-level ‘temporally stable’ AI denoiser, free and highly trained on 3D CG frames, by the end of the year. And presumably it will be able to process a folder of animation frames produced with other software. Poser 12, for instance, which has a superb Intel denoiser for stills — but this is apparently not “temporally stable” for animation.
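To see what “temporally stable” means in practice: one crude way to damp flicker is to blend each frame with its predecessor, so per-frame noise can’t pop in and out. A toy Python sketch of that general idea, emphatically not Pixar’s actual method:

```python
# Toy illustration of 'temporal stability': exponentially blending each
# frame with its predecessor damps frame-to-frame flicker.
# This is a conceptual sketch, NOT Pixar's denoiser.
def stabilise(frames, alpha=0.7):
    """Blend a sequence of frames (lists of pixel values) over time.

    alpha is the weight given to the current frame; the remainder
    comes from the already-blended previous frame.
    """
    out, prev = [], None
    for frame in frames:
        if prev is None:
            blended = list(frame)
        else:
            blended = [alpha * p + (1 - alpha) * q for p, q in zip(frame, prev)]
        out.append(blended)
        prev = blended
    return out

# A 'pixel' that flickers between 1.0 and 0.0 becomes much steadier:
frames = [[1.0], [0.0], [1.0], [0.0]]
print(stabilise(frames))
```

The trade-off, and the hard part that real temporal denoisers solve, is doing this without smearing genuine motion.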
Google is also reported to be working on an AI image denoiser, but it’s still in the labs. Presumably it will be free and open source when it appears, as part of Google’s larger NeRF work: a one-click quick image enhancer.
This week, NVIDIA finally catches up with the old $50 CrazyTalk Pro…
Before you get all excited about hobbyist potential… it appears to be an Omniverse thing for small production studios with $3,000 graphics cards and workstations. Lots of NVIDIA stuff is individually free, true, but if you have to ask the price of such a production setup then you can’t afford it.
Also new this week for the ‘build it and they will come’ Omniverse system: auto-lipsync for 3D faces from an audio file. Again, playing catch-up with Poser and Mimic.