Human Motion Diffusion Model is a new text-based AI for generating mo-cap style animation for a 3D figure. At present it exists only as a science paper plus source code.
But it can’t be long before you can type a text description to generate a rigged and clothed 3D figure (plus some basic helmet-hair), and then also generate a set of motions to apply to the figure’s .FBX export file. That would be useful for games makers needing lots of cheaply-made NPCs, provided the results can be made game-ready.
But for Poser and DAZ users, the ideal would be to have reliable ‘text to mo-cap’ as a module within the software. Even better would be an AI that builds you a bespoke AI model by examining all the mo-cap in your runtime, thus gearing it precisely to the base figure type you intend to target.