Krux

Korean Startup's AI Generates Minutes of 3D Motion from Text
Published: March 11, 2026 at 12:30 AM
Updated: March 11, 2026 at 12:30 AM
What happened
Bumblebee's new Motifect model generates minutes-long 3D character animation from simple text prompts like "start running" or "wave a hand." The Korean startup uses what it calls a Cross-Diffusion architecture with biomechanical constraints to fix problems that plague earlier motion models, particularly foot sliding and jerky transitions. Type a prompt, get a continuous motion sequence you can drop straight into game engines or film pipelines. The pitch is clear: replace expensive motion-capture sessions and tedious hand-keyframing with text. There are no pricing or access details yet, and no independent benchmarks to verify the claims.
Why it matters
If it works as advertised, small animation studios could suddenly afford the kind of long-form character movement that used to require mocap rigs and specialist animators.