Tools & Support
Creating the future requires some inspiration and support - these quick links will get you going.
Our developer portal has full documentation, examples, release notes and much more.
The Didimo API is compatible with all major 3D engines and web platforms and supports app development for both Apple and Android devices.
Reach out - we'd love to know what you're working on and how we can help you succeed.
The Didimo platform gives you:
Design what you want; get a customizable template mesh & rig or use your own.
Input a text or audio file, and automatically generate all visual speech animations.
Clean output mesh with known vertex and UV topology, rigged and ready for animation.
Choose FBX or glTF files. Our integrators and tools allow you to work the way you need.
Add elements from our growing libraries. Change hair, eye color, wardrobe and more.
Let your users create and share didimos on the fly in your apps and on social media.
Automatic generation in the cloud, served where and when you need it.
Bone-based, FACS-compatible rigs that support MoCap systems such as ARKit.
Empowering your vision.
You see the future: life-like, user-generated digital humans in your games, applications, support tools and experiences. We want to help you make it happen, with a powerful, easy-to-use, cloud-based solution that generates high-fidelity digital humans at scale and at runtime.
We offer a suite of tools & support to help you create life-like digital humans inside your applications, bringing them to your virtual environment in seconds. Generate as many didimos as you need to populate your digital environments or integrate our Didimo API to give users the ability to create didimos directly from within your app or software.
Frequently Asked Questions
Which platforms are supported?
Our technology is agnostic to your client development environment. The core components run on our servers and are accessed through the Didimo API.
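Because the heavy lifting happens server-side, any client that can make an authenticated HTTP request can use the platform. The sketch below builds (but does not send) such a request; the base URL, endpoint path, and header name are assumptions for illustration only - consult the developer portal for the real API reference.

```python
import urllib.request

# Hypothetical base URL and endpoint -- check the Didimo developer
# portal for the actual paths, payload format, and authentication scheme.
API_ROOT = "https://api.didimo.co/v3"

def build_generation_request(api_key: str, photo: bytes) -> urllib.request.Request:
    """Prepare a didimo-generation request from a source photo.

    A real client would use multipart/form-data as the API requires;
    this sketch sends the raw bytes to keep the example self-contained.
    """
    return urllib.request.Request(
        f"{API_ROOT}/didimos",
        data=photo,
        headers={
            "DIDIMO-API-KEY": api_key,          # assumed header name
            "Content-Type": "application/octet-stream",
        },
        method="POST",
    )

req = build_generation_request("your-api-key", b"<jpeg bytes>")
print(req.method, req.full_url)
```

The same pattern works from any language or engine with an HTTP client, which is what makes the platform client-agnostic.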
Do you provide an SDK?
Yes. We have built a Unity SDK that provides you with tools and examples to get you up and running at speed.
Do you support Unreal?
Sort of. An Unreal SDK is on our roadmap to make this super easy; in the meantime, developers can connect directly to the Didimo API and import a didimo using a glTF importer. Sign up for our newsletter to find out when new features launch.
Do you provide technical documentation?
Yes. Our developer portal has full documentation, examples, release notes and much more.
Do you provide support?
Yes - please talk to us here.
Can I create a full-body digital human?
Of course! You can find all details here.
Do you include viseme support?
Yes. Visemes are a specific set of facial poses that mimic the unique shapes the mouth makes when producing vocal phonemes, giving you detailed control of the mouth. Many TTS solutions, such as Amazon Polly, include a data stream of viseme events that can drive a 3D model, producing accurate lip-sync and believable visual speech. The visemes option provides your didimo with 21 viseme-specific blendshapes.
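As an illustration of how such a stream drives blendshapes: Amazon Polly, when asked for viseme speech marks, returns newline-delimited JSON objects, each with a millisecond timestamp and a viseme symbol. The mapping table below uses hypothetical blendshape names - the actual names of the 21 viseme blendshapes are documented on the developer portal.

```python
import json

# Hypothetical viseme-symbol -> blendshape mapping; the real didimo
# blendshape names may differ. "sil" is Polly's silence viseme.
VISEME_TO_BLENDSHAPE = {
    "p": "viseme_PP",
    "t": "viseme_DD",
    "S": "viseme_CH",
    "sil": "viseme_sil",
}

def parse_viseme_marks(stream: str) -> list:
    """Turn a Polly viseme speech-mark stream into (time_ms, blendshape) keys."""
    keys = []
    for line in stream.splitlines():
        if not line.strip():
            continue
        mark = json.loads(line)
        if mark["type"] == "viseme":
            name = VISEME_TO_BLENDSHAPE.get(mark["value"], "viseme_sil")
            keys.append((mark["time"], name))
    return keys

sample = (
    '{"time":0,"type":"viseme","value":"p"}\n'
    '{"time":120,"type":"viseme","value":"t"}'
)
print(parse_viseme_marks(sample))  # [(0, 'viseme_PP'), (120, 'viseme_DD')]
```

Played back against the audio clock, each key tells the renderer which viseme blendshape to activate at which moment, which is all a lip-sync driver needs.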
More questions? Go here.