Developers

The Didimo API is compatible with all major 3D engines and web platforms, and supports app development on Apple and Android.

Unreal · Unity · Lumberyard · WebGL · Apple · Android
Developer Tools, Documentation & Support

Creating the future requires inspiration and support, and we're here to help. Try our Showcase iOS app, explore our tools and documentation below, check our blog for examples, and sign up for our newsletter below. And feel free to reach out if we can help; we'd love to know what you're working on and how we can help you succeed.

Developer Portal

Didimo API

Didimo Unity SDK

Didimo CLI

Customer Portal

Showcase iOS App

Empowering your vision.

You see the future: life-like, user-generated digital humans in your games, applications, support tools and experiences. We want to help you make it happen, with a powerful, easy-to-use, cloud-based solution that generates high-fidelity digital humans at scale and at runtime.

We offer a suite of tools & support to help you create life-like digital humans inside your applications, bringing them to your virtual environment in seconds. Generate as many didimos as you need to populate your digital environments or integrate our API to give users the ability to create didimos directly from within your app or software.

The Didimo platform gives you:

Customizability

Design what you want; get a customizable template mesh & rig or use your own.

Speech

Input a text or audio file and automatically generate all visual speech animations.

Animation

Clean output mesh with known vertex and UV topology, rigged and ready for animation.

Format Choice

Choose FBX, glTF, or JSON files. Our integrations and tools allow you to work the way you need.

Libraries

Add elements from our growing libraries. Change hair, eye color and more.

Instant Results

Let your users create and share didimos on the fly in your apps and on social media.

Cloud Support

Automatic generation in the cloud, served where and when you need it.

Rig

Blendshape and bone-based rigs, FACS-compatible, with support for MoCap systems like ARKit.

Introducing our newest didimo appearance.

Our new Unity Universal Render Pipeline (URP) shaders with subsurface scattering generate didimos that look more like the real user. Didimo textures are compatible with physically based rendering (PBR), so didimos can adapt to any 3D environment's illumination setup.

Frequently Asked Questions

Which platforms are supported?

Our technology is agnostic to the client development environment. The core components run on our servers and are accessed through our API.
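Because the core runs server-side, integration on any platform reduces to plain HTTPS calls: submit input, poll for completion, download the result. A minimal sketch of the polling step follows; the "status"/"error" payload fields and values here are illustrative assumptions, not Didimo's documented API, so check the Developer Portal for the real reference.

```python
import json
import time

# Illustrative client-side polling loop. The payload fields and status
# values are assumptions for the sketch, not Didimo's documented API.
def wait_for_didimo(get_status, poll_s=2.0, timeout_s=600.0,
                    clock=time.monotonic, sleep=time.sleep):
    """Call get_status() until the job reports 'done'; return the payload."""
    deadline = clock() + timeout_s
    while clock() < deadline:
        payload = json.loads(get_status())
        if payload.get("status") == "done":
            return payload
        if payload.get("status") == "error":
            raise RuntimeError(payload.get("message", "generation failed"))
        sleep(poll_s)
    raise TimeoutError("didimo generation did not finish in time")
```

In practice, get_status would wrap an authenticated HTTPS GET; injecting clock and sleep keeps the loop testable without touching the network.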

Do you provide an SDK?

Yes. We have built a Unity SDK that provides you with tools and examples to get you up and running at speed.

Do you support Unreal?

Sort of. We will be building an SDK to make it super easy; in the meantime, developers can connect directly to our API and import a didimo using a glTF importer. Sign up for our newsletter to find out when new features launch.
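A didimo delivered as glTF is a standard asset, so any glTF 2.0 importer can load it. As an engine-agnostic sanity check before handing a downloaded file to an importer, the JSON chunk of a binary glTF can be read in a few lines; this is generic GLB parsing per the glTF 2.0 spec, not Didimo-specific tooling.

```python
import json
import struct

# Generic binary glTF (GLB) reader: extracts the JSON chunk of a .glb file.
# Layout per the glTF 2.0 spec: 12-byte header, then a length-prefixed chunk.
def read_glb_json(data: bytes) -> dict:
    magic, version, _length = struct.unpack_from("<III", data, 0)
    if magic != 0x46546C67:          # b"glTF" as a little-endian uint32
        raise ValueError("not a binary glTF (GLB) file")
    if version != 2:
        raise ValueError("only glTF 2.0 is handled here")
    chunk_len, chunk_type = struct.unpack_from("<II", data, 12)
    if chunk_type != 0x4E4F534A:     # b"JSON" chunk identifier
        raise ValueError("first chunk must be JSON")
    return json.loads(data[20:20 + chunk_len])
```

For example, checking that the asset declares glTF version 2.0 before import catches truncated or mislabeled downloads early.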

Do you provide technical documentation?

Absolutely. Go to our Developer Portal to access documentation and support. You'll find API documentation as well as code examples for testing the Service.

Do you provide support?

Yes, please talk to us here.

Can I create a full-body digital human?

Not yet, but soon. At the moment, our publicly released tools support the generation of human heads; we will expand public support to full-body generation soon. If you need to attach pre-existing bodies to our head mesh, contact us so we can help you set it up and automate the process.

Does the platform accept photogrammetry data?

Yes. To support solutions that build more comprehensive 3D data-capture processes for their users, our pipeline can ingest photogrammetry data. Given a high-poly photogrammetry model, a texture map, and a single front-on input image of the user, our pipeline converts this data into a clean-topology, rigged, and animatable model. See an example here.

Do you include viseme support?

Yes. Visemes are a specific set of facial poses that mimic the shapes the mouth makes when producing vocal phonemes, giving you detailed control of the mouth to match speech. Many TTS solutions, such as Amazon Polly, include a data stream of visemes that can drive a 3D model, generating accurate lip-sync and believable visual speech. The visemes option provides your didimo with 21 viseme-specific blendshapes.
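Amazon Polly emits speech marks as JSON lines, each with a time offset and a viseme symbol. A small sketch of turning that stream into an animation track is shown below; the "viseme_&lt;symbol&gt;" blendshape naming is an assumption for illustration, since a didimo rig's actual 21 viseme blendshape names may differ.

```python
import json

# Sketch: convert Amazon Polly viseme speech marks (JSON lines) into a
# simple (time, blendshape) animation track. The "viseme_<symbol>" naming
# is an illustrative assumption, not the rig's documented blendshape names.
def viseme_track(speech_marks: str):
    """Return (time_ms, blendshape_name) keys for each viseme mark."""
    track = []
    for line in speech_marks.splitlines():
        line = line.strip()
        if not line:
            continue
        mark = json.loads(line)
        if mark.get("type") == "viseme":   # skip word/sentence marks
            track.append((mark["time"], "viseme_" + mark["value"]))
    return track
```

Each key would then drive the matching blendshape weight at its timestamp, with the engine interpolating between poses for smooth lip-sync.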

More questions? Go here.

 
Be in the know. Subscribe to our newsletter.