DeepAR Creator Studio is a tool for AR asset creation within the DeepAR SDK. It lets you create effects driven by face motion and expression, body movement, and hair shape.
Creator Studio supports .fbx models created by any 3D modelling tool, including Maya and Blender.
Fuse rigid objects, deformable masks, morph masks and post processing effects to create original AR experiences.
Create advanced AR effects quickly with presets for common use cases like makeup, background segmentation and face filters.
Rigid objects are the basic type of effect supported by the DeepAR SDK. These effects are driven only by head position, meaning translation, rotation, and scale in 3D space. Typical effects in this category are rigid accessories such as glasses and hats.
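Conceptually, a rigid effect just follows the tracked head pose each frame. The sketch below shows how a translation, rotation, and uniform scale compose into a single 4x4 transform that a renderer would apply to the object's vertices. This is an illustration of the idea only; DeepAR performs this internally, and the function is not part of the SDK.

```python
import numpy as np

def rigid_transform(translation, rotation_deg, scale):
    """Compose a 4x4 transform from head-pose translation, rotation, scale.

    Illustrative sketch: the face tracker supplies these values per frame,
    and a rigid object is rendered with the composed matrix. Rotation order
    (Z * Y * X) is an assumption for this example.
    """
    tx, ty, tz = translation
    rx, ry, rz = np.radians(rotation_deg)

    # Per-axis rotation matrices.
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])

    M = np.eye(4)
    M[:3, :3] = (Rz @ Ry @ Rx) * scale  # rotation combined with uniform scale
    M[:3, 3] = [tx, ty, tz]             # translation
    return M
```

A vertex `(x, y, z)` is then transformed as `M @ [x, y, z, 1]`.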
Deformable masks are effects that, in addition to head translation and rotation, are driven by facial expressions. Models are driven by the DeepAR reference head model in two ways: via vertices and via bones. For more information, see the Reference mask model section.
Morph mask effects are effects that use blend shapes as geometry deformers. When you want to create a mask and use the reference model's vertices to drive its deformation, the easiest way is to make it a blend shape of the reference head model.
The vertices/bones that drive the resulting model are preserved. As a side effect, a transformation effect can be implemented: change the blend shape weight over time and users will see themselves transformed into the desired effect/mask.
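The transformation idea above is plain linear morphing: each vertex moves from its base position toward the blend-shape target as the weight ramps from 0 to 1. The sketch below illustrates the math; the function names and the two-second ramp are assumptions for this example, not SDK API.

```python
def apply_blend_shape(base_vertices, target_vertices, weight):
    """Linearly morph base geometry toward a blend-shape target.

    base_vertices / target_vertices are lists of (x, y, z) tuples;
    weight in [0, 1] selects how far along the morph we are.
    """
    return [
        tuple(b + weight * (t - b) for b, t in zip(bv, tv))
        for bv, tv in zip(base_vertices, target_vertices)
    ]

def weight_at(elapsed_s, duration_s=2.0):
    """Ramp the blend-shape weight from 0 to 1 over `duration_s` seconds."""
    return min(1.0, max(0.0, elapsed_s / duration_s))
```

Animating `weight_at(t)` each frame produces the gradual "transform into the mask" effect described above.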
Post processing effects are applied over the entire frame, including all rendered objects. Effects in this category use the Post processing Layer together with specific post processing shaders, such as the LUT Fixed Intensity shader.
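The core of a LUT-style post process is a per-pixel color lookup blended with the original color at a constant intensity. The Python sketch below shows that idea on a single pixel; the real shader runs on the GPU over the whole frame, and the function signature here is an assumption for illustration, not the shader's actual interface.

```python
def apply_lut_fixed_intensity(pixel, lut, intensity=1.0):
    """Sketch of a per-channel LUT color grade with fixed blend intensity.

    pixel: (r, g, b) tuple with channels in [0, 1].
    lut:   256-entry table mapping a channel value (0..255) to 0..255.
    The "fixed intensity" blend mixes original and graded colors by a
    constant factor, mirroring the idea of a LUT Fixed Intensity shader.
    """
    graded = tuple(lut[int(c * 255)] / 255 for c in pixel)
    return tuple((1 - intensity) * c + intensity * g
                 for c, g in zip(pixel, graded))
```

A 3D LUT (as used by typical color-grading assets) generalizes this to a lookup over all three channels at once; the blend step is the same.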
The reference mask model is the underlying model driven by DeepAR face tracking technology. SDK users take it as a starting point for creating their own effects in 3D modelling tools (like Blender or Maya).
Geometry (vertices) and deformers (bones and blend shapes) are used to drive the user-created masks. Below is the image of the reference model imported into Blender.
The reference model's geometry consists of 730 vertices, all of which are driven by the underlying face tracking technology. The engine identifies each vertex by its unique vertex color, so this property must not be changed; otherwise the engine will not be able to track that vertex.
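Because tracking relies on every vertex color being unique, a useful sanity check on an edited mesh is to look for color collisions before export. The helper below is an illustrative sketch, not part of the SDK; how you extract per-vertex colors depends on your modelling tool.

```python
def find_duplicate_colors(vertex_colors):
    """Report vertex colors that appear more than once.

    vertex_colors: list of (r, g, b) tuples, one per vertex.
    Returns a dict mapping each colliding color to the vertex indices
    that share it; an empty dict means every color is unique and the
    engine can identify every vertex.
    """
    seen, duplicates = {}, {}
    for index, color in enumerate(vertex_colors):
        if color in seen:
            duplicates.setdefault(color, [seen[color]]).append(index)
        else:
            seen[color] = index
    return duplicates
```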
Other than vertices, user-created masks can be driven by bones. Bones are used when the model is not created from the reference model. In that case it is easier to map significant points of interest from the reference to the resulting model, since the number of bones is smaller than the number of vertices.
You do not need to define every bone in the resulting model. Bones are identified by name, in the format "jxx", where "xx" is the number of the bone in the reference model. Below is the map of bones available in the reference model:
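The "jxx" convention can be handled with a simple parser when tooling around exported rigs. This helper is a hypothetical example, not SDK code: it returns the bone's reference-model number, or None for bones that do not follow the convention (so a model can keep its own extra rig bones alongside the DeepAR-driven ones).

```python
import re

def bone_index(name):
    """Parse a reference-model bone name of the form "jxx".

    "j12" -> 12, "j05" -> 5; names not matching the convention
    (e.g. a custom rig bone) return None.
    """
    match = re.fullmatch(r"j(\d+)", name)
    return int(match.group(1)) if match else None
```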