We’re very excited to share the new iOS and Android SDKs with our developers. Over the past few months we’ve worked hard to build the best face AR SDK out there, and we’re thrilled with the results. We’re sure you’re going to like it too. Let’s jump right into all the new features we’ve built for you (thanks for all the feedback!).
Improved tracking performance and quality
Our deep learning team has made significant improvements to the face detection and tracking algorithms, resulting in better performance and quality for the DeepAR face tracker. DeepAR now uses less processing time while delivering better overall results. This matters most for apps that demand the highest performance, such as video conferencing, and for beauty and make-up tools that need the best possible tracking precision.
Better background and hair segmentation
Aside from face tracking, our segmentation features received a major accuracy and performance boost. Now that remote work and video conferencing have become the norm, we’ve focused on our background segmentation technology, which now performs better on both iOS and Android and is also available on macOS. We’ve achieved real-time background segmentation in our WebSDK too, so stay tuned for a new version shortly.
Emotion detection API
We’ve had emotion detection from the start, but it was only available within DeepAR Studio as a custom behaviour component. DeepAR now exposes an EmotionsAPI that estimates the user’s emotions in each frame: anger, sadness, surprise, happiness, and neutral. We’ve been testing this with some big clients, and the results are powerful and insightful. We can’t wait to see what you build with it.
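One common way to consume per-frame emotion estimates is to pick the dominant emotion and fall back to neutral when no score is confident enough. The sketch below illustrates that pattern; all names are hypothetical and assume the EmotionsAPI delivers a score in [0, 1] per emotion each frame. The actual DeepAR API signatures may differ.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: assumes each frame yields one score per emotion.
// We pick the highest-scoring emotion, using a confidence threshold so
// noisy frames fall back to "neutral".
public class EmotionPicker {
    public static String dominantEmotion(Map<String, Double> scores, double threshold) {
        String best = "neutral";
        double bestScore = threshold;
        for (Map.Entry<String, Double> e : scores.entrySet()) {
            if (e.getValue() > bestScore) {
                best = e.getKey();
                bestScore = e.getValue();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // One illustrative frame of scores.
        Map<String, Double> frame = new LinkedHashMap<>();
        frame.put("anger", 0.05);
        frame.put("sadness", 0.10);
        frame.put("surprise", 0.70);
        frame.put("happiness", 0.10);
        frame.put("neutral", 0.05);
        System.out.println(dominantEmotion(frame, 0.5)); // prints "surprise"
    }
}
```

Thresholding like this keeps a filter from flickering between emotions on borderline frames; smoothing scores over a short window of frames is another common refinement.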
iOS API changes
iOS has received an API update. The main change is that we’ve refactored the code that handles the iOS camera into a separate class called CameraController. You can use it as-is, or provide your own implementation; just make sure it implements the interface declared in CameraController.h. This brings iOS more in line with Android’s CameraGrabber approach and gives developers more flexibility. Check out the example in the SDK download to see how to use it. It’s really easy, and updating your current implementation should take no more than 15 minutes.
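The CameraController/CameraGrabber split follows a simple provider-interface pattern: the SDK consumes frames through a small contract, and any object that satisfies it can act as the camera. The sketch below shows the general shape of that pattern; every name here is illustrative, not the actual interface from CameraController.h or CameraGrabber.

```java
// Hypothetical sketch of the provider-interface pattern behind
// CameraController (iOS) and CameraGrabber (Android). All names
// below are illustrative, not the real DeepAR interfaces.

// The contract the SDK consumes: something that starts, stops,
// and delivers frames to a listener.
interface FrameSource {
    void start(FrameListener listener);
    void stop();
}

interface FrameListener {
    void onFrame(byte[] pixels, int width, int height);
}

// A custom implementation can feed frames from anywhere,
// e.g. a prerecorded clip instead of the live camera.
class StubCameraSource implements FrameSource {
    private boolean running = false;

    @Override
    public void start(FrameListener listener) {
        running = true;
        // Deliver a single dummy 2x2 RGBA frame for demonstration.
        listener.onFrame(new byte[2 * 2 * 4], 2, 2);
    }

    @Override
    public void stop() {
        running = false;
    }

    public boolean isRunning() { return running; }
}
```

The payoff of this design is testability and flexibility: the rendering pipeline never cares whether frames come from the live camera, a file, or a network stream.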
In addition to the API changes, we’ve added the much-requested simulator support and support for the latest Xcode for iOS developers.
We’re already working on the next iteration of the DeepAR SDK, which should bring even more exciting features and performance improvements. Here is some of what we have planned:
New WebSDK with performance and API changes to match the iOS and Android SDKs. We are also very close to providing real-time background segmentation for HTML5.
ActionUnits API – 21 parameterized 3D model values that you can use as an additional way to drive your filters.
Better support for frame-by-frame and off-screen frame processing. This will give developers more freedom to use the SDK in scenarios beyond on-screen rendering, such as video or batch image processing.
New DeepAR Studio – we are especially excited about this one. DeepAR Studio is a major component of our tech stack, and we can’t wait to share it with you: it will make content production easier than ever before. Before you get your hands on it, here is a sneak peek of how it will look.
We hope you’ll like what we’ve prepared for you and everything that’s coming in the future. Feel free to contact us any time if you have any questions; our support team is more than happy to chat with you.