In a few days, we’re launching DeepAR SDK on Product Hunt. We saw this as an opportunity to write a short recap of our story so far. It’s a great chance to remind ourselves of what we’ve been through, and why we started doing this in the first place.
For those of you who aren’t familiar with it, let’s quickly explain what DeepAR is.
Simply put, it’s a set of tools for inserting 3D face filters, masks, and lenses into any app or website. DeepAR has three main parts:
SDK — the devkit itself
DeepAR Studio — our asset creation tool for AR content (filters, effects, masks, etc.)
Asset Store — an online store where developers can download ready-made assets for their apps
We’ve been dabbling in Augmented Reality for some time now. Four years ago, we created MRRMRR, a fairly straightforward selfie app where users could take photos and videos and add real-time filters, face swaps, and special effects. MRRMRR got some traction and was featured on various app stores more than 20 times. That kind of exposure got people interested. They started contacting us, asking if there was a way to put some of the AR content from MRRMRR into their own apps.
It didn’t take long for us to realize there was significant potential in that idea, so we thought: “Let’s build something to help developers insert face-tracking AR content into their apps!”
And with that, our project began.
When we started working on DeepAR, our goal was to create a lightweight, multi-platform SDK that’s easy to use. We didn’t want the learning curve to be too steep, and it had to work well with other apps.
Anyone who has tried to do something similar knows how challenging that can be.
Nonetheless, we got to work, and the first thing we finished was the iOS version of our SDK.
Of course, we didn’t want to disregard the huge Android user base, so the Android version came next. Simultaneously, we started working on DeepAR Studio. It was supposed to make it easy for devs and 3D artists to create and test AR effects before deploying them to production. Studio imports 3D FBX models created with modelling tools such as Maya or Blender and exports them in a format that can be used by DeepAR SDK.
Finally, to support the growing market of online AR ads, we developed the Web/HTML5 version of our SDK, which can be used in all popular browsers.
Currently, we’re putting the finishing touches on what should become our new platform: a self-service dev portal with the asset store. Together with DeepAR Studio, the two are imagined as a one-stop service for developers interested in working with face-tracking AR. Developers can register for free and try out AR effects. If they like an effect, they can buy it and either use DeepAR Studio to keep working on it and modify it any way they want, or use it as is.
You might think all this seems a bit familiar, especially if you know about Facebook’s Spark AR and Snapchat’s Lens Studio. And you’d be right. A lot of talented people publish amazing AR experiences daily, because those two social media giants created great tools for publishing on their platforms. And that’s the main difference between their tools and DeepAR. Our Studio can do pretty much everything Snap’s and Facebook’s can. But with our SDK, developers can publish their work on their own terms, in their own apps and websites.
We wanted to empower independent creators and enable them to do their own thing.
We have been extremely passionate about AR since its inception. For nearly two years now, we have been working tirelessly on DeepAR. Hopefully, developers, designers, and 3D artists around the world will recognize our efforts and help us grow the platform.
All the code and tools we have built are meaningless without the help and feedback of our community. There’s always room for improvement, so please feel free to share your experiences with us and help us create better AR products.
Some additional reading if you’re interested:
We write about AR case studies, insights, and the newest AR tech we’re creating.