ARKit: what it means for the future of AR

Apple’s recent announcement of ARKit highlights the company’s ambition to become a key player in the rapidly emerging augmented reality space. Finally, it all starts to make sense from a commercial perspective: with the launch of iOS 11, it’s become clear that Apple plans to equip hundreds of millions of iPhones and iPads with AR functionality overnight.

This essentially puts AR capability directly into the pockets of Apple users around the world. It’s also why Apple has been so firmly focused on equipping its new iPhones with dual cameras. But how does the dual-camera technology actually work?

How does ARKit work?

Dual cameras give AR technology more sophisticated zoom and depth-sensing capabilities. If you capture a scene from two slightly different viewpoints, and you know the distance between those viewpoints, you can calculate the distance to any point visible in both images using triangulation. This means that, for every pixel the camera can match between the two views, an iOS device can build a depth map of what the camera sees.
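
To make the triangulation idea concrete, here’s a minimal sketch in Swift. The function name, parameters and numbers are purely illustrative and not part of ARKit’s API: depth falls out of the baseline between the lenses, the focal length, and the disparity (how far a point shifts between the two images).

```swift
import Foundation

// Stereo triangulation, stripped to its essentials. Illustrative maths,
// not ARKit API: depth = (baseline × focal length) / disparity.

/// Distance to a point seen by two horizontally offset cameras.
/// - baseline: separation of the two lenses, in metres
/// - focalLength: focal length, expressed in pixels
/// - disparity: how far the point shifts between the two images, in pixels
func estimatedDepth(baseline: Double, focalLength: Double, disparity: Double) -> Double? {
    guard disparity > 0 else { return nil }  // no shift: the point is effectively at infinity
    return (baseline * focalLength) / disparity
}

// A 1 cm baseline, 2,000 px focal length and 40 px disparity put the
// point roughly half a metre from the camera.
if let depth = estimatedDepth(baseline: 0.01, focalLength: 2000, disparity: 40) {
    print("Estimated depth: \(depth) m")  // 0.5 m
}
```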

This is essentially how the human brain works, enabling us to perceive our surroundings in 3D, an effect known as stereopsis: the perception of depth produced when the brain combines the visual stimuli from both eyes. The immediate benefit of this technology is that ARKit can tell which objects are in the foreground of a given environment and which are in the background.
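
As a toy illustration of that benefit (nothing below is ARKit API), once every pixel has a depth value, splitting foreground from background is just a distance threshold:

```swift
// A toy depth map (values in metres) and a cutoff distance: this shows
// why per-pixel depth makes the foreground/background split trivial.
let depthMap: [[Double]] = [
    [0.6, 0.7, 3.2],
    [0.5, 0.6, 3.1],
    [2.9, 3.0, 3.3]
]
let cutoff = 1.5  // anything nearer than 1.5 m counts as foreground

let mask = depthMap.map { row in
    row.map { $0 < cutoff ? "F" : "B" }   // F = foreground, B = background
}
mask.forEach { print($0.joined(separator: " ")) }
// F F B
// F F B
// B B B
```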

It’s this technology that enables the iPhone 7 Plus to create portrait images with the subject in sharp focus and the background blurred. By rolling dual cameras out across its iOS products, Apple clearly intends to go far beyond portrait photography and use the hardware to cross into the world of augmented reality.

Overview of ARKit features

Hardware and rendering optimisation

ARKit runs on devices with Apple’s A9 and A10 processors. These chips deliver the performance that allows iPhones and iPads to understand the user’s immediate real-world environment. If your business involves 3D modelling of any description, this is crucially important, as the technology lets you overlay highly detailed, immersive digital content on the real world.
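
Because of that chip requirement, it’s worth gating the experience on device support before starting a session. A minimal check, where startARExperience() and showFallbackUI() stand in for hypothetical functions in your own app:

```swift
import ARKit

// ARKit's world tracking needs an A9 chip or newer, so check support
// before launching the AR experience. startARExperience() and
// showFallbackUI() are hypothetical functions in your own app.
if ARWorldTrackingConfiguration.isSupported {
    startARExperience()
} else {
    showFallbackUI()
}
```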

This means 3D objects can be viewed in an entirely new way, providing a compelling user experience. It also matters for AR developers, because ARKit’s optimisations can be used alongside existing technologies and 3D engines such as Unity, Unreal Engine, SceneKit and Metal.
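
For the SceneKit route in particular, the integration is compact: ARSCNView renders SceneKit content over the live camera feed while ARKit tracks the world behind it. A minimal sketch of a view controller, with our own class and property names around Apple’s types:

```swift
import UIKit
import ARKit

// ARSCNView draws SceneKit content over the live camera feed while
// ARKit tracks the world. Class and property names here are our own;
// only the ARKit/SceneKit types are Apple's.
class ARViewController: UIViewController {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        sceneView.scene = SCNScene()   // an empty scene to hang 3D content on
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        sceneView.session.run(ARWorldTrackingConfiguration())  // start tracking
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()      // stop tracking when off screen
    }
}
```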

Lighting estimation and scene understanding

ARKit is packed with impressive features. Its scene-understanding technology uses the camera of your iPhone or iPad to detect horizontal planes in the surrounding environment, such as floors and tables, and then lets you place 3D objects on them dynamically.
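
Here’s a sketch of what plane detection looks like in code: enable it on the session configuration, then respond as ARKit delivers an ARPlaneAnchor for each surface it finds. The class and property names are assumptions; the delegate callback is ARKit’s own:

```swift
import UIKit
import ARKit

// Enable horizontal plane detection and react as ARKit adds an
// ARPlaneAnchor for each detected surface.
class PlaneDetectionViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        sceneView.delegate = self
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = .horizontal   // floors, tables, desks
        sceneView.session.run(configuration)
    }

    // Called whenever ARKit adds an anchor to the scene.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
        print("Detected a horizontal plane, extent: \(planeAnchor.extent)")
        // Content attached to `node` will sit on the real surface.
    }
}
```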

The ARKit lighting-estimation tool is also genuinely useful: it uses the device’s camera sensor to estimate how much light is available in the scene and applies a matching amount of light to virtual 3D objects. For brands and businesses interested in the technology, this means 3D renderings of real products can be visualised in a hyper-realistic way.
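
If you render with ARSCNView, these lighting updates can be applied for you automatically; if you manage lighting yourself, each ARFrame exposes the estimate. A sketch that assumes this object is the session’s delegate and that ambientLight is an SCNLight already in your scene:

```swift
import ARKit

// Reading the per-frame light estimate. ambientIntensity is in lumens
// (~1000 in a well-lit room) and the colour temperature is in kelvin.
// `ambientLight` is an assumed SCNLight from your own scene.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    guard let estimate = frame.lightEstimate else { return }
    ambientLight.intensity = estimate.ambientIntensity
    ambientLight.temperature = estimate.ambientColorTemperature
}
```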

ARKit Visual Inertial Odometry

ARKit utilises a technology referred to as Visual Inertial Odometry (VIO) to track the surrounding environment with an impressive degree of accuracy. VIO fuses real-time camera data with CoreMotion data, and together these two inputs let the device accurately understand how it is moving through a real-world environment, without any additional calibration.

Also referred to as ‘world tracking’, VIO is able to create the illusion that the virtual world is part of our surrounding real-world environment.
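
The output of world tracking is exposed on every frame as a camera pose. A minimal sketch of reading the device’s current position, assuming a running ARSession named session:

```swift
import ARKit

// Every ARFrame carries the camera pose that VIO has computed by fusing
// video and CoreMotion data. `session` is an assumed, running ARSession.
if let frame = session.currentFrame {
    let transform = frame.camera.transform   // 4x4 pose matrix in world space
    let position = transform.columns.3       // the translation column
    print("Device at x: \(position.x), y: \(position.y), z: \(position.z) metres")
}
```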

This lets you develop AR apps that shadow objects correctly, change the perspective and scale of 3D objects, and position digital props on real-world objects. If you’ve already built a mobile app that uses features such as the accelerometer, Bluetooth LE, GPS or gesture recognition, ARKit opens up a plethora of opportunities for extending that app into AR.
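
For example, positioning a digital prop on a real surface comes down to hit-testing a screen tap against the tracked world. A sketch in which sceneView (an ARSCNView) and propNode (an SCNNode) are assumed to exist elsewhere in your app:

```swift
import UIKit
import ARKit

// Pinning a digital prop to a real surface: hit-test from a screen tap
// into the tracked world and place a node at the intersection.
// `sceneView` (ARSCNView) and `propNode` (SCNNode) are assumed.
@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    let point = gesture.location(in: sceneView)
    guard let hit = sceneView.hitTest(point, types: .existingPlaneUsingExtent).first else { return }
    let t = hit.worldTransform               // where the tap meets a detected plane
    propNode.position = SCNVector3(t.columns.3.x, t.columns.3.y, t.columns.3.z)
    sceneView.scene.rootNode.addChildNode(propNode)
}
```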

Conclusion

It’s fair to say that WWDC 2017 was Apple’s most exciting event in some time for announcing game-changing technologies. The announcements around machine learning, voice-activated search and augmented reality could have a profound impact on your business. It’s worth spending some time digesting each of these transformative technologies, understanding how they’re likely to affect what you do, and working out how you can harness them to your advantage.