December 4th 2019 | Prospero House, 241 Borough High Street, London SE1 1GA

David Ripert, Chapter President - UK, The AR / VR Association

How can the public sector harness the power of AR/VR technology?

In a survey of 900 AR and VR developers about their development roadmaps, 50% said they were developing for gaming, 33% for education and 17% for training. That makes training the third biggest sector for the technology, and points to its growing use in public sector training.

There are many ways AR/VR is being used in public sector training (e.g. in the Police Force, the Fire Brigade and the NHS). The technology is also being adopted for private sector training: in factories, for example, step-by-step AR visual instructions can help increase retention and productivity.

How can this immersive technology assist in staff training? What are the benefits?

Immersive technology has been used in healthcare for a long time. Examples include using 360° cameras to record surgeries and live-stream them through VR headsets, giving students more effective training wherever they are. An operating room can also be recreated in VR using 3D models, so that students can practise without risk and at much lower cost.

A recent study conducted by UCLA showed that users of the virtual reality surgical training solution OSSO VR improved productivity by 230% and increased their speed by 20%. This is in line with wider research showing that VR-trained users are faster and more competent than those trained conventionally.

With AR, it is possible to overlay information onto the real world. Cameras can recognise body parts or tissue, using AI/ML-driven computer vision, and relevant data can be overlaid on top. This is an effective way of supporting clinicians during procedures as well as in training.
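The overlay step described above can be sketched in a few lines. This is a minimal, hypothetical illustration (no real medical computer-vision pipeline is assumed, and the detection data is invented): given a region that a vision model has recognised, work out where to draw its label so the annotation stays inside the camera frame.

```python
# Minimal sketch of an AR overlay step. The detections are made-up
# placeholders standing in for the output of a computer-vision model.

def place_label(bbox, frame_w, frame_h, label_h=20):
    """Anchor a text label above a detected region's bounding box.

    bbox is (x, y, w, h) in pixels. The label falls back to below
    the box when there is no room above, and clamps to frame edges.
    """
    x, y, w, h = bbox
    label_y = y - label_h if y - label_h >= 0 else y + h
    label_x = max(0, min(x, frame_w - w))
    return (label_x, min(label_y, frame_h - label_h))

def annotate(detections, frame_w, frame_h):
    """Turn raw detections into overlay drawing instructions."""
    return [
        {"name": d["name"], "box": d["bbox"],
         "label_at": place_label(d["bbox"], frame_w, frame_h)}
        for d in detections
    ]

# Example: a hypothetical detection near the top edge of a 640x480 frame.
overlays = annotate([{"name": "radial artery", "bbox": (50, 5, 120, 40)}],
                    640, 480)
print(overlays[0]["label_at"])  # → (50, 45): label drops below the box
```

A real system would feed these drawing instructions to a renderer (e.g. a headset compositor or an OpenCV draw call) every frame; the placement logic is the same either way.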

Police, fire and rescue services have also benefited from VR. Historically, firefighters have had to build an entire physical environment for training, complete with real fire, which is expensive and risky. Recreating a similar environment virtually has demonstrably improved scalability, speed, cost-effectiveness and safety.

To make the experience even more realistic, users can wear haptic gloves, which provide physical resistance to the fingers, in order to interact with the VR content. Some location-based entertainment venues have started adding wind, heat and smells to enrich experiences further.

VR has also improved collaboration, through virtual rooms and labs that allow simultaneous access for multiple users. And with persistence (i.e. virtual content created by one user appears to other users in the same location), there is potential for further realism and interactive training.

How is AR/VR cost-effective, and what is the return on investment?

Globally, spending on traditional training amounts to approximately $350 billion each year.

Training through AR and VR can be costly upfront, as it requires 3D modelling and development; however, not having to recreate a physical environment brings great benefits: not just in health and safety but also in speed, retention, overall performance and cost.
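The trade-off above is essentially an amortisation question: a one-off content-production cost against a recurring per-session saving. A back-of-the-envelope sketch, using purely illustrative numbers that do not come from the talk:

```python
# Break-even sketch with invented figures: amortise one-off VR content
# production against the recurring cost of physical training sessions.

def breakeven_sessions(vr_upfront, vr_per_session, physical_per_session):
    """Sessions needed before VR becomes cheaper overall."""
    saving = physical_per_session - vr_per_session
    if saving <= 0:
        return None  # VR never pays back at these rates
    # Smallest whole number of sessions where the cumulative saving
    # covers the upfront 3D modelling / development cost.
    return -(-vr_upfront // saving)  # ceiling division

# Hypothetical: £60,000 to build the VR scenario, £50 per VR session,
# versus £800 to stage each physical burn-training exercise.
print(breakeven_sessions(60_000, 50, 800))  # → 80 sessions
```

The actual figures vary enormously by scenario; the point is that the upfront cost is fixed while the saving scales with every additional trainee.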

How can AR/VR transform the use of data?

The world of artificial intelligence is fast-growing, but it can only work if large pools of data are available to make ‘computer vision’ work: that is to say, so that a camera can recognise an object or locate it. Google, for example, has been building large libraries of visual object data for years, and you can already try some of its computer vision and cloud computing applications through the Google Lens app, which lets you use your phone’s camera to live-translate foreign text, identify a plant species, or recognise a friend’s shoes and buy them directly online.

5G will bring great advantages to the public sector. Police, fire and rescue services could, for instance, use AR, computer vision and cloud technology to overlay real-time virtual data on top of the world, and to obtain far more precise positioning than GPS provides today, by combining map information with computer vision that recognises buildings and landmarks against data stored in the cloud. Google Maps will eventually become completely visual, as will search in general. Mobile phones still make the experience uncomfortable, but AR headsets will make visual search and navigation seamless and will democratise their use. We’re only a couple of years away from affordable, comfortable eyewear that mass consumers will adopt.
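The localisation idea described above — refining a coarse GPS fix with a position derived from a recognised landmark — can be sketched as a simple weighted fusion. All coordinates and uncertainty values below are invented for illustration; production systems use far more sophisticated estimators (e.g. Kalman filters).

```python
# Hedged sketch of GPS + visual-landmark fusion: weight each 2D fix
# by the inverse of its variance, so the more precise source dominates.
# Every number here is hypothetical.

def fuse_position(gps_fix, gps_sigma_m, visual_fix, visual_sigma_m):
    """Inverse-variance weighted average of two 2D position fixes."""
    wg = 1.0 / gps_sigma_m ** 2
    wv = 1.0 / visual_sigma_m ** 2
    return tuple((wg * g + wv * v) / (wg + wv)
                 for g, v in zip(gps_fix, visual_fix))

# GPS says (10.0, 20.0) with ~5 m error; matching a known building
# facade against cloud data says (12.0, 22.0) with ~0.5 m error.
x, y = fuse_position((10.0, 20.0), 5.0, (12.0, 22.0), 0.5)
print(round(x, 2), round(y, 2))  # result sits close to the visual fix
```

Because the visual fix is a hundred times more certain (in variance terms), the fused position lands almost on top of it, which is exactly why landmark recognition can out-perform raw GPS in dense urban areas.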

As always, the military has been one of the earliest adopters of immersive tech: data can be projected onto helmets and gear and combined with heat maps, with a link back to command. This way, action can be monitored and guided remotely, and with precision.

Data is needed to produce meaningful AR/VR experiences, but privacy remains a major issue. In order to create the “AR cloud”, people’s environments need to be scanned and the resulting 3D data stored: external and internal visual data alike, much as Alexa and Google Home products store voice data in the cloud in order to return intelligent answers quickly, in audio form. How will visual data be stored? Who will own it? Can it be hacked? How will it be used?