
Wolfgang Stelzle Reflects on New AREA Member RE’FLEKT

AREA: RE’FLEKT’s growth seems quite robust; to
what do you attribute your success?

WOLFGANG
STELZLE: There are four major factors. First, we’ve been in the industry for a
while; both my co-founder, Kerim Ispir, and I have been working in the AR space
since 2010 and we founded the company a little later in 2012. This is a lot of
time in the tech/AR space to gain in-depth experience. Second, our REFLEKT ONE
and REFLEKT Remote products closely follow our differentiated and innovative
approach as a company with a customer base in the industrial sector. Third, we
have good relationships with our partners and investors, including Bosch, BASF
Ventures, Microsoft, Prosegur and Siemens. This is essential as we receive
strong support for our products, which helps grow the company. Last but not
least, we’ve built a strategic partner ecosystem of technology providers,
resellers, and service partners. All of these factors have contributed to our
growth and, we hope, will help build a successful future.

AREA: Has RE’FLEKT made a deliberate effort to
make it easier for companies to ease their way into AR adoption; for example,
by using their existing CAD drawings to build solutions?

STELZLE:
Yes. Our solution does more than let you manually copy and paste CAD
information into a “what you see is what you get” editor for Augmented Reality. Instead,
we designed a tool that really leverages your existing IT infrastructure so
that your technical information or CAD systems can be used to create any
sort of AR content based on what you already have. Our product approach is to enhance
those systems and give technical authors the ability to publish to Augmented
Reality in addition to the manuals, PDFs, and websites they currently produce.
Since it all happens in one place, technical authors don’t have to change their
existing authoring structure, which makes it much easier to establish AR in
technical authoring.

AREA: Why has RE’FLEKT been particularly
successful in the automotive market?

STELZLE:
One key factor is that our HQ is located in Munich, Germany, close to some of
Europe’s largest OEMs. Second, Bosch’s involvement with us as a partner and
investor has opened many doors in the automotive industry. Also, a core feature
of our product is object recognition, which is particularly helpful when you’re
dealing with cars and other vehicles. And finally, the automotive industry is continuously
driving progress and innovation. They’re always looking for new ways to reduce
costs and develop new products – two areas in which AR has huge potential.

AREA: Could you share with us any examples of
ROI from companies using RE’FLEKT?

STELZLE:
This is still one of the most difficult questions to answer reliably and
accurately, particularly for RE’FLEKT, as we are a platform vendor and do not
always have direct access to customer data after deployment. However, I can
tell you that, based on a study of about 100 users, Bosch has solid data
showing savings of 15% in training costs by using RE’FLEKT solutions. I would
also point out that ROI is very use case-dependent, so that number could be
completely different for another customer and use case.

AREA: What do you see as the biggest obstacles
to AR adoption today?

STELZLE:
There are many. One research project has identified more than 40 hurdles to AR
adoption. Here are three that I think are the most significant. First is the
need. A machine manufacturer needs a screw if a machine breaks, but he doesn’t
necessarily need AR to address that; the pain isn’t big enough yet in many
areas. That will change as manufacturers face the knowledge gap in the future. Second,
in various areas, the technology is still not yet mature enough. Many of the AR
glasses are still bulky, battery life is short, or they are simply not yet
enterprise-ready. We also take a close look at tracking possibilities, where
there is also still room for improvement, particularly in outdoor environments.
Finally, it’s still costly for companies to get started with Augmented Reality
and create the first content. That last hurdle is one RE’FLEKT is working hard
to overcome. But even so, sometimes the data within a corporation is not very
well structured. Many customers have told us that before they can implement AR,
they have to reorganize their data. I believe all of these things will change
in the near future.

AREA: RE’FLEKT has recently opened offices in
the United States. Can you tell us what your near-term strategic priorities
are?

STELZLE:
A top priority is to expand our partner ecosystem in many areas – technical
information systems, CAD systems, and service providers. Second is investing in
expanding our sales and marketing efforts to make it easier for our customers
to get started with AR – with proper content, case studies, ROI studies, and so
on. Then of course, we’re working hard to take advantage of all the new
developments, such as the Microsoft HoloLens 2 and other new hardware products.
Longer term, it’s all about fulfilling our vision of making the maintenance and
operation of complex machinery as easy as using a simple home appliance. We
don’t just look at Augmented Reality; instead, we always think of actual
problems as well as the systems around them that can help solve those problems. For
example, if a machine breaks, AR isn’t the only thing that needs to be
considered. It’s the communication of the machine with the Internet of Things.
It’s the smart selection of a solution for the user. It’s the feedback to the
system to learn from the environment. We will continue our product development
to make that vision a reality.

AREA: What do you hope to gain by being a
member of the AREA?

STELZLE:
First and foremost, it is important for all of us to shape the market with an
independent, objective organization like the AREA that provides content that we
can all make use of. We also want to leverage the network and its activities
for joint sales and marketing. Equally important is exchanging knowledge with the
other partners in the AREA network and getting to know different perspectives on
the market. It’s all about learning – learning from the work of the AREA,
learning from research institutions, and learning from customers. Our people
are already participating in the committee work, including security, marketing,
and research.




How to Get Beyond the “Cool Demo” to Full Deployment

But the cool demo can also result in a so-called “proof-of-concept purgatory,”
in which enterprises get locked into a sequence of demonstrations but never move
beyond them to deploy a solution within their businesses.

In keeping with the AREA’s commitment to advancing the AR ecosystem for
the benefit of technology suppliers and enterprise users, we believe this is an
important obstacle to overcome. That’s why we asked AREA members for their
perspectives on how best to proceed from the cool demo to enterprise adoption.
Here’s what they told us:

Peter Antoniac, CTO, Augumenta:

An
industrial AR project should always start with solving a concrete customer
problem. A cool demo does not mean it is useful for the end user. A best
practice is to start with a clear problem and find a usable and efficient way
to solve it for the end user – taking into account all the variables, like
device usability, environment, and workers’ habits – and narrowing it down to
the most reliable approach possible, including picking the best hardware for
the deployment. That means working very closely with end users, listening to
their feedback, and responding to it as diligently as possible.

Harry Hulme, Marketing and Communications Manager, RE’FLEKT:

Scaling AR solutions into production
and breaking through the pilot purgatory is a problem faced by many businesses
today. Countless companies are making substantial technological investments but
fail to plan correctly before implementation. Like any investment, it is unwise
to simply rush in. Instead, you should take care to optimize AR deployments around
the factors that will make or break their success.

The name of the game is to set up an
AR deployment to succeed. That happens by winning over the key stakeholders who
can share in its victory (people); solving the biggest operational problems
(product); and doing all of the above in ways that are methodical, strategic
and follow best practices (process).

The success gained by following
these steps will protect your technology investment. Having invested time and
money in vetting AR, launching pilots and proving its value, you will only
realize that value if the deployment is given the chance to succeed. And once
it does succeed, there is real bottom-line value to be gained.

Damien Douxchamps, Head of R&D, Augumenta:

In
manufacturing use cases,
deployment requires integration with the factory
backend, and that’s where the big challenge is. In addition to that, sturdy
hardware, reliable applications and means of interaction are needed when the
user base increases. With that larger user base also come different people, and
the hardware and the software must fit each and every one of them.

David Francis, CMO, Theorem Solutions:

It is commonplace for an
organisation that wants to start an XR project to either go to an external
agency or to develop a capability in-house as a limited-scope proof-of-concept
(PoC). That’s because it is difficult to go “beyond the cool demo” until you
know in some detail what you need to do and how XR will benefit your
organisation. So, the only way to get the answers is to run a PoC.

The problem with this
approach is that the scope of the exercise either hasn’t been fully considered,
or it is extremely restricted simply because it is a PoC. Getting buy-in from
the senior leadership is difficult as you are trying to get approval for
something that hasn’t been tried and tested. Therefore, the budget is usually only
sufficient for the one use case that is within the scope of the PoC.  Of course, I have identified these as
negatives, but if there is a likelihood that the PoC will not lead to
something bigger, or might fail, then this is the best and most pragmatic
approach, isn’t it?  But, what if it
doesn’t fail?

It doesn’t have to be this
way. There are now technologies and technology partners that can help develop
the business case. The technology doesn’t have to be suitable only for that one-off
PoC. But if you develop in isolation (i.e., in-house or with a creative agency),
then it probably will be.

One of the biggest problems
with using this technology is getting the 3D content into XR in the first place.
There are lots of importers on the market that are transactional (i.e., they do
conversions one at a time manually), which for a PoC may be fine, but this
isn’t scalable.  If, for example, your
use case is manufacturing, then you don’t want to be manually importing 3D CAD
assemblies every time something changes. You’ll need a scalable, automated
process. And there is absolutely nothing wrong with identifying this as a
must-have requirement right from the start. Just having some isolated data in
XR will not adequately prove that your solution is fit-for-purpose; you’ll only
be testing that one aspect.
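
To make “scalable, automated process” concrete, here is a minimal sketch of a
batch import pipeline – a folder watcher that re-converts any CAD assembly
whenever it changes. Everything here is illustrative: the folder names and the
cad2gltf command are hypothetical stand-ins for whatever batch converter your
chosen toolchain actually provides.

    import subprocess
    import time
    from pathlib import Path

    WATCH_DIR = Path("cad_exports")   # where the CAD/PLM system drops assemblies
    OUT_DIR = Path("xr_ready")        # where converted, XR-ready models are published
    processed = {}                    # path -> modification time already converted

    def convert(src: Path) -> None:
        """Run the (hypothetical) cad2gltf converter on one assembly."""
        dest = OUT_DIR / (src.stem + ".glb")
        subprocess.run(["cad2gltf", str(src), "-o", str(dest)], check=True)

    OUT_DIR.mkdir(exist_ok=True)
    while True:
        for src in WATCH_DIR.glob("*.step"):
            mtime = src.stat().st_mtime
            if processed.get(src) != mtime:   # new or changed since the last pass
                convert(src)
                processed[src] = mtime
        time.sleep(60)                        # re-scan once a minute

The point is not this particular script but the shape of the process: updated
CAD data flows into XR-ready content with no human in the loop.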

So you need to really
understand why you think you need XR; what is the value to your business and
how would you implement it if you weren’t doing “just a PoC”?  In fact, if you don’t do this, then your PoC
isn’t really valid. Often, we are so keen to get our project started that we
skip past these steps, and even reduce the scope in order to get just enough
money to be able to “have a go” with exciting new technology. You must resist
the urge to do this, as whilst this may get your project off the starting
block, it will not do you any favours further downstream.

In order to prove the value,
you must adequately specify your project and all of its requirements. If
achieving the business value you require depends on regular 3D data changes,
then specify that. If you require a collaborative experience, then you must
specify it. Additionally, you must also consider the output device; these
things change regularly, with new devices popping up on the market every few
weeks, so make sure you specify a device-agnostic approach.

Tero Aaltonen, CEO, Augumenta:

Measuring
results is vital for making any decisions about continuing from a pilot to a
wider deployment. You should make sure that there are proper metrics in
place to observe productivity, safety, quality or other factors, so that
customers can calculate the ROI based on facts, not opinions and guesses.
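
As a simple illustration of this point, a fact-based ROI calculation needs only
a handful of honestly measured numbers. The sketch below is purely
illustrative: it considers task-time savings alone, and the example figures are
invented.

    def simple_roi(minutes_before: float, minutes_after: float,
                   tasks_per_year: int, hourly_rate: float,
                   annual_solution_cost: float) -> float:
        """Annual ROI (%) from a measured reduction in task time alone."""
        minutes_saved = (minutes_before - minutes_after) * tasks_per_year
        annual_savings = (minutes_saved / 60.0) * hourly_rate
        return 100.0 * (annual_savings - annual_solution_cost) / annual_solution_cost

    # Example with invented numbers: a task drops from 25 to 18 minutes,
    # 10,000 tasks per year, a $60/hour loaded labour rate, and a
    # $40,000/year solution cost.
    print(simple_roi(25, 18, 10_000, 60.0, 40_000))   # -> 75.0 (%)

The inputs matter more than the arithmetic: each one should come from
measurement in the pilot, not from a guess.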

Two common themes run
through these various perspectives from AREA members. The first is that
diligence and planning, as in most successful endeavors, are critical to ensure
that there is a tangible way forward from the cool demo to enterprise
deployment. This can help mitigate any perception that the cool demo
is simply a dead end. The second is to ensure that the cool demo adds identifiable
business value by solving a problem or enabling an opportunity.

These are just some ideas for
getting past the cool demo to full deployment. You can find more ideas and advice
at thearea.org.




2019 H1 AR Event Season Reveals Growing Interest and Momentum

The months of May and June bring a flurry of AR events that
the AREA and its members support. So, with my travel bag in hand and a list of
industry experts with whom I would like to meet and discuss the work of the
AREA, panels to chair and talks about the growth of the enterprise AR
ecosystem, I say goodbye to my family and travel to the US.

VRX Immersive Enterprise – Unleash the Full Potential of
XR in Enterprise – 21/22 May, Boston

I’m always excited to attend enterprise-focused events like
VRX. This was my second year and it continues to grow. The speakers were
excellent, albeit perhaps focusing more on VR than AR. This year I moderated a
panel titled “AR for enterprise – What’s happening now and what does the
future hold?”

I was joined by:

  • Rich Rabbitz, Lockheed Martin – I’ve been
    lucky to chair a number of panels with Rich; he is an expert and practitioner
    in the AR space.
  • Doug House, Porsche Cars of North America
    – Doug and his team have pioneered the use of remote assistance to make service
    at its dealerships hugely more efficient.
  • Caroline McManus, PTC – Caroline is a Senior
    Market Strategy Analyst, Augmented Reality and was able to provide real insight
    and experience to the panel.

 We covered the key
use cases being implemented in the industry, the benefits / ROI (both tangible
and intangible) that the companies deploying AR have received, and the
challenges they have had to overcome (including convincing stakeholders to
invest, safety, security and technology issues). A great panel and engaged
audience!

As part of the AREA’s media agreement with VRX, I offered
and ran a mini-master class on Enterprise AR. This intimate, interactive
workshop was a great way to present the work of the AREA. The focus was on
highlighting its members’ thought leadership, explaining the key use cases
(problems being solved), case studies (examples of companies deploying use
cases) and how to overcome potential barriers to adoption. A good session with
lots of positive feedback!

AWE US – 29–31 May, Santa Clara

After a few days back in the UK, I once again boarded a
plane to the US – this time, Santa Clara. This was my fourth time at AWE US, an
event that is always well-attended with the expo floor growing year over year.
The number of speaker tracks has also increased.

The AREA was unable to secure a speaking slot this year but
did run a very successful AREA meetup. With an 8 am start on Thursday morning,
coffee and doughnuts fuelled the conversation. Our thanks to AREA member PISON for allowing us to use their
meeting room. Over 50 early birds attended and heard from AREA sponsor and
board members. Brian Vogelsang from Qualcomm, Marc Schuetz from PTC, Christine
Perey from PEREY Research and Consulting, Jay Kim from Upskill, and Peter Tortorici
from Medtronic all discussed the experiences, insights, and benefits they have
gained from AREA membership.

We also heard from a few of the AREA Committee chairs (the
AREA runs monthly calls – covering Research, Safety, Security, Human Factors,
Marketing and Requirements – focused on working together to reduce barriers to
AR adoption). Tony Hodgson from Brainwaive (Security Committee Chair) and Christine
Perey (Research Committee Chair) spoke about the objectives and deliverables
from their respective groups.

Back home for a few days, bags packed and I’m excited to
attend…

LiveWorx 2019 – 11–13 June, Boston

It was my first visit to LiveWorx and I really enjoyed the
experience! The event is focused on helping companies digitally transform. PTC
CEO (and AREA member) Jim Heppelmann delivered a keynote that covered a wide
range of solutions including the latest PTC AR technology, featuring live demos
of many of PTC’s solutions. There were also many speaking tracks, including one
dedicated to AR.

I was delighted to present the work of the AREA in a 45-minute
session titled “The State of Enterprise AR.” We drew a sizeable audience, and I
was able to cover many subjects including: the history of AR (using Gartner’s
hype cycle); the key use cases being deployed; and how the AREA’s research
work – including the AREA ROI Calculator, Research in AR Wearable Security, and
the AR Human Factors and Safety framework – is helping to overcome barriers to
adoption.

The AREA members wanted to offer an in-depth, insightful and
comprehensive workshop, so I was honoured to be joined by Shelley Peterson
(Lockheed Martin), Mary Claire McLaughlin and Rachel Boykin (Newport News
Shipbuilding) and Chris Ambrose (Strategy Analytics) in running a 90-minute
workshop.

With a focus on interaction, the attendees were able to ask
the team any questions. We covered a wide range of topics on enterprise AR,
from detailed questions on use cases to ROI and benefits. We also addressed a
wide range of barriers and how these companies had successfully deployed AR. It
was a great session and the feedback was amazing, with everyone who attended
thanking the team for their honest insight and feedback. It was a great way to
finish the event season!

Thank you to the event producers, the companies sponsoring
and the attendees. Our industry continues to grow and with more real-world case
studies and enterprises deploying AR solutions, I look forward to kicking off
the 2019 second-half event season!




A New Division of Labor: IoT, Wearables and the Human Workforce

As in previous generations of technology innovation, the deployment of desktop computers initially required a considerable amount of abstraction and steep learning curves: creating even a simple sketch on a screen required coding and math skills. Experts and many mediated layers of knowledge were required to effectively use this newly-created work resource. As computers and software evolved, using them became more intuitive, but their users were still tied to desks.

The ensuing era of mobility helped greatly. It unchained the device and led to the creation of wholly new solutions that overcame the challenges of location, real-time interaction, and visual consumption of the world.

But there is one group that – comparatively speaking – benefited much less from all these changes: the legions of non-desk workers — those on the factory floor, on telephone poles, in mines, on oil rigs or on the farm for whom even a rugged laptop or tablet is impractical or inconvenient. The mobile era unchained desk workers from their desks but its contribution to workers in the field, to the folks who work on things rather than information, was negligible. Working on things often requires both hands to get the job done, and also doesn’t map well to a desktop abstraction.

Enter the wearable device, a new device class enabled by mobile-driven miniaturization of components, the proliferation of affordable sensor technology, and the movement to the cloud.

Wearable devices started as a consumer phenomenon (think smartwatches), mostly built around sensors. Initially, they focused on elevating the utility of the incorporated sensor and their market success was commensurate with how well the sensor data stream could be laddered up to meaningful and personalized insights. With the entrance of the “traditional” mobile actors, wearables’ role expanded into facilitating access, in a simplified way, to the more powerful devices in a user’s possession (e.g., their smartphone). The consumer market for wearables continues to pivot around the twin notions of access and self-monitoring. However, to understand the deeper and longer-term implications of the emergence of intelligent wearable devices, we need to look to the industrial world.

An important, new chapter in wearable history was written by Google Glass, the first affordable commercial Head-Mounted Display (HMD). Although it failed as a consumer device, it successfully catalyzed the introduction of HMDs in the enterprise. Perhaps even more importantly, this new device type led the way in integrating with other enterprise systems, aggregating the compute power of a node and the cloud – centered on a wearer. Unlike the shift to mobile devices, however, this has the potential to drive profound changes in the lives of field workers and could be a harbinger of even deeper changes in how all of us interact with the digital world.

Division of Labor: Re-empowering the Human Workforce

Computers and handheld devices had a limited impact on non-desk workers. But technological changes such as automation, robotics, and the Internet of Things (IoT) had a profound impact, effectively splitting the industrial world into work that is fit for robots and work that isn’t. And the line demarcating this division itself is in continuous motion.

Early robotic systems focused on automating precise, repetitive, and often physically demanding activities. More recent advances in analytics and decision support technology (e.g., Machine Learning and Artificial Intelligence [AI]) and integration via IoT have led to the extension of physical robots into the digital domain, coupling them with software counterparts (software agents, bots, etc.) capable of more dynamic response to the world around them. Automation is thus becoming more autonomous and, as it does so, it’s increasingly moving out of its isolated, tightly controlled confines and becoming ever more entwined with human activity.

Because automation inherently displaces human participation in industrial processes, the rapid advances in analytics, complex event processing, and digital decision-making have prompted concerns about the possibility of “human obsolescence.” In terms of the role of bulk labor, this is a real concern. However, the AI community has perpetually underestimated the sophistication of the human brain and the limits to AI-based machine autonomy in the real world have remained clear: creativity, decision-making, complex, non-repetitive activity, untrainable pattern recognition, self-directed evolution, and intuition are still largely the domains of the human workforce, and are likely to remain so for some time.

Even the most sophisticated autonomous machines can only operate in a highly constrained environment. Self-driving vehicles, for example, depend on well-marked, regular roads, and achieving the goal of an “unattended autonomous vehicle” is very likely to require extensive orchestration and physical infrastructure, and the resolution of some very serious security challenges. By contrast, the human brain is extraordinarily well adapted to operating in the extreme fuzziness of the real world and is a marvel of efficiency. Rather than try to replace it with fully digital processes, a safer and more cost-effective strategy would be to find ever better and closer ways to integrate human processing with the digital world. Wearable technology provides a first path forward in this regard.

Initial industrial use cases for wearables have tended to emphasize human productivity through the incorporation of monitoring and “field appropriate” access to task-specific information. The first use cases included training and enabling less experienced field personnel to operate with less guidance and oversight. Some good examples are Librestream’s Onsight which creates “virtual experts,” Ubimax’s X-pick that guides warehouse pickers, or Atheer’s AR-Training solutions. Honeywell’s Connected Plant solution goes a step beyond: it is an “Industrial Internet of Things (IIoT) style” platform that already connects industrial assets and processes for diagnostic and maintenance purposes, a new dimension of value.

The introduction of increasingly robust autonomous machines and the consideration of productivity and monitoring across more complex use cases involving multiple workers and longer spans of time will drive the next generation of use cases.

Next Reality

Consider the following – still hypothetical, although reality-based – use case:

Iron ore mining is a complex operation involving machines (some of which are very large), stationary objects and human workers – all sharing the same confined space with limited visibility. It is critical not only to be able to direct the flow of these participants for safety reasons but also to optimize it for maximum productivity.

The first step in accomplishing this requires deploying sensors at the edge that create awareness of context: state, condition, location. Sensors on large machines or objects are not new and increasingly, miners carry an array of sensors built into their hard hats, vests, and wrist-worn devices. But “sense” is not enough – optimization requires a change in behavior. For this, a feedback loop is needed, which is comparatively easy to accomplish with machines. For workers, a display mounted on the hard hat, and haptic actuators embedded in their vest and wrist devices close the feedback loop.

Thus equipped, both human and machine participants in the mining ecosystem can be continuously aware of each other, getting a heads up – or even a warning – about proximity. Beyond awareness, this also allows for independent action: for example, stopping vehicles or giving directional instructions via the HMD or haptic feedback.

Being connected in this way helps to promote safety, but isn’t enough for optimization. For that, a backend system is required that uses historical data, rules and ML algorithms to predict and ultimately prescribe optimum paths. This provides humans with key decision support capabilities and a means to provide guidance to machines without explicitly having to operate them. Practically speaking: they operate machines via their presence. Considering the confined environment, this means that sometimes the worker needs to give way to the 50-ton hauler and other times the other way around. What needs to happen is deduced from the actual conditions and decided in real time, at the edge.
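
As a rough illustration of that edge-side decision logic, the sketch below
predicts the closest approach between a worker and a hauler from their sensed
positions and velocities, then decides who should yield. It is deliberately
simplified: straight-line motion, invented names and thresholds, and a single
rule standing in for the historical data and learned models a real system
would draw on.

    from dataclasses import dataclass
    import math

    @dataclass
    class Actor:
        name: str
        x: float; y: float      # position (m), from on-body / on-vehicle sensors
        vx: float; vy: float    # velocity (m/s)
        mass_t: float           # tonnes; heavier actors are harder to stop

    def time_to_closest_approach(a: Actor, b: Actor) -> float:
        """Seconds until a and b are nearest, assuming straight-line motion."""
        rx, ry = b.x - a.x, b.y - a.y
        vx, vy = b.vx - a.vx, b.vy - a.vy
        v2 = vx * vx + vy * vy
        return 0.0 if v2 == 0 else max(0.0, -(rx * vx + ry * vy) / v2)

    def advise(worker: Actor, hauler: Actor, danger_radius: float = 10.0) -> str:
        t = time_to_closest_approach(worker, hauler)
        dx = (worker.x + worker.vx * t) - (hauler.x + hauler.vx * t)
        dy = (worker.y + worker.vy * t) - (hauler.y + hauler.vy * t)
        if math.hypot(dx, dy) > danger_radius:
            return "no action"
        # The lighter, more agile actor yields; the HMD and haptics deliver the cue.
        return f"{worker.name}: yield" if worker.mass_t < hauler.mass_t else f"{hauler.name}: stop"

    print(advise(Actor("miner-07", 0, 0, 1.2, 0, 0.1),
                 Actor("hauler-02", 80, 4, -8.0, 0, 50.0)))   # -> "miner-07: yield"

The cue itself – a visual warning on the hard-hat display or a haptic pulse in
the vest – is what closes the feedback loop described above.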

As this use case illustrates, wearable devices are emerging as a new way for humans to interact with machines (physical or digital). The sensors on these devices are also being used in a new and more dynamic way. Whereas each sensor in a traditional industrial context provides a very tightly defined window into a specific operating parameter of a specific asset, sensor data in the emerging paradigm is interpreted situationally. Temperature, speed, vibration may carry very different meanings depending on the task and situation at hand. The Key Performance Indicators (KPIs) to be extracted from these data streams are also task- and situation-specific, as are the ways in which these KPIs are used to validate, certify, and optimize both the individual tasks and the overarching process or mission in which these tasks are embedded.
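
A toy example of that situational interpretation: the same raw vibration
reading can be normal in one task context and a warning sign in another. The
thresholds below are invented purely for illustration.

    # The same raw reading means different things in different task contexts.
    VIBRATION_LIMITS_MM_S = {        # illustrative thresholds per situation
        "drilling": 12.0,            # high vibration is expected while drilling
        "idle": 2.0,                 # the same level while idle suggests a fault
        "transport": 6.0,
    }

    def interpret_vibration(task: str, vibration_mm_s: float) -> str:
        limit = VIBRATION_LIMITS_MM_S[task]
        status = "normal" if vibration_mm_s <= limit else "investigate"
        return f"{task}: {vibration_mm_s} mm/s -> {status} (limit {limit})"

    print(interpret_vibration("drilling", 9.5))   # normal while drilling
    print(interpret_vibration("idle", 9.5))       # same reading, flagged when idle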

A key takeaway in considering this new human-machine interaction paradigm is that almost everything is dynamic and situational. And, at least in the industrial context, the logical container for managing all of this is what we’re calling the “Mission.” This has important ramifications for considering what systems need to be in place to enable workers and machines to interoperate in this way and to make possible an IIoT that effectively leverages the unique features of the human brain.

A bit about the authors:

Keith Deutsch is an experienced CTO, VP of Engineering, and Chief Architect based in the San Francisco Bay Area. Peter Orban, based in New York, leads, builds and supports experience-led, tech-driven organizations globally, helping them grow the share of problems they can solve effectively.




Interview with Brian Vogelsang of Qualcomm

AREA: How would you describe Qualcomm’s role in the enterprise AR ecosystem?

Vogelsang: We’re a technology
provider in the ecosystem, delivering chipsets that power AR experiences. Our
Qualcomm Snapdragon platform provides the best silicon/chipset that we can
customize to meet the needs of the XR enterprise ecosystem. You’ll see them in
products today from customers like Vuzix and RealWear. Then there’s the
Microsoft HoloLens 2 that was announced at Mobile World Congress; it uses our Snapdragon
850 Mobile Platform. Vuzix also announced at Mobile World Congress their M400
platform, which is powered by the Qualcomm Snapdragon XR1 platform. Finally,
there are new, emerging OEMs, such as nreal, Realmax, Shadow Creator, and ThirdEye.
Our goal is to optimize technology to put more capability in lighter weight
designs that can drive more immersive experiences at the lowest possible power
levels, but with full connectivity.

AREA: People might have thought that Qualcomm was getting out of AR
when it sold the Vuforia business to PTC three years ago, but the company is
still very much committed to VR and AR, isn’t it?

Vogelsang: That’s correct. We’ve
been working for over a decade in this space. We have a long history of
computer vision expertise and exploring how to build the technology and optimize
it in hardware in ways that will allow more immersive experiences while running
at the lowest possible power. To date, that has been predominantly on smartphones.
However, our long-term vision is that within a decade, we will start
transitioning from a handheld device (the smartphone) to a head-worn device or
sleek AR glasses that people use the whole day. And that’s really what we’re
looking at: how do we accelerate that innovation and make those kinds of experiences
happen – initially for enterprises, but long term for consumers.

AREA: So, you expect enterprises to be the early adopters of
wearables, then the consumer market will develop from there?

Vogelsang: That’s right. Today,
in the wearable form factor, there’s a spectrum of devices, from Assisted Reality
devices for remote expert or guided work instructions, to full augmented or
mixed reality devices like HoloLens or Magic Leap. Enterprises are willing to
adopt these technologies if they solve a problem and deliver an ROI – and we’re
excited about that. But long term, we think that the technology needs to get
smaller, lighter weight, and more ergonomic.  More like your standard eyeglasses. Because of
these size requirements, that’s going to be particularly challenging
technically. To deliver an immersive experience at the lowest possible power
requires deep systems expertise. That’s right in Qualcomm’s wheelhouse. It’s
going to take a few years for the industry to deliver mass adoption of consumer
class AR eyewear. So for the short term, the enterprise is going to be doing a
lot to drive the market.

AREA: How closely do you work with wearables manufacturers?

Vogelsang: We work really
closely with them on their products and roadmaps, collaborating with them to
achieve their market objectives. There are always tradeoffs as OEMs balance
cost, weight, form factor and ergonomics, optics and display capability, performance,
and thermals – and often these impact immersiveness. And so we work really closely
with them to understand their use cases and objectives and then help them with
hardware, software, and support to meet their objectives. We also give them
insight into future technology developments and their future requirements
inform our chipset roadmap. We can’t solve all the problems. Things like
displays and optics as well as camera modules are a big part of the equation in
building an AR device, and while we don’t build those technologies, we work
closely with the suppliers of these components and assist OEMs with integration
through our reference designs and HMD Accelerator Program, which pre-validates
and qualifies components so OEMs can get to market more quickly.  

AREA: It seems as if technologies are starting to converge in new
ways: 5G networks, Artificial Intelligence, the Internet of Things, and AR. Do
you get that impression as well?

Vogelsang: Definitely. We see
5G as the connectivity fabric that’s going to allow the mobile network to not
only interconnect people, but also interconnect and control machines and
objects and devices. 5G is going to deliver performance and efficiency that
will enable these new experiences and connect new industries, delivering multi-gigabit-per-second
rates of connectivity at ultra-low latency. Latency is hugely important when it
comes to Augmented and Virtual Reality experiences. And of course, 5G means more
capacity. But AI is already being used in Augmented Reality experiences, enabling
things like head tracking, hand tracking, 3D reconstruction and object
recognition or estimating light. AI is a really important part of that. And I
think 5G will also enable some capabilities to be moved off the device and
processed at the edge of the mobile network. And that ultimately will help us enable lighter
weight designs with richer, more immersive graphics at that low power threshold
that we need. So all three – 5G, AI and AR – are coming together. And I think
IoT will be a part of AR in terms of syndicating information contextually about
the environment in an enterprise to an AR experience. IoT will feed the
insights, which will be bubbled up as AR experiences.

AREA: What do you hope to get out of being a member of the AREA?

Vogelsang: Qualcomm’s
customers are OEMs. We don’t sell to end customers, the people who would buy
those devices or experiences. However, we do need to understand what their
needs are so that we can better evolve our technology roadmap to support where
those end users want to go. So, one of the things that excites us about
becoming a member of the AREA is to begin hearing directly from some of the end
customers who are deploying wearable AR technology. We know this is a marathon
and we believe XR – spanning both Augmented and Virtual Reality – will be the
next computing platform. So, we’re taking a long-term view and investing now in
the technology that will enable this market. As a result, we’re very interested
in learning from other AREA members about how the technology is being applied
today to solve concrete problems in the enterprise so we can inform our roadmap.
Those learnings will help us deliver products that can accelerate the pace of innovation
and grow the overall AR wearable market. 

We’re doing some trials and
proofs of concept and other things where we get more directly engaged with end
customer use cases. So, being able to collaborate with other AREA members in
that space would be really good. Also, we’d like to get involved in the
committees. We have a human factors team here, and I’d like to get them engaged
with the work that’s being done on the human factors side. While we don’t build
end devices ourselves, we still need to understand as we’re building out
technology how human factors, such as weight, size, or thermals impact the user
experience and ergonomics.

We’d also like to get involved
in requirements. We think we’d really benefit from learning more about requirements
from a horizontal cross-section of the AREA membership. And finally, I think
we’d like to get involved in the marketing side, as well. We would be
interested in using our platform to help tell the story and accelerate industry
adoption.

AREA: Where do you see things headed in XR over the next three to five
years? What are the next big milestones people should be looking for?

Vogelsang: I think that we’ll
see a transition from smart glasses or Assisted Reality experiences to more
Augmented Reality or spatial immersive computing type experiences. Over the
next few years, that transition will really start to accelerate. We’re already seeing
the early promise of what’s to come with technology such as HoloLens or Magic
Leap. I’m really excited about seeing the companies who are deploying smart
glasses or Assisted Reality experiences today start to adopt Augmented Reality
or immersive computing in a much larger way.




What is Assisted Reality – and How Can You Benefit from It?

AREA:
Jay, perhaps you could begin by giving us a quick update on the progress of
Upskill.

Kim: As the person in charge of our product roadmap
and product development, I’ve been very busy taking advantage of the
advancements that have been happening in this space. Upskill’s flagship software
platform for industrial AR applications, Skylight, had historically been
focused on simpler, wearable 2D Augmented Reality applications.

However, we’ve been busy over the last year or so
expanding our portfolio to include Mixed Reality solutions running on Microsoft
HoloLens, for which we announced a partnership with Microsoft, as well as
supporting mobile phones and tablets and transforming Skylight into a multi-experience
platform. We announced this a week before Mobile World Congress in February,
which was very exciting because a number of our customers had already been
taking advantage of these feature sets and are excited that they can publicly
talk about them. We continue to do well on the business front, engaging
with customers globally and staying focused on bringing the digital
enterprise to the hands-on worker. There’s a lot to do and there’s a lot of
growth that’s happening within our business, which is also a microcosm of the
AR industry.

AREA:
In the AREA-sponsored enterprise track at AWE, you delivered a really
interesting and exciting keynote talk that introduced the phrase “Assisted
Reality,” a concept that Upskill has brought to the ecosystem. Can you explain more
about what Assisted Reality is and how it’s different from Augmented Reality?

Kim: Assisted Reality is a wearable, non-immersive
visualization of content that still has contextual awareness. So, what do we
mean by non-immersive? A good hardware example would be what Google introduced
with their Glass product – a heads-up display that’s in your line of sight and
you can glance at the content, but the goal of the user experience isn’t to
deliver object tracking or any kind of immersion – no 3D visualizations, object
overlays, or the like.

It’s really intended to deliver pre-existing
information, text, diagrams, images – maybe short videos – as-is to help the
user understand what needs to be done at any given point in time and enhance
the person’s situational awareness. The
goal is no different from that of Augmented Reality.

Assisted Reality was born as Upskill was striving
to define and differentiate the various user experiences within the broader
Augmented Reality context. If you think about Augmented Reality, it can be delivered
in mobile forms, wearable forms, it can be even a projected display or an
audible experience. The term is actually very broad. So, Assisted Reality was coined to specifically focus on non-immersive wearable experiences that
boost the person’s situational awareness.
We consider Assisted Reality a subdomain or an experience within the Augmented
Reality spectrum.

AREA:
What are some of the benefits of Assisted Reality?

Kim: The benefits span several areas. First of
all, because Assisted Reality is a wearable user experience, it’s important to
talk about the different types of devices that it supports. Generally speaking,
Assisted Reality devices tend to be more wearable than their Mixed Reality counterparts.
That gap may be closing a bit with the introduction of HoloLens 2, but this has
historically been the case. Because Assisted Reality has less stringent
hardware requirements, it can deliver a positive user experience when worn for
a full work shift, and the battery life is quite good. Assisted Reality devices
are simpler and frankly, a bit cheaper. So, the leading vendors in that space
would be companies like RealWear, which has been gaining a tremendous amount of
traction recently and Vuzix, which is a vendor that invested in the industry
very early on, as did Google. These companies have been driving quite a bit of
success with this and that’s certainly one benefit!

The other benefit to enterprises is that Assisted
Reality often doesn’t require any kind of data preparation or formatting. It’s
really focused on being able to deliver content that was historically being
delivered to your hands-on workers on the manufacturing floor, out in the
field, or moving about in the warehouse – on paper forms and PCs.

So, it takes away the need to go and figure out
your 3D content pipeline, understand how to convert that into an AR-ready
format, and other tasks. Enterprises can focus on leveraging content that exists
within their organization, which significantly cuts back on the cost, as well
as the time to get that initial return on investment from the solution.

So, it’s a lower cost and faster time to
implement. That’s how we’ve been able to steer a number of our customers
towards starting the journey, beginning with Assisted Reality and eventually building
up their capabilities with more immersive, Mixed Reality solutions.

AREA:
What would you recommend to a company reading this blog post; what steps do
they need to take to learn more about Assisted Reality or the broader Augmented
Reality spectrum?

Kim: Because there’s been so much activity in the
marketplace, we’re very fortunate as a community to have a number of very strong
case studies describing successful implementations of Assisted Reality, Mixed Reality,
mobile phone AR, projection AR – you name it, they’re all there. Chances are,
there is another company within your particular industry that has successfully
deployed AR solutions and has actually spoken about them at events or published
papers about them.

I would highly encourage some peer learning and
of course, the onus is on the people who want to experiment with the
technology that’s out there – whether that’s starting with a proof of concept
and graduating to a pilot and hopefully getting into full deployments, or
making deeper initial investments because you have a better sense of the
business case. It doesn’t matter, but being able to get started in any kind of
capacity is critical for your learning. There’s only so much that you can learn
from research. But you can certainly be inspired by hundreds of companies out
there now deploying these kinds of solutions, reading about what worked well
for them and what hasn’t worked well for them.

My other advice is to talk to your own end users within
your organization to better understand their pain points. AR, like most
successful tools, shouldn’t be considered a hammer looking for nails, but rather
a solution to a well-defined set of problems.

And of course, I would be remiss if I didn’t
mention that people who are in the learning phase would actually benefit the
most by joining the AREA. The AREA is a global community of organizations that
have been doing this for a very long time, whether they’re providers, end
users, or research institutions. The formal and informal interactions that
people can have as a part of the AREA could really accelerate your learning.

AREA:
Jay, thanks to you and Upskill for bringing the term Assisted Reality to the
fore because it’s a really important part of the solutions spectrum.

Kim: You’re welcome. We’re very excited for the
continued growth of the industry and look forward to working with the rest of
the community.




Research: Augmented Reality Marketing Can Be Effective


A study on Augmented Reality Marketing and
Branding

The study was conducted by AREA research partner Prof. Philipp Rauschnabel (Universität der Bundeswehr München, Germany) in partnership with Prof. Reto Felix (University of Texas Rio Grande Valley, USA) and Prof. Chris Hinsch (Grand Valley State University, USA) and published in the Journal of Retailing and Consumer Services.

In their
study, the authors measured consumers’ brand attitudes before and after using a
branded AR application. Half of respondents used an IKEA app and half used an
app for a German Hip Hop band. Even among the IKEA app users, the authors
detected improvements in brand evaluations. This is significant because
attitudes towards established brands are notoriously difficult to change.

The
researchers also asked consumers to rate their evaluation of the app and how
inspired they felt after using the app. Based on statistical driver analyses,
they could then explain why and when brand evaluations improved.

Counterintuitively,
the extent to which consumers rate an app as positive or negative seems to be
unrelated to overall brand attitude. However, the extent to which consumers
felt inspired is a major driver of improvements in brand attitude. More
specifically, among highly inspired consumers, the brand improvements were
about four times stronger than among the less inspired users. In addition, the
quality of the augmentation is a main driver of inspiration. Users who
experienced problems in AR technology (e.g., a virtual object behaved
unrealistically) felt less inspired than those who did not.

Findings: AR can be effective!

The study
provides some key findings and calls to action for marketers:

  1. Augmented Reality Marketing can
    improve brand attitudes and positively impact a brand’s bottom line. Marketers should consider adding
    AR apps to their marketing and branding toolbox.
  2. The degree to which the AR app
    inspired the user was more predictive of brand attitude change than an
    evaluation of the app itself. Marketers should measure the degree to which app users are inspired by
    the app.
  3. A bad augmentation of the real world
    can negatively impact evaluations of the overall brand. Marketers interested in pursuing
    AR should invest in high quality 3D content and state-of-the-art AR technology.

As the
study authors wrote, “Consumers will operate in a reality that is consistently
enriched with virtual content, and marketers need to find ways to integrate
these new realities into their marketing strategies.”

The entire
research report can be downloaded here for free during the month of April 2019. After
April, the report will be found here. To read a more academic summary,
please visit Philipp Rauschnabel’s personal website.




The AREA’s Annual Workshop

The Advanced Manufacturing Research Centre (AMRC) kindly
hosted the workshop, which saw more than 70 participants from a range of
industries, including energy/utilities, buildings and infrastructure,
aerospace, defence, industrial equipment, mining, automotive and consumer high
tech, converge on the shop floor of Factory 2050 for a jam-packed series of
presentations, interactive workshops, demonstrations and networking.

Day 1 was opened by AREA Executive Director Mark Sage and
AREA President Paul Davies, who delivered a high-level overview of AR,
supported by leading companies and AREA members who have deployed AR.
ExxonMobil, Welsh Water and Boeing all helped paint a detailed picture by
sharing their use cases, experiences and challenges.

We then heard from Jordi Boza of Vuzix who shared his
thoughts and ideas of how to get started in AR followed by a presentation by
Atheer that took attendees through a case study showing how Porsche transformed
automotive dealer services with AR.

The last session of the day was an intense, hands-on session
presented by the AREA’s Dr. Michael Rygol who helped attendees get under the
skin of AR by discussing and documenting use cases and their key requirements
in working groups. Presentations by attendees led to some healthy debate and
interesting insights. The day was finished off with an informal networking
session where participants had the opportunity to take a closer look at some of
the organisations who were there with demo tables and to connect with
colleagues both old and new.

The second day began with an early start at 8am and went straight
into a presentation from Theorem Solutions on the cognitive gap and the potential
of XR technologies, followed by a lively panel discussion on workforce
challenges led by AREA Board member Christine Perey of PEREY Research &
Consulting, with representation from Boeing, ExxonMobil and VW Group UK. We then
explored more of the AREA’s research capability by looking at past projects
before jumping into a master class on AR human-centred design from London-based
ThreeSixtyReality. A full agenda took us into a presentation on Human Factors
and related safety challenges, a pre-recorded session on overcoming the
challenges of AR security, and a polished presentation from Microsoft on
their MR strategy and the eagerly anticipated HoloLens 2. A three-minute
provider pitch finished off a jam-packed day before participants headed home.

In summary, the depth and range of
content and sessions provided participants with a framework within which to navigate
(or continue navigating) their own AR journeys. Among the takeaways:

  • Staying in the AR game
    is tough. Organisations should consider both the opportunities and limitations of
    the current evolving environment.
  • The AR supplier ecosystem
    is continuing to grow, offering new and varied opportunities.
  • Clearer understanding
    and definition of the barriers to adoption (including safety, security, user
    experience) and paths forward to overcome these is essential.
  • Sound, appropriate use
    cases are key to learning more about AR. The number of use cases where AR
    delivers value continues to grow (and we need to capture and share these –
    hence the ASoN initiative from the AREA).
  • Digital eyewear to
    support AR is maturing rapidly (e.g., new models from Vuzix and Microsoft).
    Ensure you stay informed on new developments.
  • There is broad interest
    in AR across a number of industries – from industrial flooring to mining.
  • Considering the business
    benefits of AR is essential to obtaining buy-in from stakeholders and
    decision-makers.
  • There may be significant
    issues around safety and security where AR is concerned. Don’t ignore them.

The AREA annual workshop is an opportunity for members and non-members to connect, learn and share more on AR. We at the AREA are fortunate to have the opportunity to do this annually and it wouldn’t be possible this year without the valuable support of AREA members and our sponsors: Theorem Solutions, PTC, Vuzix and Atheer.




Embry-Riddle Prof. Barbara Chaparro on the Human Factors Aspects of AR

AREA: Tell us how you became interested in joining the AREA.

Dr. Chaparro: I first heard
about the AREA from Brian Laughlin at Boeing. Brian was my human factors
doctoral student when I was at Wichita State and we’ve kept in touch over the
years. I’ve seen the kinds of things he’s been working on at Boeing and how they
overlap with my research interests in human-computer interaction, usability,
and user experience. I saw an opportunity to pursue these further through the
AREA group.

AREA: Could you tell us more about your background as it relates to AR?

Dr. Chaparro: My background is
in the area of usability and user experience. I have worked with a number of
different companies and technologies focusing on implementing design principles
to make it as easy as possible for people to use devices and tools.

I became interested in AR when
Google Glass was introduced. I could see the potential in industries such as aviation,
medical, and consumer products. My initial interest with Glass was to use it as
a training tool for my students. I also worked with a colleague at Wichita
State to study user interactions with Glass versus a cell phone.

And then HoloLens came out, and
for a year and a half now, we have been exploring the user experience side of
HoloLens. We want to get an idea of how the average person experiences this
technology. For instance: What are some of the issues from a UX standpoint? The
gesturing, window manipulation, texting, voice input – all of these methods of
interaction bring usability and user experience issues to the human-technology
interaction. A lot of the literature is focused on the usability of a particular
app, but there is very little out there on the integration of multiple technologies,
working across a multitude of tasks at the same time, or task-switching between
the physical and augmented environment. That is my interest, and then seeing the
application of this to a variety of domains. I consult, for example, with
healthcare professionals who believe that AR has great potential. Whatever the
domain, there is going to be this core issue of usability that will determine whether
it takes off or not. Eventually, it comes down to the comfort and the seamlessness
of the user experience in the tasks that they are doing.

AREA: How do you expect to benefit from your membership in the AREA?

Dr. Chaparro: I see the AREA as
a fantastic mix of academic researchers and industries that are applying the
technology. Human factors is an applied field, so we’re always looking for
practical applications of the things we’re studying in the lab. So I see that
as a huge benefit of the AREA. Then we’ll benefit from the work of the various
committees. We’ve been participating in the Safety and the Research Committees,
and hopefully, the Human Factors Committee in the future. We need to understand
what the issues are, because any problem that an industry is having is a potential
research project for one of my students. And that’s the other benefit: to
recognize the needs of industries that will need to hire students who have
knowledge of this technology. We want to understand what those needs are so we
can build them into our curriculum if they are not already there.

AREA: Based on what you have learned so far, what do you see as the
major outstanding issue that needs to be addressed to make AR more usable to
the average person?

Chaparro: With these new
glasses and head-mounted devices, certainly comfort is an issue, especially in
industries where they will need to wear them for an extended period of time.
That’s going to be huge. And not from just a comfort standpoint but also visually
– going back and forth between the physical and augmented world and what that
experience is like.

AREA: In addition to the research projects you mentioned, what other
areas of AR are being explored at Embry-Riddle?

Chaparro: My colleague Dr. Joseph
Keebler has been conducting research related to marker-based AR in medical
training. His area of expertise is medical human factors, teams, and training, so
he is excited about the technology from both a training standpoint and as a
real-time use tool for high-performing teams. The issue is that, while it
appears that this technology is great and effective, we really need more research
to demonstrate how and when it works, and how best to integrate it into
modern-day systems.

One challenge is that there’s
a novelty effect problem. For instance, there are research projects being done
that show AR is better for performing a task, but it is really hard to tease
away the novelty side of that. In other words – are people improving due to
increased learning from the AR system? Or is it simply the fact that it’s this
fascinating and visually impressive technology that is garnering people’s
interest and keeping them engaged? Joe and I are interested in how to structure
a study so that we are looking at the true effectiveness of the technology above
and beyond the effects of its potential novelty. Joe has published a few papers
on AR, including a chapter in the Cambridge Handbook of Workplace Training and
Employee Development (Keebler, Patzer, Wiltshire, & Fiore, 2017)[1].

Another one of our colleagues,
Dr. Alex Chaparro, has been working on the use of AR in transportation. For
example, AR has many applications in aviation, maintenance documentation, and
driving environments. His main interest is in the uses of AR and VR in these environments
to train individuals to perform complex tasks.

We also have a VR gaming lab.
Joe and I have also done some psychometric work on the validation of a new
satisfaction instrument for video games that we’re now trying to apply to the
AR world (Phan, Keebler, & Chaparro, 2016)[2]. We
definitely see the benefits of this technology and would like to see it
succeed.


[1] Keebler, J. R., Patzer, B. S., Wiltshire, T. J., & Fiore, S. M. (2017). Augmented reality systems in training. The Cambridge Handbook of Workplace Training and Employee Development, 278.

[2] Phan, M. H., Keebler, J. R., & Chaparro, B. S. (2016). The
development and validation of the game user experience satisfaction scale
(GUESS). Human Factors, 58(8),
1217-1247.




The AREA & NIST Survey on AR Standards for Industry

The survey takes approximately 5 minutes to complete and aims to provide valuable information that will help drive and inform standards development strategies for the enterprise AR industry.

Please access the survey by following this link: https://survey.zohopublic.com/zs/OwB3Gq
Please access the survey by following this link https://survey.zohopublic.com/zs/OwB3Gq