
5 Reasons Why the DMDII/AREA Requirements Workshop Was a Milestone Event

At first glance, the two-day event promised to be a worthwhile exchange among parties with shared interests. On one side was the Digital Manufacturing and Design Innovation Institute (DMDII), which had invested considerable time and effort into creating a detailed set of requirements for enterprise AR with the assistance of American industry heavyweights Lockheed Martin, Procter & Gamble, and Caterpillar. On the other side was the AREA, the organization leading global efforts to drive adoption of AR in the enterprise, which will take over responsibility for the requirements document and its future.

But when the parties gathered in Chicago, the event proved to be more significant than anyone could have expected. Here’s why:

  1. It demonstrated the burgeoning interest in enterprise AR throughout the developing ecosystem. The event attracted 90 attendees from 45 companies – all deeply committed to AR and eager to share their thoughts with one another.
  2. It provided an unprecedented opportunity for AR hardware and software providers to engage directly with enterprise AR users. With the detailed requirements to refer to, participants were able to engage with each other substantively and specifically.
  3. It signified the beginning of a global effort to make the process of implementing AR projects simpler and more orderly. With a set of requirements that will grow and become more defined and use case-specific over time under the aegis of the AREA, enterprises will have the power to define their AR solution needs clearly and confidently. Our goal at the AREA is to make the requirements accessible to and usable by the wider AR ecosystem.
  4. It gave AR solution providers a vital resource for developing their product roadmaps. The direct feedback of the user community made it clear to hardware and software providers where they need to invest their R&D budgets in the near and medium term.
  5. It created the basis for a more open, vibrant, and participatory AR ecosystem. As the AREA makes the requirements a “living document” to which all organizations can contribute, they will become an increasingly useful resource to a wider range of organizations and will accelerate the adoption of successful AR projects in the enterprise.

More information on how to review and participate in activities around the requirements will be announced soon at www.theAREA.org.




Augmented Reality and the Internet of Things boost human performance

Smart connected objects allow extensive optimizations and accurate predictions in the production line. However, these are not the only benefits that IoT can generate in industrial settings.

The purpose of this post is to explain how Augmented Reality (AR) can add value to IoT data by serving as a visualization tool on the shop floor. Operators can achieve better results in less time in a number of use cases by using AR devices to consume up-to-date, contextually relevant information about IoT-enabled machines.

Industry 4.0 and the Internet of Things

The extensive use of Information and Communication Technologies (ICT) in industry is gradually leading the sector to what is called the “fourth industrial revolution,” also known as Industry 4.0. In the Industry 4.0 production line, sensors, machines, workers and IT systems will be more deeply integrated than ever before in the enterprise and in the value chain. This complete integration will ultimately optimize the industrial process, fostering its growth and driving greater competition within markets. A report from the Boston Consulting Group summarizes the nine technology advancements that are driving this revolution and will eventually define its success:

  • Big Data and Analytics
  • Autonomous Robots
  • Simulation
  • Horizontal and Vertical Integration
  • The Internet of Things
  • Cybersecurity
  • Cloud Computing
  • Additive Manufacturing
  • Augmented Reality

The Internet of Things (IoT) leads the advancements in the field as an enabling technology. The IoT concept is based on building intelligence into objects, equipment and machinery, and enabling data about their status to be transmitted over the Internet for human or software use. Through connectivity and unique addressing schemes, things are able to cooperate in order to reach a common goal. Research has identified three basic characteristics of smart objects (a minimal code sketch of this interface follows the list):

  • to be identifiable through unique addresses or naming systems,
  • to be able to connect to a network,
  • to be able to interact with each other, end users or other automatic components.
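
The three characteristics above map naturally onto a small software interface. The sketch below expresses them in Python; the class and method names are illustrative assumptions, not taken from any particular IoT standard or stack.

```python
from abc import ABC, abstractmethod

class SmartObject(ABC):
    """Illustrative interface for the three characteristics of a smart object."""
    uid: str  # 1. identifiable through a unique address or naming system

    @abstractmethod
    def connect(self, network_url: str) -> None:
        """2. join a network so that status data can be published."""

    @abstractmethod
    def send(self, peer_uid: str, message: dict) -> None:
        """3. interact with other objects, end users or automatic components."""
```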

Industrial settings are paving the way for the introduction of IoT into modern society. In the Industrial IoT (IIoT) vision, every segment of the production line can be constantly monitored through the introduction of sensors, intelligent machines and pervasive networking capabilities. Central data-gathering systems can collect and analyze data about the status of the entire supply chain and dynamically react to failures, resource shortages and demand variations. The value brought to industry by IoT is cumulative, as more devices are brought online and their interactions captured and analyzed. Data gathering and aggregation of supply chain variables can help to optimize production in terms of reduced waste of resources, reduced downtime, improved safety, sustainability and greater throughput.

Big Data Analytics and Machine Learning are the core technologies through which the enterprise can make sense of this enormous flow of data coming from industrial facilities. They enable the creation of mathematical models that represent real-world settings with ever-greater precision as more data feeds into them. Called “digital twins,” these models are then used not only to analyze and optimize the behavior of the equipment and the production line, but also to forecast potential failures (predictive maintenance is a byproduct of Big Data analysis). A toy sketch of this self-refining idea appears below.
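
The following Python fragment is a deliberately tiny illustration of that self-refinement loop, assuming a single scalar parameter. Real digital twins are far richer physics or machine-learning models; the learning rate and drift threshold here are invented for the example.

```python
class BearingTwin:
    """Toy digital twin: one expected temperature, refined by each observation."""

    def __init__(self, expected_temp_c: float, learning_rate: float = 0.05):
        self.expected_temp_c = expected_temp_c
        self.lr = learning_rate

    def update(self, observed_temp_c: float) -> float:
        residual = observed_temp_c - self.expected_temp_c
        self.expected_temp_c += self.lr * residual  # model tracks reality
        return residual

    def drifting(self, residual: float) -> bool:
        # A persistently large residual hints at a developing fault,
        # which is where failure forecasting starts.
        return abs(residual) > 10.0
```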

IoT as a tool for human effectiveness

The abovementioned benefits that come from the integration of IoT into advanced process automation (using technology to allow processes to take place without human input) are not the only advantages. The introduction of smart objects into industrial contexts also enables greater effectiveness among the people working on the shop floor.

Data gathered from sensors is essential for on-site decision-making and correct completion of tasks as workers operate smart equipment. Smart objects, also called cyber-physical systems, can support workers on several levels, improving proficiency and safety.

In the new industrial paradigm, design, maintenance, repair and fault diagnosis are complex tasks that require a human operator to interact with sophisticated machinery. The amount of information needed to carry out these tasks successfully grows with the complexity of the tasks and the equipment involved. Real-time and historical data about the functional activities of the equipment are therefore critical to the decision-making process as the complexity of the systems increases. Access to this information at the site where the operator is performing these tasks becomes essential to performing them correctly and efficiently.

To give an example, the recovery procedure for a complex machine experiencing a failure needs to be informed by the current status of the machine’s components. Similarly, the proper configuration of complex mechanical systems depends on the values of certain internal variables measured by onboard sensors. The operator in charge of these procedures needs to be able to diagnose the problem and pinpoint the exact location of the failure while in front of the equipment, in order to immediately restore it to an optimal state. Generally this is done by analyzing real-time sensor data, computer-generated analyses or historically aggregated data.

Current issues with human consumption of IIoT data

In the current state of integration, where IoT technologies are deployed, the data is sent to central repositories where operators in control rooms are in charge of monitoring and analyzing it. However, in most situations these central control rooms are distant from the location where the data is actually needed. The engineer in front of the machine in need of assistance must cooperate remotely with the central control room to diagnose a fault. This interaction can be very slow: the on-site engineer has to interpret information relayed verbally by the remote operator, while the operators in the control room lack the on-site engineer’s spatial reference to guide them, increasing the time required to solve the problem.

Some organizations have attempted to address this problem by deploying laptops on the shop floor that can access remote data. Despite being somewhat effective, laptops are only a partial solution, as the devices are usually not aware of the physical surroundings or the intention of the operator, dividing the operator’s attention between the object of interest and the interaction with the mobile device. In general, the mobile devices currently used to interact with IoT data on the shop floor cannot interpret what the operator is looking at or the intent of the operation unless the operator manually interacts with the software interface to filter out the unneeded data.

Other companies are deploying advanced touch interfaces directly on the smart equipment. While this partially solves the issue, it also multiplies the number of screens on the shop floor and does not provide a solution for equipment that cannot be fitted with a screen (e.g., outdoor heavy machinery, oil and gas pipes, etc.).

Another crucial piece of information missing from current Human-Machine Interfaces (HMIs) is the spatial reference of the data stream. In certain situations, it is very important to visualize how the data sources are physically located in three-dimensional space in order to diagnose a fault. This information is lost if the data streams are visualized exclusively through 2D interfaces or schematics that do not take into account the physical structure of the equipment. For example, the figure below shows two different visualizations of an oil pipeline with IoT-connected valves that stream data about their functional status. The representation on the left is unaware of the spatial disposition of the valves, while the visualization on the right makes it much easier to diagnose that the problems with the valves are caused by an external interference around the southern portion of the pipeline.

[Figure] Two different representations of the same pipeline. The one on the left does not take into account the spatial disposition of the system.

AR and IoT: a match made in heaven

Augmented Reality provides an effective answer to all the aforementioned issues with IoT data consumption on the shop floor. Modern AR-enabled devices (both handheld and head-worn) provide a media-rich ubiquitous interface to any type of network data via wireless connection. Using sensing technologies, these devices are capable of understanding what the operator is looking at and therefore only display the data that is actually needed for the operation at hand. Using AR devices, the operator is empowered with the ability to visualize processed or unprocessed IoT data in an incredibly intuitive way.

The worker starts the interaction by pointing the AR-enabled device towards the piece of equipment in need of assistance. The device scans the equipment using cameras, identifies the object and reconstructs a spatial model of it. The application automatically gathers the list of available sensors connected to the machine by interrogating the central repository, and displays the gathered information on the equipment itself, in the exact location where the sensors are currently measuring the data. Interacting via the interface, the operator can also search for historical data needed to diagnose the fault. The data visualized this way not only carries the same informative power as it does on other mobile devices, but also gives the operator the spatial relationship between the data and the machine itself. A minimal sketch of this interaction loop appears below.
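
As a shape for that loop, consider the following Python sketch. The repository endpoint and REST paths are assumptions for illustration (a real deployment would go through an AR SDK and a specific IoT backend), and the recognition and rendering steps are left to the AR runtime.

```python
import requests
import numpy as np

REPO = "https://iot-repo.example.com"  # hypothetical central repository

def overlay_sensor_data(equipment_id: str, pose: np.ndarray) -> list:
    """pose: 4x4 matrix placing the equipment's CAD frame in the camera frame,
    as produced by the AR device's object recognition step."""
    sensors = requests.get(f"{REPO}/equipment/{equipment_id}/sensors").json()
    labels = []
    for s in sensors:
        reading = requests.get(f"{REPO}/sensors/{s['id']}/latest").json()
        # Anchor each value at the 3D point where the sensor physically sits.
        point = pose @ np.array([*s["location"], 1.0])  # CAD -> camera coords
        labels.append((point[:3],
                       f"{s['name']}: {reading['value']} {reading['unit']}"))
    return labels  # the AR renderer draws each label at its 3D anchor point
```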

AR provides a display for anything. As all the objects/screens AR devices can render are completely digital, there are no restrictions as to how and where IoT data can be visualized. Even the dirtiest and most remote oil pipe, the hottest jet engine or the loudest metal printing machine can be overlaid with a number of virtual data visualizations for the operator to analyze during the process. All in all, if an object generates IoT data, AR can visualize it.

In addition, AR allows the same information to be displayed in different, more intuitive ways. Traditionally, sensor data is visualized using a mix of numbers, graphs and gauges. Using AR, however, new forms of visualization, customized for the purpose, can be designed. These visualizations can speed up the interpretation of data and better highlight faults. For example, the pressure and temperature measurements along a pump output pipe can be displayed as a color-mapped three-dimensional flow visualization overlaid directly on the pipe itself, allowing the operator to virtually “see” the behavior of fluids inside the pipe and speeding up parameter tuning or fault detection. A minimal sketch of such value-to-color mapping follows.
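
At its simplest, color mapping is a normalization followed by an interpolation between two colors. The ranges and colors below are illustrative; a production system would drive a full 3D flow visualization, but the per-point principle is the same.

```python
def value_to_rgb(value: float, vmin: float, vmax: float) -> tuple:
    """Map a sensor value onto a blue (low) -> red (high) color."""
    t = max(0.0, min(1.0, (value - vmin) / (vmax - vmin)))  # normalize to [0, 1]
    return (int(255 * t), 0, int(255 * (1 - t)))            # (R, G, B)

# e.g., color three pipe segments by their pressure readings (in bar)
segment_colors = [value_to_rgb(p, vmin=1.0, vmax=6.0) for p in (1.2, 3.8, 5.9)]
```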

Use cases

AR and IoT can be combined to address a number of use cases that benefit both the private and public sectors. Most of these use cases share some common factors, such as mobile access to data in remote locations, the inaccessibility of certain parts of the equipment, the difficulty of fitting a screen on the object of interest, or the need for extreme operating precision.

  1. Complex machinery service efficiency: for organizations that operate and maintain large fleets of complex machinery, from aircraft to locomotives, service and repairs can be slow and costly. Without specific data on particular components in need of repair or the ability to predict when service is needed, assets may be taken out of service unexpectedly and service technicians may need to spend valuable time testing and isolating issues. Organizations can accelerate the process and improve efficiency by combining IoT and AR technologies. Fitting assets with sensors enables data to be streamed directly from the assets. Using this data to create digital twins of the assets, organizations can analyze and predict when and how components need to be maintained. Using AR, that data can be translated into visual information – for example, highlighting which fuel injectors in an engine are causing oil pressure problems and need to be replaced. By guiding the repair technician immediately to the source of the issue, the AR/IoT combination limits the scope of the work to only what is needed. Step-by-step instructions delivered via AR ensure that the repair work is performed correctly and efficiently. GE Transportation is applying PTC’s ThingWorx and GE’s own Predix software to realize efficiency gains in the 1,300 locomotive engines it repairs every year.

  2. Mechanical equipment monitoring and diagnosis: many mechanical parts, such as engines, pumps, pipelines and industrial machines, are fitted with a large number of sensors to measure physical variables such as temperature, pressure, speed, torque or humidity. These measurements are used not only to control the machine itself, but also to monitor and verify its correct functioning. During configuration and fault diagnosis, it is essential for the operator to visualize these values in real time, in order to properly set up the machine in one case, and to correctly identify the root of the fault in the other. Using an AR device, the operator can visualize patterns from these real-time measurements directly on the components while the machine is operating, allowing for instantaneous functional diagnosis. DAQRI implemented a similar solution to help engineers at KSP Steel visualize data from heavy machinery directly on the shop floor.
  3. Data-driven job documentation and quality assurance: job documentation as well as product certification and testing usually involve long procedures during which operators test structural and functional variables of the equipment. These tests are then documented in lengthy, manually written reports that are sent to a central database to serve as the basis for certification and quality assessment. The whole process can be made faster and more accurate using AR devices: the operator goes through the procedure step by step, approving or rejecting the measurements taken using IoT-enabled equipment. Using AR interfaces, measurements can be visualized on the component being tested, and any anomaly can be reported using automatically generated non-conformance reports sent directly to the central database alongside the related IoT data coming from the machine itself or the measurement equipment (a minimal sketch of such a report appears after this list).
  4. Product design visualization: during the process of designing electro-mechanical objects, testing prototypes is very important for identifying design flaws as early as possible. However, many of the quantities analyzed during this process are variables invisible to the human eye that, once measured through embedded sensors, provide feedback for subsequent design iterations. In some cases, AR can provide instantaneous visual feedback on these variables so that design teams can discuss the issues during the test phase and simultaneously tune the object’s settings at run-time, accelerating the decision-making process. This video presentation by PTC president Jim Heppelmann includes an example of how CAD tools and IoT can be combined with AR to provide real-time feedback on design choices for physical objects.

  5. Smart urban infrastructure maintenance: similar reasoning applies to the public sector. Most urban infrastructure is located outdoors and in hard-to-access areas, making embedded screens very difficult to use. Operators can use AR to scan large objects and detect the point of failure from real-time data visualizations. In addition, they can easily document the status of infrastructure in a digital, data-rich manner just by pointing the device at the system.

  6. Enhanced operator safety: AR can also be used to provide safety information to operators interacting with machines that can cause physical harm if improperly handled. DAQRI has shown how a thermal camera can be used not only to visualize a thermal map, but also to indicate to the operator where it is safe to touch the object. Although DAQRI’s implementation uses a thermal camera mounted on a hard hat, the same result can be obtained using thermal (and other) sensors installed directly on the machine to warn the operator of potential hazards.
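
As a concrete illustration of use case 3, the sketch below checks measurements from IoT-enabled test equipment against tolerances and turns any failure into a structured non-conformance report for the central database. The field names and tolerance values are invented for the example.

```python
from datetime import datetime, timezone
from typing import Optional

# Illustrative tolerance bands for two measured variables
TOLERANCES = {"torque_nm": (40.0, 45.0), "gap_mm": (0.10, 0.25)}

def check_step(step_id: str, measurements: dict) -> Optional[dict]:
    """Approve a procedure step, or build an auto-generated report."""
    failures = {name: value for name, value in measurements.items()
                if not (TOLERANCES[name][0] <= value <= TOLERANCES[name][1])}
    if not failures:
        return None  # step approved; nothing to report
    return {
        "step": step_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "non_conformances": failures,      # what fell outside tolerance
        "raw_measurements": measurements,  # related IoT data, sent alongside
    }

report = check_step("fastener-check-07", {"torque_nm": 47.2, "gap_mm": 0.18})
# a non-None report would be pushed to the central database by the AR app
```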

The challenges

Despite being a suitable solution for the unsolved problems of IoT data consumption on the shop floor, AR still presents challenges that AR providers are currently working to overcome in order to make the technology more practical and useful in real-life scenarios.

The first challenge relates to the way IoT data is displayed on AR devices. As mentioned earlier, sensor data can be displayed in new, intuitive modalities using bespoke 3D visualizations, facilitating on-site decision-making. However, it is difficult to create and scale up this type of visualization automatically. Providers are working on systems that integrate 3D CAD models with real-time IoT data to automatically generate “datafied” 3D models that can be overlaid on top of physical objects to display extra layers of information.

In addition, the problem of visualizing multiple data points in one single visual entity is still open. While there are well-established methods that work for traditional displays (such as sub-menus or scrollable areas), UI/UX designers are still working on techniques to condense large amounts of data and make them interactive on AR displays.

Another important challenge has to do with data security and integration. As operators perform their jobs with mobile-connected AR devices that access sensitive data, providers must ensure, through both software and hardware security measures, that these devices are not vulnerable to threats. The AREA has recently issued a Request for Research Proposals to members in order to foster an investigation into the issue and propose solutions.

The future

IoT data is currently used mostly for offline processing. Many techniques allow the creation of very accurate mathematical models of the production line that enable not only cost reduction and production optimization, but also predictions of equipment performance. However, the value of this data also resides in its real-time consumption. The insights generated from the real-time information produced by machines and equipment can greatly accelerate many procedures and integrate human labor even further into industrial information systems. Not taking advantage of this side of IoT means partially wasting the deployment investment.

AR is considered one of the best tools for workers and engineers to access real-time IoT data on the shop floor, directly where it is needed. AR devices are aware of the spatial configuration of the environment around the worker and can intuitively visualize real-time data, filtering out unnecessary information. As these devices get smaller and lighter, the number of use cases to which this combination of technologies can be applied is growing rapidly, covering scenarios that could not be addressed before.

Eventually, the convergence of AR and IIoT will empower human operators with greater efficacy and will add to their skills in a knowledge-intensive working environment. With the advent of fully integrated automation and robotics, AR provides a great opportunity for workers to retain the indisputable value of human labor and decision-making.

What the AREA is doing

The AREA is a strong supporter of the integration of AR with the rest of the Industry 4.0 technologies. For this reason, the AREA recently partnered with the Digital Manufacturing and Design Innovation Institute (DMDII) for a two-day workshop on AR requirements for digital manufacturing. The result of this workshop – a list of hardware and software requirements for the introduction of AR technology in the factory of the future – will guide both providers and users towards efficient AR adoption.




Three Lessons We’ve Learned Developing AR Solutions

Industries and enterprises are adopting AR solutions to strengthen their competitive advantage and engage customers in their business activities. As an organization that faces AR development challenges every day, we at Program-Ace have learned three essential lessons that could be handy for those seeking to create powerful and engaging augmented reality experiences.

1. AR helps tell a product’s story, so make it important for users.

Augmented reality technology enables storytelling. It makes us see everyday objects in a different light by making visible what has been invisible, enabling us to visualize 2D images in new ways, and bringing life to inanimate objects. In other words, it has the capacity to humanize the technology. This, in turn, dramatically increases the value and recognition of the product (or any other object of your choice). A good story not only strengthens the sense of presence but also brings users closer to the product and engages them in the tech community.

To deliver valuable applications for the business world, Program-Ace conducts extensive marketing research that studies existing products, possible competitors, and consumer behavior in both B2B and B2C markets to discover weaknesses and consider the most profitable potential opportunities. In our development adventures, the Program-Ace team has drawn one important conclusion: AR development is not just about the smooth integration of CG content with the physical environment; it is about allowing consumers to be connected to the virtual realm. Moreover, the app ideation process (the phase in which you create the concept, define the technological feasibility, and understand the time constraints) can also be supported with product usage data and information regarding solutions already available in the market along with their strengths and weaknesses.

2. Gamification can be a successful way to drive user acceptance and productivity.

Augmented reality technology has had a significant influence on the development of various wearables, headsets, and head-mounted devices. And, of course, gamers are among the early adopters of these advanced accessories. For that reason, many people hold the opinion that it is necessary to develop games in order to be noticeable in the market. While that might be true for some industries, such as education and defense, when it comes to retail, government, or banking, you need a serious approach to the business. Still, gamification can be an effective tool for you.

Even though it originated in the gaming world, gamification has proved to be an extremely effective tool for user acquisition, virality, and customer conversion. At Program-Ace, we have long realized that companies should focus on what the gaming experience can bring to the AR application, instead of creating games. When you deliver proofs of concept to your clients using basic and advanced gamification features, such as multi-layered storytelling, competition, rewards, lifelike avatars, etc., you can drive user engagement and increase productivity.

3. Platform-specific apps are an endangered species.

Contrary to the conventional wisdom that one winning platform will soon monopolize the AR market, we see no indication of this yet. Instead, the market is full of products designed for different user needs and demands, and it is highly unlikely that the diversity of platforms will disappear in the next five years. Accordingly, our experience has taught us to build platform-agnostic applications, choosing a cross-platform approach that has worked well for our customers for more than 20 years now, helping them to pursue market supremacy while remaining platform-independent and relevant to user requirements.

Multi-platform (or cross-platform) AR development means creating one application that can be deployed to any platform, customized in advance to respect the features of each particular platform or device. In some cases, however, this approach is ineffective, especially when the target audience is used to native apps. In such situations, our team instead creates experiences aimed at a specific type of device. For instance, one of our mini games, Archy the Rabbit, was initially designed cross-platform for iOS and Android. With the introduction of HoloLens, we ported it to this platform by changing the game UI, adding new features, and programming the app to recognize gestures, voice, and gaze. A combination of the Unity game engine and HoloToolKit helped our team to develop important app functionality such as spatial sound, voice recognition, and spatial mapping with minimal effort and improved human-computer interaction (HCI).

Shaping the future

As the next phase of computing, augmented reality offers an opportunity to shape the future of HCI and technology itself. In order to be creative and deliver compelling AR experiences, we have begun to focus on the principles above. These lessons have enabled us to design applications that maximize the value of the technology. By remembering these AR development lessons, you can crystallize your thinking and focus your efforts on developing successful and engaging AR applications.

 

Anastasiia Bobeshko is the Chief Editor at Program-Ace.




Features Worth Seeking in an Augmented Reality SDK

Interest in AR SDKs has intensified since last year, when one of the leading solutions, Metaio, was sold to Apple, leaving an estimated 150,000+ developers in search of a replacement. Vuforia remains the market leader, but there are many good alternatives in the marketplace, some of which are already quite well known, such as EasyAR, Blippar, and Wikitude.

So, what criteria should a developer apply in evaluating AR SDKs? The answer to that question will vary. There are many factors developers need to consider in choosing an SDK, including key features and cost. Portability is another issue, since some SDKs only work on certain hardware.

However, there are a handful of key features and capabilities that all developers should look for when evaluating their next AR SDK:

  • Cloud-based storage to support a greater number of 2D markers. 2D object tracking is the most basic form of mapping; it allows an application to recognize a flat surface, which can then be used to trigger a response, such as making a 3D image or effect appear on top of it, or playing a movie trailer where a poster used to be. This is simple to do and all SDKs support it; however, a key difference among SDKs is the number of markers that can be recognized. Many SDKs support around 100 markers as standard, but others allow a nearly unlimited number by using very fast cloud storage to hold a much larger database of markers. When an AR application can recognize more 2D objects, developers can create more robust applications that trigger more AR effects. (A minimal sketch of how this kind of planar recognition works appears after this list.)
  • 3D object tracking. 3D object tracking expands the opportunities for AR developers by allowing 3D objects, such as a cup or a ball, to be used as AR markers that can then be recognized by the app to trigger an AR effect. This can be useful for advertising-related applications, and also for use in games. For example, toys can come alive and talk in AR because they can be recognized as unique entities by this type of tracking. While 3D tracking is not yet a universal capability among SDKs, it is becoming more common and affords a developer greater latitude in creating compelling, lifelike AR applications.
  • SLAM support. Simultaneous Localization And Mapping has become an increasingly desirable feature in an AR SDK because it allows for the development of much more sophisticated applications. In layman’s terms, SLAM allows the application to create a map of the environment while simultaneously tracking its own movement through the environment it is mapping. When done right, it gives the camera depth information about where things are in a room. For example, if an AR image is appearing over a table, SLAM allows the application to remember where the table is and to keep the AR image over it. SLAM also allows users to look around a 3D image and move closer to it or farther from it. It combines several different input sources and is very hard to do accurately. Some SDKs offer this functionality, but it is quite challenging and processor-intensive to make it work smoothly, particularly with a single camera. Look for an SDK that can handle SLAM effectively with a single camera.
  • Unity support + native engine. For some applications, it is important that an SDK supports the Unity cross-platform game engine. Unity is one of the most accessible ways to produce games and other entertainment media, and it also simplifies the development process, since Unity applications can run on almost all hardware. Most SDKs operate through Unity to allow for some very sophisticated AR experiences. However, using Unity as a framework can be disadvantageous in certain applications because it is highly resource-intensive and can slow down AR experiences. As a result, some SDKs offer their own engines that run natively on iOS or Android devices, without the need for Unity. Native engines can deliver much smoother experiences with robust tracking on each platform, but they also require a separate coding team per platform. This is not an issue if a developer is only planning to release on one platform; in that case, a developer may find that an application runs substantially faster when coded natively rather than through a Unity plug-in.
  • Wearables support. Smart glasses and other wearables allow AR experiences to be overlaid on the world we see before us, while offering a hands-free experience. As the use of wearables grows, developers producing content for future devices need to ensure that the software they are working with will support the devices they are building for.
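
To make the first bullet concrete, below is a minimal sketch of the planar (“2D marker”) recognition that these SDKs package up, written with OpenCV feature matching in Python. The feature counts and thresholds are illustrative, and commercial SDKs use far more optimized pipelines; treat this as a sketch of the principle, not of any vendor’s implementation.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)                        # feature detector
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)  # descriptor matcher

def find_marker(marker_img, frame):
    """Return the homography placing the marker in the camera frame, or None."""
    kp1, des1 = orb.detectAndCompute(marker_img, None)
    kp2, des2 = orb.detectAndCompute(frame, None)
    if des1 is None or des2 is None:
        return None
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:50]
    if len(matches) < 10:   # too little evidence that the marker is visible
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H  # 3x3 transform an AR layer can use to overlay content
```

In essence, a cloud-marker service runs this kind of matching against a database of many thousands of marker descriptors instead of just one.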

When you have narrowed down your candidate SDKs based on these and other evaluation criteria, I recommend that you try them out. Many providers offer free trial versions that may include a subset of the features found in their professional versions. Trying an SDK will enable you to determine whether its interface suits your style of working and the type of application you are developing.

My final piece of advice is to examine the costs of SDKs carefully. Some have licensing models that are priced on the number of applications downloaded or AR toys sold. This may be the most critical purchase criterion, particularly for independent developers.

Albert Wang is CTO of Visionstar Information Technology (Shanghai) Co., Ltd., an AREA member and developer of the EasyAR SDK.




Global Smart Glass Market 2014-2021

The following is a summary of a report by DecisionDatabases.com, titled “The Global Smart Glass Market Research Report – Industry Analysis, Size, Share, Growth, Trends and Forecast”. The report provides value chain analysis, market attractiveness analysis, and company share analysis, along with complete profiles of the key players.

Information about smart glass:

Smart glass, also known as switchable glass, is glass that can alter its light transmission properties when voltage, light or heat is applied. It is used in windows, skylights, doors and partitions, and its range has extended into the automotive, aircraft and marine industries.

The smart glass market is segmented by type into architectural, electronics, solar power generation and transportation, with the architectural segment being the largest.

According to the report, the market is estimated to grow at a significant rate over the next few years. The major drivers of this growth are expected to be the architectural and transportation sectors, although the report states that energy-efficient building technologies will also contribute.

Some key facts from this market report:

  • The electronics segment is expected to be a promising market, owing to innovation and research producing highly advanced devices such as digital eyeglasses and screens.
  • Certain factors are inhibiting the growth of the global smart glass market, notably its cost relative to substitutes and a lack of awareness of its benefits.
  • North America accounts for the major share of the global smart glass market.
  • The European market is expected to overtake the North American market during the forecast period, as a result of increasing demand for large, advanced windows in residential and commercial architectural structures.
  • The remainder of the market is distributed across Latin America, Asia-Pacific, Western Europe, Eastern Europe and the Middle East & Africa.

 




AREA Interview: Ken Lee of VanGogh Imaging

AREA: Tell us about VanGogh Imaging and how the company started.

KEN LEE: The reason I started VanGogh was I noticed an opportunity in the market. From 2005 to 2008, I worked in medical imaging where we mainly used 3D models and would rarely go back to 2D images. 3D gives you so much more information and a much better visual experience than flat 2D images. But creating 3D content was a very difficult and lengthy process. This is the one huge problem that we are solving at VanGogh Imaging.

We started when Microsoft first introduced its low-cost Kinect 3D sensing technology. It allowed you to map in a three-dimensional way, where you can see objects and scenes and capture and track them. VanGogh started in this field around 2011 and we’ve been steadily improving our 3D capture technology for over five years, working with several clients and differentiating ourselves by delivering the highest-quality and easiest way to capture 3D models.

AREA: What is Dynamic SLAM and how does it differ from standard SLAM?

KEN LEE: Standard SLAM has been around for years. It works well when the environment is fairly static – no movement, a steady scene, no lighting changes. Dynamic SLAM is SLAM that can adjust to these factors, from moving objects and changing scenes to people walking in front of the camera and lots of occlusions.

AREA: Are there certain use cases or applications that are particularly suited to dynamic SLAM?

KEN LEE: Dynamic SLAM is perfect for the real-world, real-time environment. In our case, we are using Dynamic SLAM mostly to enhance our 3D capture capability – making 3D capture much easier while still capturing at a photorealistic level, fully automating the entire capture process, and dealing with any changes.

Let’s say you’re capturing a changing scene. You can update the 3D models in real time, just as you would capture 2D images with a video camera. We can do the same thing, but every output will be an updated 3D model at that given point. That’s why Dynamic SLAM is great. You can use dynamic SLAM just for tracking – for AR and VR – but that’s just one aspect. Our focus is on having the best tracking, not just for tracking purposes, but really to use that tracking capability to capture models very easily and update them in real time.

AREA: Once you have that model, can you use it for any number of different processes and applications?

KEN LEE: Sure. For example, you can do something as basic as creating 3D content to show people remotely. Let’s say I have a product on my desk and I want to show it to you. I can take a picture of it, or in less than a minute, I can scan that product, email it, and you immediately get a 3D model. Microsoft is updating its PowerPoint software next year so you will be able to embed 3D models.

There are other applications. You can use the 3D model for 3D printing. You can also use it for AR and VR, enabling users to visualize objects as true 3D models. One of the biggest challenges in both the VR and AR industry is content generation. It is very difficult to generate true 3D content in a fully automated process, on a real-time basis, that enables you to interact with other people using that same 3D model! That’s the massive problem we’re solving. We’re constantly working on scene capture, which we want to showcase this year, using the same Dynamic SLAM technology. Once you have that, anyone anywhere can instantly generate a 3D model. It’s almost as easy as generating a 2D image.

AREA: Does it require a lot of training to learn how to do the 3D capture?

KEN LEE: Absolutely not. You just grab the object in your hand, rotate it around and make sure all the views are okay, press the button, and then boom, you’ve got a fully-textured high-resolution 3D model. It takes less than a minute. You can teach a five-year-old to do it.

AREA: Tell us about your sales model. You are selling to companies that are embedding the technology in their products, but are you also selling directly to companies and users?

KEN LEE: Our business model is a licensing model, so we license our SDK on a per-unit basis. We want to stay with that. We want to stay as a core technology company for the time being. We don’t have any immediate plan for our own products.

AREA: Without giving away any trade secrets, what’s next in the product pipeline for VanGogh Imaging?

KEN LEE: We just filed a patent on how to stream 3D models to remote areas in real time. Basically, we’ll be able to immediately capture any object or scene, as soon as you turn on the camera, as a true 3D model streaming in real time, through a low bandwidth wireless data network.

AREA: Do you have any advice for companies that are just getting into augmented reality and looking at their options?

KEN LEE: At this moment, Augmented Reality platforms are still immature. I would recommend that companies focus, not on technology, but on solving industry problems. What are the problems that the companies are facing and where could AR add unique value? Right now, the biggest challenge in the AR industry, and the reason why it hasn’t taken off yet, is that so much money has gone into building platforms, but no one has built real solutions for companies. I think they should look for opportunity in those spaces.




Mixed Reality: Just One Click Away

Author: Aviad Almagor, Director of the Mixed Reality Program, Trimble, Inc.

Though best known for GPS technology, Trimble is a company that integrates a wide range of positioning technologies with application software, wireless communications, and services to provide complete commercial solutions. In recent years, Trimble has expanded its business in building information modeling, architecture and construction, particularly since the company’s 2012 acquisition of SketchUp 3D modeling software from Google. Mixed Reality is becoming a growing component of that business. This guest blog post by Trimble’s Aviad Almagor discusses how Trimble is delivering mixed reality solutions to its customers.

Many industries – from aerospace to architecture/engineering/construction (AEC) to mining – work almost entirely in a 3D digital environment. They harness 3D CAD packages to improve communication, performance, and the quality of their work. Their use of 3D models spans the full project lifecycle, from ideation to conceptual design and on to marketing, production, and maintenance.

Take AEC, for example. Architects design and communicate in 3D. Engineers design buildings’ structures and systems in 3D. Owners use 3D for marketing and sales. Facility managers use 3D for operation and maintenance.

And yet, we still consume digital content the same way we have for the last 50 years: behind a 2D screen. For people working in a 3D world, the display technology has become a limiting factor. Most users of 3D content have been unable to visualize the content their jobs depend on in full 3D in the real world.

However, mixed reality promises to change that. Mixed reality brings digital content into the real world and supports “real 3D” visualization.

The challenge

There are several reasons why mixed-reality 3D visualization has not yet become an everyday reality. Two of the primary reasons are the user experience and the processing requirements.

For any solution to work, it needs to let engineers, architects, and designers focus on their core expertise and tasks, following their existing workflow. Any technology that requires a heavy investment in training or major changes to the existing workflow faces an uphill battle.

Meanwhile, 3D models have become increasingly detailed and complex. It is a significant challenge – even for beefy desktop workstations – to process large models and support visualization at 60 fps.

One way around that problem is to use coding and specialized applications and workflows, but that approach is only acceptable to early adopters and innovation teams within large organizations – not the majority of everyday users.

To support real projects and daily activities – and be adopted by project engineers – mixed reality needs to be easily and fully integrated into the workflow. At Trimble, we call this “one-click mixed reality”: getting data condensed into a form headsets can handle, while requiring as little effort from users as possible.

Making one-click mixed reality possible

The lure of “one-click” solutions is strong. Amazon has its one-click ordering. Many software products can be downloaded and installed with a single click. The idea of one-click mixed reality is to bring that ease and power to 3D visualization.

Delivering one-click mixed reality requires a solution that extends the capabilities of existing tools by adding mixed reality functionality without changing the current workflow. It must be a solution that requires little or no training. And any heavy processing that’s required should be done in the background. From a technical standpoint, that means model optimization – including polycount reduction, occlusion culling, and texture handling – is performed automatically, without manual, time-consuming specialized processes. A sketch of such a background pipeline appears below.
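
The Python sketch below shows the shape of such a pipeline. The function names are hypothetical stubs standing in for real geometry processing, not Trimble APIs; the point is simply that every step runs automatically after the user’s single click.

```python
def decimate(mesh, target_faces: int):
    """Reduce polycount to fit a headset's GPU budget (stub)."""
    ...

def build_occlusion_data(mesh):
    """Precompute occlusion-culling structures (stub)."""
    ...

def compress_textures(mesh, max_px: int):
    """Downscale and compress textures (stub)."""
    ...

def prepare_for_headset(mesh):
    """Runs in the background after the user's single 'publish' click."""
    mesh = decimate(mesh, target_faces=100_000)
    mesh = build_occlusion_data(mesh)
    mesh = compress_textures(mesh, max_px=2048)
    return mesh
```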

At Trimble, we’re working to deliver one-click mixed reality by building on top of existing solutions. Take SketchUp, for example – one of the most popular 3D packages in the world. We want to make it possible for users to design a 3D model in SketchUp, click to publish it, and instantly be able to visualize and share their work in mixed reality.

We’re making sure that we support users’ existing workflow in the mixed reality environment. For example, we want to enable users to use scenes from SketchUp, maintain layer control, and collaborate with other project stakeholders in the way they’re accustomed.

And we’re taking it one step further by making it possible to consume models directly from SketchUp or from cloud-based environments, such as SketchUp 3D Warehouse or Trimble Connect. This will eliminate the need to install SketchUp on the user’s device in order to visualize the content in mixed reality. As a next step, we are exploring with our pilot customers a cloud-based pre-processing solution which will optimize models for 3D visualization.

We’re making good progress. For example, in his Packard Plant project (which was selected to represent the US at the Venice Architecture Biennale), architect Greg Lynn used SketchUp and SketchUp Viewer for Microsoft HoloLens to explore and communicate his design ideas. In this complex project, a pre-processing solution was required to support mixed reality visualization.

“Mixed reality bridges the gap between the digital and the physical. Using this technology I can make decisions at the moment of inception, shorten design cycles, and improve communication with my clients.”

– Architect Greg Lynn

One-click mixed reality is coming to fruition. For project teams, that means having the ability to embed mixed reality as part of their daily workflow. This will enable users to become immediately productive with the technology, gain a richer and more complete visualization of their projects, and build on their existing processes and tools.

The advent of one-click mixed reality indicates that the world of AR/VR is rapidly approaching the time when processing requirements, latency, and user experience issues will no longer be barriers.

Aviad Almagor is Director of the Mixed Reality Program at Trimble, Inc.




AREA Members Featured in IndustryWeek Article on AR in Manufacturing

AREA members Newport News Shipbuilding (NNS), DAQRI, and Upskill and AREA Executive Director Mark Sage are featured in an article on AR at IndustryWeek, the long-running manufacturing industry publication. The article explores the state of AR adoption in manufacturing, weaving in the experiences and insights of NNS’ Patrick Ryan, DAQRI’s Matt Kammerait, and Upskill’s Jay Kim, along with observations from executives of GE Digital and Plex Systems. Find the article here.




The 1st AREA Ecosystem Survey is Here!

The Augmented Reality (AR) marketplace is evolving so rapidly, it’s a challenge to gauge the current state of market education, enterprise adoption, provider investment, and more. What are the greatest barriers to growth? How quickly are companies taking pilots into production? Where should the industry be focusing its efforts? To answer these and other questions and create a baseline to measure trends and momentum, we at the AREA are pleased to announce the launch of our first annual ecosystem survey.

Please click here to take the survey. It won’t take more than five minutes to complete. Submissions will be accepted through February 8, 2017. We’ll compile the responses and share the results as soon as they’re available.

Make sure your thoughts and observations are captured so our survey will be as comprehensive and meaningful as possible. Thank you!