How To: The Quick & Dirty Guide to the Augmented Reality Terms You Need to Know

Every industry has its own jargon, acronyms, initialisms, and terminology that serve as shorthand to make communication more efficient among veterans of that particular space. But while handy for insiders, those same terms can create a learning curve for novices entering the field. The same holds true for the augmented reality (also known as "AR") business.

This list of terms is your fast-track cheat sheet (or glossARy, if you will) for terms often used when discussing how augmented reality technology works. We expect to add new terms to this list as additional industry innovations are revealed.

Augmented Reality (AR): A technology that places interactive virtual objects, text, or interfaces within the user's field of view, often in a context- or location-specific relationship to the real world surrounding the user. Mainstream media often point to Pokémon Go or Snapchat as examples of smartphone-powered AR; however, the goal for many software and hardware makers is to deliver AR through wearable glasses more advanced than a product like Google Glass.

Computer-Aided Design (CAD): Popular among architects and engineers via programs like AutoCAD, CAD is an automated approach used in place of manual drafting of designs and technical specifications for built or manufactured products. CAD-based models can often be viewed in augmented reality.

Extended Reality (XR): An increasingly popular initialism, XR is used as an umbrella term covering both augmented and virtual reality, where the X acts as a variable standing in for any flavor of computer-assisted visual modification of reality.

Field of View (FoV): The visual area in which users can see virtual content in an augmented reality headset. FoV can also be expressed as an angle: the angle spanned, from the user's point of view, between the left and right bounds of the visible display area.
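The angular definition comes down to a little trigonometry. Here's a minimal sketch in Python, using made-up illustrative numbers (a virtual image plane 0.35 m wide, perceived 0.5 m away):

```python
import math

def horizontal_fov_degrees(visible_width_m: float, distance_m: float) -> float:
    """Angle spanned by a display area of the given width, seen from the given distance."""
    return math.degrees(2 * math.atan((visible_width_m / 2) / distance_m))

# Illustrative numbers only, not any particular headset's specs
fov = horizontal_fov_degrees(0.35, 0.5)
print(round(fov, 1))  # roughly 38.6 degrees
```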

GL Transmission Format (glTF): Run as an open-source project by Khronos, glTF is a royalty-free format for exporting 3D models and scenes from one program and importing them into an application to view in augmented or virtual reality.
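Because a glTF document is JSON at the top level, the skeleton of an asset is easy to sketch. The snippet below builds a minimal, geometry-free glTF 2.0 document in Python; the node name "Truck" is just a placeholder:

```python
import json

# A minimal glTF 2.0 document: the only required field is asset.version.
# The scene/node entries sketch how a model hierarchy is declared; a real
# asset would also carry meshes, accessors, and binary buffers.
gltf = {
    "asset": {"version": "2.0"},
    "scene": 0,
    "scenes": [{"nodes": [0]}],
    "nodes": [{"name": "Truck"}],  # placeholder name
}

print(json.dumps(gltf, indent=2))
```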

The image above shows a hologram of a truck viewed in HoloLens, while also demonstrating the device's 35-degree field of view, depicted by the faint box surrounding the hologram. Image by Microsoft HoloLens/YouTube

Hologram: Typically three-dimensional, often animated, and sometimes accompanied by audio, a hologram is digital content formed by light that is projected on a transparent display or into open space. In augmented reality, viewers are typically able to interact with holograms; otherwise, holograms are generally passive/non-interactive content that can be displayed for an audience to view.

Inertial Measurement Unit (IMU): According to Xsens, an IMU is "a self-contained system that measures linear and angular motion usually with a triad of gyroscopes and triad of accelerometers."

Light Field: This refers to an optical technology employed by companies like Avegant and Magic Leap that enables objects to be displayed at varying focal planes, allowing for the illusion of depth in an augmented reality experience.

Mesh: A web of identified points in space and lines drawn between them that represent a computer's raw view of a three-dimensional space. This is commonly seen in the HoloLens when an application is mapping its environment, or when viewing layers of a 3D model.
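As a rough sketch of the idea, a mesh can be stored as little more than a list of points plus index triples naming which points form each triangle; the wireframe "web of lines" falls out of the face data. A toy tetrahedron in Python:

```python
# A toy mesh: vertices are points in 3D space, faces are triangles given as
# index triples into the vertex list. Real scanned meshes are just far larger.
vertices = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (0.0, 1.0, 0.0),
    (0.0, 0.0, 1.0),
]
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]  # a tetrahedron

# Derive the wireframe edges (the "lines drawn between points") from the faces.
edges = {tuple(sorted((f[i], f[(i + 1) % 3]))) for f in faces for i in range(3)}
print(len(vertices), len(faces), len(edges))  # 4 vertices, 4 faces, 6 edges
```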

Optical Engine: This refers to a component in a head-mounted device that generates visual content for the user. An optical engine includes the device's GPU, light-generating element, and mirroring elements, all connected to a CPU and interface for input, as well as a transparent display for output.

Platform: This is a term that's not limited to augmented reality, and it's often used in other sectors of the tech industry. In short, a platform is a major software environment in which smaller applications run.

Simultaneous Localization and Mapping (SLAM): A system originating from robotics and computer vision, SLAM is a procedure by which a computer scans an environment and constructs a digital map of the area. This has become a standard for anchoring augmented reality content in real-world, physical spaces. This is the process ARKit apps undertake to detect surfaces.
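The "mapping" half of SLAM can be sketched in miniature: points observed in the device's own frame are transformed by the current pose into a shared world map, so the same feature seen from two positions lands in the same spot. (Real SLAM also has to estimate the pose itself; the 2D poses and points below are made up purely for illustration.)

```python
import math

def to_world(pose, point):
    """Transform a 2D point from the device frame into the world frame.
    pose = (x, y, heading in radians); point = (px, py) relative to the device."""
    x, y, th = pose
    px, py = point
    return (x + px * math.cos(th) - py * math.sin(th),
            y + px * math.sin(th) + py * math.cos(th))

# Two observations of the same wall corner from two different poses
# should land at (nearly) the same spot on the map.
world_map = set()
observations = [((0.0, 0.0, 0.0), (2.0, 1.0)),           # seen from the origin
                ((2.0, 0.0, math.pi / 2), (1.0, 0.0))]   # seen after moving and turning
for pose, point in observations:
    wx, wy = to_world(pose, point)
    world_map.add((round(wx, 3), round(wy, 3)))

print(world_map)  # both observations map to the single point (2.0, 1.0)
```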

Six Degrees of Freedom (6DoF) Tracking: In AR and VR, 6DoF describes the range of motion a head-mounted display can track relative to virtual content in a scene. Three of the degrees refer to the rotation of the user's head: turning left and right (yaw), tilting forward and backward (pitch), and tilting side to side (roll). The remaining three pertain to movement within the space: left and right, backward and forward, and up and down.
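A hypothetical pose type makes the six numbers explicit, three for translation and three for rotation:

```python
from dataclasses import dataclass

@dataclass
class Pose6DoF:
    # Three translational degrees of freedom (position, in meters)
    x: float
    y: float
    z: float
    # Three rotational degrees of freedom (orientation, in degrees)
    yaw: float
    pitch: float
    roll: float

# A device reporting only yaw/pitch/roll is "3DoF"; adding x/y/z translation
# is what makes it 6DoF. Example: a user turned 90 degrees, head 1.6 m up.
pose = Pose6DoF(x=0.0, y=1.6, z=0.0, yaw=90.0, pitch=0.0, roll=0.0)
print(pose)
```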

Software Development Kit (SDK): An SDK consists of a group of development tools used to build an application for a specific platform.

Toolkit: A set of software tools that enable specific functions on a platform. For instance, ARKit is a toolkit that enables AR functions for apps running on the iOS platform.

Tracking: In augmented reality, tracking is the method by which a computer anchors content to a fixed point in space, allowing users to walk and/or look around it, as defined by the degrees of freedom allowed by the display device. In marker-based tracking, the computer recognizes a two-dimensional image or code on which it anchors the content. In markerless tracking, the computer uses some other mapping technique (usually SLAM) to determine a surface on which to anchor content.

Virtual Reality (VR): Whereas augmented reality places virtual objects into a user's view of the physical environment, virtual reality replaces the user's view of the physical environment with a virtual environment, in which virtual content resides.

Visual-Inertial Odometry (VIO): According to Qualcomm, VIO pairs a camera with inertial sensors in a device to estimate its position and orientation.
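The fusion idea behind VIO can be sketched in one dimension: the inertial sensors predict motion between frames, and each camera measurement pulls the estimate back toward what was actually observed, correcting drift. The blend weight and the camera fixes below are arbitrary illustrative values; real VIO systems use a proper filter (such as an EKF) over full 3D pose.

```python
def fuse(imu_prediction: float, camera_measurement: float,
         trust_camera: float = 0.2) -> float:
    """Blend an inertial motion prediction with a camera-derived position fix."""
    return (1 - trust_camera) * imu_prediction + trust_camera * camera_measurement

position = 0.0
velocity = 1.0      # m/s, assumed to come from integrated accelerometer data
dt = 0.1            # seconds between camera frames
camera_fixes = [0.11, 0.19, 0.32]  # made-up positions recovered from the camera

for fix in camera_fixes:
    position = fuse(position + velocity * dt, fix)
print(round(position, 3))  # 0.304
```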

Waveguide Displays: These are transparent displays through which digital content is projected into the user's field of view.

This is just the first version of our AR terminology quick guide. Be sure to bookmark this page to keep up with the updates we'll be adding in the coming weeks and months!

Cover image via ARLOOPA/YouTube
