Looking back over the last decade, I see huge leaps in the augmented- and virtual-reality (AR/VR) world as innovators have worked to deliver more realistic, immersive display experiences. However, we still have a long way to go to blur the lines between the real world and the digital world.
To really drive the future of AR/VR, the software, sensors and display hardware all need to work in concert to deliver better display experiences for humans. The display industry could benefit greatly from closer collaboration with software, sensor and chip companies to get us to the next level. We need these innovators to push the limits of their software to enable more realistic human interactions. This cross-industry collaboration will no doubt fuel more innovation in the space. Few markets are as interconnected with other markets as AR/VR, and we should take advantage of that.
This is an area I’m especially interested in right now. It’s all about the immersive experiences we are trying to create.
I had a conversation about this topic recently with display industry pioneer and visionary Tara Akhavan, who told me, “The post-Covid era and events such as the chip shortage are proving to us more than ever that the siloed industry is not sustainable. We need cross-industry and ecosystem partnerships more than ever to create the most optimized display UX [user experience] demanded by consumers. Targeted collaborations have never been as key a part of the display ecosystem before as they are today. We see across the board more and more announcements in this regard.”
This is great advice for our industry. Displays are improving every day, but I believe software, sensors and chip companies can contribute more to help us overcome these challenges and vastly improve the human experience.
This has inspired me to take a deeper dive into human-centric design and to think more about how we as an industry can push the envelope on creating better human experiences in digital environments, not just on the display side but also through more innovation in the software, sensors and chips that drive these experiences.
What would it take to create a display system capable of passing a visual Turing test, fooling people into thinking they were seeing reality? From a display technology perspective, it’s pretty straightforward: You just need a lot more, a lot better and a lot faster pixels.
It may sound easy, but some major technological leaps are required before we can build a display that physically produces light that fools our eyes into perceiving reality. Just think about the display in a high-end smartphone that most of us are familiar with today: How much more would it need to improve?
The human visual system turns out to be pretty impressive. Resolution and brightness would each need to increase by about an order of magnitude. Color is also woefully inadequate compared with what our eyes can perceive: Today’s displays recreate only about 45% of the range of colors our eyes can detect, so color coverage would need to more than double. Today’s pixels are also a little slow compared with our speedy visual system; frame rates would need to rise by about 50% over a common smartphone.
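To put rough numbers on this, here is a back-of-the-envelope sketch in Python. The smartphone baseline figures are illustrative assumptions on my part, not measurements; only the order-of-magnitude, 45% and 50% targets come from the paragraph above.

```python
# Back-of-the-envelope comparison of an assumed high-end smartphone
# display against a hypothetical "fools the eye" target.
# Baseline numbers are illustrative assumptions, not measurements.

baseline = {
    "resolution_px": 3200 * 1440,  # assumed QHD+ smartphone panel
    "peak_nits": 1000,             # assumed peak brightness
    "color_coverage_pct": 45,      # share of visible colors, per the text
    "frame_rate_hz": 120,          # assumed refresh rate
}

target = {
    "resolution_px": baseline["resolution_px"] * 10,        # ~10x
    "peak_nits": baseline["peak_nits"] * 10,                # ~10x
    "color_coverage_pct": 100,                              # full human gamut
    "frame_rate_hz": int(baseline["frame_rate_hz"] * 1.5),  # ~50% faster
}

# Pixel throughput (pixels drawn per second) compounds resolution and
# frame rate, which is where the hardware challenge really bites.
for spec in (baseline, target):
    spec["pixel_throughput"] = spec["resolution_px"] * spec["frame_rate_hz"]

ratio = target["pixel_throughput"] / baseline["pixel_throughput"]
print(f"Pixel throughput must grow ~{ratio:.0f}x "
      f"({baseline['pixel_throughput']:.2e} -> {target['pixel_throughput']:.2e} px/s)")
```

Compounding a 10x resolution jump with a 50% frame-rate bump alone means roughly 15x the pixel throughput, before brightness and color improvements even enter the picture.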
Display technologists are already hard at work on all of these big challenges: LCDs, miniLEDs, OLEDs, quantum dots and microLEDs all offer different roadmaps to deliver these amazing experiences. It will be fun to watch over the next few years, but it also won’t be enough.
Software, sensors and chips will play an increasingly critical role in driving the future of more realistic display technologies and immersive experiences. Let’s take a closer look at each technology area.
Sensors
Sensors are a key component of any human-centric display system. They enable displays to adapt to the environment to deliver better, more immersive experiences and can even improve battery life.
Many of us are familiar with the ambient light sensor on most modern smartphones that constantly adjusts the display brightness to match the ambient light. This saves power by always keeping the display brightness at the appropriate level and makes the display more readable for users.
This can be taken much further with deeper integration between sensors, chips and displays. One example is Intel, which is doing some really cool work with its Visual Sensing Technology. In this case, the sensors allow the device to be aware of not just the ambient light but the user’s attention. Users can unlock and turn on the device just by looking at it, and when they look away, the screen dims to save power. It’s a bit like the engine in your car turning off at every stop light. A bunch of small reductions in battery use can really add up to increased battery life.
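As a minimal sketch of the concept (not Intel’s actual implementation; the lux scale and dim levels are made-up values), a brightness policy might combine ambient light with user presence and attention like this:

```python
# Minimal sketch of attention-aware brightness control. This is an
# illustration of the concept, not Intel's implementation; the lux
# scale and dim levels below are made-up values.

def target_brightness(ambient_lux: float, user_present: bool,
                      user_looking: bool) -> float:
    """Return display brightness as a fraction of maximum (0.0 to 1.0)."""
    if not user_present:
        return 0.0   # screen off: nobody is there
    if not user_looking:
        return 0.05  # heavily dimmed: save power until gaze returns

    # Map ambient light to brightness: dim room, dim screen; bright
    # daylight, full brightness. Clamp to a usable range.
    level = ambient_lux / 10_000  # assume ~10,000 lux = direct daylight
    return max(0.1, min(1.0, level))

# Example: a lit office (~500 lux) with the user glancing away vs. looking.
print(target_brightness(500, user_present=True, user_looking=False))  # 0.05
print(target_brightness(500, user_present=True, user_looking=True))   # 0.1
```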
In extended-reality (XR) applications, sensors can create an even more impactful experience. Here, sensors can track where we are looking to deliver sharper images with less GPU power through foveated rendering, as sketched below. They can read our facial expressions so we can communicate more effectively, and they can map the physical environment around us so we don’t bump into walls and can interact with the physical world in AR.
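To make foveated rendering concrete, here is a toy sketch in which shading resolution falls off with angular distance from the gaze point. The eccentricity bands and shading rates are illustrative, not taken from any shipping headset SDK.

```python
import math

# Toy sketch of foveated rendering: shading resolution falls off with
# angular distance (eccentricity) from the user's gaze point. The bands
# and rates below are illustrative, not from any shipping SDK.

def shading_rate(gaze_deg: tuple[float, float],
                 pixel_deg: tuple[float, float]) -> float:
    """Fraction of full shading resolution for a screen region, given the
    gaze direction and the region's direction in degrees of visual angle."""
    eccentricity = math.dist(gaze_deg, pixel_deg)
    if eccentricity < 5:   # fovea: full detail
        return 1.0
    if eccentricity < 20:  # near periphery: quarter resolution
        return 0.25
    return 0.0625          # far periphery: 1/16 resolution

# A region 30 degrees from where the user is looking gets 1/16 of the
# shading work, which is where the GPU savings come from.
print(shading_rate((0.0, 0.0), (30.0, 0.0)))  # 0.0625
```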
One downside to sensors, however, is that they can get in the way. But display notches and cutouts for sensors may soon be a thing of the past: OTI is doing some interesting work on under-panel sensors so that displays can go edge to edge for a more immersive experience. I hope to see more of these innovations in the near future.
Chips
Today’s VR experience is still at a relatively primitive stage in terms of realism, yet it already pushes processing power to the bleeding edge. Immersive digital experiences will require a tremendous increase in computing horsepower and efficiency. Perhaps the most obvious example is the sheer GPU power needed to render a realistic environment or human avatar. Reconstruction and rendering of lifelike avatars alone will demand computational power that is not available today, and that is only one piece of the processing puzzle.
The industry must continue to push the envelope on more powerful and highly energy-efficient processors that can, for example, enable an average VR headset to handle XR experiences, such as a spherical view that captures the scene in vivid detail, sensing the surroundings via light detection and ranging (LiDAR), while delivering rich 3D high-fidelity audio in all directions.
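Some rough frame-budget arithmetic shows the scale of the problem. The headset numbers below are illustrative assumptions, not the specifications of any particular device.

```python
# Rough frame-budget arithmetic for an assumed standalone XR headset.
# All numbers are illustrative assumptions, not measured specs.

refresh_hz = 90
frame_budget_ms = 1000 / refresh_hz  # ~11.1 ms to do everything, per frame

eyes = 2
per_eye_px = 2160 * 2160  # assumed per-eye resolution
pixels_per_sec = eyes * per_eye_px * refresh_hz

print(f"Frame budget: {frame_budget_ms:.1f} ms")
print(f"Render throughput: {pixels_per_sec / 1e9:.2f} gigapixels/s")

# In that same ~11 ms window the system must also fuse tracking and
# LiDAR input and spatialize 3D audio, all on a battery-powered device,
# which is why efficiency matters as much as peak performance.
```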
Will we ever get there? I’m optimistic that the industry will steadily chip away at this. That, combined with innovative cloud computing solutions that can offload some of the most intensive rendering work and faster networks for low latency, can deliver the required horsepower in the near future.
Software
Even if we can deliver on all of the above hardware challenges, the experience won’t be compelling without great software. Content creators and studios will need new tools to create compelling digital experiences that get the most out of the hardware.
Modern software development kits now enable powerful functionality that brings together the physical and digital worlds in new ways. The latest headsets allow virtual avatars to be more expressive by reading facial expressions and recreating those virtually. Software can also read the environment and recognize 3D objects. The tools are becoming increasingly sophisticated and easier to use. I would challenge the software industry to “show us what you got” for future human digital experiences. The display industry needs you.
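As a minimal sketch of the expressive-avatar idea, the snippet below blends tracked facial-expression weights into an avatar’s blendshapes with simple smoothing. The blendshape names and tracker interface are hypothetical; real face-tracking SDKs differ in the details.

```python
from dataclasses import dataclass, field

# Minimal sketch of driving an avatar from face-tracking output.
# The blendshape names and tracker data shape are hypothetical;
# real face-tracking SDKs differ in detail.

@dataclass
class AvatarFace:
    weights: dict[str, float] = field(default_factory=dict)  # name -> 0..1

def apply_tracking(tracked: dict[str, float], face: AvatarFace,
                   smoothing: float = 0.3) -> None:
    """Move each blendshape partway toward the tracked value so the
    avatar's expression follows the user without frame-to-frame jitter."""
    for name, value in tracked.items():
        prev = face.weights.get(name, 0.0)
        face.weights[name] = prev + smoothing * (value - prev)

face = AvatarFace()
apply_tracking({"jaw_open": 0.6, "smile_left": 0.8}, face)
print(face.weights)  # weights move partway toward the tracked values
```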
‘Turing’ a corner
There are some innovators that are already upping the ante on creating enhanced AR/VR experiences. For example, in 2022, Meta CEO Mark Zuckerberg announced that the company aims to pass a “visual Turing test” with a VR headset that provides visuals indistinguishable from the real world. Since then, Meta and its Reality Labs have been working to figure out what it takes to build next-generation displays for its virtual-, augmented- and mixed-reality headsets.
“Displays that match the full capacity of human vision are going to unlock some really important things,” Zuckerberg said in a Meta video. “The first is a realistic sense of presence. That’s the feeling of being with someone or in a place as if you’re physically there. … The other reason why these realistic displays are important is they are going to unlock a whole new generation of visual experiences.”
This may sound like an outrageous and borderline-impossible goal to many display technologists. No current display technology has been able to come even remotely close to passing a visual Turing test, something I’m sure Alan Turing would challenge the industry to do.
But as many of us in the industry know, the “impossible” can become “possible.” We just need to overcome a few challenges first. The problems that need to be solved in software and hardware to create truly immersive AR/VR experiences are crazy. But I believe we can overcome these with the right innovations and collaborations. So what are you waiting for? Let’s keep innovating. I can’t wait to see what you all come up with.
About the author
Jeff Yurek is a former creative professional turned marketer who is passionate about storytelling and technology. He is vice chair of marketing for the Society for Information Display (SID). SID’s Display Week 2023 will be held in Los Angeles, California, May 21–25, 2023. Contact Jeff at jeff@jeffyurek.com.