Will VR Ever Replace Laser Tag? Plus AI Dali, More VR in Museums, New Synthesis Games, and more...
Bob Cooney explores the biggest market opportunity for VR in FECs, while VR continues to change the art and museum landscape.

Did someone send this newsletter to you?
Subscribe here - and give them a hug from us! 🤗

Lots of new museum VR installations, Synthesis adds Springboard games, penguins invade Scotland, and all the news from around the world on LBVR. And I unpack what has to happen for VR to overtake laser tag. It’s inevitable, but when? Read below the news to find out in One Big Thing.
When Austin Powers said "Why won't you die?" was he talking about laser tag as a business?
New Developments

Synthesis VR Gets More Springboard Titles
FECs and Arcades
Synthesis VR expands PCVR library through SpringboardVR
Over 20 new titles—some never before seen in arcades—are now available to Synthesis operators thanks to a new SpringboardVR integration. Read more →
Virtual Reality World Galway launches immersive family fun
This new Irish venue blends VR arcades with escape rooms and motion platforms—an all-ages play space tailored to locals and tourists. Read more →
Museums and Science Centers
Solitude VR premieres at Taiwan’s Museum of the Moving Image
A slow-burn, poetic VR experience that challenges conventional museum storytelling through isolation and immersion. Read more →
Virtual walk through the 1970 Osaka Expo Pavilion
The past meets presence in this detailed reconstruction of Panasonic’s Expo ’70 exhibit, now fully explorable in VR. Read more →
RAF Museum launches WWII VR dogfight
Visitors suit up and take flight in a room-scale Spitfire simulator designed to educate while thrilling. Read more →
Dali Museum’s surreal new VR blends art and AI
“Call Dali” puts you face-to-face with a deepfaked Salvador Dali inside an interactive VR dreamscape. Read more →
Art, Music, and Culture
Bronze Era Odyssey opens in Wuhan
This large-scale, site-specific experience invites users into a mythologized ancient China through cinematic VR and spatial storytelling. Read more →
Travel and Tourism
L.A.'s Lunar Light VR brings improv to the moon
A curious hybrid of immersive theater and free-form exploration, this lunar VR journey invites playful interaction with narrative fragments. Read more →
Antarctica VR experience travels to Scotland
Designed to simulate life on the ice shelf, this educational VR piece wraps climate messaging in cinematic wonder. Read more →
Technology
Wireless PCVR setup enables backpack-free free-roam
This standalone rig runs high-fidelity PCVR wirelessly, unlocking free-roam multiplayer with no need for back-worn computers. Read more →
One Big Thing
Why Hasn’t VR Eaten the Laser Tag Market?
Laser tag arenas represent a massive opportunity for VR transformation. With thousands of venues worldwide, these spaces can be reinvented as fully immersive VR attractions, delivering enhanced gameplay and rich mixed reality interactions.
Creative Works attempted this go-to-market strategy a few years ago with Limitless VR. They used Matterport scans to convert arenas into virtual maps. The cost of doing this was too high, so they pivoted to smaller arenas with a few portable barriers. This requires FECs to clear precious floor space, from 600 to 1,800 square feet, which has limited adoption.
Recent posts from developer Julian Triveri, picked up by Upload VR, showcase how Meta Quest’s newly released depth sensor API can be used to build a continuous, real-time map of any space. This proof of concept suggests it’s feasible to build a VR system that generates detailed 3D meshes of rooms in real time, while players are running around playing.
Understanding Continuous Scene Meshing Technology
Continuous scene meshing technology creates a live, evolving 3D scene mesh of the environment, updating spatial data in real-time as users move through it.
A scene mesh is like a digital spiderweb that wraps around everything in a room so a VR system knows what’s there.
Imagine you're blindfolded and trying to feel the shape of everything in your room with your hands. The mesh is like that—but done by cameras and sensors. It creates a 3D map of surfaces like floors, walls, tables, and chairs.
Then in VR, that map lets the system:
Avoid putting virtual objects where physical objects exist, so players don’t get confused.
Let players hide behind walls and other physical barriers in a game.
Make virtual bots walk around real obstacles, behaving in a more lifelike fashion.
Make sure laser blasts don’t go through walls, while still leaving burn scars for realism.
Example
Say you’re playing a VR game in a laser tag arena. You would scan the arena with a headset, capturing all the nooks and crannies of the physical barriers. The headsets build a mesh—a wireframe model of your walls, floor, ceilings, and ramps. If the game wants to place a monster crawling on your ceiling or a base to be captured on a floor or wall, it uses the scene mesh to put it in the right place.
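To make the laser-blast case concrete, here’s a minimal Python sketch of the idea: cast the blast as a ray against the scene mesh’s triangles and stop it at the nearest physical surface. The mesh format and function names here are hypothetical stand-ins, not from any real SDK; real engines do this with hardware-accelerated raycasts.

```python
# Illustrative sketch (no real SDK): a scene mesh as a list of
# triangles, and a blast ray that stops at the first surface it hits.

def ray_hits_triangle(origin, direction, tri, eps=1e-9):
    """Moller-Trumbore ray/triangle test.
    Returns the hit distance t along the ray, or None for a miss."""
    def sub(a, b): return [a[i] - b[i] for i in range(3)]
    def dot(a, b): return sum(a[i] * b[i] for i in range(3))
    def cross(a, b): return [a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0]]
    v0, v1, v2 = tri
    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, e2)
    a = dot(e1, h)
    if abs(a) < eps:                 # ray is parallel to the triangle
        return None
    f = 1.0 / a
    s = sub(origin, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = f * dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(e2, q)
    return t if t > eps else None    # hits behind the origin don't count

def blast_range(origin, direction, scene_mesh, max_range=50.0):
    """Distance a blast travels before hitting real geometry,
    capped at max_range when nothing is in the way."""
    hits = [t for t in (ray_hits_triangle(origin, direction, tri)
                        for tri in scene_mesh) if t is not None]
    return min(hits + [max_range])

# A wall at x = 2 meters, built from two triangles:
wall = [
    [(2, -1, -1), (2, 1, -1), (2, -1, 1)],
    [(2, 1, 1), (2, 1, -1), (2, -1, 1)],
]
print(blast_range((0, 0, 0), (1, 0, 0), wall))   # blast stops at the wall
print(blast_range((0, 0, 0), (-1, 0, 0), wall))  # nothing behind: full range
```

The same intersection test that clips the blast can also report the hit point, which is where a game would spawn the burn-scar decal.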
The way it’s done now is called static scene scanning, where only one snapshot of the environment is taken, saved to a computer or the cloud, and then distributed to all the headsets in the system. Static scans become outdated when objects change. Meta Quest struggles to remember maps, whereas platforms like HTC and Pico allow map storage and sharing as part of their LBE platforms.
Continuous scene meshing replaces the one-time static map created in advance with a real-time map of the scene, shared by all the headsets as it’s created. Ideally, each headset uploads the parts of the mesh it scans to a central system, which combines all the disparate meshes into one mesh to rule them all. Sauron would be happy.
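A minimal sketch of that merge step, with an occupancy grid standing in for a full triangle mesh (the class and method names are hypothetical, not from any vendor SDK): each headset streams the cells it has scanned along with a timestamp, and the central map keeps the freshest observation per cell.

```python
# Illustrative sketch: a central service merging partial scans from
# many headsets into one shared map. An occupancy grid stands in for
# a real triangle mesh; timestamps resolve conflicting observations.

class SharedSceneMap:
    def __init__(self, cell_size=0.25):   # cell size in meters
        self.cell_size = cell_size
        self.cells = {}                   # (i, j, k) -> (timestamp, occupied)

    def _key(self, point):
        return tuple(int(c // self.cell_size) for c in point)

    def ingest(self, headset_id, observations):
        """observations: iterable of (point_xyz, timestamp, occupied).
        Newer observations win, regardless of which headset sent them."""
        for point, ts, occupied in observations:
            key = self._key(point)
            old = self.cells.get(key)
            if old is None or ts > old[0]:
                self.cells[key] = (ts, occupied)

    def is_occupied(self, point):
        cell = self.cells.get(self._key(point))
        return bool(cell and cell[1])

# Headset A scans a movable barrier; headset B later sees it gone.
arena = SharedSceneMap()
arena.ingest("headset_a", [((1.02, 0.53, 2.01), 10.0, True)])
arena.ingest("headset_b", [((1.05, 0.55, 2.03), 12.0, False)])
print(arena.is_occupied((1.05, 0.55, 2.03)))  # False: the newest scan wins
```

The timestamp rule is doing the heavy lifting here: it’s what lets a moved barrier disappear from every player’s map without anyone rescanning the room.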
Dynamic Occlusion
The Meta Depth API, which was recently opened to third-party developers, is essential for Quest because it provides continuous, real-time depth frames from the headset’s sensors. But Quest offers another feature that neither HTC VIVE nor Pico has released: Dynamic Occlusion.
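Conceptually, dynamic occlusion is a per-pixel depth test: keep a virtual fragment only where it sits closer to the eye than the sensed real-world surface at the same pixel. A toy Python sketch of that test, with nested lists standing in for GPU buffers (real implementations do this in a shader against the headset’s depth frames):

```python
# Toy sketch of per-pixel dynamic occlusion. Nested lists stand in
# for GPU depth buffers; depths are meters from the viewer.

def composite(virtual_depth, real_depth, virtual_px="V", passthrough_px="."):
    """Keep a virtual pixel only where the virtual surface is nearer
    than the real surface sensed at the same pixel. None means no
    virtual content was rendered there."""
    out = []
    for v_row, r_row in zip(virtual_depth, real_depth):
        out.append([virtual_px if v is not None and v < r else passthrough_px
                    for v, r in zip(v_row, r_row)])
    return out

# A player's arm at 1.0 m partially covers a virtual crate at 1.5 m:
virtual_depth = [[1.5, 1.5, None],
                 [1.5, 1.5, None]]
real_depth    = [[1.0, 2.0, 2.0],   # left column: the arm, in front
                 [1.0, 2.0, 2.0]]
for row in composite(virtual_depth, real_depth):
    print("".join(row))   # the arm clips the crate's left edge: ".V." twice
```

Because the real-depth buffer refreshes every frame, a moving arm or another player occludes virtual objects with no rescan, which is exactly what a static mesh can’t do.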
Last week, I put the Meta Quest 3 through its paces inside XR Mission – Battle World 2045, a free-roam mixed-reality shooter that opened in late 2024 at Tokyo Dome City. The attraction is a straightforward, compact, free-roaming experience for 4-6 players. When a player steps behind a digital object, the headset immediately clips their avatar; you lose visual contact, exactly as you would if a physical crate were in the way. That single capability changes how the game is played—sight-lines matter, flanking works, and suppressive fire is meaningful. The mapping held up under fluorescent lighting, and I didn’t notice any artefacts around thin railings or signage.
I asked Thomas, the developer, why they went with Quest despite its notorious free-roam calibration problems. He said that dynamic occlusion was a must-have for their experience, which I understood as soon as I started playing. It’s an unfortunate trade-off: between games, two employees check and recalibrate each headset, eating up precious minutes of high-rent space while customers wait in line for more than an hour to play.
For VR to work in a laser tag arena, we are going to need a combination of:
Hardware-level depth sensors, like those in the HTC VIVE Focus Vision and Pico 4 Ultra, to reduce processor load
Shared continuous scene meshing between headsets for fast, accurate arena mapping
Dynamic Occlusion so virtual objects mixed with physical ones appear real to the players.
So far, no headset I am aware of offers all three of these. Get to work, everybody!
Outdoor Downloadable Theme Park
Beyond indoor settings like Laser Tag, continuous scene meshing can also revolutionize outdoor mixed reality applications like Dream Park from Two Bit Circus. By leveraging GPS data and advanced depth-sensing capabilities, users can engage in immersive experiences in outdoor environments. Picture a scenario where users participate in a treasure hunt or interactive storytelling experience in a park, with virtual elements seamlessly blending into the real-world surroundings.
Dream Park in Santa Monica, CA
Balancing Performance and Detail
Continuous scene meshing requires finding the right balance between performance cost and the level of detail needed for an immersive XR experience. This balance is crucial if high-throughput mixed reality deployments are to gain traction. Turning 5,000 laser tag arenas into XR attractions requires real-time environment mapping that doesn’t compromise device responsiveness.
The reliance on machine vision in the Quest puts too much strain on the XR2 Gen 2 processor, despite it being the current state-of-the-art (not counting the Apple Vision Pro M-class chips). After creating the real-time mesh, there’s not enough oomph left to render high-resolution, multiplayer gaming experiences. For mixed reality laser tag to work, headsets must render avatars for as many players as can be seen at once, plus gun and environmental effects, lighting, shading, particle effects, etc.
Key performance considerations include:
GPU and CPU load: Devices like Quest 3 & 3S rely on computationally intensive computer vision algorithms to generate continuous meshes, placing significant strain on both GPU and CPU resources. This contrasts with hardware-level depth sensors found in Apple Vision Pro or Pico 4 Ultra, which offload some processing and reduce latency.
Battery consumption: Continuous meshing increases power draw due to sustained sensor usage and complex calculations. Swappable batteries like those on BoboVR accessory headstraps or HTC Focus Vision make this trade-off negligible.
Mesh resolution vs. update frequency: Higher mesh detail improves spatial accuracy but requires more processing power. Developers must optimize the frequency of mesh updates to maintain smooth frame rates without sacrificing critical environmental data.
Device-specific optimizations: Pico 4 Ultra and VIVE Focus Vision devices often demonstrate better thermal management under continuous load, enabling longer use in mixed reality scenarios with less throttling than Quest devices. However, Pico’s and VIVE’s lack of Dynamic Occlusion makes them a non-starter for now in the mixed reality world.
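One plausible way to handle the mesh-resolution vs. update-frequency trade-off above is adaptive throttling: refresh the mesh less often when frame times blow the budget, and more often when there’s headroom. A hypothetical Python sketch, with made-up thresholds and a made-up doubling/halving policy:

```python
# Hypothetical sketch: throttle mesh updates based on recent frame
# time so meshing never steals the rendering budget. The 72 Hz
# budget and the doubling/halving policy are illustrative choices.

class MeshUpdateThrottle:
    FRAME_BUDGET_MS = 13.9   # ~72 Hz refresh

    def __init__(self, min_interval=1, max_interval=8):
        self.interval = min_interval   # refresh the mesh every N frames
        self.min_interval = min_interval
        self.max_interval = max_interval
        self.frame = 0

    def on_frame(self, frame_time_ms):
        """Call once per rendered frame. Returns True when the mesh
        should be refreshed on this frame."""
        if frame_time_ms > self.FRAME_BUDGET_MS:
            self.interval = min(self.interval * 2, self.max_interval)
        elif frame_time_ms < 0.8 * self.FRAME_BUDGET_MS:
            self.interval = max(self.interval // 2, self.min_interval)
        self.frame += 1
        return self.frame % self.interval == 0

throttle = MeshUpdateThrottle()
throttle.on_frame(20.0)   # over budget: back off to every 2nd frame
throttle.on_frame(20.0)   # still over: every 4th frame
throttle.on_frame(5.0)    # headroom returns: back to every 2nd frame
print(throttle.interval)  # → 2
```

A real system would also lower mesh resolution, not just update rate, but the principle is the same: meshing gets whatever compute is left after the game hits frame rate, never the other way around.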
Balancing these factors lets you harness continuous scene meshing effectively, ensuring VR laser tag arenas deliver dynamic and reliable mixed reality interactions without overwhelming hardware limitations.
Developer Resources for Implementing Continuous Scene Meshing
Developers aiming to integrate continuous scene meshing into VR and MR applications now have access to Meta SDK features designed to expose spatial mesh data. Also, Julian Triveri has posted his continuous Quest 3 meshing code on GitHub.
Vive Focus Vision has the right sensors, but as of the Wave 6.2 SDK, it still limits you to mesh-based occlusion. Real-time per-pixel masking isn’t available, so moving limbs and other players will show through virtual objects. Good enough for training sims; not good enough for tactical shooters and high-end LBE experiences.
Looking Ahead: Challenges and Future Directions in Continuous Scene Meshing Technology
Meta’s current scene mesh system has several key limitations that affect seamless mixed reality experiences.
1. Manual Update Requirement for Scene Meshes
One of the main challenges is that scene meshes need to be updated manually. Even though Meta plans to automate updates in the future, the current system still requires an initial scan and regular rescanning to accurately capture changes in the environment. This manual process creates friction and disrupts the smoothness expected in MR applications.
Triveri’s code ignores Meta’s scene meshing. But future updates to Meta Quest software and firmware could easily break any integration of third party code.
2. Static Nature of Scanned Meshes
Another limitation is that scanned meshes are static. Scene meshes only represent a specific moment and do not adapt over time. Moving objects or rearranging barriers and obstacles in playing environments make these meshes outdated unless actively refreshed.
The nice thing about continuous scene meshing is that it dynamically updates. Laser tag arenas with movable obstacles don’t require new scans.
3. Performance Balancing Act
Performance is also a crucial factor to consider. Quest 3 and Quest 3S rely on complex computer vision algorithms to create spatial meshes, unlike devices such as Apple Vision Pro or Pico 4 Ultra that use hardware-level depth sensors. This difference leads to increased GPU/CPU workload, affecting battery life and limiting extended use of continuous meshing features.
4. Complexity of Networking Multiple Devices
Networking multiple devices for shared scene understanding is another challenge, adding synchronization and data-consistency complexity. While experimental methods like networked height mapping show potential, they are not yet ready for production use.
To overcome these limitations, we need innovations in:
Automated real-time mesh updating without user intervention
Optimized algorithms to reduce computational overhead
Enhanced hardware integration for precise depth sensing
Scalable multi-user environment mapping
Continued development will bring continuous scene meshing closer to truly immersive and adaptable mixed reality experiences. This is crucial for large-scale VR attractions like laser tag arenas and beyond.
Conclusions about Occlusion
Continuous Scene Meshing is key to converting 5000 laser tag arenas to VR attractions. It’s the biggest addressable market for location-based virtual reality.
The future of interactive entertainment experiences lies in the hands of hardware manufacturers or creative developers coding workarounds, like Julian Triveri. By exploring the implementation of continuous scene meshing using modern SDKs and hardware advancements, we can create next-generation attractions that seamlessly blend the physical and virtual worlds.
It’s time to embrace this technology and unlock new possibilities for immersive entertainment. The potential is vast, and those who dare to innovate will reap the rewards in this ever-evolving industry.
Stay immersed,
Bob
PS. Have you joined The VR Collective in Circle yet? We are over 300 members strong, and it’s where I post all the most up-to-date insights and information. Join Now!
Need More Leads for your VR Business?
Advertise with The VR Collective and leverage Bob Cooney’s influence and audience.
Get More Information