Lenovo and How ‘Star Trek: The Next Generation’ Got the Holodeck Wrong

Like many of you, I was enthralled with the holodeck concept when “Star Trek: The Next Generation” (TNG) came out. For many of us, that became the bar for the coming metaverse or 3D web implementation.

The holodeck was a photorealistic virtual environment built on the concept of hard light, which could render solid objects out of light (it is a thing) to provide entertainment for the crew. A starship spending months or years away from its home port would need some form of recreation. Large, Soviet-era submarines needed swimming pools for much the same reason.

Though not new, the concept of creating a virtual world hadn't become real outside of games used primarily for entertainment. While simulations for more practical purposes, like military training, have existed for decades, only a small minority of people ever experienced them. That, coupled with what were likely substantial cost restrictions, prevented the show from taking this technology where it should have gone. The glaring error is obvious now that we are actively exploring how to recreate holodeck-like experiences.

Let’s explore how TNG got holodeck technology wrong or at least didn’t apply it as widely as will be done in real life. Then we’ll close with my product of the week, a phone and smartwatch service from Gabb Wireless that will keep your kids, and maybe some of our aging adults, far safer.

Simulation-to-Interface Optimization

The problem with TNG’s holodeck technology didn’t come to me while watching the show, either initially or later. It came to me while watching the various keynotes from Lenovo’s virtual Tech World event last week.

Lenovo arguably has what is currently the best suite of devices to explore commercial interfaces into metaverse-like constructs. It showcased a set of deep relationships with core technology providers that will help the company execute in areas where mixed reality is used, like in holodeck-like, VR-based video conferencing offerings. In contrast to Meta’s prototypes, these products seem to include legs.


Lenovo’s tools include a variety of glasses and conference/huddle-room offerings that blend in improved avatars, including one that scans participants in real time with a 3D scanner to create a more holodeck-like experience than the cartoon-like avatars pioneered by Facebook.

This is somewhat similar to the virtual medical doctor in “Star Trek: Voyager,” who, with a unique badge-like emitter, could not only leave areas covered by holo-emitters but exit Voyager as well.

In several episodes of both series, the holodeck not only recreated the bridge and control interfaces of the various ships but fooled participants into thinking they weren’t in the holodeck at all.

So, what’s the mistake?

Well, if you can create anything with hard light, including people, why would you need fixed interfaces on the ship, and why would you be limited to a living crew?

How the Metaverse Could Change Human-to-Machine Interfaces

We’ve often talked about how the big AI revolution will eliminate the need for us to learn how to use technology-based tools. Much like we’ve seen with AI-based artists or writers, users only need to be able to describe what they want to get a result. If they want a paper on a particular topic, they summarize the assignment, and AI generates the written result. Or they describe what they’d want in a picture, and, again, AI creates it.

Now fast forward hundreds of years into the time of “Star Trek” stories.

Wouldn’t this mean that the human-machine interfaces all over the Enterprise would be hard-light-based, would change dynamically to address both the operator’s and the situation’s unique needs, and would potentially be redundant because the AI would already be doing much of what the crew does automatically?

Physical Drones vs. Hard Light Human Digital Twins

“Star Trek: Discovery” recently showcased the use of drones, and TNG did have the android called Data, but why do you need massive staffing levels on a starship if you can create digital people who are indistinguishable from humans?

Also, if you can create complex objects virtually, why wouldn’t you have control interfaces that adapt to the situation rather than being fixed? Further, given that you could project the crew almost anywhere, why would you put them in a vulnerable position at the ship’s skin on the top deck rather than in a central, armored position deep within the vessel?

I’m pointing this out because, often, with new technology, we first emulate how we used to do things. Then, over time, we break from those out-of-date constructs and eventually optimize around the latest technology. As we move into the metaverse, we are talking about the concept of digital twins, but what if we only need the twin and don’t need the actual physical device?
