How we’re harnessing the power of GPUs for both virtual immersion and large-scale data processing.
“As Hiro approaches the Street, he sees two young couples, probably using their parents’ computers for a double date in the Metaverse, climbing down out of Port Zero, which is the local port of entry and monorail stop. He is not seeing real people, of course. This is all a part of the moving illustration drawn by his computer according to specifications coming down the fiber-optic cable. The people are pieces of software called avatars. They are the audiovisual bodies that people use to communicate with each other in the Metaverse.”
–From Snow Crash, Neal Stephenson (Bantam, 1992)
A brand-new era
Although the concept, an evolution of cyberspace along more social lines, has been around since the early 1990s, the Metaverse is now having its spotlight moment, with companies such as Epic Games, Microsoft, and most recently Facebook (now Meta) announcing plans to create interconnected worlds spanning both physical and virtual environments.
The names involved imply the scope of the Metaverse: work, games, social. But there will be room for others, from education to therapy to politics. Many of these things were also part of the Second Life experience of the early noughties, but this time, thanks to augmented and virtual reality (AR/VR), the Metaverse promises to be a far more immersive experience. It’s probably best to think of it as a virtual version of the world, in which much of what you can do in the real world is also possible in the virtual one.
Bringing this ambitious concept to life does come with its challenges. Because interoperability between platforms, programs, and apps will be key to a fully functioning ecosystem, software will need to evolve to facilitate it. Over the coming years we are likely to see significant advances in software that allow massive amounts of data to be managed in new and exciting ways.