
Privacy standards in the metaverse: An end to the wild west days of innovation?

Experienced Researcher

Senior Researcher, IoT technologies


  • The sheer innovation rate of metaverse-related tech in recent years has outpaced the development of critical privacy standards in key tech areas such as AI, biometrics, and environmental sensing.
  • Today, as regulators and standards bodies extend their reach into metaverse-related tech domains, innovators face new demands to align with emerging consumer privacy, security, and autonomy safeguards.

Over the past decade, a veritable flood of new developments in eXtended Reality (XR) technologies has created a new “Wild Wild West” of innovation. As the capabilities of emerging technologies far outstripped the ability of governments and key ecosystem players to impose rules in a timely manner, one might have reasonably wondered: when will the music stop? When will the well-dressed, slickly coiffed regulators and standards bodies push through the doors of the virtual saloon to bring regulatory order and standardization to the metaverse?

Regulations and standards: What’s the difference?

 

A regulation is a binding legislative act: a law passed with the intention of controlling or dictating the course of operations. A standard, by contrast, is an agreed set of rules within an industry that defines how things are built and how they work. Standards make it possible for different regulatory players and industries to interact.

Well, we have good news for fans of Wyatt Earp in the metaverse: that time is now. Over the past decade, regulators and standards organizations have extended their reach into the foundational technology domains underpinning emerging XR applications. Innovators have found themselves facing new uncertainties about whether the new capabilities their technologies enable are aligned with newly enshrined rules of play protecting consumer privacy, security, and autonomy.

Laws, directives, regulations, and standards simultaneously influence and restrict innovation. But how will these processes impact the technologies that underpin the emerging metaverse and what we can expect in the bold new world of XR innovation? We shed some light on all those difficult questions below.

How regulations and standards map onto XR in metaverse tech domains

Some emerging and existing regulations and standards already constrain the development of XR technologies at the foundation of metaverse innovation. Of these foundations, we discuss three of the most pivotal: artificial intelligence, facial recognition and biometric technology, and environmental sensing technology:

Artificial intelligence: Regulations and standards

The domain of artificial intelligence (AI) has quickly become one of the fastest-growing technology areas, proliferating into all corners of life and technology. Most notably, machine learning (ML) techniques have been adapted to aid nearly every process that consumes data to produce outputs or insights.

However, the widespread proliferation of AI technologies has also called into question the risks to privacy posed by the widespread sharing, ingestion, and transformation of potentially sensitive user information across tech domains.

These concerns have translated into a series of regulations, standards, and industry responses that seek to curtail privacy vulnerabilities with well-understood restrictions, policies, and tools. In turn, these measures prevent the exposure of personal data without users fully understanding how their data will be used and disclosed.
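One widely used building block behind such measures is pseudonymization: replacing direct identifiers with salted hashes before data is shared with or ingested by ML pipelines. Below is a minimal sketch in Python; the record fields and salt handling are illustrative assumptions, not a mechanism prescribed by any regulation.

```python
import hashlib
import secrets

def pseudonymize(user_id: str, salt: bytes) -> str:
    """Replace a direct identifier with a salted SHA-256 hash so raw IDs
    never enter the ML pipeline. The salt must be stored separately and
    access-controlled; without it, the mapping cannot be rebuilt."""
    return hashlib.sha256(salt + user_id.encode()).hexdigest()

# One salt per deployment, generated once and kept out of the dataset.
salt = secrets.token_bytes(16)

record = {"user_id": "alice@example.com", "gaze_dwell_ms": 412}
safe_record = {**record, "user_id": pseudonymize(record["user_id"], salt)}
```

Note that under the GDPR, pseudonymized data still counts as personal data: pseudonymization reduces risk but does not remove the regulatory obligations discussed here.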

Regulatory proposals, such as the EU AI Strategy, mirror existing frameworks such as the GDPR and guidelines addressing AI trustworthiness to propose a legal framework that clarifies model liability, fundamental data subjects’ rights, privacy and safety risks, and product liability for AI deployed within EU legal jurisdiction.

While not yet in place, these regulatory proposals have already begun to move the needle on business approaches to AI investment, deployment, and auditing, placing further focus on the compliance of existing and emerging AI/ML development and use worldwide.

In terms of standards, 3GPP has supported working groups, studies, and specifications related to the AI domain since Release 17. Release 18 focuses specifically on AI and ML specifications that govern the use and application of AI models in telecommunications technologies. The ITU standards body also has working groups and specification subgroups focused on AI/ML standardization as these technologies continue to play a leading role across sectors in the emerging XR ecosystem.


Facial recognition and biometric technologies: Regulations and standards

Facial recognition technology is already nearly ubiquitous in mobile phones and other computing devices, and it stands to play a significant role in future metaverse technologies. Companies in the metaverse domain that require such unique biometric data as part of their services will likely face new questions concerning the processes, permissions, protocols, and safeguards in place to protect users’ likenesses.

The use of facial recognition technology is already widely regulated. In the US, several landmark court cases and state-level laws govern the specific use of biometric data – including, but often not limited to, facial recognition technologies.

Several proposed measures seek to extend these regulations further. These include legislation currently introduced in the US Senate that would prevent the US federal government and related agencies from collecting or storing facial recognition data at all. Furthermore, emerging rules on the collection, storage, and use of biometric information in medical and related use cases may limit the use of these technologies in XR applications in those areas.

Across the Atlantic, the EU Charter of Fundamental Rights provides specific protections for data representing an individual’s likeness – a protection that has been expanded in the General Data Protection Regulation (GDPR) that specifically governs the use of personal data related to biometric features such as identifiable facial features. The EU AI Act aims to severely limit the use of biometric technologies such as facial recognition that might lead to ubiquitous surveillance.

Standards have been slow to emerge around biometric and facial recognition technologies despite their widespread proliferation in mobile technologies and related computing ecosystems.

Since 2002, NIST has developed a series of standards, subcommittees, and recommendations, including through bodies such as ISO, on the responsible use of biometric technologies such as facial recognition.

An international coalition of business contributors has also introduced material for the drafting of biometrics and facial recognition standards in ITU-T. This coalition may be motivated to promote public infrastructure technologies that make mass surveillance technologies more privacy-preserving in the face of potential public backlash.

As biometric technologies continue to proliferate along with new use cases in the vastly expanding sphere of metaverse and extended reality technologies, we may expect further standards and best practices to emerge alongside them. Given the multitude of varied interests involved in standards, innovators must remain vigilant to ensure that standards meet the privacy challenges of emerging technologies in this domain.


Environmental sensing: Regulations and standards

Environmental sensing technologies not only set the scene for emerging XR technologies by providing environmental positioning, awareness, and context; they may also introduce critical privacy threats to users who wish to keep identifiable features of their environments private.

A particularly relevant example of environmental sensing technology that is both core to the metaverse and falls under the umbrella of existing and emerging regulations and standards is localization and mapping.

In their essence, localization and mapping technologies enable devices to estimate their approximate location and/or meaningful representations of their surroundings using features of the devices’ environment.
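The underlying principle can be illustrated with the simplest possible case: trilaterating a 2-D position from measured distances to three anchors at known locations. This is a toy sketch of the idea, not any specific 3GPP or XR positioning method; the privacy point is that even a handful of range measurements pins down exactly the kind of location data regulators treat as personal.

```python
def trilaterate_2d(anchors, distances):
    """Closed-form 2-D position from three (anchor, distance) pairs.
    Linearize the circle equations by subtracting the first one,
    then solve the resulting 2x2 linear system with Cramer's rule."""
    (x0, y0), (x1, y1), (x2, y2) = anchors
    d0, d1, d2 = distances
    # 2*(xi-x0)*x + 2*(yi-y0)*y = d0^2 - di^2 + xi^2 - x0^2 + yi^2 - y0^2
    a1, b1 = 2 * (x1 - x0), 2 * (y1 - y0)
    c1 = d0**2 - d1**2 + x1**2 - x0**2 + y1**2 - y0**2
    a2, b2 = 2 * (x2 - x0), 2 * (y2 - y0)
    c2 = d0**2 - d2**2 + x2**2 - x0**2 + y2**2 - y0**2
    det = a1 * b2 - a2 * b1  # nonzero when anchors are not collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

Real systems use many noisy measurements and least-squares or filtering rather than an exact solve, but the inputs and outputs are the same: ranges to known reference points in, an end-user location out.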

Localization and mapping processes that collect, store, or process information related to the location of end users are subject to the regulatory umbrella of general privacy regulations such as GDPR, CCPA, and any other regulatory framework that includes user locations or environments in its broader scope of personal data.

Within the GDPR, additional guidelines and best-practices guidance have been provided by the ELISE (European Location Interoperability Solutions for E-government) working group.

From a standardization perspective, positioning capabilities of cellular networks have been a topic of study since at least 3GPP Release 9, with positioning and localization standards emerging as key features with the development and release of LTE and 5G technologies in Release 13 and onwards. In addition, ITU, IETF, and ISO have a series of standards related to spatial processes in the Internet of Things that affect the development of localization and mapping technologies in XR.


The impact of privacy regulation and standards on innovation

So how might these emerging regulations and standards influence future development of metaverse technologies? In a nutshell, there are three key considerations:

  1. Strict requirements for privacy and security will require serious innovation not yet seen in existing products

    Even just the foundational technology domains of the metaverse we have explored above – AI processes, facial recognition and biometric technologies, and environmental sensing – fall under the umbrellas of several overlapping existing and emerging regulatory bodies, frameworks, and guidelines.

    As these regulations mature alongside rapidly evolving innovations within each of these domains, we expect the tension between privacy and emerging capabilities to necessitate significant investments in privacy- and security-aware technologies that ensure compliance with regulators’ expectations. For those who forgo the investments needed to prevent breaches of consumer privacy, financial penalties are all but certain.

  2. Standards create a common framework that helps to align ecosystem players, but the real work has only just begun

    Standards bodies can help bridge the gap between regulatory expectations and obligations, both existing and anticipated, and the mitigation technologies available today by aligning companies with common standards and specifications.

    This greatly improves the reliability of technologies for consumers while significantly reducing the risk of regulatory infringement by binding innovators to a common framework for technological evolution.

    However, the emergence of effective and enduring standards is not itself a given, but rather the outcome of a long and collaborative process prone to disagreement, delay, and compromise. When financial and innovation incentives fail to align across major corporate or political actors, standards bodies may fail to deliver specifications that address common concerns, leaving ecosystem players exposed to infringement of key regulations.

    In the privacy domain, this work has only just begun. Today, major standards bodies have yet to fully address the privacy and security dimensions of existing metaverse technologies even as many of these products begin deployment.

  3. The current focus of regulations and standards suggests a major innovation gap in meeting high bars for user privacy and security

    With increasing demands on the safeguarding of personal data in transit, in use, and at rest, much work remains to adapt core functions in metaverse technologies so that they satisfy the new regulatory realities.

    This will mean a renewed and vigorous push to increase the speed, efficiency, and compatibility of existing privacy-enhancing technologies (PETs) with core functions such as AI, biometric, and environmental sensing capabilities.

    Adapting PETs to such technologies while still meeting critical latency, accuracy, and energy efficiency requirements will require major investments in research to optimize the underlying algorithms, software, and hardware enabling compliance.
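To make the idea of a PET concrete, consider differential privacy, which masks any individual's contribution to an aggregate result by adding calibrated noise. Below is a minimal sketch of the Laplace mechanism for a counting query; the epsilon value and the query itself are illustrative assumptions.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample zero-mean Laplace noise via the inverse CDF."""
    u = random.random() - 0.5
    sign = -1.0 if u < 0 else 1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count: int, epsilon: float) -> float:
    """Counting query: one user changes the count by at most 1
    (sensitivity 1), so the Laplace noise scale is 1/epsilon."""
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller epsilon means more noise and stronger privacy; tuning that trade-off against the latency, accuracy, and energy budgets of AI, biometric, and sensing pipelines is precisely where the research investment lies.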

To the metaverse and beyond

In this blog post, we have illustrated how the very technological foundations of the metaverse are already being influenced by emerging regulations and standards, and we have uncovered several potential storm clouds stemming from the rapid advancement of both technology and regulation to meet user demands.

What may seem at first like a “Wild Wild West” of discord is more like a series of rapidly proliferating roots in an ever-adapting collaborative garden of innovation, regulation, and coordination. The three core takeaways listed above speak to the future of metaverse adaptation to privacy regulation and standards in the coming decades of advancement.

While these rapidly expanding and overlapping spheres of regulation and standardization may seem foolproof, several open questions remain. For example, a lack of privacy certification standards has made way for several conflicting and often difficult-to-reconcile claims of privacy adherence, without a clear authority to verify compliance with applicable regulations.

Additionally, there is evidence that the sheer rate of innovation and proliferation in the extended reality space is outpacing the ability of regulators and standards organizations to effectively protect user data, thereby eroding core principles of anonymity and privacy despite the regulations and standards that seek to uphold them.

Finally, our understanding of the ethics of privacy in immersive and AI technologies is constantly evolving, raising questions as to how regulations and standards may shift alongside it. These open questions, and the uncertainty they introduce, may change the way consumers and creators alike interact with the metaverse, and they merit further study for those working in this space.

Love it or hate it, the emerging metaverse has brought key questions of privacy to the forefront of our understanding of how technology interacts with societal and market interests. As we look beyond today’s technology to the innovation of tomorrow, we hope this post has given you an understanding of how regulations and standards might play an important role in shaping the immersive interactions of the future.

Related reading

Take a tour of the top twelve metaverse use cases, one use case at a time!

Find out why the metaverse needs 5G to bring disruptive VR, XR, and Web 3.0 to life.

Will metaverse universities be a thing of the future? Learn how XR can transform education through the metaverse.

Learn more about the importance of technology standards.

The Ericsson Blog


At the Ericsson Blog, we provide insight to make complex ideas on technology, innovation and business simple.