Privacy and Safety Issues With Facebook's New 'Metaventure'
With access to a user's 3D model and full-body digital tracking, attackers can create a near-perfect replica of a C-level executive to trick employees.
December 15, 2021
Facebook is moving away from being just a social media company and toward building a metaverse. In this mixed reality (MR) world where users can meet up using virtual reality headsets, aspects of social media, gaming, and cryptocurrencies come together.
However, the company has long faced criticism over how it protects its users' privacy. Trust was further eroded by revelations from former employee Frances Haugen about how Instagram's content recommendation algorithms negatively affect the mental health of teenage girls. Meta has pledged $50 million to build the metaverse responsibly, releasing a set of Responsible Innovation Principles and funding academic researchers to provide guidance on incorporating privacy. To protect consumers beyond voluntary corporate efforts, some states have enacted data privacy laws, such as California's CCPA, Virginia's VCDPA, and Colorado's ColoPA. A federal DATA Privacy Act is also under consideration.
But despite these laws and the company's apparently sincere efforts and intentions, the new market direction that Meta is pursuing appears to be fraught with even greater risks to privacy and safety.
More personal data will be collected to provide a fully immersive experience. VR headsets come with an array of microphones, cameras, and motion trackers that expose the user's location, appearance, and movements, on top of credit card numbers, messages, and chats.
The attack surface will expand. We have seen how the advent of the Internet of Things created a hacker's playground. Vulnerabilities will increase in the wake of the new technologies required to power MR. Within a few days of Facebook's purchase of Oculus, a hacker accessed Oculus' official development website by impersonating an administrator and edited projects and uploaded fake software.
Because virtual worlds mimic the real one, finer-grained tracking will make sensitive user actions more visible. Consider hackers observing an avatar entering PIN codes at a virtual storefront, or marketers watching eye movements to see what users are interested in. This is already happening: researchers exploited a popular virtual hangout app called Bigscreen using what they called a man-in-the-room attack. They were able to join private rooms, turn on victims' microphones, view their screens, and listen to their private conversations while remaining invisible.
Enhanced social engineering will proliferate in MR. According to one study, 98% of cyberattacks rely in part on some type of social engineering in which attackers impersonate a legitimate business. With access to a history of a user's movements in MR, their 3D model, and full-body digital tracking, attackers can create a near-perfect replica of a C-level executive to trick employees. In one such incident, a scammer used deepfake technology to mimic the voice of the head of a German energy company and trick an executive of a UK subsidiary into transferring a large sum of money to a Hungarian supplier.
Physical safety will be affected in a number of ways. VR imagery flashes very quickly, which has been linked to seizures in people with photosensitive epilepsy. Other side effects of VR include motion sickness, dizziness, disorientation, nausea, and loss of spatial awareness. More dangerously, emotion-hacking applications, such as horror games designed to induce fear and elevate players' heart rates, could endanger cardiac patients. In these ways, cyberattacks could turn physical.
Immersion could increase gaming addiction. Apart from business uses such as healthcare, real estate, and architecture, MR is most popular for video games. Young men without a college degree in the US, a demographic that recently experienced a 10% drop in employment rate, whiled away 75% of their extra leisure time playing games. American teenagers already spend almost one third of their day on screens, not including study time, and American adults are only 10 minutes behind teens. Cognitive scientists have found that VR causes "time compression": gamers can play for 28.5% longer than they would on a conventional monitor without realizing it. When gaming becomes compulsive, users may be more likely to resent or skip security measures, for instance by installing third-party apps before they have been vetted and made available through the official app store. The more time spent in the metaverse, the more information can be gathered and exploited.
The metaverse is in its infancy. As it matures and gains public trust, it will offer an increasingly outsized opportunity for hackers. To counter both the rising security threats and the potential effects on public health, this new space needs to be regulated.
Companies will likely have their own safety and age guidelines. Governmental bodies will certainly step in, as they already have for more general online privacy issues. In the end, users themselves will play a role in determining how cybersecurity concerns evolve along with the metaverse. Access to parental controls, such as screen-recording apps, timers, and virtual-world trackers, along with anti-spyware and audit-logging software to keep stalkers at bay, would certainly help. With such an alluring technology, it is important to invest in a safety layer from the get-go rather than attempt to retrofit protections later on.