Tokenizing virtual identity is the latest buzzword in the world of technology. With the rise of blockchain and AI, the process of tokenizing virtual identity has become more feasible and efficient. In a world that is increasingly dependent on digital communication and transactions, virtual identity has become an essential aspect of our lives. From social media to online banking, virtual identity is crucial for individuals and organizations alike. This article explores the inevitable impact of blockchain and AI on tokenizing virtual identity.
What Are Blockchain and AI?
To understand the role of blockchain and AI in tokenizing virtual identity, we need to first understand what these technologies are. Blockchain is a decentralized and distributed digital ledger that records transactions across multiple computers, allowing secure and transparent storage of data. AI, on the other hand, refers to the simulation of human intelligence in machines that can perform tasks that typically require human cognition, such as learning, reasoning, and problem-solving.
The Benefits of Tokenizing Virtual Identity
Tokenizing virtual identity offers several benefits. Firstly, it provides a higher degree of security than traditional identity management systems, as it is based on cryptography and decentralized storage. Secondly, it offers greater control and ownership of personal data, allowing individuals to manage and monetize their identity. Thirdly, it offers greater efficiency by reducing the need for intermediaries and streamlining identity verification processes.
The Role of Blockchain in Tokenizing Identity
Blockchain plays a crucial role in tokenizing virtual identity. By providing a decentralized and secure platform for storing and managing identity data, blockchain ensures that personal data is owned and controlled by individuals, rather than centralized institutions. Blockchain also enables the creation of self-sovereign identities, where individuals have complete control over their identity data and can share it securely with trusted parties.
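To make the idea of sharing identity data "securely with trusted parties" a little more concrete, here is a minimal, hypothetical sketch in Python of how a self-sovereign identity claim could be issued and checked with public-key signatures, using the widely available cryptography package. The issuer, the claim contents and the did:example identifier are illustrative assumptions, not part of any specific blockchain identity standard.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Issuer side (e.g. a bank or university): sign a claim about the holder.
issuer_key = Ed25519PrivateKey.generate()
claim = b'{"subject": "did:example:alice", "over_18": true}'   # illustrative claim
signature = issuer_key.sign(claim)

# Verifier side: needs only the issuer's public key and the claim the holder presents.
issuer_public = issuer_key.public_key()
try:
    issuer_public.verify(signature, claim)   # raises InvalidSignature if tampered with
    print("claim accepted")
except InvalidSignature:
    print("claim rejected")
```

In a tokenized-identity setup, the holder (not a central database) would store the signed claim and choose when to present it; only the issuer's public key needs to be widely known.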
The Role of AI in Tokenizing Identity
AI plays a crucial role in tokenizing virtual identity by automating identity verification processes. By leveraging machine learning algorithms, AI can analyze large volumes of data and make intelligent decisions about identity verification. This can help reduce the risk of fraud and improve the efficiency of identity verification processes.
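As a rough illustration of the kind of automation described above, the sketch below trains a simple anomaly detector on synthetic identity-verification events and flags an outlier. It is a toy example: the feature names, values and thresholds are invented for demonstration, and a production verification system would rely on far richer signals and review processes.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic verification events: [login_hour, geo_distance_km, attempts, doc_match_score]
normal = rng.normal(loc=[13, 5, 1, 0.95], scale=[4, 10, 0.5, 0.03], size=(1000, 4))
suspicious = np.array([[3, 4200, 7, 0.41]])   # odd hour, far away, many attempts, poor match

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))   # -1 marks the event as anomalous, 1 as normal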
Tokenizing Virtual Identity: Use Cases
Tokenizing virtual identity has several use cases. For example, it can be used for secure and decentralized voting systems, where individuals can verify their identity and cast their vote securely and anonymously. It can also be used for secure and decentralized identity verification for financial and healthcare services, reducing the risk of identity theft and fraud.
Tokenizing Virtual Identity: Challenges
Tokenizing virtual identity also presents several challenges. One of the main challenges is interoperability, as different blockchain networks and AI systems may not be compatible with each other. Another challenge is scalability, as blockchain and AI systems may not be able to handle the volume of data required for identity verification on a large scale.
Security Concerns in Tokenizing Identity
Security is a key concern in tokenizing virtual identity. While blockchain and AI offer greater security than traditional identity management systems, they are not immune to attacks. Hackers could potentially exploit vulnerabilities in blockchain and AI systems to gain access to personal data. It is therefore crucial to implement robust security measures to protect personal data.
Privacy Issues in Tokenizing Identity
Privacy is another key concern in tokenizing virtual identity. While tokenizing virtual identity offers greater control and ownership of personal data, it also raises concerns about data privacy. It is essential to ensure that personal data is not shared without consent and that individuals have the right to access, modify, and delete their data.
Legal Implications of Tokenizing Identity
Tokenizing virtual identity also has legal implications. As personal data becomes more valuable, it is crucial to ensure that there are adequate laws and regulations in place to protect personal data. It is also essential to ensure that individuals have the right to access and control their data, and that they are not discriminated against based on their identity.
The Future of Tokenizing Virtual Identity
The future of tokenizing virtual identity looks bright. As blockchain and AI continue to evolve, we can expect to see more secure, efficient, and decentralized identity management systems. We can also expect to see more use cases for tokenizing virtual identity, from secure and anonymous voting systems to decentralized identity verification for financial and healthcare services.
Embracing Blockchain & AI for Identity Management
In conclusion, tokenizing virtual identity is an inevitable trend that will revolutionize the way we manage identity. By leveraging blockchain and AI, we can create more secure, efficient, and decentralized identity management systems that give individuals greater control and ownership of their personal data. While there are challenges and concerns associated with tokenizing virtual identity, these can be addressed through robust security measures, privacy protections, and adequate laws and regulations. As we continue to embrace blockchain and AI for identity management, we can look forward to a more secure, efficient, and decentralized future.
Facebook has been under fire recently, with explosive whistleblower allegations and continuing regulatory headaches. But things might have just got worse for Facebook’s 3 billion users—could it be the turning point that finally incentivises people to delete their accounts?
If you care about your data, it might be. According to a new report in Vice’s Motherboard, Facebook has no idea what it does with your data, or where it goes. That’s despite the fact that Facebook is one of the most data-hungry platforms in the world.
Motherboard published the leaked document, written by Facebook privacy engineers on the social network’s Ad and Business Product team, in full.
“We’ve built systems with open borders. The result of these open systems and open culture is well described with an analogy: Imagine you hold a bottle of ink in your hand. This bottle of ink is a mixture of all kinds of user data (3PD, 1PD, SCD, Europe, etc.)
“You pour that ink into a lake of water (our open data systems; our open culture) … and it flows … everywhere. How do you put that ink back in the bottle? How do you organize it again, such that it only flows to the allowed places in the lake?”
As Motherboard explains: 3PD means third-party data; 1PD means first-party data; SCD means sensitive categories data.
Another highlight in the document reads: “We can’t confidently make controlled policy changes or external commitments such as ‘we will not use X data for Y purpose.’ And yet, this is exactly what regulators expect us to do.”
The problem with the leaked Facebook document
So what’s the problem with this? Privacy regulation such as the EU General Data Protection Regulation (GDPR)—which is thought of as the “gold standard” for people’s data protection rights—stipulates that data must be collected for a specific purpose. In other words, it can’t be collected for one reason, and then reused for something else.
The latest Facebook document shows the social network faces a challenge in complying with this, since it appears to lack control over the data in the first place.
“Not knowing where all the data is creates a fundamental problem within any business but when that data is personal user information, it causes huge privacy headaches and should be dealt with immediately,” says Jake Moore, global cybersecurity advisor at ESET.
A spokeswoman at Facebook owner Meta denies that the social network falls foul of regulation. “Considering this document does not describe our extensive processes and controls to comply with privacy regulations, it's simply inaccurate to conclude that it demonstrates non-compliance.
“New privacy regulations across the globe introduce different requirements and this document reflects the technical solutions we’re building to scale the current measures we have in place to manage data and meet our obligations."
Time to delete Facebook?
Facebook saw a decline in user numbers for the first time this year—numbers that have since recovered slightly—as its data-hungry practices became clearer to all.
At the same time, Facebook has been hit hard by Apple’s App Tracking Transparency features, which allow people to prevent ad tracking on their iPhone. However, these features don’t prevent Facebook from collecting first party data—the data you provide to it on its site.
If you want to delete Facebook, I’ve written an article detailing the steps required to do so. If you are not quite ready yet, it’s worth considering deleting the app on your phone and instead using it on your computer's browser, to at least limit the amount of data Facebook can collect.
Fully homomorphic encryption allows us to run analysis on data without ever seeing the contents. It could help us reap the full benefits of big data, from fighting financial fraud to catching diseases early
LIKE any doctor, Jacques Fellay wants to give his patients the best care possible. But his instrument of choice is no scalpel or stethoscope, it is far more powerful than that. Hidden inside each of us are genetic markers that can tell doctors like Fellay which individuals are susceptible to diseases such as AIDS, hepatitis and more. If he can learn to read these clues, then Fellay would have advance warning of who requires early treatment.
This could be life-saving. The trouble is, teasing out the relationships between genetic markers and diseases requires an awful lot of data, more than any one hospital has on its own. You might think hospitals could pool their information, but it isn’t so simple. Genetic data contains all sorts of sensitive details about people that could lead to embarrassment, discrimination or worse. Ethical worries of this sort are a serious roadblock for Fellay, who is based at Lausanne University Hospital in Switzerland. “We have the technology, we have the ideas,” he says. “But putting together a large enough data set is more often than not the limiting factor.”
Fellay’s concerns are a microcosm of one of the world’s biggest technological problems. The inability to safely share data hampers progress in all kinds of other spheres too, from detecting financial crime to responding to disasters and governing nations effectively. Now, a new kind of encryption is making it possible to wring the juice out of data without anyone ever actually seeing it. This could help end big data’s big privacy problem – and Fellay’s patients could be some of the first to benefit.
It was more than 15 years ago that we first heard that “data is the new oil”, a phrase coined by the British mathematician and marketing expert Clive Humby. Today, we are used to the idea that personal data is valuable. Companies like Meta, which owns Facebook, and Google’s owner Alphabet grew into multibillion-dollar behemoths by collecting information about us and using it to sell targeted advertising.
Data could do good for all of us too. Fellay’s work is one example of how medical data might be used to make us healthier. Plus, Meta shares anonymised user data with aid organisations to help plan responses to floods and wildfires, in a project called Disaster Maps. And in the US, around 1400 colleges analyse academic records to spot students who are likely to drop out and provide them with extra support. These are just a few examples out of many – data is a currency that helps make the modern world go around.
Getting such insights often means publishing or sharing the data. That way, more people can look at it and conduct analyses, potentially drawing out unforeseen conclusions. Those who collect the data often don’t have the skills or advanced AI tools to make the best use of it, either, so it pays to share it with firms or organisations that do. Even if no outside analysis is happening, the data has to be kept somewhere, which often means on a cloud storage server, owned by an external company.
You can’t share raw data unthinkingly. It will typically contain sensitive personal details, anything from names and addresses to voting records and medical information. There is an obligation to keep this information private, not just because it is the right thing to do, but because of stringent privacy laws, such as the European Union’s General Data Protection Regulation (GDPR). Breaches can see big fines.
Over the past few decades, we have come up with ways of trying to preserve people’s privacy while sharing data. The traditional approach is to remove information that could identify someone or make these details less precise, says privacy expert Yves-Alexandre de Montjoye at Imperial College London. You might replace dates of birth with an age bracket, for example. But that is no longer enough. “It was OK in the 90s, but it doesn’t really work any more,” says de Montjoye. There is an enormous amount of information available about people online, so even seemingly insignificant nuggets can be cross-referenced with public information to identify individuals.
One significant case of reidentification from 2021 involves apparently anonymised data sold to a data broker by the dating app Grindr, which is used by gay people among others. A media outlet called The Pillar obtained it and correlated the location pings of a particular mobile phone represented in the data with the known movements of a high-ranking US priest, showing that the phone popped up regularly near his home and at the locations of multiple meetings he had attended. The implication was that this priest had used Grindr, and a scandal ensued because Catholic priests are required to abstain from sexual relationships and the church considers homosexual activity a sin.
A more sophisticated way of maintaining people’s privacy has emerged recently, called differential privacy. In this approach, the manager of a database never shares the whole thing. Instead, they allow people to ask questions about the statistical properties of the data – for example, “what proportion of people have cancer?” – and provide answers. Yet if enough clever questions are asked, this can still lead to private details being triangulated. So the database manager also uses statistical techniques to inject errors into the answers, for example recording the wrong cancer status for some people when totting up totals. Done carefully, this doesn’t affect the statistical validity of the data, but it does make it much harder to identify individuals. The US Census Bureau adopted this method when the time came to release statistics based on its 2020 census.
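As a rough illustration of how a database manager might inject such errors, the snippet below applies the Laplace mechanism, one standard way of implementing differential privacy, to a single count query. The count and epsilon value are made up for the example, and this is not necessarily the exact mechanism the Census Bureau used; choosing epsilon well, and accounting for repeated queries, is where the real difficulty lies.

```python
import numpy as np

def dp_count(true_count, epsilon, sensitivity=1.0, rng=None):
    """Return a noisy count using the Laplace mechanism.

    sensitivity is how much one person can change the answer (1 for a count);
    a smaller epsilon means more noise and stronger privacy.
    """
    rng = rng or np.random.default_rng()
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Hypothetical query: "how many people in the database have cancer?"
print(dp_count(true_count=1342, epsilon=0.5))   # e.g. 1339.7 -- close, but deniable
```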
Trust no one
Still, differential privacy has its limits. It only provides statistical patterns and can’t flag up specific records – for instance to highlight someone at risk of disease, as Fellay would like to do. And while the idea is “beautiful”, says de Montjoye, getting it to work in practice is hard.
There is a completely different and more extreme solution, however, one with origins going back 40 years. What if you could encrypt and share data in such a way that others could analyse it and perform calculations on it, but never actually see it? It would be a bit like placing a precious gemstone in a glovebox, one of those sealed chambers used in labs for handling hazardous material. You could invite people to put their arms into the gloves and handle the gem. But they wouldn’t have free access and could never steal anything.
This was the thought that occurred to Ronald Rivest, Len Adleman and Michael Dertouzos at the Massachusetts Institute of Technology in 1978. They devised a theoretical way of making the equivalent of a secure glovebox to protect data. It rested on a mathematical idea called a homomorphism, which refers to the ability to map data from one form to another without changing its underlying structure. Much of this hinges on using algebra to represent the same numbers in different ways.
Imagine you want to share a database with an AI analytics company, but it contains private information. The AI firm won’t give you the algorithm it uses to analyse data because it is commercially sensitive. So, to get around this, you homomorphically encrypt the data and send it to the company. It has no key to decrypt the data. But the firm can analyse the data and get a result, which itself is encrypted. Although the firm has no idea what it means, it can send it back to you. Crucially, you can now simply decrypt the result and it will make total sense.
“The promise is massive,” says Tom Rondeau at the US Defense Advanced Research Projects Agency (DARPA), which is one of many organisations investigating the technology. “It’s almost hard to put a bound to what we can do if we have this kind of technology.”
In the 30 years since the method was proposed, researchers devised homomorphic encryption schemes that allowed them to carry out a restricted set of operations, for instance only additions or multiplications. Yet fully homomorphic encryption, or FHE, which would let you run any program on the encrypted data, remained elusive. “FHE was what we thought of as being the holy grail in those days,” says Marten van Dijk at CWI, the national research institute for mathematics and computer science in the Netherlands. “It was kind of unimaginable.”
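To give a feel for what one of these "restricted" schemes looks like, here is a toy, Paillier-style additively homomorphic cipher in plain Python: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so someone who never holds the key can still add encrypted numbers. This is a sketch for illustration only, not how the researchers mentioned here implemented anything; the tiny primes are hopelessly insecure, and real deployments use keys thousands of bits long and hardened libraries rather than hand-rolled code.

```python
import math
import random

def keygen(p=10007, q=10009):                 # toy primes; real keys are ~2048 bits
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    mu = pow(lam, -1, n)                      # valid because g = n + 1 (Python 3.8+)
    return (n,), (lam, mu)

def encrypt(pub, m):
    (n,) = pub
    nsq = n * n
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(n + 1, m, nsq) * pow(r, n, nsq)) % nsq

def decrypt(pub, priv, c):
    (n,) = pub
    lam, mu = priv
    x = pow(c, lam, n * n)
    return ((x - 1) // n * mu) % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 20), encrypt(pub, 22)
# The party holding only ciphertexts multiplies them; the key holder decrypts the sum.
assert decrypt(pub, priv, (c1 * c2) % (pub[0] ** 2)) == 42
```

The catch, as the article goes on to explain, is that such schemes support only one kind of operation; running arbitrary programs on encrypted data is what full FHE had to solve.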
One approach to homomorphic encryption at the time involved an idea called lattice cryptography. This encrypts ordinary numbers by mapping them onto a grid with many more dimensions than the standard two. It worked – but only up to a point. Each computation ended up adding randomness to the data. As a result, doing anything more than a simple computation led to so much randomness building up that the answer became unreadable.
In 2009, Craig Gentry, then a PhD student at Stanford University in California, made a breakthrough. His brilliant solution was to periodically remove this randomness by decrypting the data under a secondary covering of encryption. If that sounds paradoxical, imagine that glovebox with the gem inside. Gentry’s scheme was like putting one glovebox inside another, so that the first one could be opened while still encased in a layer of security. This provided a workable FHE scheme for the first time.
Workable, but still slow: computations on the FHE-encrypted data could take millions of times longer than identical ones on raw data. Gentry went on to work at IBM, and over the next decade, he and others toiled to make the process quicker by improving the underlying mathematics. But lately the focus has shifted, says Michael Osborne at IBM Research in Zurich, Switzerland. There is a growing realisation that massive speed enhancements can be achieved by optimising the way cryptography is applied for specific uses. “We’re getting orders of magnitudes improvements,” says Osborne.
IBM now has a suite of FHE tools that can run AI and other analyses on encrypted data. Its researchers have shown they can detect fraudulent transactions in encrypted credit card data using an artificial neural network that can crunch 4000 records per second. They also demonstrated that they could use the same kind of analysis to scour the encrypted CT scans of more than 1500 people’s lungs to detect signs of covid-19 infection.
Also in the works are real-world, proof-of-concept projects with a variety of customers. In 2020, IBM revealed the results of a pilot study conducted with the Brazilian bank Banco Bradesco. Privacy concerns and regulations often prevent banks from sharing sensitive data either internally or externally. But in the study, IBM showed it could use machine learning to analyse encrypted financial transactions from the bank’s customers to predict if they were likely to take out a loan. The system was able to make predictions for more than 16,500 customers in 10 seconds and it performed just as accurately as the same analysis performed on unencrypted data.
Suspicious activity
Other companies are keen on this extreme form of encryption too. Computer scientist Shafi Goldwasser, a co-founder of privacy technology start-up Duality, says the firm is achieving significantly faster speeds by helping customers better structure their data and tailoring tools to their problems. Duality’s encryption tech has already been integrated into the software systems that technology giant Oracle uses to detect financial crimes, where it is assisting banks in sharing data to detect suspicious activity.
Still, for most applications, FHE processing remains at least 100,000 times slower than working on unencrypted data, says Rondeau. This is why, in 2020, DARPA launched a programme called Data Protection in Virtual Environments to create specialised chips designed to run FHE. Lattice-encrypted data comes in much larger chunks than normal chips are used to dealing with. So several research teams involved in the project, including one led by Duality, are investigating ways to alter circuits to efficiently process, store and move this kind of data. The goal is to analyse any FHE-encrypted data just 10 times slower than usual, says Rondeau, who is managing the programme.
Even if it were lightning fast, FHE wouldn’t be flawless. Van Dijk says it doesn’t work well with certain kinds of program, such as those that contain branching logic made up of “if this, do that” operations. Meanwhile, information security researcher Martin Albrecht at Royal Holloway, University of London, points out that the justification for FHE is based on the need to share data so it can be analysed. But a lot of routine data analysis isn’t that complicated – doing it yourself might sometimes be simpler than getting to grips with FHE.
For his part, de Montjoye is a proponent of privacy engineering: not relying on one technology to protect people’s data, but combining several approaches in a defensive package. FHE is a great addition to that toolbox, he reckons, but not a standalone winner.
This is exactly the approach that Fellay and his colleagues have taken to smooth the sharing of medical data. Fellay worked with computer scientists at the Swiss Federal Institute of Technology in Lausanne who created a scheme combining FHE with another privacy-preserving tactic called secure multiparty computation (SMC). This sees the different organisations join up chunks of their data in such a way that none of the private details from any organisation can be retrieved.
In a paper published in October 2021, the team used a combination of FHE and SMC to securely pool data from multiple sources and use it to predict the efficacy of cancer treatments or identify specific variations in people’s genomes that predict the progression of HIV infection. The trial was so successful that the team has now deployed the technology to allow Switzerland’s five university hospitals to share patient data, both for medical research and to help doctors personalise treatments. “We’re implementing it in real life,” says Fellay, “making the data of the Swiss hospitals shareable to answer any research question as long as the data exists.”
If data is the new oil, then it seems the world’s thirst for it isn’t letting up. FHE could be akin to a new mining technology, one that will open up some of the most valuable but currently inaccessible deposits. Its slow speed may be a stumbling block. But, as Goldwasser says, comparing the technology with completely unencrypted processing makes no sense. “If you believe that security is not a plus, but it’s a must,” she says, “then in some sense there is no overhead.”
I first wrote about the Metaverse in 2018, and overhauled my thinking in a January 2020 update: The Metaverse: What It Is, Where to Find it, Who Will Build It, and Fortnite. Since then, a lot has happened. COVID-19 forced hundreds of millions into Zoomschool and remote work. Roblox became one of the most popular entertainment experiences in history. Google Trends’ index on the phrase ‘The Metaverse’ set a new ‘100’ in March 2021. Against this baseline, use of the term never exceeded seven from January 2005 through to December 2020. With that in mind, I thought it was time to do an update - one that reflects how my thinking has changed over the past 18 months and addresses the questions I’ve received during this time, such as “Is the Metaverse here?”, “When will it arrive?”, and “What does it need to grow?”. Welcome to the Foreword to ‘THE METAVERSE PRIMER’.
When did the mobile internet era begin? Some would start this history with the very first mobile phones. Others might wait until the commercial deployment of 2G, which was the first digital wireless network. Or the introduction of the Wireless Application Protocol standard, which gave us WAP browsers and thus the ability to access a (rather primitive) version of most websites from nearly any ‘dumbphone’. Or maybe it started with the BlackBerry 6000, or 7000 or 8000 series? At least one of them was the first mainstream mobile device designed for on-the-go data. Most would say it’s the iPhone, which came more than a decade after the first BlackBerry and eight years after WAP, nearly two decades after 2G, 34 years after the first mobile phone call, and has since defined many of the mobile internet era’s visual design principles, economics, and business practices.
In truth, there’s never a flip. We can identify when a specific technology was created, tested, or deployed, but not when an era precisely occurred. This is because technological change requires a lot of technological changes, plural, to all come together. The electricity revolution, for example, was not a single period of steady growth. Instead, it was two separate waves of technological, industrial, and process-related transformations.
The first wave began around 1881, when Thomas Edison stood up electric power stations in Manhattan and London. Although this was a quick start to the era of electrical power — Edison had created the first working incandescent light bulb only two years earlier, and was only one year into its commercialization — industrial adoption was slow. Some 30 years after Edison’s first stations, less than 10% of mechanical drive power in the United States came from electricity (two thirds of which was generated locally, rather than from a grid). But then suddenly, the second wave began. Between 1910 and 1920, electricity’s share of mechanical drive power quintupled to over 50% (nearly two thirds of which came from independent electric utilities). By 1929 it stood at 78%.
The difference between the first and second waves is not how much of American industry used electricity, but the extent to which it did — and designed around it.
When plants first adopted electrical power, it was typically used for lighting and/or to replace a plant’s on-premises source of power (usually steam). These plants did not, however, rethink or replace the legacy infrastructure which would carry this power throughout the factory and put it to work. Instead, they continued to use a lumbering network of cogs and gears that were messy and loud and dangerous, difficult to upgrade or change, were either ‘all on’ or ‘all off’ (and therefore required the same amount of power to support a single operating station or the entire plant, and suffered from countless ‘single points of failure’), and struggled to support specialized work.
But eventually, new technologies and understandings gave factories both the reason and ability to be redesigned end-to-end for electricity, from replacing cogs with electric wires, to installing individual stations with bespoke and dedicated electrically-powered motors for functions such as sewing, cutting, pressing, and welding.
The benefits were wide-ranging. The same plant now had considerably more space, more light, better air, and less life-threatening equipment. What’s more, individual stations could be powered individually (which increased safety, while reducing costs and downtime), and use more specialized equipment (e.g. electric socket wrenches).
In addition, factories could configure their production areas around the logic of the production process, rather than hulking equipment, and even reconfigure these areas on a regular basis. These two changes meant that far more industries could deploy assembly lines in their plants (which had actually first emerged in the late 1700s), while those that already had such lines could extend them further and more efficiently. In 1913, for example, Henry Ford created the first moving assembly line, which used electricity and conveyor belts to reduce the production time per car from 12.5 hours to 93 minutes, while also using less power. According to historian David Nye, Ford’s famous Highland Park plant was “built on the assumption that electrical light and power should be available everywhere.”
Once a few plants began this transformation, the entire market was forced to catch up, thereby spurring more investment and innovation in electricity-based infrastructure, equipment, and processes. Within a year of its first moving assembly line, Ford was producing more cars than the rest of the industry combined. By its 10 millionth car, it had built more than half of all cars on the road.
This ‘second wave’ of industrial electricity adoption didn’t depend on a single visionary making an evolutionary leap from Thomas Edison’s core work. Nor was it driven just by an increasing number of industrial power stations. Instead, it reflected a critical mass of interconnected innovations, spanning power management, manufacturing hardware, production theory, and more. Some of these innovations fit in the palm of a plant manager’s hand, others needed a room, a few required a city, and they all depended on people and processes.
To return to Nye, “Henry Ford didn’t first conceive of the assembly line and then delegate its development to his managers. … [The] Highland Park facility brought together managers and engineers who collectively knew most of the manufacturing processes used in the United States … they pooled their ideas and drew on their varied work experiences to create a new method of production.” This process, which happened at national scale, led to the ‘roaring twenties’, which saw the greatest average annual increases in labor and capital productivity in a hundred years.
Powering the Mobile Internet
This is how to think about the mobile internet era. The iPhone feels like the start of the mobile internet because it united and/or distilled all of the things we now think of as ‘the mobile internet’ into a single minimum viable product that we could touch and hold and love. But the mobile internet was created — and driven — by so much more.
In fact, we probably don’t even mean the first iPhone but the second, the iPhone 3G (which saw the largest model-over-model growth of any iPhone, with over 4× the sales). This second iPhone was the first to include 3G, which made the mobile web usable, and operated the iOS App Store, which made wireless networks and smartphones useful.
But neither 3G nor the App Store were Apple-only innovations or creations. The iPhone accessed 3G networks via chips made by Infineon that connected via standards set by the ITU and GSMA, and which were deployed by wireless providers such as AT&T on top of wireless towers built by tower companies such as Crown Castle and American Tower. The iPhone had “an app for that” because millions of developers built them, just as thousands of different companies built specialized electric motor devices for factories in the 1920s. In addition, these apps were built on a wide variety of standards — from KDE to Java, HTML and Unity — which were established and/or maintained by outside parties (some of whom competed with Apple in key areas). The App Store’s payments worked because of digital payments systems and rails established by the major banks. The iPhone also depended on countless other technologies, from a Samsung CPU (licensed in turn from ARM), to an accelerometer from STMicroelectronics, Gorilla Glass from Corning, and other components from companies like Broadcom, Wolfson, and National Semiconductor.
All of the above creations and contributions, collectively, enabled the iPhone and started the mobile internet era. They also defined its improvement path.
Consider the iPhone 12, which was released in 2020. There was no amount of money Apple could have spent to release the iPhone 12 as its second model in 2008. Even if Apple could have devised a 5G network chip back then, there would have been no 5G networks for it to use, nor 5G wireless standards through which to communicate to these networks, and no apps that took advantage of its low latency or bandwidth. And even if Apple had made its own ARM-like GPU back in 2008 (more than a decade before ARM itself), game developers (which generate more than two thirds of App Store revenues) would have lacked the game-engine technologies required to take advantage of its superpowered capabilities.
Getting to the iPhone 12 required ecosystem-wide innovation and investments, most of which sat outside Apple’s purview (even though Apple’s lucrative iOS platform was the core driver of these advancements). The business case for Verizon’s 4G networks and American Tower Corporation’s wireless tower buildouts depended on the consumer and business demand for faster and better wireless for apps such as Spotify, Netflix and Snapchat. Without them, 4G’s ‘killer app’ would have been… slightly faster email. Better GPUs, meanwhile, were utilized by better games, and better cameras were made relevant by photo-sharing services such as Instagram. And this better hardware powered greater engagement, which drove greater growth and profits for these companies, thereby driving better products, apps, and services. Accordingly, we should think of the overall market as driving itself, just as the adoption of electrical grids led to innovation in small electric-powered industrial motors that in turn drove demand for the grid itself.
We must also consider the role of changing user capability. The first iPhone could have skipped the home button altogether, rather than waiting until the tenth. This would have opened up more room inside the device itself for higher-quality hardware or bigger batteries. But the home button was an important training exercise for what was a vastly more complex and capable mobile phone than consumers were used to. Like closing a clamshell phone, it was a safe, easy, and tactile way to ‘restart’ the iPhone if a user was confused or tapped the wrong app. It took a decade for consumers to be able to have no dedicated home button. This idea is critical. As time passes, consumers become increasingly familiar with advanced technology, and therefore better able to adopt further advances - some of which might have long been possible!
And just as consumers shift to new mindsets, so too does industry. Over the past 20 years, nearly every industry has hired, restructured, and re-oriented itself around mobile workflows, products, or business lines. This transformation is as significant as any hardware or software innovation — and, in turn, creates the business case for subsequent innovations.
Defining the Metaverse
This essay is the foreword to my nine-part and 33,000-word primer on the Metaverse, a term I’ve not yet mentioned, let alone described.
Before doing so, it was important for me to provide the context and evolutionary path of technologies such as ‘electricity’ and the ‘mobile internet’. Hopefully it provided a few lessons. First, the proliferation of these technologies fundamentally changed human culture, from where we lived to how we worked, what we made, what we bought, how, and from whom. Second, these ‘revolutions’ or ‘transformations’ really depended on a bundle of many different, secondary innovations and inventions that built upon and drove one another. Third, even the most detailed understanding of these newly-emergent technologies didn’t make clear which specific, secondary innovations and inventions they required in order to achieve mass adoption and change the world. And how they would change the world was almost entirely unknowable.
In other words, we should not expect a single, all-illuminating definition of the ‘Metaverse’. Especially not at a time in which the Metaverse has only just begun to emerge. Technologically driven transformation is too organic and unpredictable of a process. Furthermore, it’s this very messiness that enables and results in such large-scale disruption.
My goal therefore is to explain what makes the Metaverse so significant – i.e. deserving of the comparisons I offered above – and offer ways to understand how it might work and develop.
The Metaverse is best understood as ‘a quasi-successor state to the mobile internet’. This is because the Metaverse will not fundamentally replace the internet, but instead build upon and iteratively transform it. The best analogy here is the mobile internet, a ‘quasi-successor state’ to the internet established from the 1960s through the 1990s. Even though the mobile internet did not change the underlying architecture of the internet – and in fact, the vast majority of internet traffic today, including data sent to mobile devices, is still transmitted through and managed by fixed infrastructure – we still recognize it as iteratively different. This is because the mobile internet has led to changes in how we access the internet, where, when and why, as well as the devices we use, the companies we patron, the products and services we buy, the technologies we use, our culture, our business model, and our politics.
The Metaverse will be similarly transformative as it too advances and alters the role of computers and the internet in our lives.
The fixed-line internet of the 1990s and early 2000s inspired many of us to purchase our own personal computer. However, this device was largely isolated to our office, living room or bedroom. As a result, we had only occasional access to and usage of computing resources and an internet connection. The mobile internet led most humans globally to purchase their own personal computer and internet service, which meant almost everyone had continuous access to both compute and connectivity.
The Metaverse iterates further by placing everyone inside an ‘embodied’, or ‘virtual’ or ‘3D’, version of the internet, and on a nearly unending basis. In other words, we will constantly be ‘within’ the internet, rather than have access to it, within the billions of interconnected computers around us, rather than occasionally reach for them, and alongside all other users in real time.
The progression listed above is a helpful way to understand what the Metaverse changes. But it doesn’t explain what it is or what it’s like to experience. To that end, I’ll offer my best swing at a definition:
“The Metaverse is a massively scaled and interoperable network of real-time rendered 3D virtual worlds which can be experienced synchronously and persistently by an effectively unlimited number of users with an individual sense of presence, and with continuity of data, such as identity, history, entitlements, objects, communications, and payments.”
Most commonly, the Metaverse is mis-described as virtual reality. In truth, virtual reality is merely a way to experience the Metaverse. To say VR is the Metaverse is like saying the mobile internet is an app. Note, too, that hundreds of millions are already participating in virtual worlds on a daily basis (and spending tens of billions of hours a month inside them) without VR/AR/MR/XR devices. As a corollary to the above, VR headsets aren’t the Metaverse any more than smartphones are the mobile internet.
Sometimes the Metaverse is described as a user-generated virtual world or virtual world platform. This is like saying the internet is Facebook or Geocities. Facebook is a UGC-focused social network on the internet, while Geocities made it easy to create webpages that lived on the internet. UGC experiences are just one of many experiences on the internet.
Furthermore, the Metaverse doesn’t mean a video game. Video games are purpose-specific (even when the purpose is broad, like ‘fun’), unintegrated (i.e. Call of Duty is isolated from fellow portfolio title Overwatch), temporary (i.e. each game world ‘resets’ after a match) and capped in participants (e.g. 1MM concurrent Fortnite users are in over 100,000 separate simulations). Yes, we will play games in the Metaverse, and those games may have user caps and resets, but those are games in the Metaverse, not the Metaverse itself. Overall, the Metaverse will significantly broaden the number of virtual experiences used in everyday life (i.e. well beyond video games, which have existed for decades) and, in turn, expand the number of people who participate in them.
Lastly, the Metaverse isn’t tools like Unreal or Unity or WebXR or WebGPU. This is like saying the internet is TCP/IP, HTTP, or a web browser. These are protocols upon which the internet depends, and the software used to render it.
The Metaverse, like the internet, mobile internet, and process of electrification, is a network of interconnected experiences and applications, devices and products, tools and infrastructure. This is why we don’t even say that horizontally and vertically integrated giants such as Facebook, Google or Apple are an internet. Instead, they are destinations and ecosystems on or in the internet, or which provide access to and services for the internet. And of course, nearly all of the internet would exist without them.
The Metaverse Emerges
As I’ve written before, the full vision of the Metaverse is decades away. It requires extraordinary technical advancements (we are far from being able to produce shared, persistent simulations in which millions of users are synchronized in real time), and perhaps regulatory involvement too. In addition, it will require overhauls in business policies, and changes to consumer behavior.
But the term has become so recently popular because we can feel it beginning. This is one of the reasons why Fortnite and Roblox are so commonly conflated with the Metaverse. Just as the iPhone feels like the mobile internet because the device embodied the many innovations which enabled the mobile internet to go mainstream, these ‘games’ bring together many different technologies and trends to produce an experience which is simultaneously tangible and feels different from everything that came before. But they do not constitute the Metaverse.
Personally, I’m tracking the emergence of the Metaverse around eight core categories, which can be thought of as a stack (click each header for a dedicated essay).
Hardware: The sale and support of physical technologies and devices used to access, interact with, or develop the Metaverse. This includes, but is not limited to, consumer-facing hardware (such as VR headsets, mobile phones, and haptic gloves) as well as enterprise hardware (such as that used to operate or create virtual or AR-based environments, e.g. industrial cameras, projection and tracking systems, and scanning sensors). This category does not include compute-specific hardware, such as GPU chips and servers, or networking-specific hardware, such as fiber optic cabling or wireless chipsets.
Networking: The provisioning of persistent, real-time connections, high bandwidth, and decentralized data transmission by backbone providers, the networks, exchange centers, and services that route amongst them, as well as those managing ‘last mile’ data to consumers.
Compute: The enablement and supply of computing power to support the Metaverse, supporting such diverse and demanding functions as physics calculation, rendering, data reconciliation and synchronization, artificial intelligence, projection, motion capture and translation.
Virtual Platforms: The development and operation of immersive digital and often three-dimensional simulations, environments, and worlds wherein users and businesses can explore, create, socialize, and participate in a wide variety of experiences (e.g. race a car, paint a painting, attend a class, listen to music), and engage in economic activity. These businesses are differentiated from traditional online experiences and multiplayer video games by the existence of a large ecosystem of developers and content creators which generate the majority of content on and/or collect the majority of revenues built on top of the underlying platform.
Interchange Tools and Standards: The tools, protocols, formats, services, and engines which serve as actual or de facto standards for interoperability, and enable the creation, operation and ongoing improvements to the Metaverse. These standards support activities such as rendering, physics, and AI, as well as asset formats and their import/export from experience to experience, forward compatibility management and updating, tooling, and authoring activities, and information management.
Payments: The support of digital payment processes, platforms, and operations, which includes fiat on-ramps (a form of digital currency exchange) to pure-play digital currencies and financial services, including cryptocurrencies, such as bitcoin and ether, and other blockchain technologies.
Metaverse Content, Services, and Assets: The design/creation, sale, re-sale, storage, secure protection and financial management of digital assets, such as virtual goods and currencies, as connected to user data and identity. This contains all business and services “built on top of” and/or which “service” the Metaverse, and which are not vertically integrated into a virtual platform by the platform owner, including content which is built specifically for the Metaverse, independent of virtual platforms.
User Behaviors: Observable changes in consumer and business behaviors (including spend and investment, time and attention, decision-making and capability) which are either directly associated with the Metaverse, or otherwise enable it or reflect its principles and philosophy. These behaviors almost always seem like ‘trends’ (or, more pejoratively, ‘fads’) when they initially appear, but later show enduring global social significance.
(You’ll note ‘crypto’ or ‘blockchain technologies’ are not a category. Rather, they span and/or drive several categories, most notably compute, interchange tools and standards, and payments — potentially others as well.)
Each of these buckets is critical to the development of the Metaverse. In many cases, we have a good sense of how each one needs to develop, or at least where there’s a critical threshold (say, VR resolution and frame rates, or network latency).
But ultimately, how these many pieces come together and what they produce is the hard, important, and society-altering part of any Metaverse analysis, just as the electricity revolution was about more than the kilowatt hours produced per square mile in 1900s New York, and the internet about more than HTTP and broadband cabling.
Based on precedent, however, we can guess that the Metaverse will revolutionize nearly every industry and function. From healthcare to payments, consumer products, entertainment, hourly labor, and even sex work. In addition, altogether new industries, marketplaces and resources will be created to enable this future, as will novel types of skills, professions, and certifications. The collective value of these changes will be in the trillions.
On the third anniversary of the General Data Protection Regulation, Cooley started a series of webinars focused on the GDPR.
Our first webinar covers what we consider “the Top 10 key developments you should know” concerning the implementation of this ground-breaking personal data privacy regime.
https://videopress.com/embed/jeN8KgiT?preloadContent=metadata&hd=1
#1: GDPR: It’s here to stay, and it’s never going to go away!
There’s been some debate around the need to reform the GDPR. However, reform is unlikely in the short term: in its 2020 evaluation report, the European Commission concluded that the GDPR has met its objectives. In the Commission’s view, the GDPR has given individuals stronger rights, while businesses are developing a compliance culture and, among other things, using data protection as a competitive advantage.
#2: Playtime seems to be over (both for companies and DPAs)
Looking at the past three years of enforcement by the national data protection authorities, we can see a clear evolution:
From June to the end of 2018: National authorities were setting up and reorganizing their teams to align their internal structure and resources with their new roles under the GDPR. This resulted in very few enforcement actions
Year 2019: Enforcement increased, but it consisted mainly of small fines, mostly targeting small companies
Year 2020: National data protection authorities started imposing very high monetary penalties, but many of these were appealed
Year 2021: This year, we have started to see more mature and sophisticated enforcement decisions
#3: GDPR: the global ripple effect
GDPR has been a great inspiration around the globe. Some countries have started to implement new data protection frameworks that are aligned with the GDPR, such as the United States with the California Consumer Privacy Act and Brazil with the General Law for the Protection of Personal Data (LGPD). India is following closely, and a law is expected to be finalized at the end of this year.
#4: Data transfers have become a key challenge
Data transfers have become a key challenge for global organizations. Following the European Court of Justice Schrems II case, companies need to complete a Data Transfer Impact Assessment before transferring any data outside of the EEA, assessing the law and practice of the country of the data importer.
Although the European Court of Justice didn’t invalidate the SCCs, companies now also have to supplement them with additional contractual and technical measures following the European Data Protection Board guidance.
#5: Brexit has added an additional level of complexity
Following Brexit, we now have two GDPRs – a UK one and an EU one. Although currently both frameworks are basically identical, we may expect that there will be some deviations in the future. Brexit has also brought some duplications in relation to appointments of DPOs, representatives and BCRs.
#6: EU countries make use of the possibility to fine-tune the GDPR through national laws
The GDPR has brought a fair amount of harmonization to the EU data protection framework. However, it’s important to note that EU Member States still have the possibility to fine-tune the GDPR locally by imposing additional requirements in areas such as the appointment of DPOs, processing activities that require a Data Protection Impact Assessment, or the age under which parental consent is needed to provide online services to children.
#7: To consent or not to consent, that’s the question
The GDPR raises the bar for consent: pre-ticked boxes are not valid, companies must be able to demonstrate that individuals gave their consent freely, and consent can be withdrawn at any time. All of this makes consent a difficult legal basis to rely on.
#8: Regulator guidance: creating clarity or more confusion? (Thankfully it’s black and white… no grey areas to cause confusion)
The EDPB and the national data protection authorities have issued a lot of guidance since 2018 on multiple matters such as virtual voice assistants, data breach notifications, international data transfers and the concepts of controller and processor. In most cases the guidance is more restrictive than the GDPR.
The European Court of Justice has also had an active role in defining the GDPR through cases such as Fashion ID, Orange and Schrems II.
#9: Much more sophisticated and balanced data processing/sharing agreements
The relationship between data processors and controllers has become more mature and sophisticated. Every stage of that relationship – from onboarding, through contract execution, and across the whole contractual relationship – has been impacted by the GDPR.
#10: And more is yet to come: what about 2022?
The EU Commission is quite active on data protection. There’s new legislation on the horizon mirroring GDPR, such as the Artificial Intelligence Regulation. Another area where we expect changes is e-privacy.
From the United States’ perspective, there is a lot of activity and, as mentioned earlier, the GDPR has inspired much of it. Apart from the CCPA, Alabama enacted a data breach notification law in 2018, and other states such as Washington, Virginia and New York have begun to introduce baseline privacy legislation.
Cooley’s cyber/data/privacy group
50+ lawyers globally counseling on privacy, cybersecurity and data protection matters
Holistic approach to compliance and security, built to preserve and protect enterprise value
Blockchain is a distributed ledger technology that is revolutionizing the way we conduct transactions, protect our identity, and preserve our privacy. By providing a secure and transparent platform for recording and verifying transactions, blockchain is fortifying the traditional finance system and unlocking new opportunities for innovation and growth. With its decentralized and immutable nature, blockchain is also empowering individuals to take control of their personal data and protect it from unauthorized access and exploitation. Whether you are a business owner, investor, or consumer, blockchain is a technology that you cannot afford to ignore in today's digital age.
Blockchain and AI are revolutionizing the way we perceive identity. With virtual identity tokenization, individuals can take ownership of their digital self and protect their data. The impact of this technology is inevitable, and it will change the way we interact with the digital world forever.
The anime classic Ghost in the Shell has been praised for its exploration of transhumanist themes, questioning what it means to be human in a world where artificial intelligence is advancing rapidly. The central question of the film is whether AI is just a shell, or if it is capable of developing true consciousness and emotions.
As our lives become more intertwined with technology, the concept of virtual identity has become increasingly important. From social media profiles to online banking accounts, our virtual identities can have a significant impact on our lives. However, with the rise of AI and other advanced technologies, questions about the ethics of virtual identity are becoming more complex. In this article, we will explore the different systems and technologies that make up virtual identity, as well as the ethical considerations that must be taken into account when developing these systems.
As technology continues to advance, our lives are becoming increasingly intertwined with virtual spaces. From social media platforms to online gaming communities, virtual identities have become an integral part of our daily lives. In these virtual spaces, we have the opportunity to express ourselves, interact with others, and explore new identities. However, as we spend more time in these virtual spaces, it is important that we understand the systems, behaviours, and ethics related to virtual identities.
Virtual Identity and Digital Integrity
In today’s digital age, virtual identity has become an integral part of our online existence. It is the representation of who we are in the digital world, and it plays a significant role in our interactions with the online community. However, the growing concern of identity theft and data breaches highlights the need for a secure and reliable system to manage virtual identity. Blockchain technology has emerged as a potential solution to these challenges, offering a secure and decentralized platform for identity management. In this article, we will explore the role of blockchain in virtual identity and its impact on digital integrity.
Understanding the Blockchain Technology
Blockchain technology is a distributed ledger that provides a secure and transparent system for recording transactions. It is a decentralized system that operates on a peer-to-peer network, eliminating the need for a central authority to govern the transactions. Each block in the chain is linked to the previous block, creating an unalterable record of all the transactions. The security of the blockchain lies in its consensus mechanism, which ensures that all network participants agree on the validity of each transaction.
The Role of Blockchain in Identity Management
Blockchain technology offers a secure and decentralized platform for identity management, enabling individuals to have greater control over their personal data. Instead of relying on central authorities to manage identity, blockchain allows individuals to create and manage their own digital identities. This eliminates the need for third-party authentication, providing a more secure and efficient system for identity verification.
Safeguarding Personal Data with Blockchain
Blockchain technology provides a secure platform for storing and sharing personal data. The decentralization of the blockchain ensures that there is no single point of failure, making it difficult for hackers to breach the system. The use of encryption algorithms further enhances the security of the data, ensuring that only authorized individuals can access it.
The Benefits of Blockchain for Digital Integrity
Blockchain technology has the potential to revolutionize the way we manage digital identities, offering several benefits for digital integrity. Firstly, it provides a secure and decentralized platform for identity management, eliminating the need for third-party authentication. Secondly, it ensures the security of personal data, safeguarding against data breaches and identity theft. Thirdly, it provides greater transparency and accountability, enabling individuals to have greater control over their data.
Blockchain and Biometric Authentication
Blockchain technology can also be used for biometric authentication, providing an additional layer of security for identity management. Biometric authentication uses unique biological characteristics such as fingerprints and facial recognition to verify identity. By combining biometric authentication with blockchain, we can create a more secure and efficient system for identity verification.
The Future of Digital Identity with Blockchain
The future of digital identity is closely linked to the development of blockchain technology. With the increasing use of blockchain in identity management, we can expect to see a more secure and efficient system for managing virtual identity. The use of biometric authentication and encryption algorithms will further enhance the security of the system, providing a reliable platform for managing personal data.
Overcoming the Challenges of Blockchain Implementation
The implementation of blockchain technology presents several challenges, including scalability, interoperability and regulatory issues. Scalability is a major challenge for blockchain, as the system needs to be able to handle a large number of transactions. Interoperability is also a challenge, as different blockchain networks may not be compatible with each other. Regulatory issues also need to be addressed, as the use of blockchain in identity management raises several legal and ethical concerns.
Regulatory Frameworks for Blockchain and Virtual Identity
Regulatory frameworks for blockchain and virtual identity are still in the early stages of development. However, several initiatives have been launched to address the legal and ethical issues surrounding blockchain technology. The EU’s General Data Protection Regulation (GDPR) and the US’s National Institute of Standards and Technology (NIST) are two examples of regulatory frameworks that aim to promote the responsible use of blockchain in identity management.
Use Cases of Blockchain in Virtual Identity
Blockchain technology has several use cases in virtual identity, including digital identity management, biometric authentication, and secure data storage. The use of blockchain in virtual identity can also be extended to other applications, such as healthcare, finance, and e-commerce.
Conclusion: The Path Towards Digital Integrity
Blockchain technology has the potential to transform the way we manage virtual identity and promote digital integrity. By providing a secure and decentralized platform for identity management, blockchain can eliminate the need for third-party authentication, safeguard personal data, and enhance transparency and accountability. While there are still challenges to overcome, the future of digital identity looks promising with the use of blockchain technology.
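To make the "each block is linked to the previous block" idea concrete, here is a minimal, hypothetical hash-chain sketch in Python. It is not a real blockchain (there is no consensus, no network, no signatures), but it shows why a record cannot be quietly rewritten: only a fingerprint of the credential is stored, and changing any record invalidates every later link. The did:example identifiers and credential contents are invented for illustration.

```python
import hashlib
import json
import time

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_block(prev_hash: str, record: dict) -> dict:
    # Each block commits to the previous block's hash and to its own contents.
    block = {"prev": prev_hash, "time": time.time(), "record": record}
    block["hash"] = sha256_hex(json.dumps(block, sort_keys=True).encode())
    return block

def verify(chain: list) -> bool:
    prev_hash = "0" * 64
    for block in chain:
        body = {k: v for k, v in block.items() if k != "hash"}
        recomputed = sha256_hex(json.dumps(body, sort_keys=True).encode())
        if block["prev"] != prev_hash or block["hash"] != recomputed:
            return False
        prev_hash = block["hash"]
    return True

# Only a fingerprint of each credential goes on-chain; the document stays off-chain.
chain = [make_block("0" * 64, {"subject": "did:example:alice",
                               "credential_hash": sha256_hex(b"alice-passport-scan")})]
chain.append(make_block(chain[-1]["hash"],
                        {"subject": "did:example:bob",
                         "credential_hash": sha256_hex(b"bob-driving-licence")}))

print(verify(chain))          # True
chain[0]["record"]["subject"] = "did:example:mallory"
print(verify(chain))          # False: tampering is detected
```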
04 Feb ’23 | By Amit Ghosh
As the country pushes its sustainability agenda, the use of new technology deserves a closer look in order to make a difference in this cause.
When we examine blockchain’s role in environmental, social, and governance (ESG) policies and markets around the world, we can see how the technology is already changing ESG markets. If more Indian companies adopt blockchain as part of their sustainability practices and policies, we will be one step closer to realising the ambitious goals that the country and the world have set for themselves.
As the world moves towards a greener future, it is imperative for businesses to build and lead with sustainable practices. India, one of the most populous countries in the world, has a tremendous stake in the global responsibility towards building a more sustainable world. The responsibility is especially magnified given the country’s reputation as a major economic powerhouse that ranks among the world’s largest energy-consuming countries.