The future of data privacy: from data rights to Web 3.0

From GDPR to data rights

Two industries call their customers users: illegal drugs and software. If you are not paying for the product, you are the product. These emphatic, much-repeated quotes from the documentary The Social Dilemma, available on Netflix, helped popularize the debate over how various companies monetize our data and how the addiction fostered by social media keeps this machinery running.

On its face, this is a popular, didactic, and successful narrative, and it produced a chorus of comments about the need to log out of all social media immediately. Followed, of course, by a pause to check one’s favorite feeds and post an indignant statement. Looking more closely, however, some details give pause. The first is the near-total absence of experts on the political and social implications of technological and digital progress, such as Evgeny Morozov. The second, and what prompted this article, is how superficially the issue of data protection is treated from a human rights perspective, and where that debate will take the future of the customer-company relationship.

The Brazilian General Data Protection Law (LGPD), passed in August 2018 and in force since September 2020, pushed the market toward a compliance effort with a far more legalistic and protectionist bias than genuine interest in what the law proposes: ensuring the security of people’s personal and sensitive data while respecting their freedom and privacy. The focus fell on the punishments companies could suffer and the damage infringements could inflict on brand image, rather than on treating these measures as part of corporate sustainability and regeneration policies, for example.

Much has changed regarding human rights, and various social actors have been working to move the conversation out of the legal department and into marketing departments and board meetings. This shows how corporate accountability must extend well beyond a privacy policy page or the cookie consent banner customers see when they enter a brand’s digital channels.

The work being done by InternetLab is one example. InternetLab is an independent research center that fosters academic debate around law and technology, especially internet policy. As a nonprofit, it acts as a bridge between academics, representatives of the public and private sectors, and civil society.

Since 2016, long before the debate took shape in the Brazilian market, InternetLab has been conducting a study called “Quem defende seus dados?” (“Who Defends Your Data?”, in free translation), inspired by a similar initiative in the United States called “Who Has Your Back?”.

The goal of this research is to assess how Brazilian internet service providers handle data and how they behave in terms of transparency, privacy, and personal data protection policies. The criteria evaluated aim to capture each company’s public commitment to user privacy and data protection. At the same time, they can serve as a guide for other markets on what to watch for when protecting users’ and customers’ data.

The categories assessed in this research include: the disclosure of information on data protection policies; companies’ privacy-friendly public stances; data delivery protocols for investigations; the defense of users before the judiciary; how companies produce and disclose their transparency and data protection impact reports; and how they notify users about the need to share data with administrative or judicial authorities, improving the conditions for a full defense against abuses and irregularities.

It is worth noting that, of the components InternetLab assesses, only the first, the data protection policy, is commonly addressed and disclosed by companies now openly concerned with this issue. A deeper look at the other components, however, reveals a dimension that has everything to do with the protection of rights. It hints that today’s major blind spot in this debate is the shift from a data protection narrative to a data rights narrative.

In the book Weapons of Math Destruction, the American activist and data scientist Cathy O’Neil exposes a series of cases in which biased decisions produced by algorithms harmed thousands of people. O’Neil’s main argument challenges the common-sense notion that, because algorithms originate in science, they are free of the errors humans commit out of particular ideals or prejudices. Examples include excluding Black people from recruitment processes, favoring certain profiles when renewing car insurance, over-policing favela communities, and rewarding or punishing public school teachers based on their students’ performance. All of these are incidents O’Neil highlights in which the algorithm failed and the numbers produced injustice, rather than helping society become more equal through the use of data.

An aggravating factor is that most decisions made by algorithms rest on opaque methods of scoring and categorizing individuals. As a result, these decisions are hard to refute or contest. Algorithms are making decisions that cannot be clearly justified, yet they are perceived as absolute truths.

Nina da Hora, a data scientist and student from Rio de Janeiro, Brazil, is another name gaining traction on the data rights activist scene. As a researcher and science communicator, da Hora focuses on algorithmic ethics and has joined a group of Black scientists supporting a global campaign to ban facial recognition technology, which is deployed with biases that reinforce racism and discrimination in various ways.

The premise behind this demand is that algorithmic facial recognition reproduces the same biases and racism that already exist in society. It is not rare for such technology to fail, and people have even been arrested after being mistaken for someone else.

In 2020, several US technology companies that offered this type of solution, such as IBM, Amazon, and Microsoft, decided to stop or suspend supplying it to law enforcement after people were wrongly identified during the Black Lives Matter protests. For an example within the Brazilian reality: in April 2021, CUFA, the Central Única das Favelas (Central Union of the Slums, in free translation), canceled a facial recognition mechanism that registered people from favela communities for a program donating cestas básicas, kits of staple goods distributed to low-income families.

Beyond isolated cases of companies using cutting-edge technology to gather and use sensitive data, it is critical that every industry be aware of the potential damage a leak of sensitive data may cause: racial or ethnic data, religious or philosophical convictions, political opinions, financial data, union membership, and matters of genetics, biometrics, health, or sex life. This awareness paves the way for broader, engaged, and transparent debates on data rights and gives the employees in charge of these issues greater empathy toward the use of customers’ and stakeholders’ data.

After all, who would want to be dropped from a job interview, or have a lease refused, because of leaked data? The more the debates on data protection align with those on human rights and sustainability, the more likely this demand is to gain traction in the corporate world and, consequently, in the broader defense of citizenship rights.

From data rights to blockchain: Web 3.0’s impact on individual privacy

Moving this discussion forward a few years, into the era of Web 3.0, data storage is likely to become decentralized: out of the hands of giant technology companies and onto blockchain-based networks. This sort of network allows data to be written into “blocks” that link together to form chains, and it is maintained by millions of independent contributors around the globe. Users will then be able to access thousands of data centers and choose who stores their data, rather than settling for big tech players like Amazon, Google, and Microsoft, which currently lead the cloud storage market.

This change in infrastructure mindset increases data security. Through peer-to-peer (P2P) technology, which lets multiple users exchange resources directly and as equals, data are always encrypted and cannot be shared without a connection to the network. Each block is cryptographically linked to the block that precedes it and the one that comes immediately after, so tampering with any block breaks the chain. According to experts such as Colin Evran, from Protocol Labs, the blockchain technology company that created Filecoin, Web 3.0’s blockchain technology is very secure, with no reports of it being hacked.

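To make that linking idea concrete, here is a minimal Python sketch of a hash-linked chain. The `new_block` and `verify_chain` helpers are illustrative inventions, not a real blockchain client; actual networks layer consensus, replication, and encryption on top of this structure.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents with SHA-256."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def new_block(data: str, prev_hash: str) -> dict:
    """Create a block that records the hash of its predecessor."""
    return {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}

def verify_chain(chain: list) -> bool:
    """Valid only while every block still references the unaltered
    hash of the block immediately before it."""
    for prev, current in zip(chain, chain[1:]):
        if current["prev_hash"] != block_hash(prev):
            return False
    return True

# Build a tiny chain, then show that tampering is detectable.
genesis = new_block("genesis", prev_hash="0" * 64)
second = new_block("user record A", prev_hash=block_hash(genesis))
third = new_block("user record B", prev_hash=block_hash(second))
chain = [genesis, second, third]

print(verify_chain(chain))   # True
second["data"] = "altered"   # tamper with a stored record
print(verify_chain(chain))   # False: the link from block 3 back to block 2 breaks
```

Because each block stores the hash of its predecessor, altering any stored record invalidates every link after it, which is what makes tampering detectable.
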
In this scenario, users’ data transparently belong to them and cannot be claimed as the property of any company or institution. Moreover, users can see clearly who has access to their data and what type of access they have.

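As a rough illustration of that visibility, the hypothetical sketch below models a user-inspectable access ledger; the class and field names are invented for this example rather than drawn from any real Web 3.0 API.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AccessEntry:
    accessor: str     # who touched the data, e.g. "analytics-partner"
    scope: str        # which data was involved, e.g. "purchase-history"
    access_type: str  # the kind of access: "read", "write", "share"
    timestamp: str    # when it happened (ISO 8601, UTC)

class AccessLedger:
    """Append-only log the data owner can inspect at any time."""

    def __init__(self) -> None:
        self._entries = []

    def record(self, accessor: str, scope: str, access_type: str) -> None:
        now = datetime.now(timezone.utc).isoformat()
        self._entries.append(AccessEntry(accessor, scope, access_type, now))

    def report(self) -> list:
        # The owner sees every access: who, to what, and of what kind.
        return list(self._entries)

ledger = AccessLedger()
ledger.record("analytics-partner", "purchase-history", "read")
ledger.record("storage-provider", "profile", "write")
for entry in ledger.report():
    print(entry)
```
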
Taking it a step further, and assuming the metaverse trend becomes highly prevalent in the future, it is understandable that this kind of universe would require users to have “persistent digital identities,” combining real identities, physical worlds, and avatars. These identities would let users carry their experience across worlds as needed.

To ease this transition and preserve unique items tied to a specific identity, people will increasingly turn to NFTs (non-fungible tokens) recorded on encrypted blockchains. The use of identities will then be self-sovereign: only the identity’s owner may dispose of their data and decide how it is used, regardless of the “world” in which they operate.

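A toy Python model of that self-sovereign principle might look like the sketch below, where only the identity’s owner holds the grant table and decides which “world” may read which attribute; every name here is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class SelfSovereignIdentity:
    """Hypothetical model: the owner alone holds the grant table."""
    owner: str
    attributes: dict = field(default_factory=dict)
    grants: dict = field(default_factory=dict)  # world -> set of attribute names

    def grant(self, world: str, attribute: str) -> None:
        """The owner allows one world to read one attribute."""
        self.grants.setdefault(world, set()).add(attribute)

    def revoke(self, world: str, attribute: str) -> None:
        """The owner withdraws that permission at any time."""
        self.grants.get(world, set()).discard(attribute)

    def read(self, world: str, attribute: str):
        """A world sees an attribute only while the owner's grant stands."""
        if attribute in self.grants.get(world, set()):
            return self.attributes[attribute]
        raise PermissionError(f"{world} has no grant for '{attribute}'")

identity = SelfSovereignIdentity("alice", {"avatar": "alice.glb", "email": "a@example.com"})
identity.grant("game_world", "avatar")
print(identity.read("game_world", "avatar"))  # allowed while the grant stands
identity.revoke("game_world", "avatar")
# identity.read("game_world", "avatar")       # would now raise PermissionError
```
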
But until Web 3.0 takes hold, what can a company do about data privacy today?

1) Be transparent and upfront about how the company and its partners use customers’ and users’ data. The more open the company is about this, the more users will trust it.

2) Be explicit about the value exchanged in businesses that need data for the operation to work, such as discount and cashback programs and commercial social media. Show how the company monetizes user data and what it intends to give back in return for that monetization.

3) Arrange external audits to protect users’ privacy inside and outside the company. It is often necessary to call in an outside auditor to ensure that all practices related to the issue are handled seriously.

4) Recognize that emerging blockchain technologies have the potential to drastically alter the data collection and privacy scenario, as well as the data security universe. That way, the company can find ways to recalibrate in time, avoiding being left out in the cold while causing the least disruption to its business.