What is data extractivism?

When we hear the word extractivism, we think of the extraction of natural resources: an economic and political model that causes great damage to the environment, requires large amounts of common goods such as water, and affects the communities living near the extraction sites in multiple ways, including sexual violence and the mass displacement of communities.

This extractivism is rooted in the capitalist system and establishes an international division of labor that assigns to some countries the role of importing raw materials to be processed and to others that of exporting them. This division benefits the economic growth of the former but disregards the terrible damage generated in the countries from which the raw materials are extracted (Rosa Luxemburg Foundation, 2013).

Why then talk about extractivism on our personal data? 

Paola Ricaurte mentions in “Decolonizing and depatriarchalizing technologies” (2022) that “extractivism in the territories, although in the first instance it alludes to the dispossession of natural resources, ultimately means to remove from their place, to dislocate, to deprive communities of the ways of sustaining life (community organization, ancestral knowledge, technologies, cosmovisions, spiritualities, native seeds, practices to strengthen the social fabric, ways of building a sense of the common and imaginaries of the future)”.

With the passage of time and the development of new technologies, it has become essential to understand extractivism as a process of dispossession of bodies-territories in its material, immaterial, and relational senses, because the threatened common goods are not only those taken from the land (Ricaurte, 2022), but also those taken from us.

When we talk about data extractivism, then, we refer to a global process in which large volumes of data are extracted from people (data touching the most intimate sphere of our lives and our subjectivity) for the economic gain of large international technology companies, at the expense of our security and our autonomy to decide for ourselves on the Internet.

Moreover, storing these vast amounts of data requires building physical infrastructure, which in turn depends on the extraction of natural resources and the use of common goods such as water.

In his 2023 blog post “Digital Capitalism is a Mine, Not a Cloud,” Maximilian Jung argues that the big companies that appropriate, aggregate, and sell the data they mine decide what data is “worth” collecting and how it is stored, tagged, and analyzed, often without informed consent. He adds that while the violence of data collection replicates the practices of historical colonialism, the mass of data captured and commodified, particularly through automated algorithmic processing, deepens current forms of racial, gender, and class oppression.

With data extractivism, capitalism has invaded our autonomy and privacy, reducing us to sources of information to be mined: information that is part of our subjectivity, our creativity, our expression in the world, and our forms of resistance.

What we can do: apply the rules of consent

“Data as a business model = consent as an unequal power struggle.”

Peña and Varon, 2019

In “The consent of our data bodies: Lessons from feminist theories to enforce data protection,” Paz Peña and Joana Varon explain that we, as consumers of services from the very few companies that hold a monopoly on communication tools and social networks, are deprived of a “no” when confronted with the terms and conditions of these platforms.

We are forced into an oversimplified binary choice between agreeing and disagreeing, where disagreeing means digital exclusion, because we lose access to these tools and platforms.

The authors point out that this type of consent is an individual approach based on the “assumption that all people are autonomous, free and rational with the capacity to consent, without taking into account unequal power dynamics”. In other words, consent must be approached from an intersectional and accessible perspective that takes into account the different forms of inequality that shape our lives, including in the digital sphere.

The text proposes a feminist understanding of meaningful consent that we can apply to the collection of personal data. According to this view, consent must be:

a) Free, including the possibility of saying “no” or “only to some elements”;

b) Clear, that is, expressed in terms that can be understood;

c) Informed, given with full awareness of its uses and implications;

d) Current, meaning it can be modified at any time;

e) Specific to a given situation, so that a change in the situation, in the policies of use, or in any of the central elements of the relationship with the platform requires validating consent again;

f) Retractable, meaning it can be withdrawn at any time.

Joana Varon and Paz Peña conclude that if we want consent to data collection and processing to be meaningful, we would at least need to think about and design technologies that allow a material expression of all the characteristics of consent raised in feminist debates. More importantly, we would need to recognize that there are no universal standards when the conditions of power between those who give consent and those who receive it differ.

How about we start by reviewing what we are saying “yes” to regarding our digital bodies?


Ricaurte, Paola (2022). “Decolonizing and depatriarchalizing technologies”.
Jung, Maximilian (2023). “Digital Capitalism is a Mine, Not a Cloud”.
Peña, Paz and Varon, Joana (2019). “The consent of our data bodies: Lessons from feminist theories to enforce data protection”.