Nikita Le Messurier

The digital age was meant to bring us closer, but it's dividing us

Updated: Jul 12, 2020

Do you know how it feels to be looked at with disdain? To witness someone wipe their hand after shaking yours? To need to teach your children to be wary of authority? I don't. And it's not fair that someone else does. If, like me, your grasp on this conversation isn't truly skin deep, assume ignorance.


In his recent book, The Body, it takes Bill Bryson a paragraph to explain that race is just one millimetre deep - a "sliver of epidermis", pigmented through evolution to assist with the absorption of sunlight. Paraphrasing took me a sentence. And yet, for over 400 years, a chasm of racial inequality has continued to expand, based solely on the pigmentation of several layers of cells. This makes clear that the denial of freedom, equality and basic human rights is predicated on nothing more than the strength of ignorance.


Ignorance is why the current conversation is so important.


Here in Australia, 434 Aboriginal people have died in custody since 1991. Despite making up only 2% of our population, Indigenous Australians account for 27% of the national prison population.


In the US, Black Americans have been harder hit than white Americans during COVID-19. In early June, the unemployment rate for Black Americans rose to 16.8%, roughly a third higher than the white unemployment rate of 12.4%.


In the UK, the Metropolitan Police are twice as likely to fine black people for breaching Coronavirus restrictions as white people. Black people are also four times more likely to die from COVID-19.


It would appear that on the wrong side of the racial equality spectrum, poverty and incarceration increase, while employment and healthcare decrease. Whilst indirectly painting a picture of white privilege, these current statistics also mask the extent to which black people have been disadvantaged. History helps us understand the bigger picture.


In the late 1500s, the buying and selling of humans - the slave trade - was established, disrupting an estimated 10-15 million black lives. Slavery removed basic human rights, turning many black people into assets that could be owned and traded for the economic gain and privilege of (mostly) white people. Not only did this promote race-based social divides, embedding class systems with bias and entitlement, but the oppression of black people encumbered their political and economic freedoms to such an extent that equal representation in these (and other) arenas has still not been recovered.


For example, the removal of land rights from indigenous and black people many decades ago has handicapped subsequent generations. Without land to borrow against, loan out or lease, families have been limited in how they can accrue wealth and expand their sphere of economic influence. In parallel, we also know that wealth directly impacts access to education and opportunity. Over time, this has had a crippling effect on the inequality gap, as the disadvantage of black families - and relative advantage of white - has become embedded, making racism systemic.


Whilst it's clear society has developed to absorb the structures that support and perpetuate racism, it's not just our physical worlds that have been impacted. The digital world we inhabit is no different - it is skewed towards exacerbating inequality, too. The platforms we use to socialise, consume and broadcast information also have inbuilt structures that breed ignorance. Let me explain what I mean.


You have a supercomputer in your pocket, which allows you to interact with people, platforms and content. These interactions produce digital crumbs, or data. This data is captured through your apps, processed, and shared with third parties, and with third parties of those third parties, giving a great number of organisations access to information about you through your digital interactions. For the most part, laws ensure that only non-PII data - data stripped of personally identifiable information, like your name, address or credit card number - can be shared in this manner. But the truth is, location data alone is enough to identify an individual.


I currently work for an organisation that cares deeply about how data analytics can have a positive impact; I'm proud to be able to say that we help organisations harness data to do good things, like decrease our carbon footprint or fight human trafficking. However, not all data analytics are created equal. When I was at university, I took part in an experiment as part of a module on Big Data. Each classmate was given a phone, and was instructed to keep that phone by their side for a week. At the end of the week, we collected the phones and extracted just one source of data: the GPS information. Location data tells a rich story - it's peppered with patterns that betray our unique habits and preferences; everything from a favourite coffee shop to how often we exercise, or who we are dating. With the limited location data collected over that week, it wasn't difficult to match each phone to the right student. Clearly, it doesn't take PII data to identify an individual.
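To make that tangible, here's a minimal sketch of how such a matching could work - my own illustration, not the actual experiment we ran. The names, coordinates and thresholds are all hypothetical; the point is that a handful of habitual locations is often signature enough.

```python
from collections import Counter

# Hypothetical illustration: each trace is a list of (lat, lon) points,
# rounded to roughly 100 m. No PII is involved at any step.

def top_places(trace, n=3):
    """Return the n most frequently visited grid cells in a GPS trace."""
    cells = [(round(lat, 3), round(lon, 3)) for lat, lon in trace]
    return set(cell for cell, _ in Counter(cells).most_common(n))

def match_trace(anonymous_trace, known_profiles):
    """Match an anonymous trace to the profile sharing the most top places."""
    anon_places = top_places(anonymous_trace)
    return max(known_profiles,
               key=lambda name: len(anon_places & top_places(known_profiles[name])))

# Known habits (e.g. home and gym locations gleaned from public check-ins).
profiles = {
    "student_a": [(51.507, -0.128)] * 40 + [(51.500, -0.120)] * 25,
    "student_b": [(51.515, -0.141)] * 50 + [(51.520, -0.105)] * 10,
}
anonymous = [(51.507, -0.128)] * 30 + [(51.500, -0.120)] * 12

print(match_trace(anonymous, profiles))  # -> student_a
```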


Think about what that means for a second. Location data is just one data source. Imagine what a company can learn about you when they have thousands, like Google, Facebook and Amazon do.


In an interview with Shoshana Zuboff, the author of The Age of Surveillance Capitalism, HBR's Azeem Azhar talks with her about how data is "collated, crunched and turned into predictions about what we might want to do next". When there is enough data (and trust me, there is enough data - watch The Great Hack on Netflix), conclusions about how you will vote, what kinds of articles you will read, what you will purchase next and how you will feel several hours from now become increasingly easy for algorithms to draw. These conclusions are calculated based on your past behaviours, the people you interact with, and the activity of people digitally similar to you. With each new calculation your box is more precisely defined, and you simultaneously become a richer asset to the platforms you frequent, to their third parties, and to the third parties of those third parties… you get the drift.
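To give a feel for what "the activity of people digitally similar to you" means in practice, here's a hedged sketch of the idea - a toy neighbour-based prediction, not any platform's real algorithm. The users, topics and similarity measure are invented purely for illustration.

```python
import math

# Hypothetical engagement history: 1 = engaged with a topic, 0 = did not.
topics = ["politics", "sport", "tech", "race_equality"]
users = {
    "you":   [1, 0, 1, None],  # None = topic you've never been shown
    "alice": [1, 0, 1, 0],
    "bob":   [0, 1, 0, 1],
}

def cosine(a, b):
    """Cosine similarity over the topics both users have seen."""
    pairs = [(x, y) for x, y in zip(a, b) if x is not None and y is not None]
    dot = sum(x * y for x, y in pairs)
    na = math.sqrt(sum(x * x for x, _ in pairs))
    nb = math.sqrt(sum(y * y for _, y in pairs))
    return dot / (na * nb) if na and nb else 0.0

def predict(user, topic_idx):
    """Weight each neighbour's behaviour by their similarity to `user`."""
    sims = [(cosine(users[user], v), v[topic_idx])
            for k, v in users.items() if k != user]
    total = sum(abs(s) for s, _ in sims)
    return sum(s * r for s, r in sims) / total if total else 0.0

# Your nearest neighbour didn't engage, so the predicted interest is low -
# and content on that topic may simply never reach you.
print(predict("you", topics.index("race_equality")))  # -> 0.0
```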


All of this is possible because the algorithms governing these platforms are trading, moment by moment, on predictions about you and what you will do next. Understanding your behavioural patterns is all the ammunition needed to capture and keep your attention with tailored content you will subsequently 'choose' to consume. The more accurately these companies know you, the more profitable they can be, and so the algorithms constantly recalibrate to match you with information - in the form of ads, products or content - that you already want. Here begins our constant exposure to information, content and opinion that conforms to who we already are and reaffirms our existing beliefs.
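The feedback loop itself is easy to sketch. In this toy simulation - my own construction, assuming a recommender that always serves the highest-scoring topic and nudges that score up whenever you engage - the narrowing happens all by itself.

```python
import random

random.seed(0)

# Toy model: the platform's running estimate of your interest in each topic.
scores = {"politics": 0.5, "sport": 0.5, "race_equality": 0.5}

for step in range(50):
    # Always recommend the current top-scoring topic...
    shown = max(scores, key=scores.get)
    # ...and the user, nudged by familiarity, usually engages with it.
    engaged = random.random() < scores[shown]
    # Engagement raises the score; the bubble tightens every iteration.
    if engaged:
        scores[shown] = min(1.0, scores[shown] + 0.05)
    else:
        scores[shown] = max(0.0, scores[shown] - 0.02)

print(scores)  # one topic comes to dominate; the rest fade from view
```

Nothing in this loop is malicious - narrowing is simply the natural equilibrium of optimising for engagement.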


This effect is known as the filter bubble, and it is how the societal structures of inequality have been built into our digital world.


This is particularly concerning when it comes to news and current affairs. A filter bubble tends to exclude content a user doesn’t typically engage with, isolating that user from differing points of view. Let's look at a few examples, to really understand what this means for us.


In conversation with an ABC reporter, researcher Dr Robert Elliot Smith references a study which found black people were more likely to receive news content about racial discrimination than white people, "because big data-analysis algorithms basically said only black people cared about those topics." We've already talked about how racial bias is embedded in society, so I'll ask just one question here. If the people who discriminate - who are ultimately the problem - are not exposed to information on racial discrimination, how are they supposed to change? With racism as culturally embedded as it is now, it is easy for people to overlook their own contribution to racial inequality. If we continue to ignore it, it will only embed more deeply.


In 2018, The New York Times revealed how the inherent structure of Facebook was directly linked to anti-refugee attacks in Germany. The platform gave a local firefighter, who attempted arson on the house of a refugee family, a community echo chamber that perpetuated his fears and anxieties about immigration. Further analysis of refugee attacks across the country found that when "per-person Facebook use rose one standard deviation above the national average, attacks on refugees increased by about 50%". A year earlier, the publication also illuminated how Google's algorithms enabled the sale of ads tied to racist and bigoted keywords, and would "automatically suggest more offensive terms as part of the process".


The filter bubble phenomenon is a pandemic of division in its own right: there is no doubt these algorithms have the power to fuel ignorance, perpetuating racism and inequality. The hardest hit will be those vulnerable to suggestive content, as much as those on the wrong side of racism.


Thankfully, many platforms now have early iterations of algorithms intended to find and quash hate speech or violent content. But these are very much in their infancy. Until these algorithms reach a certain level of sophistication, they won't be able to detect the regurgitation of subtler cultural racism already embedded in our communities, or counteract the filter bubble's failure to expose us to content beyond our preexisting beliefs. Until that day, our digital worlds will continue to amplify whatever racist tendencies we accrue out here in the real world.


The Black Lives Matter movement stands against systemic racism, and that includes the racial bias perpetuated through the digital platforms we engage with. To support this movement, and to shift the needle towards equality, there are numerous ways we can collectively tackle digital racism.


Governments can play a role in data governance by writing policy to protect our digital worlds against racism and division.


Technology companies can make greater efforts to seed algorithms with an equality compass. They can give digital consumers the choice to opt into the filter bubble, and empower users to adjust the strength of that filter (a simple sketch of what that could look like follows below). They can also use their platforms to facilitate and engage us in the wider conversation about race and inequality. Big tech companies have rich profiles on each and every user; it would be easy for them to facilitate a program aimed at connecting people from different racial backgrounds and communities, for the purpose of education. Exposure to someone else's truth is a stepping stone to eradicating ignorance and encouraging empathy.
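What might that filter dial look like? Here's a hypothetical sketch - not any platform's real feature - where the user, not the algorithm, chooses how much of their feed comes from inside the bubble.

```python
import random

def recommend(personalised, diverse, filter_strength, n=5):
    """Blend two content pools: filter_strength=1.0 is a pure bubble,
    0.0 is fully diverse. The user, not the platform, sets the dial."""
    picks = []
    for _ in range(n):
        pool = personalised if random.random() < filter_strength else diverse
        picks.append(random.choice(pool))
    return picks

# Hypothetical content pools for illustration.
personalised = ["story you already agree with"] * 3
diverse = ["perspective from another community", "fact-checked counterpoint"]

print(recommend(personalised, diverse, filter_strength=0.4))
```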


As consumers, we aren't powerless. We can make a conscious effort to engage with platforms that give us greater control of our data and support racial equality. We can assume ignorance. We can question what we see, hear and read on these platforms, fact-check, and consciously seek out opinions beyond the purview of our preexisting beliefs. We can be mindful that everything we say and do online is tracked. Our data informs our filter bubble, which informs the content we receive, but also what others receive through us. Our children are shaping a lifetime of filter bubble implications right now; we are never too young or too old to start questioning how we engage with content and people online. Every single one of us is in the privileged position to fight for change and to make change.


We have a platform for a voice, but we also have a platform that we can choose to leverage for education - against our own ignorance and the ignorance of others. If you have access to this message, you have the power to be a voice or a catalyst for change.


Government policy around data matters because black lives matter.


Companies' use of your data matters because black lives matter.


Your approach to your data matters because black lives matter.

