The future is here – revealing algorithmic racism

The final session of this year’s Understanding Race course focuses on the ever-deepening intersections between race, digital technology and social media.

Our main reading is Safiya U. Noble’s Algorithms of Oppression, which has made an important contribution to our understanding of the racialised nature of Internet search through its cogent analysis of the ways in which racial bias, against Black women and girls in particular, is encoded within Google’s algorithms. This work on the multiple, and as yet not fully understood, ways in which digital technology shapes our experiences of race is a growing interest for me. I began to research this area in my work with Justine Humphry on antiracism apps, and we are continuing to develop a research project examining the role of digital technologies in antiracism more globally. I also had the opportunity, at the recent Data Justice conference, to meet the computer scientists Syed Mustafa Ali and Seda Guerses, both of whom are working on the coloniality of computing and, in the case of Guerses, on ‘Protective Optimization Technologies’ in the realm of privacy. Ali’s work on decolonial computing is particularly exciting. And I am eagerly awaiting the outcome of Flavia Dzodan’s current research on the coloniality of the algorithm.

In this blog post I want to think not only about the substantive issues raised by Noble’s critique, and the consequences they have for the future, given the impossibility of separating our worlds online from those ‘in real life’. I also want to discuss the ways in which the problems of categorization and hierarchisation introduced by race are mirrored and amplified by the ubiquity of the algorithm. There are all kinds of implications here, not only for our understanding of the ever-changing modalities of race itself, but also for other discussions we have engaged in over the course of the semester, such as the conditions of coloniality and the new forms taken by racial capitalism. I cannot do justice to all of these themes here, but I do want to flag them as areas for further interrogation. I also want to make the point that this is a relatively new area of study for me, and that much of the language – be it from technology/computer studies or from media/film/cultural studies – is challenging for me, as I am sure it is for many. Nevertheless, this area of study really pushes us to think about the motility of race and, as Wendy Hui Kyong Chun writes, about what race does and/as technology.

Burgeoning research into race, digital technology and the Internet (Nakamura 2007, Daniels 2009, Sharma 2013) stresses that the digital both changes our understandings of race and creates new types of racial inequality (Nakamura and Chow-White 2012). Two main approaches to conceptualising the implications of digital technology and the Internet for the question ‘what is race’ have dominated. According to the first, the ‘biotechnical turn… which privileges the technological and digital over other forms of knowledge, mediation, and interaction’ (ibid.: 4) proves that race is first and foremost a matter of genetics, despite the dominance of social constructionism. This is the view of the US race scholar Henry Louis Gates, who has become a central figure in the digital genealogy industry through his role as producer of highly successful television series such as African American Lives, which have given rise to myriad social media forums and DIY reality television-style YouTube posts. The worrying predominance of DNA as the ultimate arbiter of race has been in the spotlight this week due to US politician Elizabeth Warren’s 23andme DNA test, taken to prove her claim that she has Native American ancestry, as Alondra Nelson discusses here.

Contra Gates, Paul Gilroy claims that the non-visible ‘nanological perspective’ afforded by digital technology means that the genetic theory of race can finally be laid to rest (Gilroy 1998). Evaluating this, Kyong Chun proposes that a view of ‘race as technology’ serves to ‘displace claims of race as either purely biological or purely cultural’ (Kyong Chun 2012: 38). She argues that the allure of digital genealogy is testament to the fact that race has always been about relationships between,

‘human and machine, human and animal, media and environment, mediation and embodiment, nature and culture, visibility and invisibility, privacy and publicity’ (ibid.: 39).

It is clear that the ‘turns away from deconstruction and cultural analysis and towards genomic thinking’ enabled by the spread of digital technology (Nakamura and Chow-White 2012: 5) are a game changer for race critical scholarship, not least because they seem to cement racial certainties and deepen racialised divisions.

Kyong Chun’s discussion of race as technology helps us to understand what race has been, and continues to be, used for. As I have also written about extensively, getting caught up in discussions about race as either biological or cultural hides the fact that race acts on or through these conceptualizations of the human rather than being one or the other. Kyong Chun discusses the fact that the separation of race into discrete biological or cultural variants is relatively new, ‘stemming from the acceptance of Mendelian genetics’ (ibid. 44). She quotes George Stocking, who wrote that both those who thought that racial differences were caused by environment and those who believed in heredity saw race as ‘an integrated linguistic and physical totality’ (ibid.). Race has also always been open to manipulation, which casts doubt on the fixity with which so many see it, as referring only to a theorization of human difference in biological/genetic terms. As Kyong Chun reminds us, ‘the term “breeding” exemplifies human races as technologically manipulable… eugenics is necessary because biology is not enough’ (ibid. 45). Race-making is always about creating the society to come, the one that is not yet here, the one that would be perfect were it not for the existence of the ‘not quite human’. So, policies such as segregation and anti-miscegenation in the US reveal the fact that ‘breeding populations, if they exist, are never simply natural but rather result from a complex negotiation between culture, society and biology’ (ibid.).

Segregation is an example of a racial technology which spatialised race differences. Following Grace Elizabeth Hale, Kyong Chun remarks that segregation became the order of the day in the US at a time when middle-class Black people had gained the ability to access white spaces, thus undermining the assumption that Black people could never be ‘anything other than poor and uneducated’ (ibid. 46). Segregation served to remove the existence of middle-class Black people from view, so as to make ‘clear distinctions in society where none necessarily existed’ (ibid.). The technology of race serves to solidify distinctions based on arbitrary characteristics and to embed them as hereditary, and thus immutable.

Kyong Chun calls on Heidegger, who made the point that the ‘essence of technology is not technological’ (ibid. 47). In other words, if we only examine the tools of technology and not what it reveals or ‘enframes’, we misunderstand the purpose of technology. For Heidegger, technology makes people into what he called ‘standing reserves’: ‘Everywhere everything is ordered to stand by, to be immediately on hand, indeed to stand there just so that it may be on call for a further ordering . . .’ So, as I understand it, the other becomes merely something for our use, not recognized as an autonomous being. The example that was pertinent for Heidegger (because of his support for the Nazis) was the reduction by the Nazis of racialised others to standing reserves, ‘some to be “destroyed”, others to be optimized and made more productive’ (ibid.). There is of course a parallel, or a continuity, here with the variable treatment of Africans and Indigenous peoples as either exploitable or ‘in the way’ of full conquest, as I discussed here.

In a similar manner to Stuart Hall’s idea of the genetic code, Kyong Chun explains that the utility of race lies in apparently revealing what is invisible (genetic traits) in order to ‘render everyone into a set of traits that are stored and transmitted’ (ibid. 48). Relating this to the history of slavery, she cites Hortense Spillers, who noted that enslaved people were considered neither men nor women because both were reduced to quantities to be accounted for. The constancy of race, ordering groups as it does into enduring categories, allows us to conceive of humanity as lasting ‘through time as a set of unchanging characteristics’ (ibid.). Thus, as many scholars of slavery have noted, despite formal changes to our interpretations of race, and to the legal frameworks governing the existence of racialised people in societies such as the US or Australia, the ‘afterlives’ of racist domination continue to shape everyday experience in myriad ways.

This continuity is at the heart of the critique of the digital technologies that now increasingly order the lives of vulnerable people in particular, used to make decisions about access, legality and ability across an ever-wider range of domains. This is the point made in the article ‘Friction-Free Racism’, which discusses the use of digital technologies such as facial recognition to determine who belongs in a given space, and who is ‘matter out of place’. The author, Chris Gillard, draws on Simone Browne’s book Dark Matters, which I discuss here, to show that technologies such as facial recognition, now ubiquitous across ‘airports, at borders, in stadiums, and in shopping malls’, encode the same purportedly ‘neutral’ ways of measuring people. Facial recognition does not differ widely from the old racial science of phrenology. For a good summary of this history, see Flavia Dzodan’s post ‘A simplified political history of big data’.

The ‘science’ of phrenology

Just like phrenology, Gillard points out, most facial recognition software is not merely concerned with individual recognition (such as using your face to unlock your phone) but rather,

‘are invested in taking the extra steps of assigning a subject to an identity category in terms of race, ethnicity, gender, sexuality, and matching those categories with guesses about emotions, intentions, relationships, and character to shore up forms of discrimination, both judicial and economic.’

If we think of the ways in which Facebook constantly asks how we feel or what we are thinking, and then matches these emotions along with data about our preferences, our friends and our political views to target advertising at us, we can understand the utility of this information for capitalism.

As Browne shows in Dark Matters, a central technique of race is the surveillance of racialised populations. Her powerful argument is that, in order to understand the widespread practice (and acceptance) of surveillance, we must understand the importance of the ontological condition of blackness for the massification of surveillance practices under modernity. One of the problems confronting scholars of race and digital technology today is the constant struggle to reveal the ways in which technology is invested in the embedding of racial categorization. Just as during slavery or segregation, or more recently in multiracial societies where those of migrant origin have been associated with the threat of crime, terrorism and so on, white people tend to see surveillance not as an incursion on privacy or freedom, but as necessary to ensure their safety. It is often heard that if you have nothing to hide, then you have nothing to fear from government incursions on privacy.

From ‘Why You Can’t Trust AI to Make Unbiased Hiring Decisions’, Time Magazine

In an age in which racism is understood as a moral failing, within a prevailing atmosphere of postracialism where racial literacy is lacking, it is commonly agreed that the tools of technology ensure that ‘human error’ does not intervene to reinstall racial division. The ‘beauty’ of technologies such as facial recognition software or artificial intelligence for making assessments about everything from home loans to college applications is that they purportedly take out the human dimension, thus shielding the user from ‘unconscious bias’. However, as Safiya Noble shows in Algorithms of Oppression, it is a myth that the algorithms that drive our interactions with the Internet are ‘benign, neutral and objective’ (Noble 2017: 11). Rather, because algorithms are essentially shaped by commercial interests, and are operationalised by people who do not exist outside of a racist society, racism is in fact integral to how the Internet works. Noble introduces the concept of ‘technological redlining’, analogous to the practice of redlining in housing, now banned in the US but still producing effects, to argue that ‘digital decisions reinforce oppressive social relationships and enact new modes of racial profiling’ (ibid. 10).
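Noble’s ‘technological redlining’ can be made concrete with a toy sketch. Everything below (the postcodes, the rates, the scoring formula) is invented for illustration and is not from Noble’s book; the point is simply that a model which never sees race, but which learns from postcode, reproduces the racialised geography baked into its training data.

```python
# Hypothetical illustration of 'technological redlining': a loan-scoring
# rule that is 'race-blind' on its face but uses postcode as a feature.
# Because residential segregation makes postcode a proxy for race, the
# purportedly neutral score reproduces the old redlining map.
# All data below is invented.

# Invented historical approval rates by postcode, shaped by redlining.
historical_approval_rate = {
    "02138": 0.90,  # hypothetical affluent, majority-white area
    "60636": 0.35,  # hypothetical formerly redlined, majority-Black area
}

def loan_score(income: float, postcode: str) -> float:
    """A 'neutral' score: income plus a postcode prior learned from past data."""
    base = min(income / 100_000, 1.0)           # income component, capped at 1
    prior = historical_approval_rate.get(postcode, 0.5)
    return 0.5 * base + 0.5 * prior             # postcode quietly does half the work

# Two applicants with identical incomes get different scores purely
# because of where segregation has placed them.
a = loan_score(60_000, "02138")
b = loan_score(60_000, "60636")
print(a, b)  # the postcode prior alone separates them
```

No ‘race’ variable appears anywhere in the code, yet the output is racially patterned: this is the sense in which ‘digital decisions reinforce oppressive social relationships’.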

And as Flavia Dzodan reminds us, we need to unmask the workings of the racial logics shaping both the need to collect ‘big data’ and the interpretations of that data for a wide range of uses. Dzodan remarks,

‘The history of Big Data is the history of racial hierarchies and the upholding of white supremacist power structures through the use of methodically collected surveys, community indexes and data points. However, none of these uses have been discontinued. They have just been slightly re-oriented to reflect our more contemporary “sensibilities”. Real estate brokers still evaluate the racial demographics of neighborhoods to determine the value of property. Non white and/ or poor neighborhoods can see their property values plummet if they fall within the “unacceptable” percentile of certain measurements. These real estate valuations, based on data collected by the State, can even have an effect in intergenerational wealth and affect families for decades. Healthcare providers can determine cost of coverage of certain demographics based on data such as eating habits, ethnic predisposition for certain diseases and eventual health predictions. Data from census and surveys is used to allocate funds for Government programs. City Councils can regulate educational resource investments on students based on parents’ incomes and predictive models of performance. Funds can be allocated based on expectations from historical data sets.’

Safiya Noble’s entry point to the subject of algorithmic oppression was her realization that a Google search for things that young Black girls may enjoy yielded overwhelmingly pornographic imagery that relied on sexualized-racist assumptions about Black women.

An example given by Noble

Thus, it is clear that Internet search is not neutral; rather it filters the world through a racist-sexist worldview that frames racialised and other marginalized people from the perspective of those in power, namely in this instance white men, who also make up the vast majority of those who work in the tech industry.

As Jessie Daniels notes, ‘the tech firms in Silicon Valley are predominantly led by White men and a few White women; yet the manual labor of assembling circuit boards is done by immigrants and outsourced labor, often women living in the global South’ (Daniels 2015: 1379). This is compounded by the fact that the Internet itself reproduces the racial hierarchy through, in one example, the way the ‘nearly ubiquitous white hand-pointer acts as a kind of avatar that, in turn, becomes “attached” to depictions of White people in advertisements, graphical communication settings, and web greeting cards’ (ibid.). Despite the racial categorization that occurs through the ‘application programme interface (API) of the Internet’ (Noble 2017: 14), Daniels notes that the Internet is still widely believed to be a racially neutral space.

Some of this has to do with the early days of the Internet, when it was marketed as a postracial utopia where differences between people would melt away and everyone could be whoever they wanted to be online. Lisa Nakamura has critiqued this, citing several phenomena that pertain to how race is reproduced in online culture.

The idea of ‘racial passing’, or the notion that one can take on another racial identity online, particularly while playing virtual reality games, has underpinned many of the beliefs about the postracial capacity of the Internet. This was accompanied by ‘digital utopianism’ – the idea that the Internet would be a truly postracial and post-sexist space. Digital media appeals to us because it gives us the impression that we can construct our lives from a wide range of available choices. However, this utopian view does not consider how, increasingly, there is no division between the online and offline worlds. Rather, the Internet and digital technology reflect and intensify ‘real’ life. Digitally enabled ‘passing’ creates an illusion of diversity where it doesn’t exist because, as Nakamura notes, the overwhelming tendency of white players of online games to take on Orientalised avatars does not threaten the security of their whiteness.

‘This type of Orientalized theatricality is a form of identity tourism; players who choose to perform this type of racial play are almost always white, and their appropriation of stereotyped male Asiatic samurai figures allows them to indulge in a dream of crossing over racial boundaries temporarily and recreationally. Choosing these stereotypes tips their interlocutors off to the fact that they are not “really” Asian; they are instead “playing” in an already familiar type of performance. Thus, the Orient is brought into the discourse, but only as a token or “type.”‘

Racial harmony in ‘Second Life’?

Nakamura calls this online ‘identity tourism’, which allows users to ‘wear race’ interchangeably and to unwear it at will, without feeling that their behavior is in any way racist, or indeed without having to be non-racist in their ‘offline’ existence. Rather, digital role play of this kind is a form of ‘blackface’ whereby players temporarily don the racial identities of those considered lesser in society, in fetishised ways that bear no resemblance to the actual lived experience of the people they are supposedly emulating. With regard to the tendency of the players Nakamura researched to play Orientalised characters such as samurai warriors, she notes the similarity of the experience to tourism:

‘Tourism is a particularly apt metaphor to describe the activity of racial identity appropriation, or “passing” in cyberspace. The activity of “surfing,” (an activity already associated with tourism in the mind of most Americans) the Internet not only reinforces the idea that cyberspace is not only a place where travel and mobility are featured attractions, but also figures it as a form of travel which is inherently recreational, exotic, and exciting, like surfing. The choice to enact oneself as a samurai warrior in LambdaMOO constitutes a form of identity tourism which allows a player to appropriate an Asian racial identity without any of the risks associated with being a racial minority in real life. While this might seem to offer a promising venue for non-Asian characters to see through the eyes of the Other by performing themselves as Asian through on-line textual interaction, the fact that the personae chosen are overwhelmingly Asian stereotypes blocks this possibility by reinforcing these stereotypes.’

Like Noble, Nakamura is attentive to the particular intersections of race and gender at play in identity tourism, where playing Asian female characters has both racist and sexist ramifications. These roles, she writes, ‘are doubly repressive because they enact a variety of identity tourism which cuts across the axes of gender and race, linking them in a powerful mix which brings together virtual sex, Orientalist stereotyping, and performance.’ So players could enjoy playing racialised characters but displayed no interest in hearing about the real experiences of racism endured by actual Asian people!

Discussions of racial passing and identity tourism are interesting in terms of tracking the history of how, as Sanjay Sharma puts it,

‘Modalities of race wildly proliferate in social media sites such as Facebook, Youtube and Twitter: casual racial banter, race-hate comments, “griefing”, images, videos and anti-racist sentiment bewilderingly intermingle, mash-up and virally circulate’ (Sharma 2013).

However, I am unsure of the extent of this digital ‘naivety’ today. The election of Donald Trump and the Brexit vote in the UK, among other political phenomena, have drawn attention to the power of social media algorithms to drive voter behaviour and affect political allegiances. As Jessie Daniels has shown in her book Cyber Racism, white supremacists and the far right were early adopters of the Internet and, to a great extent, what many mistakenly see as a resurgence of white supremacism has been due to the success with which they have used the Internet as an organizing and dissemination tool, allowing them to access the mainstream. The success with which far-right ideas, often spread through memes, have entered mainstream sensibility is not dissociable from the beliefs about the Internet as a neutral space discussed by Noble, Nakamura and others.

The belief in the superior ability of algorithms to manage outcomes, unencumbered by racial and other forms of bias, is then complemented by the dominance of the belief in free speech as a paramount value in society. In other words, the hegemonic liberal idea that computers cannot be biased combines with the notion that all ideas deserve an airing and can be assessed by free-thinking individuals to create the current predicament, wherein we are served a near-constant stream of racist, sexist, homophobic and transphobic ideas presented as mere opinions in the so-called ‘marketplace of ideas’.

Take as an example the introduction by the Australian far-right politician Pauline Hanson of a motion in the Senate in October 2018 stating ‘It’s OK to be white’. The governing Liberal-National coalition voted in support of the motion, and it was narrowly defeated by only three votes. The near success of the motion can be read as the result of a widespread wilful misunderstanding of what racism is, one which dehistoricises racism and reduces it to a sentiment that anyone could hold against a group of people perceived as different to themselves, as I show here. This belief clearly precedes the age of the Internet, but the idea that ‘anti-white racism’ not only exists but has now become more severe than the racism historically meted out to people of colour has arguably become dominant through its spread via social media. In fact, as many rushed to point out after the motion nearly passed the Australian Senate, the phrase ‘It’s OK to be white’ originates with the imageboard site 4chan, which has been central to white supremacist organizing, for example in the lead-up to the Charlottesville rally which saw the murder of the antifascist protester Heather Heyer. As reported in Newsweek,

‘Like many other trolling campaigns that have emerged in the era of President Donald Trump, “It’s Okay to Be White” started on the imageboard site 4chan, a favorite online hub for young, white males who consider themselves part of the so-called alt-right movement. Anonymous users of that site posted a “game plan” urging people to hang “It’s Okay to Be White” signs on college campuses in an attempt to bait people into an overreaction against an ostensibly benign statement. As one anonymous 4chan user envisioned it, media outlets would go “completely berserk” after the signs were discovered, revealing what the alt-right perceives as the media’s anti-white agenda.’

The phrase was also tweeted by David Duke, the former KKK Grand Wizard, who lauded its success in entering mainstream consciousness. A major conduit for that success was the Fox News presenter Tucker Carlson. As Newsweek reported: ‘“Being white by the way is not something you can control,” Carlson said to the camera in a priggish tone. “Like any ethnicity, you’re born with it. Which is why you shouldn’t attack people for it, and yet the left does constantly—in case you haven’t noticed.”’ Framing it in this way, coupled with a lack of knowledge about the history of race and a belief that ‘bad ideas’ can be debated in open forums such as social media, can easily lead to a situation in which Australian senators vote for a motion they apparently believe merely states what it says, namely that there is ‘nothing wrong’ with being white because no one can help the colour of the skin they are born with (an interpretation which has absolutely nothing to do with a critical whiteness studies reading). Following revelations of the extremist provenance of the phrase, government senators requested a re-vote and changed their vote to defeat the motion!

A mural of Trayvon Martin

Safiya Noble briefly touches on the importance of algorithms in the growth of white supremacism and the far right in Algorithms of Oppression when she discusses the case of Dylann Roof, who murdered nine African-American churchgoers in South Carolina in 2015. As was widely discussed at the time, Roof’s ‘racial manifesto’, posted on his website, revealed the thinking which motivated him to carry out the murders. In it he discusses his online research into ‘black on white crime’ after the Trayvon Martin killing, which led him first to the website of the Council of Conservative Citizens. While the CCC appears to be a legitimate source, Noble cites the Southern Poverty Law Centre, which notes its origins in the ‘old White Citizens Councils, which were formed in the 1950s and 1960s to battle school desegregation’ (Noble 2017: 119). The Council is opposed to the integration of racial groups and presented clearly biased and unempirical ideas about ‘black on white’ crime, promoting what the data shows to be ‘a patently false notion that Black violence on White Americans is an American crisis’ (ibid.). As Noble further remarks,

‘What is compelling about the alleged information that Roof accessed is how his search terms did not lead him to the Federal Bureau of Investigation (FBI) crime statistics on violence in the United States, which point to how crime against White Americans is largely an intraracial phenomenon’ (ibid. 121).

Searching for ‘Black on White crime’ in Google does not, she says, lead one to any ‘experts on race or to any universities, libraries, books, or articles about the history of race in the United States’ (ibid.). Search engines are thus leading their users to incorrect information which, in the case of Dylann Roof, had murderous consequences. His is not the only case, as the Utøya massacre attests.

These consequences cannot be detached from Google’s commercial interests, according to Noble. What ‘commercial search engines provide at the very top of the results ranking (on the first page) can have deleterious effects… What we find when we search on racial and gender identities is profitable to Google, as much as what we find when we search on racist concepts’ (ibid. 122). A case in point are the suggestions YouTube makes to viewers. YouTube’s guiding premise is that its role is to suggest videos that viewers would like to watch. So, quite simply, watching one video with content that concerns race will lead you to another, often leading viewers directly to videos produced by white supremacists spreading untruths about, for example, the degree of ‘Black on White crime’ and the existence of ‘anti-white racism’. This can then segue into more extreme and openly white supremacist material hosted by organizations with increasingly powerful online networks.

However, as this New Republic report shows, because YouTube (owned by Google) is motivated primarily by profit, it does not merely direct viewers to racist content at random (based on algorithms that purportedly cater to individual interest); it attempts to further drive up profit by encouraging more clicks on already wildly popular videos:

‘Google had itself a little P.R. problem that soon turned into a big one when The Times of London, the Wall Street Journal, and others exposed a much dirtier secret hidden in plain view: YouTube wasn’t just offering up millions of hours of hate speech, but rewarding the most successful propagandists with a cut of the revenue from those video ads you have to wait impatiently to “skip” before getting, say, your “33 Fun Facts About Slavery” (#5: “There Were Just as Many White Slaves as Black Slaves”). Worse, some of the YouTube ranters were being paid—in one case, millions—to produce noxious content for YouTube’s “preferred” channel.’
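The dynamic described above – recommendation driven by engagement, which then amplifies whatever is already popular – can be sketched in a few lines. This is an invented simplification of my own, not YouTube’s actual system: the toy recommender simply surfaces whichever video already has the most clicks, so an early lead compounds regardless of what the content is.

```python
# A toy engagement-maximising recommender (an invented simplification,
# not YouTube's real algorithm). It always surfaces whichever video has
# the most clicks already, so an early lead compounds indefinitely,
# regardless of content.

clicks = {"benign_video": 10, "extreme_video": 12}  # near-equal start

def recommend(clicks: dict) -> str:
    """Surface the currently most-clicked video."""
    return max(clicks, key=clicks.get)

# Simulate sessions: each recommendation earns another click,
# which makes the same video even more likely to be surfaced next time.
for _ in range(5_000):
    clicks[recommend(clicks)] += 1

print(clicks)  # {'benign_video': 10, 'extreme_video': 5012}
```

A two-click head start becomes total dominance of the feed: the logic rewards the already-popular, which is precisely why ‘the most successful propagandists’ end up being the ones amplified and paid.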

Noble notes that, in a few cases, governments have successfully forced search engines to remove sites that promote far-right extremism, as when the French government got Yahoo to remove a website selling pro-Nazi memorabilia. However, First Amendment protections in the US make this almost impossible there. Be that as it may, how we think about dealing with the rapid and alarming spread of white supremacist propaganda over the Internet, and its impact on mainstream politics (a subject too large to go into in more detail here), is determined by whether or not we think the solution lies in data itself.

As I understand Noble’s conclusions, the main problem with Google is that it is primarily driven by profit and hence does not provide ‘correct’, ‘evidence-based’ information to searchers, as she argues publicly controlled databases and libraries would. Google, she argues, has a monopoly on information through its domination of search, skewing results and impoverishing knowledge, with dangerous consequences. Google’s ability to gain such dominance rests on the prevailing ideology that ‘individuals make their own choices of their own accord in the free market which is normalized as the only legitimate source of social change’ (Noble 2017: 182). Any concept of the public good is thus taken out of the equation, the beneficiaries being primarily the tech industry and the currently predominant right-wing politics that benefits from the status quo. This situation has a profound impact on every aspect of social life, Noble argues. She gives the example of school ‘choice’, which in the US is increasingly governed by websites presenting data about ‘good schools’ that is tethered to the income level and real estate value of the surrounding area, which in the US in particular is racially determined. Schools with a higher percentage of African American students will be judged as not ‘good schools’ because ‘data-intensive applications that work across vast data sets’ (ibid. 183) draw a correlation between ‘low-income areas’ and the quality of education.

Noble concludes her book by calling for ‘public policy that advocates protections from the effects of unregulated and unethical artificial intelligence’ (ibid. 198). But, as she adds in an epilogue written after the election of Donald Trump, the ‘agencies that could have played a meaningful role in supporting research about the role of information and research in society’ are all at risk of being defunded (ibid. 200). So, she calls on the public to ‘reclaim its institutions and direct our resources in service of multiracial democracy’ (ibid.). However, it appears to me that this faith in democracy calls into question some of the more radical revelations in Noble’s book, namely that racism and sexism in networked communications are not a question of unconscious bias but are, rather, built into the system. So, we might well need to put it the way Wendy Hui Kyong Chun does in the video below, based on her latest book Updating to Remain the Same, when she says that it is not ‘better data’ or more diversity within the tech industry that will solve the problem of what she says is, very disingenuously, called racism 2.0. Rather, we need to understand the extent to which networked communications are predicated on network science, which is itself based on the idea that the world is reducible to a map or a lab.

Network science, she explains, is based on the principle of homophily, coined in the 1950s by Lazarsfeld and Merton, which, she argues, has been misinterpreted to mean that ‘birds of a feather flock together’, or that similar people gravitate towards each other. The problem, Kyong Chun observes, is that the influential paper by Miller McPherson, Lynn Smith-Lovin and James Cook interpreted Lazarsfeld and Merton as concluding that ‘the result is that people’s personal networks are homogeneous.’ This has led those analysing how communities form online to argue that naturally occurring groups in society, based on race/ethnicity, gender, age, location, and so on, are represented organically in online spaces. As the New York Times reports, therefore,

‘academics have cited homophily in elucidating everything from why teenagers choose friends who smoke and drink the same amount that they do to “the strong isolation of lower-class blacks from the interracial-marriage market.” Researchers at M.I.T. even published “Homophily in Online Dating: When Do You Like Someone Like Yourself?” which showed that you like someone like yourself most of the time, online or off. So much for “opposites attract.”’

The problem with this for Kyong Chun is that the concept as developed by Lazarsfeld and Merton was counterposed to that of heterophily. Accepting homophily as representative of the way social interactions work leads to remarks such as, ‘homophily is a good example of where an existing social theory can now be explored numerically, and be easily verified in a wide variety of different networks, because the data is held digitally.’ Thinking uncritically about the concept, however, obscures the fact that there is nothing ecological or natural about groups based on homophily. Rather, they have to be created. Making an analogy with the history of racial segregation in the US, Kyong Chun shows that the tendency of white people to live in clusters, or for so-called ‘ghettos’ of African Americans to appear following desegregation, had nothing to do with any natural tendency for ‘birds of a feather to flock together’, and everything to do with white flight. This connects with my earlier discussion of Kyong Chun’s race and/as technology. The technology of the algorithm organises the network into simplified clusters that are then represented, and interpreted, as naturally occurring rather than as produced to facilitate the working of the algorithm across a variety of sectors (commercial, judicial, welfare, health, education, etc.).

The acceptance of the principle of homophily, that ‘similarity breeds connection’, becomes a self-fulfilling prophecy. When we bemoan the existence of echo chambers, it is presumed that something can be done about them by ‘listening’ to those who do not share our beliefs. However, this criticism, usually voiced by liberals to complain about what they see as an increasingly sheltered, fearful and ‘coddled’ force within the left that ‘refuses’ to engage with ideas it might find unpalatable, entirely ignores the fact that it is the constructs underlying network science – what Kyong Chun calls a particularly retrograde (effectively segregationist) form of identity politics – that create the echo chambers. Moreover, these chambers or ‘silos’ are more profitable for entities such as Google (but also for those currently in political power) because they allow for the simple organization, and thus management, of society.
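The self-fulfilling character of homophily can be illustrated with a toy model (my own sketch, not drawn from any of the texts discussed here): a recommender that simply shows a user more of whichever ‘community’ of content they have clicked on most. Even a tiny initial imbalance locks the user into a single silo.

```python
def recommend(history):
    """Recommend the community the user has engaged with most.

    This is the homophily assumption operationalised: similarity
    breeds connection, so show more of the same (ties go to 'A').
    """
    return "A" if history["A"] >= history["B"] else "B"


def simulate(steps, initial):
    """Run the feedback loop: the user clicks whatever is recommended,
    which further skews the history the next recommendation draws on."""
    history = dict(initial)
    for _ in range(steps):
        history[recommend(history)] += 1
    return history


# One extra early click on community A is enough: every subsequent
# recommendation (and hence click) goes to A, producing a 'silo'.
print(simulate(10, {"A": 2, "B": 1}))  # {'A': 12, 'B': 1}
```

The point of the sketch is only that the clustering such a system ‘finds’ is produced by its own design choice, not discovered in the data.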

My reading of this, then, is that – following Kyong Chun – networked communications not only contain racial (and gendered, etc.) bias within them, but that they work like race; that technology is as race, rather than technology being racialised.

Kyong Chun’s hopeful solution to this is to recreate the network otherwise, to build models that ingrain a knowledge of history within them in order to expose the ways in which racial logics are built into the system. This would require computer scientists who are also race critical theorists!

This, however, may be overly optimistic if we take on board the critique mounted by computer scientist, Syed Mustafa Ali. In his view, a decolonial reading of the history of computing is necessary to decolonize computer studies because computing is itself a ‘colonial phenomenon’ (Ali 2016: 16). Computing has been shown to mirror colonialism in that it is expansionist, being ‘ubiquitous and pervasive’ (ibid. 18). However, for Ali, this is not mere analogy. Rather, the observation of the coloniality of computing needs to be set in

‘relation to a more general, expansionist thrust of computing associated with the transformation of the modern world through incessant computerization and the rise of a global information society following the “cybernetic turn” of the 1950s’ (ibid.).

Ali argues that the contemporary ‘(post-)modern information society’ is undergirded by an apocalyptic, millenarian and utopian vision that cannot be disconnected from ‘the emergence of global, systemic and structural race/racism/racialization at the onset of colonial modernity’ (Ali 2017: 1). He writes that we are currently witnessing an ‘algorithmic reiteration’ of the ‘coloniality of power’ within computing. He names this algorithmic racism. But the basis for his argument, and his possible conclusions, are quite different to those of Noble:

‘‘Algorithmic Racism’ (AR) is a methodological framework, metaphorically-grounded in the figure of the algorithm, for conceptualizing the relationship between processes of racial formation (or racialization) within ‘Western’ historical experience in relation to its (various) ‘Other(s)’’ (ibid.)

He argues further that,

‘AR has utility in exposing the “dark postcolonial underside” underpinning and tacitly informing developments associated with the rise of ubicomp, the IoT and Big Data/datafication, and facilitating parallel developments associated with Transhumanism and/or techno-scientific posthumanism, both rhetorical and ‘material’, in terms of disclosing the persistence of asymmetric race hierarchies and the ‘algorithmic’ (re)production of race, racism and racialization’ (ibid. 2).

The assumed neutrality of the algorithm thus serves to obscure the underside of modernity – its colonized others and racialised subjects. The work of what Ali calls ‘decolonial computing’ is to expose the workings of this, for example as an exit strategy out of ‘white crisis’, as he discusses in relation to Robert Geraci’s Apocalyptic AI in the video below.

‘Decolonial computing, as a critical project, is about interrogating who is doing computing, where they are doing it, and, thereby, what computing means both epistemologically (that is, in relation to knowing) and ontologically (that is, in relation to being)’ (Ali 2016: 20).

But it goes beyond the question of inclusion and exclusion to question, like Kyong Chun, how things are included even when they are not. Or, in other words, how computing or the algorithm is racially constituted even when it is said to be neutral. Further exposing this, redressing it, recreating workable systems, and, as Ali remarks, paying reparations to those whose lives have been sacrificed are the tasks ahead.

2 Comments

  • Dr Syed Mustafa Ali

    October 22, 2018

    Thank you for the gracious engagement with and exposure to my work. I am both humbled and honoured.

  • Alana Lentin

    October 23, 2018

    Dear Mustafa,
    I wish I could have gone further. There is still a lot more to say about the implications of your ideas. I hope to be able to develop them in greater detail together.
    All the best,

    alana
