Week 28: Summer Reading Roundup

The internet is not a utopia. I know, this isn’t exactly a revolutionary new idea; plenty of people have made this critique. The internet has a lot of problems. It’s overrun with racist, sexist, homophobic language, and I pretty strongly agree with Sarah Jeong’s theory that the internet is mostly garbage. Amazon and Google and Facebook are all collecting our data and selling it to companies and political campaigns. And no matter where you go on the web, endless ads flash in the corners, break up the middle of articles, and interrupt the videos you watch.

But the internet is not a total dystopia either, as many critics have claimed (see reading list). The internet, and the digital in general, seem to be forms that offer infinite production and zero consumption, and while this isn’t exactly true, the abundance and availability of information and art have dramatically increased with the internet. The internet has created the opportunity for remixes and play and new forms of community. It has created new forms of communication and provides an unprecedented amount of instantly available information. Some bloggers have likened early YouTube to David Graeber’s concept of baseline communism, that is, “the raw material of sociality, a recognition of our ultimate interdependence that is the ultimate substance of social peace.” On YouTube, individuals create content not to make money but to help and entertain one another with tutorials or silly songs. Early YouTube, that is, YouTube prior to ad revenue, created a platform on which almost anyone could upload and share their knowledge, and millions of people did, not for their own gain, but just to help other people, because we are all ultimately dependent on one another, and it brings us joy to contribute to the greater good, however small the contribution. For all of the racism and sexism on the internet (much of which, Sarah Jeong points out, is generated by bots and by a small portion of the population), there are examples of humans helping one another and giving to one another without the expectation of receiving anything in return. If the internet has done any good, it has shown that humans are not innately selfish, calculating machines, but are instead constantly striving to build communities and help one another, flawed though their attempts might be.

This article is my attempt to provide an overview of what I believe are the most pressing issues created by the internet and the digital, based on the critiques I have read over the summer. My goal is to show how each of these problems feeds into the others, and to suggest how we might begin to imagine a world that actually changes in response to them.

Perhaps the most widespread problem with the internet is that we are the product. We are the thing that generates capital for big companies through the sale of our data. Mike Grimshaw summarises this issue succinctly in his article, “Towards a manifesto for a critical digital humanities: critiquing the extractive capitalism of digital society” (emphasis in original).

As consumers, the internet seems to be to our advantage. The reality however is that as citizens the internet creates a society of increasing inequality, a society of rapid job-loss and the reduction also in permanent work, and the control of our lives, our data and, via predictive algorithms, our choice, by monopolistic oligarchies. Digitally, we are all active workers as both producers and consumers, because our digital consumption is in itself data work because as Astra Taylor notes ‘every click can be measured, every piece of data mined, every view marketed against’ (Taylor, 2014, p 7). Digital society is really a society of advertising: data for advertising, advertising as data collection. The big shift was that of web 2.0 whereby ecommerce and social media combined with the aim of getting workers to create content without compensation.

In short, “Digital culture is sharecropper culture.” We are both consumers and producers of the internet, which is one giant surveillance machine, a “vast panopticon” to use Andrew Keen’s (really, Foucault’s) phrase. The data we generate is fed back into internet ads to sell us more stuff, which generates more data for more ads. Of course, with the Cambridge Analytica revelations (and even before them) we know that our data is used for much more than just trying to convince us to buy wedding rings or new vacuums, but I’ll return to the idea of surveillance later.

All of the remixing and play created on the internet, all of the memes and community jokes, are data for big companies to harvest. Does this necessarily mean we should all quit making memes and using the internet to learn and play? Grimshaw seems to lean toward this solution, but I hesitate at the idea, not only because I think it’s unlikely Grimshaw will ever successfully convince the public to give up their one source of entertainment, but also because I don’t think it needs to be an all-or-nothing approach. Certainly, the current system is seriously flawed, but that doesn’t mean it has to stay that way, that the only destination for this system is the violence of digital financial capitalism and endless sharecropping. Why can’t we “farm” (to extend the sharecropping metaphor) for ourselves? Just because the data currently goes to big companies doesn’t mean it has to. What if those companies were publicly owned and operated?

This gets at another issue: ownership.

The first version of Article 13 (the EU law concerning copyright) was voted down last month. The intent of the law was to give artists and other content creators (e.g. Paul McCartney) a greater share of the profits from the copyrighted content they create, compared to the much smaller portion they currently receive relative to what platforms (e.g. Spotify) receive. Although this sounds good in theory, critics claim,

The Article stipulates that platforms should “prevent the availability” of protected works, suggesting these ISSPs will need to adopt technology that can recognise and filter work created by someone other than the person uploading it. This could include fragments of music, pictures and videos. If you’ve ever been on the internet, you’ll know that this ‘remix’ culture is a key part of how online communities function. The worry is that Article 13 will hinder this, and create a type of censorship that ignores nuances in how content can be adopted, quoted or parodied.

After widespread criticism, the article failed to pass. However, there is potential for a revised version to return to vote in the fall. Depending on the level of revision, the law could still cause widespread censorship on the internet, which is what has most critics concerned. Of course, this censorship would not only affect the EU. Internet platforms may migrate to the United States in order to avoid the law, but the citizens who use the platforms would not have this option and their ability to create content could be stifled.

Andrew Keen lists piracy as one of the major problems with the internet. I disagree, not because I think piracy doesn’t happen. It certainly does. I don’t think piracy is a problem because I think more content should be free. This isn’t an argument against paying artists and writers and the others who create the online content that is stolen. I do not believe creating online content is just a hobby. It is a job the same way creating scholarly articles is a job. In the big push for open access scholarship, we were given the argument that access to scholarship can benefit the public and it should therefore be freely accessible to anyone. I believe the same thing should be true of art (paintings, sculptures, music, videos, etc.), which has just as much potential to benefit the population. The problem is that our economy is set up to force people to monetize work that could potentially benefit the greater population, which is often work humans would undertake for the joy of it alone. Again, this isn’t an argument against paying artists, just as the open access argument isn’t against paying academics. Instead, we should find new ways to compensate artists, which is really an argument for restructuring our economy on a much larger scale.

At the same time, open access should have exceptions. I’ve written about this before, so I’m not going to spend a lot of time on it here. Basically, digital archives with content taken from indigenous populations and/or indigenous lands should be subject to a much greater level of scrutiny than other digital collections, and the rights to the digital content should be controlled by the population from whom it was taken. Open access should not mean increasing the potential for exploitation and appropriation. The push for open access is about creating a system that benefits those who have been historically the most disadvantaged. In a lot of cases this means creating free online access to journals, articles, and collections, because the cost to access this content has historically excluded lower-income individuals from taking part in scholarship. However, in some cases open access means protecting the content that was taken from historically disadvantaged populations, and ensuring those populations have the rights to share (or not share) that content as they choose. This brings me to my next point: colonialist cyberspace and digital infrastructure.

The satellite space around the globe is dominated by the United States, which owns 803 of the 1738 total satellites in space. Sixty percent of the internet is written in English. The majority of domain names are owned by companies and people in the United States. The same is true of radio and television broadcasting, which is dominated by US content. In the 1970s and ’80s there was a global debate about the proliferation of Western, specifically American, content and ideology being broadcast in colonized countries like Tunisia. The United States, which controlled the majority of radio frequencies and television broadcasting networks, decided to flood colonized countries with propaganda by exploiting the Universal Declaration of Human Rights resolution that recognized free expression as a fundamental human right. Article 19 states, “Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.” Essentially, the United States monopolized the various media forms and took the right to “impart information and ideas” as the right to take over all channels of information sharing, transforming the right to impart information into the right to force others to listen to its propaganda. In other words, the United States tried to make the right to hear into the obligation to listen. The same thing is now true of the internet. Just look at what Facebook tried to do with “Free Basics” in India, a limited internet app that would show mostly US content (read: propaganda) written in English.

If we were to take open access to its fullest anti-colonial extent, we wouldn’t just make all of the data and information on the internet freely available (barring the exceptions noted above, of course), we would also turn over the ability to create, host, and disseminate information to the people who have been historically excluded from the communication space. We would begin organizing data in non-colonial systems. For example, Miriam Posner explains that the Library of Congress classification system places American Indian objects in antiquities, relegating them to the past. Obviously, for indigenous people, this is a pretty clear instance of identity erasure, because the Library of Congress treats them as if they no longer exist. (She has more examples, so please go watch that video. It’s really good.) Furthermore, if we were to truly commit to open access, we would guarantee Net Neutrality.

The biggest concern we should have about the recent FCC decision to eliminate Net Neutrality is not that we will have to pay ten dollars per month for various internet packages (i.e. one package that comes with Netflix, Twitter, and YouTube, and another that comes with Tumblr, Facebook, and Hulu). Instead, the biggest concern should be ISPs’ ability to censor the content we can post and see online. As Emily Bell of The Guardian explains, “With net neutrality in place, whether you are a newspaper, a blogger discussing sexual assault, a video provider, or someone filming a public official at a town hall, Verizon or AT&T can’t slow or block your ability to put your content online and speak. Without it, they effectively can.” It’s not just about creating fast lanes for high-paying customers and throttling speeds for low-paying ones; it’s about shutting out competition, redirecting internet users to websites that promote the ISP’s brand or otherwise boost its profits, and censoring internet users.

So far, this has mostly been a critique of the ways big companies and governments use the internet for nefarious purposes, but I promised a critique of the digital as well, which means I’m now going to talk about the social problems with computers.

Computers are not magical black boxes of science; they’re human-made. As Cathy O’Neil explains in her book, Weapons of Math Destruction, “Algorithms are opinions embedded in code.” When we use algorithms to determine who gets a better rate for a mortgage or which teachers are promoted or fired, we are not being inherently objective just because we are using math; we are taking real-life variables and transforming them into data, a human construct. Life will always be much more nuanced than data will allow, because humans will never naturally fit into checkboxes. Take the social construct of race, for example. Different geographic areas have different definitions of skin tone and race. How does a person who immigrated between these areas define their identity on a form that excludes their conception of their own identity? O’Neil gives the example of a teacher who was fired due to a low score generated by an algorithm meant to determine her effectiveness as a teacher. Even setting aside the fact that the algorithm turned out to be essentially a random number generator, how can we quantify a teacher’s ability to impart knowledge when so many factors (like the fact that the students change every year, class size, and budget cuts) are not and cannot be taken into consideration?
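O’Neil’s phrase is easy to see in miniature. Here is a deliberately naive sketch of a teacher-effectiveness score; every variable and weight in it is hypothetical, invented for illustration and not drawn from any real scoring system. The arithmetic is perfectly objective, but every modeling choice is an opinion.

```python
# A toy "teacher effectiveness" score. The math is neutral; the choices
# of what to measure and how to weight it are not.
def effectiveness_score(test_score_gain, attendance_rate, parent_complaints):
    return (3.0 * test_score_gain     # why do test gains count 3x? an opinion
            + 1.0 * attendance_rate   # why count attendance at all? an opinion
            - 2.0 * parent_complaints)  # why penalize complaints? an opinion

# Two teachers with identical classroom skill but different student
# populations receive very different scores.
teacher_a = effectiveness_score(test_score_gain=0.5, attendance_rate=0.95,
                                parent_complaints=1)
teacher_b = effectiveness_score(test_score_gain=0.1, attendance_rate=0.80,
                                parent_complaints=4)
```

Change any weight, or add a variable like class size, and a different teacher ends up fired; the “objective” number is entirely downstream of subjective design decisions.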

Safiya Umoja Noble centers her critique of ubiquitous algorithms on Google in her book, Algorithms of Oppression, in which she shows that Google’s search algorithm reinforces existing power structures and, as a result, perpetuates oppression. Noble explains, “some of the very people who are developing search algorithms and architecture are willing to promote sexist and racist attitudes openly at work and beyond, while we are supposed to believe that these same employees are developing ‘neutral’ or ‘objective’ decision-making tools.” If algorithms are indeed opinions embedded in code, then employing someone who holds very sexist opinions will naturally yield sexist algorithms. To return to the anti-colonial argument from earlier, it makes sense that if the majority of the internet is controlled by the United States, the internet and all of its algorithms are going to be biased in favor of Western ideology. The fact that these algorithms determine major aspects of our lives, from loan rates to employability to having our identities made visible or invisible to others, is a problem. We cannot base our lives on equations that fail to account for our humanity.

Unfortunately, these algorithms are being taken to a new level with surveillance. We all know about the Patriot Act and the surveillance undertaken by the government, which has become such common knowledge that it’s now a meme. However, with improvements in AI and facial/vocal recognition software, large internet companies are able to create a new system of surveillance that has thus far only been imagined in dystopian fiction. Specifically, I am referring to Amazon’s ambition to sell facial recognition software (and, consequently, data about your face) to law enforcement. Google has also provided artificial intelligence to assist military drone attacks. Now our faces and our voices are treated as data. The digital is spilling over into the tangible world and with it, a new form of digital, late-stage capitalism. Now our cars are taxis, our houses are hotels, our faces and our voices and our bodies are data to be sold. The surveillance of the real world with AI technology makes everyone, even the most careful non-user of the internet, into a digital object. Again, as Andrew Keen put it, the internet is a vast panopticon, which leads me to agree with Noble’s conclusion that Facebook, Amazon, and Google’s monopoly on information organization is a threat to democracy.

Finally, digital capitalism doesn’t just operate on human data; it utilizes virtual currency. Virtual currency is nothing new. In Debt: The First 5,000 Years, Graeber explains that the first money was virtual, not minted coins. In fact, the whole idea of trade and barter as an economic system is (in almost every case) a myth. Instead, humans have always tabulated debts using various accounting systems and measurements. They just haven’t always used coins or cash for the exchange. The difference now is that fiat currency, particularly digital fiat currency, can create the illusion of infinite growth and zero consumption. According to Michael Betancourt, “The digital is a symptom of a larger shift from considerations and valuations based in physical processes toward immaterial processes; hence ‘digital capitalism’ refers to the transfer of this immateriality to the larger capitalist superstructure.” In other words, we are trying to create value out of nothing, out of purely intangible, socially conceptualized, immaterial “things.” The problem with digital capitalism is that investments are ultimately based upon real-life, tangible assets (e.g. a mortgage on property). When the assets aren’t paid for, the bubble of financialization pops. Betancourt explains, “Financial ‘bubbles’ are an inevitable result of a systematic shift focused on the generation of value through the semiotic exchange and transfer of immaterial assets.” Bubbles pop when material assets cannot keep pace with the valuation of immaterial assets, which are constantly growing. I realize this is a drastic oversimplification of the issue, and you should read Betancourt’s article for a better description, but the point is, digital capitalism is designed to implode repeatedly.

This is getting really dark and dystopian. As I said in the beginning, though, the internet is not a complete dystopia. The digital isn’t the antichrist. AI, for example, can have wonderful, artistic qualities. It can be used to make new forms of understanding that we could never imagine. It can be “both an instrument and a highly accessible platform,” as Bethany Nowviskie described in her presentation (now a blog post) “Reconstitute the World.” This is to say that although so many of these technologies are currently being used for war and control, they have the potential to make humanistic inquiry wonderfully new and interesting.

In The Great Dictator, Charlie Chaplin speaks to a crowd of Nazis and calls on them to unite with the people of the rest of the world, to stop following the direction of a dictator and instead realize the opportunity they have to improve the human condition.

Machinery that gives abundance has left us in want. Our knowledge has made us cynical. Our cleverness, hard and unkind. We think too much and feel too little. More than machinery we need humanity. More than cleverness we need kindness and gentleness. Without these qualities, life will be violent and all will be lost. The aeroplane and the radio have brought us closer together. The very nature of these inventions cries out for the goodness in men – cries out for universal brotherhood – for the unity of us all.

Humans created the digital problems we face, and humans can change them to benefit all people, not just the one percent (or rather the 0.0001%, as Andrew Keen has termed the multi-billionaire class).

I don’t have solutions to all of these problems. It would be ridiculous of me to claim that I did. Instead, I have questions about the possibilities for the future.

  • What if we treated the internet and the digital as a whole as something that cannot be monetized?
  • What if we made the internet a national resource and free to everyone?
  • What if we made our data non-capitalistic?
  • What would the internet look like devoid of advertising and data phishing schemes?
  • If we automated or eliminated all of the work that didn’t need to exist as a result of these changes and instead gave people more opportunities to follow their actual passions, what would people create?
  • How would the digital change?
  • What do we want work to look like?
  • What if we gave a proportionate amount of satellite space, domain names, and internet infrastructure to countries other than the US (possibly based on population)?
  • How would the internet change? Would harassment online decrease?
  • What new things would be invented?
  • What if we brought up these issues with our students?
  • What if, instead of just teaching them how to use the digital, we taught them to be digital “discerning moral agents” (#DenisonMissionStatement), capable of making critiques of the digital that go beyond “iPhones Cause Depression”?

In the words of Charlie Chaplin, “Let us fight for a world of reason, a world where science and progress will lead to all men’s happiness.”

P.S. Just watch this speech. It’s so good and so relevant right now.