POETIC THINKING

Explore samples of student writing on the Poetic Thinking platform. Students start new dialogues and engage with course material, expanding conversations that begin in class.

Do machines build humans?

Posted by Jason Zhao '21 as part of the course “Myth and Modernity” taught in Spring 2020.

I’ve really enjoyed all the interesting analyses of machines, AI, and technology as cultural phenomena rather than inert technical objects. I do think Kafka, intentionally or not, provokes an interrogation of the relationship between humanity and technology: he questions it explicitly, and also implicitly, by using technology as a mirror for mankind’s relationship with itself.

I wanted to contribute to this dialogue by sharing a really interesting presentation I found from the Center for Humane Technology (CHT), which is an institute based here in Silicon Valley. Their mission is to work with tech companies, top executives, investors, technologists, and political leaders to stop human downgrading caused by inhumane technology.

Now, what is this ominous “human downgrading”, and how does “inhumane” technology contribute to such downgrading?

Tristan Harris, a co-founder of CHT, describes how he first recognized human downgrading during his time as a product lead at Google: technology, built in service of profit, was capitalizing on human vulnerabilities and, over time, literally programming human minds to become less rational and more judgemental. Our biological tendencies to make hasty judgements, crave emotional feedback, and signal status at every chance are open vulnerabilities that tech companies like Google and Facebook actively manipulate, increasing the amount of time we spend on their platforms and thus the amount of money in their pockets.

While many have been scared of a future in which technology overtakes our human capabilities, Tristan Harris warns us that technology has already overwhelmed our human vulnerabilities. That moment is already here, and perhaps because it is here, we are so thoroughly exploited by technology that we fail even to notice the exploitation.

As technology continues to capitalize on human vulnerabilities, in an all too real way “building humans” to serve the machine’s goal of profit maximization, the relationship between technology and humanity becomes eerily subverted. We are being used by the machines. Human data is the new oil, and our privacy has become the new gold mine: something for machines to violate and extract in order to further their endless journey of accumulation. Harris lays out this machine agenda in the presentation I recommend below.

Many anthropologists exalt humanity’s unrivaled propensity to create (our unique ability to construct useful tools and artifacts) as a defining, constitutive feature of our species. Yet, as Kafka so unforgettably illustrates, artifacts can take on a life of their own, becoming the main character in the drama of life. In turn, they assemble their own retinue of mechanistic human servants. Our abilities consume us, a mythical snake eating its own tail.

In order to stop human downgrading in an era when human problems are massively upgrading in complexity and danger, CHT hopes to educate technology executives and engineers about how to design machines that are aligned with human values.

While I don’t think such a movement will solve all our problems (if forced to choose, will tech executives opt for human values or the United States Dollar?), I do think it is a first step in the right direction. The fact that companies in Silicon Valley must now confront the ethical implications of their products, even if only as a gesture, still marks some progress.

I wonder if any of you have thoughts about how more systemic societal change might be possible. Perhaps a solution to technological exploitation of humanity that doesn’t recursively depend on technology (however humanely designed) is in order. Is unbridled capitalism at fault? Can, as Marx seems to suggest, technology be used for “good” as long as the cultural conditions are right?

I’ll end with two recommendations:

The first is Tristan Harris’s brief presentation (from which I took screenshots) on human downgrading for the CHT. It looks like a long video, but he presents for only about 15 minutes, and he is very accessible and insightful.

The second is an essay I’m currently reading, “From Manchester to Barcelona,” which seems to work through the question I raised at the end of this post, more or less critiquing capitalism for humanity’s toxic subjugation by the machine. The piece is from Logic, a far-left magazine commenting on technology and culture.

Would love to get others’ thoughts!

POETIC THINKING TODAY:
AN ESSAY

To learn more about the philosophy of the project, see Poetic Thinking Today by Amir Eshel, available from Stanford University Press and, in German, from Suhrkamp Verlag.

POETIC THINKING would not have been realized without the collaboration of the many Stanford students who have participated over the years. Special thanks are due to Brian Johnsrud, Daniel Bush, Cody Chun, John Coyle, Ravi Smith, Courtney Hodrick, and Amber Moyles as well as the many others who have used the Poetic Thinking platform and contributed to its development.

Additional thanks and recognition go to the Center for Spatial and Textual Analysis (CESTA) and the Taube Center for Jewish Studies. The support of the Dean of the School of Humanities and Sciences at Stanford has also been essential.