Making AI work for us part 2: Action on AI

Published in Disruptive Voices · Sep 28, 2021

This blog post is part of the UCL Public Policy, UCL Grand Challenges and British Academy project on AI and the Future of Work. As part of this, Helena Hollis has been exploring what “good” work means alongside AI.

In my previous post, I discussed ways of thinking about “good” work, and how AI developed within our current capitalist context may be maladaptive. In this blog post, I focus primarily on ideas from an interview with Carly Kind, Director of the Ada Lovelace Institute. We talked about the need to understand all technologies as socially transformative, and how we might guide the transformations they bring about.

Steering change

How might we redirect AI advances so that they deliver truly emancipatory outcomes: taking away unwanted labour without also stripping away our means of survival, improving our work, and giving us more space for meaningful, communal, creative action?

Interviewing Carly Kind gave me some interesting suggestions. Firstly, Kind argues we shouldn’t be thinking about the technology as a means of making work better; rather, we need to start by thinking through how we want to change our work, and in what ways that change fits with AI. This brings us back to our starting point of understanding “good” work: first we need to know what good work means to specific workers. How many hours do they want to work? What kinds of products do they feel are worth working towards? What does enjoyment in their work look like? The answers can help inform a broader evolution in what we think different jobs in different contexts should be like, so that we can design technology towards such jobs.

Kind argues we need a “philosophical orientation to the individual”, and that by putting the worker as a whole person first, good outcomes will follow, including better outcomes for businesses and the economy. But this is a very different approach from that taken by the major players in the tech and AI development space. Where “move fast and break things” is the philosophy, this kind of deep, reflective thinking is anathema. So, Kind says, “Broadly speaking, my position is: move slow and fix things.”

It also seems likely to me that the “move fast” approach is not conducive to meaningful, “good” work for developers; tech worker surveys have found that more than half experience burnout. In my previous post I noted the ways AI can intersect with consumer capitalism to fuel an ever-intensifying cycle of instrumentalised work. Breaking out of this intensification may require different approaches to technology, including from those creating it. Examples of slower ways of working with technology, which emphasise the value of spending time contributing to something truly thought-through and socially beneficial, can be found in movements such as The Maintainers, which values the repair of essential infrastructure over the constant development of new products.

For those who want to move fast and break things, Kind argues for a blue-sky, innovative environment kept within a separate ecosystem, so that new technologies are not immediately deployed in the wild. This would permit serious examination of their societal impacts, and of the changes needed to existing societal structures, before such technologies are released. She suggests an approval process akin to that used for medicines, calls for which have become increasingly popular. Yet Kind argues an AI approval process would need to consider a vast range of aspects: not only the safety and security of the actual tool, but its implications for society as a whole, and for work.

As things stand, it seems almost cliché to point out that the ability to regulate has fallen well behind tech companies’ ability to innovate. But this isn’t necessarily a good thing for those in tech either. Presumably, most developers want their work to have positive, lasting effects, and they also want to create meaningful products. Just as slowing down may benefit both developers and the outcomes of the technologies they create, good regulation could help ensure good work both in developing tech and in working with it.

Good work for regulators

On the Ada Lovelace Institute blog, a series on AI ethics eloquently argues for an arts and humanities role in guiding AI. But as well as arguing for this need, we should also focus on how we can practically create good work for AI ethicists and regulators.

We can see how working in AI ethics can be made challenging or even impossible in cases such as Google’s firing of Timnit Gebru. This highlights a fault line between the value of ethicists for improving a company’s products and resistance to their challenging the very logic of those products. For instance, a facial recognition system that fails on non-white faces is clearly not a good product in a marketplace full of ethnic diversity. Yet demonstrating this failure also shines a much darker light on the structural inequalities within society as a whole, challenging what a facial recognition system tells us about ourselves and questioning its very inception. This case highlights the need for ethicists to hold positions better designed to support and empower their work than the traditional corporate structure can accommodate.

Gebru’s work also demonstrates how AI can both reflect the inequalities in the material it is trained on, thereby shedding light on them, and pour fuel on their fires. The immense social influences and impacts of AI have led some to suggest that tech workers need more social science and humanities training, to prevent siloed thinking. This may help, but in my interviews I have found limited faith in this approach, and calls for much broader regulation.

In my previous blog post I floated the idea of AI regulation as a highly prestigious, expert role. Rather than making everyone in tech a little more knowledgeable about some wider issues, perhaps we should focus on creating good work for those with highly specialised knowledge and skillsets at the intersections of ethics, anthropology of work, technology, and more. Rather than introducing a little interdisciplinarity here and there, perhaps we should treat those with expertise in interdisciplinary fields as something special: what Geraint Rees calls “the glue” we need to integrate AI into our work and wider lives.

This brings me back to the start of this post, and Kind’s call for a “philosophical orientation to the individual”. Just as we need to put workers’ lived experiences first in order to integrate AI into work beneficially, we may also need to put the working experiences of AI ethicists and regulators first, so as to create a context within which they can flourish and have real impact.

_____________________________________________________________

About the author

Helena Hollis is a UCL PhD researcher in Information Studies, and is also working on the UCL and British Academy project on AI and the Future of Work.
