Transparency in digital artwork

Transparency with data is a hot topic. The misuse of customer data by blue-coloured corporations has rightly led to public scrutiny of who collects and has access to our information. We've seen this concern applied to artwork too: the processes behind digital work are receiving greater interest and critique, and artists are going to additional lengths to explain their data policies and the reasoning behind their choices, creating a broader discussion about privacy and data security in art.

During the last few months I've been prototyping Looking for Love, a new project with fanSHEN. During development, we began to realise that a large part of the work we put into this artwork is ultimately “invisible”. This was something we also saw with The Justice Syndicate, after many audience members commented on how they “forgot [they were] using an iPad”. That is a good thing: we're interested in asking questions about society, not making self-referential works about technology. With Looking for Love, the more intuitive the experience feels, the more we're able to ask the questions at the heart of the piece. But there's also a danger of sleight of hand here: in making the workings invisible, we may also obscure their environmental cost.

This can carry severe implications when the invisibility of processes deprives players of the chance to make conscious decisions. It is particularly worrying when applied to the environmental impact of artwork. In the last three years machine learning has become far more accessible, as previously expensive techniques have become available to all thanks to cheap, hour-by-hour billing from server-farm companies including Amazon's AWS and Google Cloud. These platforms allow artists to spin up instances spanning multiple CPUs and GPUs for brief moments, to train and generate machine learning models. The low financial cost, however, often hides the real expense of doing so: the quantity of electricity needed to train such models.
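As a rough, back-of-the-envelope sketch of how that hidden electricity cost can be estimated (every figure below is a hypothetical placeholder, not a measurement of any particular platform or of our own work), you multiply the hardware's power draw by its running time and a data-centre overhead factor, then by the carbon intensity of the grid:

    # Back-of-the-envelope training-emissions estimate (all figures hypothetical).
    # energy (kWh) = device power x device count x hours x overhead (PUE)
    # emissions (kg CO2e) = energy x grid carbon intensity

    GPU_POWER_KW = 0.25         # assumed average draw per GPU (250 W)
    GPU_COUNT = 8               # assumed GPUs in the rented instance
    TRAINING_HOURS = 120        # assumed wall-clock training time
    PUE = 1.58                  # assumed data-centre power usage effectiveness
    GRID_KG_CO2E_PER_KWH = 0.4  # assumed grid carbon intensity

    energy_kwh = GPU_POWER_KW * GPU_COUNT * TRAINING_HOURS * PUE
    emissions_kg = energy_kwh * GRID_KG_CO2E_PER_KWH

    print(f"{energy_kwh:.0f} kWh used, {emissions_kg:.0f} kg CO2e emitted")

Even at a few pounds per hour of billing, a long training run quietly turns into hundreds of kilowatt-hours.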

For perspective, a good example of this impact can be found in the training of AI natural language processing models. This technology, often used to analyse written language, can create ~35,592 kg CO2e¹ per basic model: a monumental amount of carbon dioxide equivalent when compared to the ~5,000 kg CO2e that a human will create during a year of their life¹. A more advanced model, trained using neural architecture search, can produce ~284,019 kg CO2e¹, or roughly 56 years of a human life. I do believe machine learning has positively changed how artists work, and that it benefits wider society. However, as the technology progresses, so should our discussions about how best to disclose its impact on the environment, in a way that those experiencing the work can understand and use to make conscious decisions.
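The human-life equivalences above come from a simple conversion; a quick sketch using the figures reported by Strubell et al.¹:

    # Converting reported model emissions into human-year equivalents
    # (figures from Strubell et al., 2019; see footnote 1).

    HUMAN_KG_CO2E_PER_YEAR = 5_000    # approximate annual footprint of one human

    basic_model_kg = 35_592           # basic NLP model
    nas_model_kg = 284_019            # model using neural architecture search

    print(basic_model_kg / HUMAN_KG_CO2E_PER_YEAR)  # ~7.1 human-years
    print(nas_model_kg / HUMAN_KG_CO2E_PER_YEAR)    # ~56.8 human-years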

Moving forwards, fanSHEN and I will be collaboratively producing documentation on our own carbon footprint, detailing the cost of running our shows day-to-day. This will also cover the decisions we make about our methodology: favouring technology with a lesser impact, and creating new technology where alternatives don't exist. We see the informed decisions we make about the environmental impact of our work as contributing to its pioneering nature, rather than constraining us. We began this a while ago, rewriting and simplifying the codebase of The Justice Syndicate in 2018 so that it ran at a significantly lower power draw, and halving that figure again in late 2019. With Looking for Love we have developed a new bot platform, with the potential to reduce our power consumption by a factor of 50; a sketch of how that day-to-day documentation might look follows below.
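To give a sense of what that per-show documentation could look like (the devices and wattages here are illustrative stand-ins, not our actual rig or measurements):

    # Hypothetical per-show footprint log: measured device draw x running time.
    # Device names and wattages are illustrative, not our actual equipment.

    GRID_KG_CO2E_PER_KWH = 0.23   # assumed grid carbon intensity

    devices_watts = {
        "tablets (x12)": 12 * 7,  # hypothetical 7 W per tablet
        "router": 10,
        "server": 45,
    }

    show_hours = 2.5
    energy_kwh = sum(devices_watts.values()) / 1000 * show_hours
    emissions_g = energy_kwh * GRID_KG_CO2E_PER_KWH * 1000

    print(f"{energy_kwh:.2f} kWh per show, {emissions_g:.0f} g CO2e")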


We will be publishing more documentation about these developments over time; a separate piece covers The Justice Syndicate's power consumption in more detail.


¹ Strubell, E., Ganesh, A. and McCallum, A. (2019). Energy and Policy Considerations for Deep Learning in NLP. [online] Available at: https://arxiv.org/abs/1906.02243 [Accessed 13 Jun. 2019].