The Manhattan Project collects photographs of various experiments related to the use of nuclear explosives in the 1940s. The small-format negatives were found on an abandoned US military base in Sicily. Over time, the images suffered considerable deterioration, producing an overabundance of grain and diminished sharpness. This could be a perfect incipit to deceive the viewer; a story that makes sense of photographs that make no sense. In fact, these aren’t even photographs; they’re photo-realistic images produced by artificial intelligence.
Starting with descriptions and scans of photographs from the 1977 book Evidence by Larry Sultan and Mike Mandel, an AI tool generated new images, which were later manipulated in post-production, culminating in a series of fake pictures. What are we standing in front of? To whom do these images belong? To the photographers who took the pictures in the 1940s? To Larry Sultan and Mike Mandel, who selected them from thousands of others as they dug through US federal archives in the 1970s? Or to those who took the 12 billion photographs from which the AI generated these images by way of a deep-learning algorithm? Do they belong to the artificial intelligence itself, or to the person who clicked the ‘generate’ button on the computer?
In a short circuit of belonging, these images ask many questions without providing answers. The vision of an explosion – and the images related to it – is what we’re left with: non-real images that speak to reality and the present; fake archival artefacts that connect to the present day, as we talk again about world war and nuclear testing.