Our final PRESENTATION SPACE, “Dust and Data. Artificial Intelligence im Museum”, will open on Tuesday, the 8th of June 2021.
The opening of this three-month exhibition can be attended online on the 8th of June at 19:00 via a Zoom livestream (in German only). Please register with the Volkskundemuseum Wien to receive the link. You can also join in person from 19:30 to 21:00 directly at the museum (Laudongasse 15–19, 1080 Vienna).
We are very much looking forward to presenting our results and ongoing work on the art of curating in the age of Artificial Intelligence. We hope to see many of you at the opening or during the run of the exhibition.
Our exhibition at the Volkskundemuseum Wien has been announced on their website: “DUST AND DATA. Artificial Intelligence im Museum” (Wed, 09.06.2021 – Sun, 29.08.2021). This will be an end point for DAD but at the same time a starting point for further endeavors based on the achievements and works-in-progress presented in this exhibition.
Museums have all gone digital by now. Much of their content has become digital as well – as copies and scans of the originals, or as born-digital objects. Thanks to digital cataloguing, most collections are far more accessible today; at the same time, curators and exhibition visitors have to cope with a flood of content inside and outside the exhibitions. Communication between museums and their now networked audiences is also largely becoming a digital undertaking – and not only in times of the pandemic.
Dust and Data is an artistic research project concerned not with digitization itself, but with the possibilities, opportunities and risks of Artificial Intelligence built upon it. Using individual museums and collections as examples, we describe new ways of exhibiting and viewing. AI-driven machines that not only learn to understand the meaning of artworks and collection objects, but at times become creative themselves, serve as our companions and guides. New connections, new analytical findings and new narratives emerge from this encounter between curators, audiences and machines. We also present an entirely new mode of museum work, along with our (partly open) questions about the involvement of these new technologies.
In this project, the Volkskundemuseum Wien becomes the host for new insights into the collections and exhibition spaces of the Glyptothek, the Academy of Fine Arts and the Belvedere, while also addressing its own collection.
Curated by: Niko Wahl, Arthur Flexer, Irina Koerdt, Alexander Martos, Sanja Utech
DAD’s Arthur Flexer gave a virtual lecture on our plans for the DAD exhibition opening in early summer at the Austrian Museum of Folk Life and Folk Art. The lecture was given at the Austrian Research Institute for Artificial Intelligence (OFAI), where DAD was also located during its first nine months. This last CRITICAL SPACE was conducted to gather feedback on our plans to build exhibits documenting our engagement with the collections of the Glyptothek of the Academy of Fine Arts Vienna, the Volkskundemuseum Wien and the Belvedere. Results for these three case studies differ according to the level of interaction between curators and machines: from using natural language processing tools for research in museum databases, to a symbiotic interaction between curators and algorithms, to robots visiting museums autonomously.
The ensuing conversation centered on ways to include and document aspects of algorithmic bias and societal stereotypes present in natural language models. We also discussed our approaches to turning digital findings into analog exhibits, and how using a robot to explore the Glyptothek aligns with the public’s (mis)conception that AI is predominantly about building machines rather than software.
DAD’s Arthur Flexer engaged in an online meeting with Dr. Sandra Manninger (SPAN, Tsinghua SIGS, IAAC) and Dr. Matias del Campo (SPAN, Taubman College of Architecture and Urban Planning, University of Michigan), discussing “Data, Dust, Art & Techno”. Sandra Manninger and Matias del Campo work at the intersection of Artificial Intelligence, Machine Learning and Architecture. Our conversation centered on the opportunities as well as the possible pitfalls of applying AI to the arts. Please enjoy what is the first in a series of “AI chats” by Manninger and del Campo.
When applying machine learning to fine art paintings, one obvious approach is to analyse the visual content of the paintings. We discuss two major problems which caused us to take a semantic route instead: (i) state-of-the-art image analysis has been trained on photos and does not work well with paintings; (ii) visual information obtained from paintings is not sufficient for building a curatorial narrative.
Let us start by using the DenseCap online tool to automatically compute captions for a photo of two dogs playing.
The DenseCap model correctly identifies the dogs and many of their properties (e.g. “the dog is brown”, “the dog has brown eyes”, “the front legs of a dog”, “the ear|head|paw of a dog”) as well as aspects of the background (“a piece of grass”, “a leaf on the ground”). There are some wrong captions for the dogs (“the dogs tail is white”, but there is no tail in the picture) and for the background (“the fence is white”). But all in all the computer vision system does a good job at what it has been trained to do: localize and describe salient regions in images in natural language [Johnson et al 2016].
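Conceptually, a dense-captioning model like DenseCap returns a list of image regions, each paired with a caption phrase and a confidence score. A minimal sketch of how such output could be filtered by confidence – the phrases are taken from the example above, but the bounding boxes and scores are invented for illustration:

```python
# Hypothetical dense-captioning output as (bounding box, caption, score) triples.
# Captions come from the dog photo discussed above; boxes and scores are made up.
detections = [
    ((120, 40, 210, 180), "the dog is brown", 0.92),
    ((130, 55, 150, 70), "the dog has brown eyes", 0.85),
    ((0, 200, 640, 480), "a piece of grass", 0.80),
    ((300, 410, 330, 430), "a leaf on the ground", 0.61),
    ((140, 160, 200, 190), "the dogs tail is white", 0.35),  # wrong caption
    ((0, 0, 640, 90), "the fence is white", 0.30),           # wrong caption
]

def confident_captions(detections, threshold=0.5):
    """Keep only region captions whose confidence is at or above the threshold."""
    return [caption for _box, caption, score in detections if score >= threshold]

print(confident_captions(detections))
# → ['the dog is brown', 'the dog has brown eyes', 'a piece of grass', 'a leaf on the ground']
```

In practice the low-confidence captions are often – though not always – the erroneous ones, so thresholding is a simple first line of defense.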
Let us now apply this system to a fairly realistic dog painting from the collection of our partner museum Belvedere, Vienna.
Again many characteristics of the dog are correctly identified (“the dog is looking at the camera”, “the eye|ear of the dog”), as is “a bowl of food”. The background already poses more problems, with some confprehensible confusions still comprehensible (“a white napkin”, “the curtain is white”) but others less so (“the flowers are on the wall”).
Testing the system on a more abstract painting, Belvedere’s collection highlight “The Kiss” by Gustav Klimt, yields even stranger results.
While some captions are correct (“the mans hand”, “the dress is yellow”, “the flowers are yellow|green”), others are somewhat off (“the hat is on the mans head”) or just completely wrong (“the picture is white”, “the wall is made of bricks”, “a black door”, “a window on the building”). The essential aspect of the painting, a man and a woman embracing, is not comprehended at all. Of course this is understandable, since the DenseCap system has been trained on 94,000 photo images (plus 4,100,000 localized captions) but not on fine art paintings, which explains why it cannot generalize to more abstract forms of art.
On the other hand, even if an image analysis system could perfectly detect that “Caesar am Rubicon” shows a dog looking at a sausage in a bowl on a table, it would still not grasp the meaning of the painting: Caesar is both the name of the dog and of the historical figure whose crossing of the Rubicon amounted to a declaration of war on the Roman Senate, ultimately leading to Caesar’s ascent to Roman dictator. Hence “crossing the Rubicon” is now a metaphor for passing a point of no return.
The same holds for Gustav Klimt’s “The Kiss”. Even if the image analysis system were not fooled by Klimt’s use of mosaic-like two-dimensional Art Nouveau organic forms and could detect two lovers embracing in a kiss, it would still not grasp the significance of the decadence conveyed by the opulent, exalted golden robes, or the possible allusion to the tale of Orpheus and Eurydice.
The DAD project is about exploring the influence of Artificial Intelligence on the art of curating. From a curatorial perspective, grasping the semantic meaning of works of art is essential to build curatorial narratives that are not based on a purely aesthetic procedure. See our previous blog posts on such a semantics-driven approach to the collection of the Belvedere, where we chose to analyse text about the paintings rather than the paintings themselves.
On the 22nd of September 2020 the DAD team met with Christian Huemer and Johanna Aufreiter from the Belvedere Research Center to discuss our results concerning Belvedere’s online collection. One focus of the meeting was our engagement with the room on “Viennese Portraiture in the Biedermeier Period” in Belvedere’s permanent exhibition.
Applying our algorithm to find pathways of semantic meaning [Flexer 2020] between works of art, we are able to suggest additional works for the liminal spaces between individual positions in the curatorial narrative, opening up new sub-narratives for the room. Based on a word embedding [Mikolov et al 2013] of the keywords associated with the paintings, our algorithm suggests works of art which follow a pathway between the respective semantic meanings. Moreover, we are able to further constrain our liminal curation by requiring all artworks to fit an additional overall topic chosen by a human curator, again translated into the language of Belvedere’s keyword system via word embedding. As an example, see a “Gender” constraint applied to the Biedermeier room.
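The actual pathway algorithm is described in [Flexer 2020]; as a toy illustration of the basic idea, one can interpolate between the embeddings of a start work and an end work and, at each step, pick the work whose keyword embedding lies closest to the interpolated point. All vectors below are invented two-dimensional stand-ins for real word embeddings, and the titles are hypothetical:

```python
import numpy as np

# Invented 2-D stand-ins for word embeddings of artwork keywords.
artworks = {
    "Portrait of a Lady": np.array([0.9, 0.1]),
    "Family Scene":       np.array([0.6, 0.4]),
    "Domestic Interior":  np.array([0.4, 0.6]),
    "Still Life":         np.array([0.1, 0.9]),
}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def pathway(start, end, steps=4):
    """Walk from the start to the end embedding, choosing at each
    interpolation step the artwork closest to the current point."""
    path = []
    for t in np.linspace(0.0, 1.0, steps):
        target = (1 - t) * start + t * end
        best = max(artworks, key=lambda name: cosine(artworks[name], target))
        if not path or path[-1] != best:  # skip immediate repeats
            path.append(best)
    return path

print(pathway(artworks["Portrait of a Lady"], artworks["Still Life"]))
# → ['Portrait of a Lady', 'Family Scene', 'Domestic Interior', 'Still Life']
```

In the real system, an additional overall topic chosen by a human curator (e.g. “Gender”) would further restrict which works are eligible at each step.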
A conceivable outcome is a revision of the Biedermeier room achieved via a joint curation of human and machine. This, as well as other approaches towards the Belvedere collection, will be the center of further exchange between DAD and the Belvedere.
All depicted paintings in this blog post by Belvedere, Vienna, Austria (CC BY-SA 4.0).
While a virtual workshop cannot replace the experience and liveliness of a physical scientific meeting, it still allowed us to gain a growing degree of public exposure for our work in progress, which is the purpose of our Liminal Spaces.