
“The picture is white” – on visual versus semantic analysis of fine art paintings

When applying machine learning to fine art paintings, one obvious approach is to analyse the visual content of the paintings. We discuss two major problems which caused us to take a semantic route instead: (i) state-of-the-art image analysis has been trained on photos and does not work well with paintings; (ii) visual information obtained from paintings is not sufficient for building a curatorial narrative.

Let us start by using the DenseCap online tool to automatically compute captions for a photo of two dogs playing.

The DenseCap model correctly identifies the dogs and many of their properties (e.g. “the dog is brown”, “the dog has brown eyes”, “the front legs of a dog”, “the ear|head|paw of a dog”) as well as aspects of the background (“a piece of grass”, “a leaf on the ground”). There are some wrong captions for the dogs (“the dogs tail is white”, although there is no tail in the picture) and also for the background (“the fence is white”). But all in all the computer vision system does a good job at what it has been trained to do: localize and describe salient regions in images in natural language [Johnson et al. 2016].
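For readers who want to try this kind of experiment themselves, here is a minimal sketch. DenseCap itself is a Torch/Lua system, so the sketch uses an off-the-shelf whole-image captioning model (BLIP via the Hugging Face transformers library) as a rough stand-in; the file name two_dogs.jpg is a placeholder, and the output is a single caption rather than DenseCap’s localized region captions.

```python
# Illustrative only: a generic whole-image captioning model (BLIP) as a
# stand-in for DenseCap's region-level captions.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

# "two_dogs.jpg" is a placeholder path for the photo of two dogs playing.
image = Image.open("two_dogs.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=30)

print(processor.decode(output_ids[0], skip_special_tokens=True))
```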

Let us now apply this system to a fairly realistic dog painting from the collection of our partner museum Belvedere, Vienna.

Again many characteristics of the dog are correctly identified (“the dog is looking at the camera”, “the eye|ear of the dog”), as is “a bowl of food”. The background already poses more problems: some confusions are still comprehensible (“a white napkin”, “the curtain is white”), but others less so (“the flowers are on the wall”).

Testing the system on a more abstract painting, Belvedere’s collection highlight “The Kiss” by Gustav Klimt, yields even stranger results.

While some captions are correct (“the mans hand”, “the dress is yellow”, “the flowers are yellow|green”), others are somewhat off (“the hat is on the mans head”) or just completely wrong (“the picture is white”, “the wall is made of bricks”, “a black door”, “a window on the building”). The essential aspect of the painting, a man and a woman embracing, is not comprehended at all. Of course this is understandable: the DenseCap system has been trained on 94,000 photo images (plus 4,100,000 localized captions) but not on fine art paintings, which explains why it cannot generalize to more abstract forms of art.

On the other hand, even if an image analysis system could perfectly detect that “Caesar am Rubicon” shows a dog looking at a sausage in a bowl on a table, it would still not grasp the meaning of the painting: Caesar is both the name of the dog and the historical figure whose crossing of the Rubicon amounted to a declaration of war on the Roman Senate and ultimately led to his rise to Roman dictator. Hence “crossing the Rubicon” is now a metaphor for passing a point of no return.

The same holds for Gustav Klimt’s “The Kiss”. Even if the image analysis system were not fooled by Klimt’s use of mosaic-like, two-dimensional Art Nouveau organic forms and were able to detect two lovers embracing in a kiss, it would still not grasp the significance of the decadence conveyed by the opulent, exalted golden robes, or the possible allusion to the tale of Orpheus and Eurydice.

The DAD project explores the influence of Artificial Intelligence on the art of curating. From a curatorial perspective, grasping the semantic meaning of works of art is essential for building curatorial narratives that are not just based on a purely aesthetic procedure. See our previous blog posts [1][2] on such a semantics-driven approach to the collection of the Belvedere, where we chose to analyse text about the paintings rather than the paintings themselves.
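To make the contrast with visual analysis concrete, here is a minimal sketch of such a text-based approach; it is an illustration only, not the pipeline used in the project, and the catalogue texts in it are hypothetical. It embeds short descriptive texts about the paintings with a pretrained sentence encoder and compares them by cosine similarity, so that works can be related through what is written about them rather than through their pixels.

```python
# Illustrative sketch only, not the DAD project's actual pipeline.
from sentence_transformers import SentenceTransformer, util

# Hypothetical catalogue texts about two paintings.
texts = {
    "The Kiss": "A man and a woman embrace in a kiss, wrapped in opulent golden robes.",
    "Caesar am Rubicon": "A dog named Caesar hesitates before a sausage in a bowl on a table.",
}

# A multilingual model, since museum texts are often in German.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
embeddings = model.encode(list(texts.values()), convert_to_tensor=True)

# Cosine similarity between the two descriptions.
similarity = util.cos_sim(embeddings[0], embeddings[1])
print(f"semantic similarity: {similarity.item():.2f}")
```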


2nd Liminal Space at the ]a[ Research Day

On 12.11.2020, DAD’s Niko Wahl presented our intermediate results at the third research day of the Academy of Fine Arts, Vienna. Due to COVID-19, the event was held as an online Zoom meeting. The goal of the ]a[ research days is to give an overview of all ongoing research projects at the Academy, including discussions with all participating colleagues.

Niko Wahl gave a short introduction to our project and an overview of DAD’s collaborations with three different museums, where we work with the archive of an ethnological journal, a fine arts gallery, and the statues in the Academy’s Glyptothek.

Since our work with the Austrian Museum of Folk Life and Folk Art and with the Belvedere, Vienna, has already been documented in previous blog posts [1][2], let’s turn to the presentation of plaster casts at the Academy’s Glyptothek, which we explored with Dusty, an off-the-shelf household robot.

Many people associate Artificial Intelligence (AI) with the development of ever more powerful and dextrous robots, along with horror scenarios of these machines taking over the planet. In reality, robots are only a small part of AI, which is dominated by machine learning software: the solutions powering your Internet search engine, the natural language interface of your mobile phone, online music, movie and product recommendations, and many other everyday technologies.

On the other hand, many people already own robots with limited forms of AI, for instance vacuum cleaning robots. What if we confront such a household robot with a – supposedly obsolete – museum collection of historic plaster copies of famous statues, whose very physis seems to be made of dust?

The robot takes its own route through the museum space. Following its built-in algorithms, it perpetually finds new paths through the collection. It seemingly decides for itself in what order to visit the museum objects, all the time metaphorically internalizing the objects of art while inhaling their dust.

Other visitors are free to follow the robot on its path through the museum space, engaging with its exhibition narrative. They might benefit from surprising relationships between objects of art established by the often creative course of the robot.

Smart latest-generation vacuum cleaning robots are able to share their sensory experiences with others of their kind. These shared experiences are usually measurements of objects and of how to avoid them when traversing a room. But what if this cloud communication, usually not accessible to us, deals with objects of art instead of everyday items? Will meeting David or the Pietà change the robots’ discourse? What if the robot meets a portrait of itself?