Categories: Spaces

3rd Liminal Space: Is one word worth more than one thousand pictures?

DAD’s Arthur Flexer presented our work on analysing the semantic meaning of works of art at “The Art Museum in the Digital Age”, the international online conference of the Belvedere Research Center. The conference is concerned with the digital transformation of art museums, a topic that has become even more relevant lately because of COVID-19 related lockdowns and closures.

Arthur presented our (somewhat radical) approach of analysing text about artworks rather than taking the usual route of analysing images of the artworks. We chose this semantics-driven approach because a lot of information about an artwork cannot be found in the artwork itself. Think e.g. of subjecting the “Mona Lisa” to an automatic visual analysis. Computational results will tell you that it is a picture of a young woman in front of a landscape who (if your algorithm is really good) is sort of smiling. This information of course totally misses the significance of the painting for (Western) art history, its immense relevance and the many connotations it carries. All of this is rather a societal construct, the result of centuries of discourse and reception history (for more on this see our previous blogpost). Our semantics-driven approach [1][2] towards the collection of the Belvedere enables us to discover X degrees of keyword separation between works of art.

This is achieved by using the technique of word embedding [Mikolov et al 2013], which encodes semantic similarities between words by modelling each word’s context of neighboring words in a large training text corpus. We used it to embed the keywords of Belvedere’s online fine arts collection and to obtain pathways through the resulting semantic space.
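As a rough illustration of the mechanics, the sketch below compares keywords via cosine similarity over embedding vectors. The four-dimensional vectors are made up purely for demonstration; in the actual work they would come from a word2vec-style model trained on a large corpus.

```python
from math import sqrt

# Toy 4-dimensional vectors for a few keywords. These numbers are invented
# for illustration; real vectors would come from a trained word2vec model.
embeddings = {
    "mountain": [0.9, 0.1, 0.0, 0.2],
    "alps":     [0.8, 0.2, 0.1, 0.1],
    "sea":      [0.1, 0.9, 0.2, 0.0],
    "necklace": [0.0, 0.1, 0.9, 0.8],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

sim_related = cosine_similarity(embeddings["mountain"], embeddings["alps"])
sim_unrelated = cosine_similarity(embeddings["mountain"], embeddings["necklace"])
print(sim_related > sim_unrelated)  # semantically close keywords score higher
```

With real word2vec vectors the same comparison captures similarities learned from how words co-occur in text, which is what makes the semantic pathways possible.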

The above result starts with a painting carrying the keywords ’Clouds’, ’Mountain’ and ’Meadow’, from which we transition to ’Mountain’, ’Lake’, ’Alps’ and ’Austria’, next to a painting tagged ’Fog’, then one with ’Rocky coast’ and finally one with ’Clouds’, ’Rocky coast’, ’Sea’. Our pathway therefore smoothly transitions from a mountain setting to a lake in the mountains and on to the sea.
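A minimal sketch of how such a pathway could be assembled (not the actual DAD code): each artwork is represented by the mean of its keyword vectors, and a greedy walk always steps to the most similar artwork not yet visited. All vectors and artwork names below are invented for illustration.

```python
from math import sqrt

# Invented 3-d keyword vectors standing in for real word embeddings.
kw_vec = {
    "mountain":    [0.9, 0.1, 0.0],
    "meadow":      [0.8, 0.2, 0.1],
    "lake":        [0.5, 0.5, 0.1],
    "fog":         [0.3, 0.6, 0.2],
    "sea":         [0.1, 0.9, 0.1],
    "rocky coast": [0.2, 0.8, 0.2],
}

# Hypothetical artworks, each tagged with taxonomy keywords.
artworks = {
    "A": ["mountain", "meadow"],
    "B": ["mountain", "lake"],
    "C": ["fog"],
    "D": ["rocky coast", "sea"],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def artwork_vector(keywords):
    # Represent an artwork as the mean of its keyword vectors.
    vecs = [kw_vec[k] for k in keywords]
    return [sum(component) / len(vecs) for component in zip(*vecs)]

def greedy_pathway(start):
    """Chain artworks by always stepping to the most similar unvisited one."""
    path = [start]
    remaining = set(artworks) - {start}
    while remaining:
        current = artwork_vector(artworks[path[-1]])
        nxt = max(remaining,
                  key=lambda a: cosine(current, artwork_vector(artworks[a])))
        path.append(nxt)
        remaining.remove(nxt)
    return path

print(greedy_pathway("A"))  # ['A', 'B', 'C', 'D']
```

The toy data is chosen so that the greedy walk reproduces the mountain-to-sea drift described above: mountain meadow, then mountain lake, then fog, then the rocky coast by the sea.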

We also presented one very concrete solution for a room in Belvedere’s permanent exhibition, a room about “Viennese portraiture in the Biedermeier period” assembling the “greatest portrait painters” of this period. In the above picture you can see four blue frames indicating empty slots which we would like to fill using our algorithm, with the respective neighboring artworks as input.

The keywords for these neighboring artworks, however, are purely descriptive, e.g. ‘headgear’, ‘necklace’, ‘bonnet’, ‘eye contact’, and probably do not do the semantic content of the artworks full justice. We believe that one underlying topic of the Biedermeier room is ‘gender’, with all but one painting depicting females. We therefore add an additional algorithmic constraint by requiring every suggested artwork both to be part of a pathway and to carry a ‘gender’-related keyword. Since ‘gender’ is not a keyword in the Belvedere taxonomy, we use word embedding to obtain Belvedere keywords with high similarity to the topic of ‘gender’. This translation step yields keywords like ‘femaleness’, ‘religion’, ‘islam’, ‘equality’, ‘motherhood’ or ‘headscarf’. It is obvious that these keywords point to a stereotypical discourse of gender, quickly derailing towards topics of religion and a compulsion to wear headscarves, or towards women being seen predominantly in their role as mothers.
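The translation step can be sketched as a nearest-neighbor lookup: rank all in-taxonomy keywords by their embedding similarity to the off-taxonomy query and keep the top hits. The vectors below are invented toy data; a real system would use trained word2vec vectors and the full Belvedere keyword list.

```python
from math import sqrt

# Invented toy vectors; 'gender' is the off-taxonomy query concept, the
# rest stand in for keywords that do exist in the museum taxonomy.
vectors = {
    "gender":     [0.1, 0.9, 0.3],
    "femaleness": [0.2, 0.8, 0.3],
    "motherhood": [0.1, 0.7, 0.5],
    "headscarf":  [0.3, 0.6, 0.4],
    "mountain":   [0.9, 0.1, 0.0],
    "lake":       [0.8, 0.2, 0.1],
}
taxonomy = ["femaleness", "motherhood", "headscarf", "mountain", "lake"]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def translate(query, k=3):
    """Rank in-taxonomy keywords by embedding similarity to the query."""
    ranked = sorted(taxonomy,
                    key=lambda w: cosine(vectors[query], vectors[w]),
                    reverse=True)
    return ranked[:k]

print(translate("gender"))  # ['femaleness', 'motherhood', 'headscarf']
```

Whatever biases the training corpus carries are inherited by this ranking, which is exactly how the stereotypical keyword list described above comes about.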

This is also why we termed the use of word embedding in this context world embedding: it confronts the very rigid taxonomy of the Belvedere keywords (based on Iconclass, a classification system for cultural content) with everyday language as represented in the textual training data of the word embedding. It thereby recontextualizes or even “resocializes” taxonomic art histories via natural language processing since it uncovers biases and prejudice in our use of language and (re-?) introduces them to the world of fine arts.

The above picture shows three paintings from the Biedermeier room plus four additional paintings (with red frames) which our algorithm suggests. The second painting from the left is suggested because its keyword ‘femaleness’ is a gender keyword and its keyword ‘necklace’ makes it similar to the keywords of the first painting (‘earrings’, ‘pearl necklace’) and the one in the middle (‘brooch’, ‘bracelet’). The fifth painting from the left is suggested because ‘headscarf’ is a gender keyword and ‘eye contact’ and ‘earring’ make it similar to both the painting in the middle (‘brooch’, ‘bracelet’, ‘eye contact’) and the painting on the far right (‘eye contact’, ‘bonnet’).
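The two constraints can be sketched as a simple filter over candidate artworks. The data below is hypothetical, and for brevity the sketch matches keywords by exact set overlap, whereas the real system compares them by embedding similarity (so e.g. ‘necklace’ can match ‘pearl necklace’).

```python
# Keywords the translation step marked as gender-related (toy data).
gender_keywords = {"femaleness", "motherhood", "headscarf"}

# Keywords of the artworks already hanging next to the empty slot.
neighbors = {
    "left":   {"earrings", "pearl necklace"},
    "middle": {"brooch", "bracelet", "eye contact"},
}

# Hypothetical candidate artworks from the museum's holdings.
candidates = {
    "portrait_a": {"femaleness", "earrings"},   # gender kw + neighbor link
    "portrait_b": {"brooch", "eye contact"},    # neighbor link, no gender kw
    "landscape":  {"mountain", "lake"},         # neither constraint
}

def suggest(cands):
    """Keep candidates that carry a gender keyword AND link to a neighbor."""
    out = []
    for name, kws in cands.items():
        has_gender = bool(kws & gender_keywords)
        has_link = any(kws & nkws for nkws in neighbors.values())
        if has_gender and has_link:
            out.append(name)
    return out

print(suggest(candidates))  # ['portrait_a']
```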

In the ensuing discussion with the conference’s audience Arthur Flexer argued that our semantic approach is more helpful for building a curatorial narrative than a purely aesthetic procedure. It allows us to answer questions about curatorial gaps between artworks shown in an exhibition: what works of art exist in the holdings of the museum that fit the curatorial narrative but did not succeed in becoming part of the exhibition?

He also tried to make clear that by using a machine learning tool such as word embedding, curating becomes a joint endeavor of man and machine, where curatorial decisions have to be formulated as input and constraints to the algorithm. But even a simple curatorial Google search already is an interaction of man and machine, with algorithms (opaque to the curator) nevertheless shaping the curatorial enterprise to a certain extent by showing specific selections of information only. It was also discussed that such a man/machine approach is able to uncover algorithmic biases in the methods used, e.g. stereotypical representations of societal discourse in word embedding.

Looking towards future extensions of our work, we could of course analyse longer (art historic) texts about artworks with the same methodology, thereby gaining a much richer semantic context than by relying on simple keywords only. Another possible extension is to embed semantic and visual information simultaneously, which could yield curatorial solutions that respect semantic and visual constraints at the same time [Frome et al 2013].

Categories: Activities

DAD at “The Art Museum in the Digital Age – 2021” conference

Photo: Johannes Stoll, © Belvedere, Vienna

The DAD team will talk about “WOR(L)D EMBEDDING – Curating/Computing/Displaying Semantic Pathways through Belvedere’s Online Collection” at the annual conference of the Belvedere Research Center. The conference is concerned with the digital transformation of art museums and takes place online from 11 to 15 January 2021. You can find out about the program and how to register for free at the respective website.

Categories: Case Studies

“The picture is white” – on visual versus semantic analysis of fine art paintings

When applying machine learning to fine art paintings, one obvious approach is to analyse the visual content of the paintings. We discuss two major problems which caused us to take a semantic route instead: (i) state-of-the-art image analysis has been trained on photos and does not work well with paintings; (ii) visual information obtained from paintings is not sufficient for building a curatorial narrative.

Let us start by using the DenseCap online tool to automatically compute captions for a photo of two dogs playing.

The DenseCap model correctly identifies the dogs and many of their properties (e.g. “the dog is brown”, “the dog has brown eyes”, “the front legs of a dog”, “the ear|head|paw of a dog”) as well as aspects of the background (“a piece of grass”, “a leaf on the ground”). There are some wrong captions for the dogs (“the dogs tail is white”, although there is no tail in the picture) and for the background as well (“the fence is white”). But all in all the computer vision system does a good job at what it has been trained to do: localize and describe salient regions in images in natural language [Johnson et al 2016].

Let us now apply this system to a fairly realistic dog painting from the collection of our partner museum Belvedere, Vienna.

Again many characteristics of the dog are correctly identified (“the dog is looking at the camera”, “the eye|ear of the dog”), as is “a bowl of food”. The background already poses more problems, with some confusions still comprehensible (“a white napkin”, “the curtain is white”) but others less so (“the flowers are on the wall”).

Testing the system on a more abstract painting, Belvedere’s collection highlight “The Kiss” by Gustav Klimt, yields even stranger results.

While some captions are correct (“the mans hand”, “the dress is yellow”, “the flowers are yellow|green”), others are somewhat off (“the hat is on the mans head”) or just completely wrong (“the picture is white”, “the wall is made of bricks”, “a black door”, “a window on the building”). The essential aspect of the painting, a man and a woman embracing, is not comprehended at all. Of course this is understandable, since the DenseCap system has been trained on 94,000 photo images (plus 4,100,000 localized captions) but not on fine art paintings, which explains why it cannot generalize to more abstract forms of art.

On the other hand, even if an image analysis system could perfectly detect that “Caesar am Rubicon” shows a dog looking at a sausage in a bowl on a table, it would still not grasp the meaning of the painting: Caesar is both the name of the dog and of the historical figure whose crossing of the Rubicon was a declaration of war on the Roman Senate, ultimately leading to Caesar’s ascent to Roman dictator. Hence “crossing the Rubicon” is now a metaphor for passing a point of no return.

The same holds for Gustav Klimt’s “The Kiss”. Even if the image analysis system were not fooled by Klimt’s use of mosaic-like two-dimensional Art Nouveau organic forms and would be able to detect two lovers embracing in a kiss, it would still not grasp the significance of the decadence conveyed by the opulent exalted golden robes or the possible connotation to the tale of Orpheus and Eurydice.

The DAD project is about exploring the influence of Artificial Intelligence on the art of curating. From a curatorial perspective, grasping the semantic meaning of works of art is essential to build curatorial narratives that are not just based on a purely aesthetic procedure. See our previous blogposts [1][2] on such a semantics-driven approach towards the collection of the Belvedere, where we chose to analyse text about the paintings rather than the paintings themselves.