These include a model of a weaving loom programmable by punch cards and a transparent model of the human brain.
There are also interactive stations, e.g. one trying to guess your emotional state via camera, which is apparently not easy with visitors wearing masks due to COVID-19. You can also try to teach a dog/cat classifier by showing it respective pictures.
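The classifier station demonstrates supervised learning in miniature: you show the system labeled examples, and it generalizes to new pictures. A toy sketch of this idea, using a nearest-centroid classifier on made-up feature vectors (the features and numbers are purely illustrative, not the museum's actual system):

```python
import numpy as np

# Hypothetical 2-D feature vectors (say, ear pointiness and snout length)
# extracted from the example pictures a visitor shows the system.
dog_examples = np.array([[0.2, 0.9], [0.3, 0.8], [0.25, 0.85]])
cat_examples = np.array([[0.8, 0.3], [0.9, 0.2], [0.85, 0.25]])

# "Teaching" the classifier amounts to averaging the examples per class.
centroids = {
    "dog": dog_examples.mean(axis=0),
    "cat": cat_examples.mean(axis=0),
}

def classify(features):
    """Assign the label of the nearest class centroid."""
    return min(centroids, key=lambda label: np.linalg.norm(features - centroids[label]))

print(classify(np.array([0.28, 0.82])))  # a dog-like example -> "dog"
print(classify(np.array([0.9, 0.25])))   # a cat-like example -> "cat"
```

Real exhibits use deep networks rather than centroids, but the principle is the same: more (and more varied) examples shown by visitors shift the learned decision boundary.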
The top floor is dedicated to the intersection of AI and the Arts, concentrating on music, literature and visual arts.
The DAD team will talk about “WOR(L)D EMBEDDING – Curating/Computing/Displaying Semantic Pathways through Belvedere’s Online Collection” at the annual conference of the Belvedere Research Center, which is concerned with the digital transformation of art museums. The conference takes place as an online meeting from 11 to 15 January 2021. The program and free registration are available on the conference website.
When applying machine learning to fine art paintings, one obvious approach is to analyse the visual content of the paintings. We discuss two major problems which caused us to take a semantic route instead: (i) state-of-the-art image analysis has been trained on photos and does not work well with paintings; (ii) visual information obtained from paintings is not sufficient for building a curatorial narrative.
Let us start by using the DenseCap online tool to automatically compute captions for a photo of two dogs playing.
The DenseCap model correctly identifies the dogs and many of their properties (e.g. “the dog is brown”, “the dog has brown eyes”, “the front legs of a dog”, “the ear|head|paw of a dog”) as well as aspects of the background (“a piece of grass”, “a leaf on the ground”). There are some wrong captions for the dogs (“the dogs tail is white”, but there is no tail in the picture) and for the background (“the fence is white”). All in all, though, the computer vision system does a good job at what it has been trained to do: localize and describe salient regions in images in natural language [Johnson et al. 2016].
Let us now apply this system to a fairly realistic dog painting from the collection of our partner museum Belvedere, Vienna.
Again many characteristics of the dog are correctly identified (“the dog is looking at the camera”, “the eye|ear of the dog”), as is “a bowl of food”. The background already poses more problems: some confusions are still comprehensible (“a white napkin”, “the curtain is white”), but others less so (“the flowers are on the wall”).
Testing the system on a more abstract painting, Belvedere’s collection highlight “The Kiss” by Gustav Klimt, yields even stranger results.
While some captions are correct (“the mans hand”, “the dress is yellow”, “the flowers are yellow|green”), others are somewhat off (“the hat is on the mans head”) or just completely wrong (“the picture is white”, “the wall is made of bricks”, “a black door”, “a window on the building”). The essential aspect of the painting, a man and a woman embracing, is not comprehended at all. This is understandable, since the DenseCap system has been trained on 94,000 photo images (plus 4,100,000 localized captions) but not on fine art paintings, which explains why it cannot generalize to more abstract forms of art.
On the other hand, even if an image analysis system could perfectly detect that “Caesar am Rubicon” shows a dog looking at a sausage in a bowl on a table, it would still not grasp the meaning of the painting: Caesar is both the name of the dog and of the historical figure whose crossing of the Rubicon was a declaration of war on the Roman Senate, ultimately leading to Caesar’s rise to Roman dictator. Hence “crossing the Rubicon” is now a metaphor for passing a point of no return.
The same holds for Gustav Klimt’s “The Kiss”. Even if the image analysis system were not fooled by Klimt’s mosaic-like, two-dimensional Art Nouveau organic forms and could detect two lovers embracing in a kiss, it would still not grasp the significance of the decadence conveyed by the opulent, exalted golden robes or the possible allusion to the tale of Orpheus and Eurydice.
The DAD project explores the influence of Artificial Intelligence on the art of curating. From a curatorial perspective, grasping the semantic meaning of works of art is essential for building curatorial narratives that are not based on a purely aesthetic procedure alone. See our previous blog posts on such a semantics-driven approach to the Belvedere collection, where we chose to analyse text about the paintings rather than the paintings themselves.
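The semantic route can be sketched in miniature: represent each painting by text written about it, turn that text into a vector, and let vector similarity suggest semantic neighbors for a curatorial narrative. A toy sketch with bag-of-words vectors and cosine similarity (the one-line descriptions are invented for illustration; the actual project works on Belvedere catalogue texts with proper word embeddings):

```python
import math
from collections import Counter

# Invented one-line descriptions standing in for real catalogue texts.
descriptions = {
    "The Kiss": "two lovers embracing in golden robes with flowers",
    "Caesar am Rubicon": "a dog named Caesar looking at a sausage in a bowl",
    "Lovers": "a man and a woman embracing tenderly",
}

def embed(text):
    """Bag-of-words vector: word -> count."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)  # missing keys in b count as 0
    norm = lambda v: math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm(a) * norm(b))

vectors = {title: embed(text) for title, text in descriptions.items()}

# Nearest semantic neighbor of "The Kiss": it shares the word "embracing"
# with "Lovers", a link no purely visual feature would surface.
query = vectors["The Kiss"]
neighbor = max((t for t in vectors if t != "The Kiss"),
               key=lambda t: cosine(query, vectors[t]))
print(neighbor)  # -> "Lovers"
```

Word embeddings replace the brittle exact-word overlap used here with learned vectors in which, e.g., “embracing” and “kissing” also land close together, but the retrieval principle is the same.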
On 12 November 2020, DAD’s Niko Wahl presented our intermediate results at the third research day of the Academy of Fine Arts, Vienna. Due to COVID-19, the event was held as an online Zoom meeting. The goal of the ]a[ research days is to give an overview of all ongoing research projects at the Academy, including discussions with all participating colleagues.
Niko Wahl gave a short introduction to our project and an overview of DAD’s collaborations with three different museums, where we work with the archive of an ethnological journal, a fine arts gallery, and the statues in the Academy’s Glyptothek.
Since our work with the Austrian Museum of Folk Life and Folk Art and with the Belvedere, Vienna, has already been documented in previous blog posts, let’s turn to the presentation of plaster casts at the Academy’s Glyptothek, which we explored with Dusty, an off-the-shelf household robot.
Many people associate Artificial Intelligence (AI) with the development of ever more powerful and dextrous robots, along with horror scenarios of these machines taking over the planet. In reality robots are a small part of AI which is rather dominated by machine learning software solutions powering your Internet search engine, the natural language interface to your mobile phone, online music, movie and product recommendations and many other everyday technologies.
On the other hand, many people already own robots with limited forms of AI, for instance vacuum cleaning robots. What if we confront such a household robot with a – supposedly obsolete – museum collection of historic plaster copies of famous statues, whose very physis seems to be made of dust?
The robot takes its own route through the museum space. Following its built-in algorithms it perpetually finds new ways through the collection. It seemingly decides for itself in what order to visit the museum objects, all the time metaphorically internalizing the objects of art while inhaling their dust.
Other visitors are free to follow the robot on its path through the museum space, engaging with its exhibition narrative. They might benefit from surprising relationships between objects of art established by the often creative course of the robot. Smart latest-generation vacuum cleaning robots are able to share their sensory experiences with others of their kind. These shared experiences are usually measurements of objects and how to avoid them when traversing a room. But what if this cloud communication, usually not accessible to us, deals with objects of art instead of everyday items? Will meeting David or the Pietà change the robots’ discourse? What if the robot meets a portrait of itself?
Do you remember Dusty, the vacuum cleaner robot that explored a model version of the Glyptothek during this spring’s COVID-19 related lockdown? This summer Dusty was able to experience the real Glyptothek, using its somewhat limited artificial intelligence to, essentially, avoid obstacles on its way through the maze of shelves full of plaster casts.
The Glyptothek of the Academy of Fine Arts, Vienna, is a collection of plaster casts dating back to the late 17th century. Its main task was to serve as study material for Academy students; it contains copies of a canon of world-renowned sculptures, ranging from plaster casts of Egyptian originals to copies of Greek and Roman, medieval, Renaissance and historicist statues. This collection of copies of works of art can be seen as an early analog blueprint of digital collections: the Glyptothek made the essence of European sculpture available to local audiences, who could enjoy international pieces of art without leaving their home town, much like today’s internet population can access digital images of the world’s artistic heritage at the click of a handheld device.
Speaking of digital images, the above image of Dusty in the Glyptothek is actually a digital copy of an analog photograph, which is itself an analog copy of a plaster cast, which is a copy of a statue, which is a copy of a real (or imagined) person …
During Bob Sturm’s research visit we will discuss the frontiers of artificial creativity and its criticism in the context of DUST AND DATA. Bob Sturm will also give a public lecture about his work on using machine learning to compose Irish folk music. His talk will also feature live accordion playing.
“Folk the Algorithms” – Bob Sturm, KTH Royal Institute of Technology, Stockholm, Sweden
In this talk/musical performance, I will recount how a bit of Saturday morning humor turned into an ERC Consolidator Grant four years later. It’s a story of an engineer with an artistic bent meeting a machine learning algorithm through a blog. One part of the story involves the naive misappropriation of music data without consideration of its provenance and significance. Another part involves the serious contemplation of such transgressions, and then endeavors taken to redress them. A variety of interesting perspectives and questions have arisen out of this story, which will be subject to study in the project, Music at the Frontiers of Artificial Creativity and Criticism (MUSAiC, ERC-2019-COG No. 864189).
Time: Wednesday, 26th of February 2020, 6:30 p.m. sharp