On a blind spot in distant reading

The accessibility of vast amounts of text in digital form has enabled the humanities to add ‘distant reading’, the algorithmic analysis of thousands of books, as a new research tool to their repertoire of methods. This ‘distant reading’ of course has to be complemented by traditional ‘close reading’ of individual books. See e.g. [Jänicke et al. 2015] for a survey and a discussion of challenges.

We are currently working on applying machine learning as a distant reading tool to the journal Österreichische Zeitschrift für Volkskunde (OEZV) of the Austrian Museum of Folk Life and Folk Art. The OEZV has been published almost continuously since 1895, and we are able to use the results of an optical character recognition (OCR) scan of its entire publication history for our analysis. One of our interests is how the discourse and the corresponding topics have changed over the years of its publication.

We applied topic modelling via Latent Dirichlet Allocation (LDA) for this analysis. Topic modelling tries to uncover latent topics in large amounts of text. Its basic entities are documents, which are modelled as probability distributions over word occurrences. The assumption is that documents contain text about different topics, which manifests itself in the use of different words. The latent topics are in turn also modelled as probability distributions over word occurrences, since different topics use different words related to their respective content. LDA then finds a probability distribution of topics for each document, trying to optimize the separation of documents across topics. The assumption here is that each document covers rather few, distinct topics, although overlaps are of course allowed.
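To make this concrete, here is a minimal sketch of the LDA workflow in Python with gensim; the toy documents and the choice of library are ours and not necessarily those used for the OEZV analysis. It shows the two kinds of distributions described above: topics over words and documents over topics.

```python
# A minimal LDA sketch with gensim; the tiny toy corpus is purely illustrative.
from gensim.corpora import Dictionary
from gensim.models import LdaModel

docs = [
    ["apostel", "passionsspiele", "christusfigur", "kirche"],
    ["taenzer", "bandltanz", "getanzt", "musik"],
    ["kirche", "gruendonnerstag", "getanzt", "taenzer"],
]  # already tokenized toy documents

dictionary = Dictionary(docs)                    # word <-> id mapping
corpus = [dictionary.doc2bow(d) for d in docs]   # bag-of-words counts per document

lda = LdaModel(corpus, id2word=dictionary, num_topics=2, random_state=0)

print(lda.show_topic(0, topn=5))            # a topic: distribution over words
print(lda.get_document_topics(corpus[0]))   # a document: distribution over topics
```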

Usually a document is one article in a journal or newspaper, but for the OEZV we have no access to article boundaries; therefore our basic entities are individual pages (more than 34,000 across all years of the OEZV). Since we are interested in topic evolution over time, we aggregated all pages of each year into one overall document. We then modelled all OEZV volumes as distributions over 30 topics and explored the result with the LDA visualization package pyLDAvis.
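The sketch below illustrates this year-level pipeline under our own assumptions: the pages_by_year mapping is a small stand-in for the real OCR output, and only the overall steps (concatenate all pages of a year into one document, fit a 30-topic LDA model, visualize with pyLDAvis) follow the description above.

```python
# Sketch of the year-level pipeline; pages_by_year is a placeholder for the
# real OCR output ({year: list of tokenized pages}), not the actual OEZV data.
from gensim.corpora import Dictionary
from gensim.models import LdaModel
import pyLDAvis
import pyLDAvis.gensim_models as gensimvis   # called pyLDAvis.gensim in older releases

pages_by_year = {
    1895: [["tracht", "brauch", "fest"], ["lied", "tanz", "brauch"]],
    1900: [["lied", "tanz", "musik"]],
}

# Aggregate all pages of a year into one overall document.
yearly_docs = {
    year: [token for page in pages for token in page]
    for year, pages in pages_by_year.items()
}

years = sorted(yearly_docs)
dictionary = Dictionary(yearly_docs[y] for y in years)
corpus = [dictionary.doc2bow(yearly_docs[y]) for y in years]

lda = LdaModel(corpus, id2word=dictionary, num_topics=30, random_state=0)

vis = gensimvis.prepare(lda, corpus, dictionary)
pyLDAvis.save_html(vis, "oezv_topics.html")
```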

In this visualization, every topic is represented as a circle in the left part of the figure. When one of the circles is clicked (no. 17 in our picture), the right part of the figure shows the distribution of words which are most prevalent for this topic. Some of the words have a religious meaning, like ‘apostel’, ‘christusfigur’, ‘passionsspiele’ or ‘gründonnerstag’. Others are more about dancing, like ‘tänzer’, ‘bandltanz’ or ‘getanzt’, which might lead to the conclusion that this topic is about religion and certain folkloristic rituals around it. It has to be said that such topics are often hard to interpret, and 30 topics for the entire OEZV collection may be too coarse a resolution.

Every year of the OEZV is now a distribution over 30 topics, i.e. a vector of probabilities of size 30. We can use this representation to compute how similar the annual volumes of the OEZV are in their distribution across topics. In the figure above, both axes are the years of publication, running from 1895 in the bottom left corner to 2018 at the bottom right and top left. One coloured cell in this figure represents the similarity of two years (the inverted distance between their probability vectors), with dark blue being very similar, bright yellow not similar and green in-between. The main diagonal is therefore dark blue, since the OEZV of one year is of course very similar to itself. The most interesting patterns are the larger dark blue rectangles along the main diagonal, indicating a number of consecutive years which are all similar to each other. This is most evident for the years 1940 to 1944, highlighted with a red ring in the figure. This time span coincides with Austria being part of Nazi Germany’s Third Reich, so it is to be expected that the discourse in the OEZV during these years differs markedly from all other years of its publication.
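As a rough illustration of how such a similarity matrix can be computed and plotted, here is a sketch with random stand-in data; the text above does not spell out the concrete distance measure, so the Jensen-Shannon distance used below is our assumption.

```python
# Sketch of the year-by-year similarity heatmap. The distance measure and the
# random stand-in topic distributions are our assumptions, not the real data.
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(0)
topic_dist = rng.dirichlet(np.ones(30), size=124)   # one 30-topic vector per volume
years = np.arange(1895, 1895 + len(topic_dist))

n = len(topic_dist)
similarity = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        # Inverted distance: identical distributions -> 1, very different -> near 0.
        similarity[i, j] = 1.0 - jensenshannon(topic_dist[i], topic_dist[j])

plt.imshow(similarity, origin="lower",
           extent=[years[0], years[-1], years[0], years[-1]])
plt.colorbar(label="similarity (1 - distance)")
plt.xlabel("year")
plt.ylabel("year")
plt.show()
```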

However, looking directly at the results of the OCR scan for the years 1940 to 1944, we realize that all of it is some kind of pseudo-German nonsense language, e.g.: “Vrautbater fteßte eine Dteipe bon Dtätfelfragen, auf bie ber Vrautfüprer alg Veboßmäiptigter beg Vräutigamg bie paffenbe Slnimori finbett mufgte”. To understand what happened here, we look directly at the PDF files of the OEZV; one page from the year 1944 is shown below. As you can see, a special blackletter typeface called ‘Frakturschrift’ is used, which was typical for Nazi Germany.

Compare this to a page from the year 1895, where a more common typeface was used, as in all years except 1940 to 1944.

Apparently the OCR scan failed miserably on the ‘Frakturschrift’ typeface, with the result that the years 1940 to 1944 use their “own” language, a kind of gibberish German. This of course has a very harmful impact on our machine learning approach, since the years 1940 to 1944 use completely different (nonsensical) words than all other years. As a result, these years end up having very different distributions across words and topics. Hence the high similarity between the years 1940 to 1944 turns out not to be a significant result after all, but an artefact of the processing pipeline, with the OCR mistake propagating through the whole system.
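One quick sanity check that makes such an artefact visible (a diagnostic sketch of ours, not part of the original analysis) is to measure how much of each year’s vocabulary also occurs in the rest of the corpus; for the Fraktur years this overlap should be close to zero.

```python
# Diagnostic sketch (our illustration, not part of the original pipeline):
# the OCR artefact shows up as yearly vocabularies that barely overlap with
# the rest of the corpus.
def vocab_overlap(year, yearly_docs):
    """Fraction of one year's vocabulary that also occurs in any other year."""
    own = set(yearly_docs[year])
    rest = set().union(*(set(tokens) for y, tokens in yearly_docs.items() if y != year))
    return len(own & rest) / len(own) if own else 0.0

# Toy stand-in for the real {year: tokens} mapping built earlier:
yearly_docs = {
    1895: ["tracht", "brauch", "fest", "lied"],
    1896: ["brauch", "lied", "tanz"],
    1944: ["vrautbater", "dteipe", "paffenbe"],   # OCR gibberish
}
for year in sorted(yearly_docs):
    print(year, round(vocab_overlap(year, yearly_docs), 3))
# A year whose overlap is (near) zero, like 1944 here, is a strong hint that
# something went wrong upstream in the OCR.
```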

Nevertheless, we find this result interesting, because the time when Austria was part of the Third Reich has always been a sort of blind spot for Austrian society, which took decades to accept its own disreputable role in the horrific events of this historic time span, slowly moving from “Hitler’s first victim” to perpetrator and culprit. It is therefore quite ironic that this blind spot reappears via distant reading of Austria’s main scientific journal on Austrian folk life and folk art …