CIRMMT Workshop, September 7th, 2013, Part III: Music Digitization in the World
Posted by Catherine Motuz on September 14, 2013
Laurent Pugin: Looking at printed music anthologies in the context of digitization
Laurent Pugin, a former McGill postdoc now working for RISM Switzerland, focused on the challenges of using OMR software on large collections of music. Using Aruspix, a programme that combines machine learning with a graphical user interface (GUI) to perform OMR on early printed music, Laurent has been processing the resources of Early Music Online. This resource, the product of a JISC rapid digitisation grant involving Royal Holloway, the British Library and RISM UK, includes 324 anthologies of printed music from the sixteenth century, or around 10,000 pieces. The pieces are rich in metadata, but because the images are produced from microfilms, the image quality is variable.
The variable quality, along with differences in book format, notation type and printing method, reduced the selection of pieces that Aruspix could process. Partbooks and choirbooks were machine-readable, but table books, with parts oriented in different directions on the page, were not (though this would be an easy problem to solve). Lute and organ tablatures, square notation and mixed notations were all excluded, and only single-impression printing was processable: no engravings or woodcuts. Even with all of these limitations, however, Laurent was able to process 260 books, or 80% of the collection. The resulting set covered 49 printers, well distributed among Venice, Antwerp, Paris and Nuremberg, showing that it was probably a very good sample not only of the collection but of the roughly 4,200 sixteenth-century prints found in RISM A/I.
Aruspix was used to run OMR on one page per source; then, using the settings that produced optimal results for that page, the whole source was processed. Aruspix achieved a recognition score of 80-100% for three quarters of the pages, with some user interaction required to supply missing clefs and to help the computer follow parts laid out over multiple pages. The high success rate and relatively small amount of human interaction proved the concept that it is indeed possible to run OMR effectively on a large scale, validating a fundamental premise of the work presently going on around the world and in the DDMAL lab.
Perry Roland: The current state of the Music Encoding Initiative
One of the key elements underpinning Rodan is a sophisticated and versatile music-encoding language, MEI, which allows the encoding not only of different types of music notation but also of other elements such as the position of notes on a page, simultaneous variants, and many other attributes. But MEI is not just a framework for encoding music, Perry reminded us: it is also an organization, a research community, a music notation data model and a markup language.
MEI stands for “Music Encoding Initiative,” but the acronym also has hidden meanings. “Mei” in Chinese means beautiful and well-built, and it is no accident that the meetings surrounding MEI’s development, as well as new releases, tend to take place in the fifth month of the year. This year the meeting took the form of a two-and-a-half-day music encoding conference in Mainz, Germany, where, as well as exchanging ideas about how to encode music, the MEI team obtained support for a work group and began discussing how they might run themselves as an organization. Among the projects discussed, the Freischütz Digital project functions as a proof of concept to show off MEI’s unique ability to encode variations between different sources of a piece. Activities are planned well into 2014, but Perry stressed that the present grant supporting MEI development runs out on October 15th of this year, and they are seeking alternative ways to support themselves.
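To give a sense of what such variant encoding looks like, here is a minimal, hypothetical sketch (the readings and source labels are invented, not drawn from the Freischütz Digital data). MEI’s critical-apparatus elements let an encoder record alternative readings of the same passage side by side, each tied to the source that transmits it:

    <!-- two sources disagree about how this beat is notated -->
    <app>
      <rdg source="#sourceA">
        <note pname="g" oct="4" dur="4"/>
      </rdg>
      <rdg source="#sourceB">
        <note pname="g" oct="4" dur="8" dots="1"/>
        <note pname="a" oct="4" dur="16"/>
      </rdg>
    </app>

The identifiers #sourceA and #sourceB would point to source descriptions declared in the file’s header, so every divergent reading stays traceable to the physical score it comes from.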
Perry then brought us up to date on MEI’s capabilities as of the May 2013 release. He reminded us that, far from being a fixed format (or even a file format), MEI is simply a framework that allows for detailed music encoding. It functions on the principle of ODD (“One Document Does it all”), meaning that a single file contains not only the musical data of a piece but also functionality rules and even documentation. Of course MEI has progressed beyond the theory of being just a framework: out of the box, the newest version comes with 24 modules, or customizations, giving the user the ability to encode metadata; neume, mensural and common Western music notation; graphical position data; library information; the critical apparatus of a piece; editorial notes (composer additions, etc.); and even the contextual stylistic rules of a given piece.
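To make this concrete, here is a minimal sketch of what a complete MEI file can look like for common Western notation (an illustrative fragment, not an example taken from the release): the metadata header and the encoded music travel together in a single XML document.

    <mei xmlns="http://www.music-encoding.org/ns/mei">
      <meiHead>
        <!-- descriptive metadata about the piece and its sources lives here -->
        <fileDesc>
          <titleStmt>
            <title>Untitled fragment</title>
          </titleStmt>
          <pubStmt/>
        </fileDesc>
      </meiHead>
      <music>
        <body>
          <mdiv>
            <score>
              <scoreDef>
                <staffGrp>
                  <staffDef n="1" lines="5" clef.shape="G" clef.line="2"/>
                </staffGrp>
              </scoreDef>
              <section>
                <measure n="1">
                  <staff n="1">
                    <layer n="1">
                      <!-- middle C, quarter note, followed by a quarter rest -->
                      <note pname="c" oct="4" dur="4"/>
                      <rest dur="4"/>
                    </layer>
                  </staff>
                </measure>
              </section>
            </score>
          </mdiv>
        </body>
      </music>
    </mei>

The same header could equally carry library information or a critical apparatus, drawing on the other modules Perry listed.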
At this point, Julie asked whether anyone had been working on encoding text and music together in a single document, to which Perry replied that it is indeed being worked on: TEI (the Text Encoding Initiative) has a customization to embed MEI, and the Digital Duchemin Project employs both TEI and MEI together.
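One possible shape for such a combined document (a sketch with an invented file name, not an excerpt from the Duchemin edition) uses TEI’s notatedMusic element to point from the edited text to a separate MEI encoding; embedding the MEI markup inline is what the customization Perry mentioned makes possible.

    <text xmlns="http://www.tei-c.org/ns/1.0">
      <body>
        <div>
          <p>The chanson's opening phrase sets the first line of the poem:</p>
          <notatedMusic>
            <!-- hypothetical file name for the MEI encoding of this passage -->
            <ptr target="chanson-opening.mei"/>
            <desc>Superius, bars 1-4</desc>
          </notatedMusic>
        </div>
      </body>
    </text>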
Perry ended his talk by giving a glimpse into future developments, including genetic editions (which encode not only what is on the page but also try to reconstruct the compositional process), more translators and tools, and tools not only to encode variants in pieces but to compare them.
Mitchell Brodsky: New York Philharmonic Digital Archives
Mitchell Brodsky came to us from the archives of the third-oldest orchestra in the world, the New York Philharmonic. Founded in 1842, it is also the oldest orchestra in the USA. It was formed as a collective with a taste for posterity, beginning its first season with Beethoven’s monumental 5th and self-consciously collecting every score, part, programme, photograph, recording, business record and press clipping since. All this adds up to around 7,000 hours of audio and video and millions of pages of documents. Where does SIMSSA come in? 1.3 million pages of scores and parts, dating from 1943 to 1970 (the years when Leonard Bernstein was working with the orchestra), are now available for perusal at archives.nyphil.org.
The interesting thing about these scores is that the markings on them haven’t been erased. Scores have been marked up by conductors and, in some cases, as when Mahler came to conduct the orchestra, by composers; parts have been marked up with playing notes; and sometimes a single score carries traces of multiple interpretations: we were shown a score of Mahler’s 1st symphony marked up by Mahler, Walter and Bernstein. The NY Phil archives have three goals when it comes to these scores: first, to recognize markings; second, to identify their authors; and third, to figure out how a particular set of markings translates into an interpretation and what it would have sounded like. Sometimes these jobs are easy, but at other times symbols that mean one thing to one conductor in one situation (like a triangle to show a measure in 3) mean something different to another conductor, or to the same one in another circumstance (cue the triangle player!).
While Rodan endeavours to marry OMR, MEI, Diva and other components, its modular nature means that it could be expanded to suit this fascinating digitization project. Not only could it encode the music on these millions of pages, but it could also encode the texts and symbols (printed and scrawled), incorporate handwriting recognition software to try to determine whose hand is whose, and help make sense of how conductors and composers related to scores in the middle of the twentieth century.