Posted by neshraghi on November 7th, 2016
The tenth SIMSSA Workshop, hosted by the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT) at McGill University, took place on Saturday, September 24, 2016. These workshops started out as demonstrations of current research interests and technologies being developed by graduate students and professional scholars. Bringing together experts from many different disciplines, including music technology, music theory, music cognition, musicology, and others, SIMSSA Workshops continue to showcase cutting-edge advances in music research.
The morning began with an introduction from Professor Ichiro Fujinaga, the principal investigator of the SIMSSA grant. He discussed the history of previous SIMSSA workshops, previous visiting scholars to SIMSSA—such as Leigh van Handel, Jorge Calvo-Zaragoza, and many more—and papers and talks generated by the SIMSSA community, which now number a staggering 45 (not counting those that are forthcoming). Some of the big news that…Read the rest
Posted by ehopkins on August 16th, 2016
Today’s post comes to us from Musicology PhD student Ian Lorenz. Entering his second year working with Julie Cumming and Peter Schubert, he’s working on a project using SIMSSA software to investigate lesser-known 16th-century composer Nicolas Gombert.
Emily Hopkins: How did you first become interested in using computers for music research? Had you done it at all before coming to McGill?
Ian Lorenz: I had not used computers in my music research prior to arriving at McGill. The research that I was involved in for my Master’s degree investigated modality in the music of the fifteenth century, in which I analyzed all of the secular works by Gilles Binchois, Guillaume Dufay, Johannes Ockeghem, Antoine Busnoys, and Johannes Tinctoris by hand. Since coming to McGill, however, I have learned about the expediency of working with computers when working on large-scale corpus studies.
EH: How did you become interested…Read the rest
Posted by ehopkins on August 2nd, 2016
In today’s post, I’m excited to introduce our new post-doc, Claire Arthur! She comes to us most recently from Ohio State University, where she defended her dissertation this past May working under David Huron. She’ll be heading up our analysis axis alongside Julie Cumming and continuing postdoc Reiner Krämer. She’s pictured below in Athens, Greece.
Emily Hopkins: Where are you originally from? What are some highlights: places you’ve lived, schools you’ve studied at, jobs you’ve done?
Claire Arthur: I grew up in Dundas, Ontario, close to McMaster University. I did my undergrad at the University of Toronto, and my master’s at the University of British Columbia. I spent most of my life as a young adult waitressing to pay my way through school. My favourite waitressing gig was at Joe’s Grill in Vancouver, BC — great breakfast joint!
EH: How did you first hear about SIMSSA?… Read the rest
Posted by ehopkins on July 11th, 2016
Sam Howes is a PhD student in Music Theory at McGill, and a researcher for the SIMSSA project. He recently presented a paper at the SMT Early Music Analysis Interest Group Conference in Indiana, "From Module to Schema in Corelli’s Trio Sonatas." I interviewed him for the blog to talk about his research and his experiences with computer music analysis and SIMSSA.
Emily Hopkins: When did you first become interested in using computers to do musical analysis?
Sam Howes: In the summer of 2015 I took an introductory computer science course, and around that same time I discovered Peter Schubert’s “Improvising a Canon” videos on YouTube. The simple description of melodic rules in those videos is what first helped me realize that strict counterpoint can be generated, and even queried, algorithmically.
EH: What was the main research question for this project?
SH: My aim was to better…Read the rest
Posted by ehopkins on June 23rd, 2016
One month ago, the SIMSSA Project and the Schulich School of Music had the privilege of hosting the fourth annual Music Encoding Conference, organized by the Music Encoding Initiative. This year’s conference included nearly 70 delegates from 10 different countries, including about a dozen students.
Participants gathered between activities in the Wirth Music Building lobby.
The full conference program is available as a PDF here. The conference started with workshops on recent developments in Verovio with Laurent Pugin, an Introduction to MEI with Perry Roland, and “Encoding Music at Music Encoding” with Jim DeLaHunt.
We opened the first day of papers with a message from our Dean, followed by Julia Flanders’ keynote, The Provocation of Music: Evolving Paradigms for Markup. The conference featured 20 papers presented over two days. Below, Kate Helsen presents Hartker’s XML: The Optical…Read the rest
Posted by ehopkins on March 21st, 2016
Today’s blog post features the Troubadour Melodies Database, a project developed by Katie Chapman that uses tools and resources developed through SIMSSA and the Cantus Ultimus project. Working with Jan Koláček, one of our international SIMSSA collaborators, she has developed a fully searchable online database of troubadour melodies encoded in the Volpiano font. Katie is currently a PhD candidate in Musicology at Indiana University. Originally from Rock Hill, South Carolina, she also studied Music Theory and History at Furman University (Greenville, SC) and performed on bassoon and contrabassoon.
Emily Hopkins: What inspired the creation of the database? Can you tell us more about how you are using the database in your dissertation research?
Katie Chapman: One of the first inspirations for the database came from Hans Tischler, who described the benefit of having a team bring together study of the different lyric traditions…Read the rest
Posted by ehopkins on February 29th, 2016
Hello everyone! Things have been a little quiet on the blog, but there are lots of interesting things coming up soon. First among these is a post about SIMSSA-related work going on at Dalhousie. I work at McGill, and most of the guest posts so far have been from McGill developers and post-docs, but people work on SIMSSA all over the place. Many thanks to Jennifer Bain, one of our Co-Investigators, who is at Dalhousie and helped connect me to some of their researchers. I interviewed a few of them over email so you can all get to know them and their work at Dalhousie a little better.
First up is Barbara Swanson. Currently a postdoctoral fellow at York University, she was a postdoc at Dalhousie in 2014-15 and continues to work on the SIMSSA project with current Dalhousie students. Originally from Regina, SK, Barbara has lived in…Read the rest
Posted by ehopkins on December 17th, 2015
Today’s post marks the occasion of the Second International Workshop on the Human History Project, which took place a week ago on Saturday, December 12. While not specifically SIMSSA-related, this is an exciting Digital Humanities project, and many of our researchers are also involved in it. I was able to attend so I could write this post and provide you some highlights from the day.
The premise of the Human History Project is simple enough: store the entirety of documented human history in a single database, accessible online. Books, receipts, newspaper articles, lists, anything and everything. Carrying this out, however, is a long and involved process. Some key concepts are:
The event was well-attended and featured…Read the rest
Posted by ehopkins on November 18th, 2015
Last Thursday, SIMSSA participated in the Works in Progress series for McGill Digital Humanities, alongside the DREaM Project. In "Reanimating Corpora: The Single Interface for Music Score Searching and Analysis (SIMSSA) and Distant Reading Early Modernity (DREaM)", researchers from DREaM and SIMSSA shared some of their latest developments and talked about challenges in common when making decisions about how to organize large corpora, whether written texts or musical scores.
SIMSSA started things off, with an overview of SIMSSA presented by Ichiro Fujinaga, our PI and Content Axis leader. He discussed the process of Optical Music Recognition, and what goes into developing the viewing and searching capacities for the interface we’re working towards.
Next up, Alexander Morgan gave a presentation on "Integral Analysis in VIS." He talked about clustering composers’ work by comparing the similarity of the intervallic content of their works, demonstrating this technique with a…Read the rest
Posted by ehopkins on November 6th, 2015
Reiner Krämer has been working with SIMSSA as a Postdoc since July, and has been presenting on some of the work he’s done, most recently at SMT. Today’s post is a guest entry from Reiner, explaining some of his recent work and its implications for music theory and musicology research.
Using machine learning techniques as a compositional tool in music is almost as old as computer programming itself. When Lejaren Hiller and Leonard Isaacson composed the Illiac Suite (also known as String Quartet No. 4) in 1957, the researchers employed probability (p) tables that were used for Markov chain computations.1 One of the main ideas behind Markov models is how to randomly move from one state to another. The task is achieved statistically by creating state transition matrices (STMs). An STM keeps a tally of how many times a state is changed from one…Read the rest
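A toy sketch of the STM idea in Python, using an invented pitch sequence (nothing here comes from the Illiac Suite itself): transitions between successive states are tallied, then normalized into probabilities.

```python
# Sketch of a state transition matrix (STM) for a first-order Markov
# chain. The "states" here are hypothetical pitch names; a real STM
# could be built over any symbolic musical states.
from collections import defaultdict

def build_stm(sequence):
    """Tally how many times each state is followed by each successor."""
    stm = defaultdict(lambda: defaultdict(int))
    for current, nxt in zip(sequence, sequence[1:]):
        stm[current][nxt] += 1
    return stm

def to_probabilities(stm):
    """Convert raw transition tallies into transition probabilities."""
    probs = {}
    for state, successors in stm.items():
        total = sum(successors.values())
        probs[state] = {s: n / total for s, n in successors.items()}
    return probs

melody = ["C", "D", "E", "D", "C", "D", "E", "E"]
stm = build_stm(melody)
probs = to_probabilities(stm)
# After "D", the melody moves to "E" twice and to "C" once,
# so P(E | D) = 2/3 and P(C | D) = 1/3.
```

Sampling from these probabilities row by row is then enough to generate new sequences in the style of the tallied material.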
Posted by ehopkins on October 16th, 2015
As part of CIRMMT’s Distinguished Lectures series, Masataka Goto visited McGill University. He gave a talk on Thursday afternoon, and the next day introduced the CIRMMT Workshop on usability and user experience for music information systems where many SIMSSA researchers and developers also presented.
Dr. Goto’s Thursday lecture on “Frontiers of Music technologies: Singing synthesis and active music listening” covered many different aspects of music technology, both in terms of music creation and appreciation. The music creation part included the singing synthesis system VocaListener, the robot singer system VocaWatcher (pictured below), and a discussion of the influence of singing synthesis superstar Hatsune Miku.
Screenshot from VocaWatcher demonstration on YouTube
The next day, Dr. Goto introduced…Read the rest
Posted by ehopkins on September 30th, 2015
A key part of the ELVIS Project is the ELVIS Database, a crowd-sourced resource of musical scores in symbolic notation (e.g. MEI, MusicXML). The usefulness of the database depends on contributions from all kinds of users, so developing a friendly, functional interface is key. Today’s post features one of our developers in the DDMAL lab discussing some of the big changes he worked on over the summer. Take a look at some of the improvements, and even try uploading a piece!
Guest post by Alex Parmentier
My name is Alex Parmentier. I’m an undergrad in Computer Science at McGill University, and over the last few months I’ve been making big improvements to the ELVIS Database which I’m eager to share.
The two major features I would like to focus on are the all-new upload and update interface and the improved searching interface.
When I began working…Read the rest
Posted by ehopkins on September 14th, 2015
Last week, we released Diva.js 4.0 (Document Image Viewer with AJAX), a new version of our open-source document image viewer. Evan Magoni is the lead developer on Diva in our lab here at McGill, and we’re excited to share what he’s been working on. By using Diva on their own websites, libraries, archives, and museums can present high-resolution document page images in a user-friendly interface that has been optimized for speed and flexibility.
Some highlights of version 4.0 include:
support for the International Image Interoperability Framework (IIIF). Supporting this standardized format makes Diva part of the larger movement to enhance and promote sharing of archival image collections.
Several demos are available at http://ddmal.github.io/diva.js/try/
Other improvements include:
Posted by ehopkins on September 3rd, 2015
Guest post from Alex Morgan, PhD student at McGill
Over the summer we’ve been working to integrate hierarchical clustering into our VIS Music Analysis Framework to help us reveal latent structure in musical corpora. Hierarchical clustering is a type of machine learning that groups data sets by how similar they are to one another. In hierarchical clustering, we begin with n groups each containing one data set. In the example below, each starting group is the intervallic profile of a two-voice piece by Josquin, Lassus, or Morley. The two most similar groups are then merged with one another, leaving us with n - 1 groups. This process is repeated until all of the starting singleton groups are merged into one big group. Then, we look at the visual representation of this hierarchical clustering, called a dendrogram, and focus on the intermediary stages of grouping to see if any meaningful…Read the rest
Posted by ehopkins on August 19th, 2015
SIMSSA has had a busy summer, with many of our team travelling to present on the work we’ve been doing. Highlights include conferences in New York and Brussels, where we were able to give SIMSSA workshops and present on some of our research.
The International Association of Music Libraries, Archives and Documentation Centres (IAML) and the International Musicological Society (IMS) hosted a congress on "Music Research in the Digital Age" from 21-26 June 2015. SIMSSA is, of course, all about digital music research, so several of our team members were in attendance to present on and discuss the subject.
We hosted SIMSSA Workshop VI, well-attended by the group below:
There was also a panel on the Music Encoding Initiative (MEI), chaired by Frans Wiering, featuring SIMSSA speakers including Perry Roland, Andrew Hankinson, Ichiro Fujinaga, Laurent Pugin, Johannes Kepper, and…Read the rest
Posted by ehopkins on August 6th, 2015
Cory McKay is one of our SIMSSA Co-Investigators, and a professor at Marianapolis College. Along with Tristano Tenaglia, one of our research assistants, he’s been doing some great work with jSymbolic software and ways of using it with our ELVIS Project tools. He presented his latest news at a lab meeting the other week and we’re sharing it with you here, too.
jSymbolic extracts statistical descriptors from symbolic music files. These descriptors include the familiar categories of pitch, melody, texture, rhythm, instrumentation, dynamics, and chords (coming soon!). Currently jSymbolic reads MIDI and MEI files, and saves features as ACE XML (with Weka ARFF and CSV coming soon).
These features have some fun applications in terms of machine learning and doing research and analysis with ELVIS — check out Cory’s presentation below for more detail and some neat examples of what we can do.
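As a flavour of what such a statistical descriptor looks like, here is one classic pitch feature (in the spirit of jSymbolic’s pitch-class features, though not its actual implementation), worked out by hand from an invented list of MIDI pitches:

```python
# A simple statistical descriptor of symbolic music: a normalized
# pitch-class histogram. The note list is invented for illustration.
from collections import Counter

def pitch_class_histogram(midi_pitches):
    """Normalized frequency of each of the 12 pitch classes."""
    counts = Counter(p % 12 for p in midi_pitches)
    total = len(midi_pitches)
    return [counts.get(pc, 0) / total for pc in range(12)]

hist = pitch_class_histogram([60, 62, 64, 60, 67, 60])  # C D E C G C
# Pitch class 0 (C) occurs three times out of six notes, so hist[0] == 0.5.
```

Feature vectors like this, saved in a common format, are exactly what downstream machine learning tools (e.g. Weka) consume.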
PDF and…Read the rest
Posted by sagransj on July 10th, 2015
Reiner in his new office in the McGill music technology suite. Picture taken by Jacob Sagrans, July 9, 2015.
Yesterday I sat down and chatted with Reiner Krämer, who recently joined the SIMSSA team at McGill as a post-doctoral fellow. Reiner comes to us from the US, where he recently completed a PhD in music theory at the University of North Texas. His dissertation analyzed algorithmic music created through David Cope’s artificial intelligence software Emily Howell. As part of his dissertation research, Reiner investigated the feasibility of using machine learning techniques to analyze music written by a computer. He was excited to learn about the SIMSSA project last year at the annual meeting of the American Musicological Society and the Society for Music Theory—when he saw the announcement for the postdoc opening, he knew he had to apply.
So far, Reiner has mostly been familiarizing himself…Read the rest
Posted by sagransj on June 30th, 2015
I recently visited the McGill music technology labs, meeting many of the students (plus one postdoc) who are working on the SIMSSA project and learning about the work they are doing this summer.
Post author Jacob Sagrans talking with Ian Karp at the Digital Distributed Music Archives and Libraries (DDMAL) lab at McGill University’s Schulich School of Music. Photo taken by Ling-Xiao Yang, June 22, 2015.
Photo of SIMSSA project members outside the Schulich School of Music in Montreal. From left to right: Alexander Morgan, Ryan Bannon, René Rusch, Ian Karp, Evan Magoni, William Bain, Julie Cumming, Yihong Luo, Jacob Sagrans, Ichiro Fujinaga, Jon Wild, Catherine Motuz, Marina Borsodi-Benson, Andrew Hankinson, Alexandre Parmentier, Tristano Tenaglia, Karen Desmond, Jane Hatter. Photo taken by Darryl Cameron, June 3, 2015.
Below are brief profiles of everyone I talked to and summaries of the work they are doing. This…Read the rest
Posted by ich on March 20th, 2015
The subtitle of the workshop held today was Revisiting the collaborative process between music researchers and computer programmers. The workshop opened with remarks by CIRMMT Distinguished Lecturer Frans Wiering. Four papers were presented by SIMSSA members: Laura Risk and Lillio Mok, The fingerprint algorithm: Detecting and quantifying similarity in fiddle tunes (see below for more details); Jon Wild and Andie Sigler, Towards automated stylistic fingerprinting of Renaissance polyphony using dissonance-treatment schemata; Alex Morgan, Interval succession analysis, dissonance treatment, and transparency in Renaissance treatises and repertoire; and René Rusch and Ryan Bannon, Music analysis as a workflow? An automated approach to studying voice leading in the Bach chorales.Read the rest
Posted by lrisk on March 20th, 2015
Here is a PDF of the presentation that Lillio Mok and I gave today at the Workshop on Digital Musicology: Revisiting the Collaborative Process Between Music Researchers and Computer Programmers. Thanks to CIRMMT (Research Axis 2) for hosting the conference and to Dr. Frans Wiering, the workshop keynote, for his insightful comments. The attached PDF combines our Powerpoint from today with a more detailed presentation that we gave to the ELVIS group a few weeks ago.
Some background on the project:
I’m a doctoral candidate in musicology at McGill, and Lillio Mok is an undergraduate student in Computer Science. We started working on this algorithm in summer 2014.
My dissertation research focuses on traditional music in Quebec, and particularly on some of the musical and cultural shifts that occurred in the late 19th and early 20th centuries and that helped define traditional music (usually…Read the rest
Posted by ich on March 16th, 2015
Catherine Motuz gave a talk entitled: Contrepoint et humanités numériques: L’analyse des models improvisatoires en utilisant “ELVIS” at CESR (Centre d’Études Supérieures de la Renaissance) in Tours, France.Read the rest
Posted by ich on March 11th, 2015
Our paper: Single Interface for Music Score Searching and Analysis (I. Fujinaga and A. Hankinson) has been accepted as a poster at the First International Conference on Technologies for Music Notation and Representation (TENOR 2015) (28–30 May 2015, Paris, France).Read the rest
Posted by ich on March 10th, 2015
Congratulations to Barbara Swanson, our post-doc, for being awarded a two-year SSHRC Postdoctoral Fellowship at York University with Prof. Leslie Korrick. Here’s the title and the project description:
Painting the Concerto delle donne: Female Vocal Virtuosity in Early Modern Italian Art
The Concerto delle donne (1580–1597) was the first professional female vocal ensemble, comprising three virtuosic singers at the court of Ferrara, Italy. Envied by courts across Europe, the Concerto delle donne quickly inspired a craze for female singers and sound. Prestigious courts in Mantua, Florence, and Rome established rival ensembles within a few years of the Concerto delle donne’s debut. No images are known to survive of the three core singers—Laura Peverara, Anna Guarini, and Livia D’Arco. This is despite their riveting vocal virtuosity as described by leading contemporary writers like Torquato Tasso and Giovanni Battista Guarini; their pioneering status as professional women at court, often…Read the rest
Posted by ahankins on February 16th, 2015
We always love hearing how our tools are helping libraries and archives, even if they’re not being used specifically for music analysis. Our Diva.js image viewer is one of our flagship projects, and we’re excited to see what the folks at Duke University Libraries have been doing with Diva.
The David M. Rubenstein Rare Book & Manuscript Library has been engaged in a project to digitize their collection of early Greek manuscripts dating from the 9th to 17th centuries. Each book is captured at a very high resolution. They then convert their digitized images to the Pyramid TIFF format, and present them online. By the end of 2015 they hope to have all 106 manuscripts in their Early Manuscripts Collection digitized and available online.
Additionally, they have significant documents like a manuscript…Read the rest
Posted by ich on February 5th, 2015
Audrey Laplante gave a talk at the VitrinHN / DHShowcase 2015 on 23 January 2015 at Concordia University. The talk was entitled "Introduction to SIMSSA".Read the rest
Posted by ahorwitz on February 4th, 2015
This last weekend, a group of students from McGill attended the 2015 meeting of the Northeast Music Information Special Interest Group (NEMISIG) at Ithaca College and Cornell University in Ithaca, NY. Post-doctoral fellow Andrew Hankinson was there along with graduate students Ling-Xiao Yang and Andrew Horwitz. During the Research Talks section on the first day, Andrew Hankinson gave a talk on SIMSSA, mentioning:
He also gave an overview of the other work happening in the lab, including the CRIC robotic harpsichord under construction and Gabriel Vigliensoni’s work on user context in music recommendation systems.
With all the work that we’ve been putting into music over the last few months, this year’s…Read the rest
Posted by jhatter on January 28th, 2015
Jennifer’s ideas for the exhibit include:
Posted by cmotuz on September 30th, 2014
Yesterday’s CIRMMT Workshop, the fourth to feature the SIMSSA project, kicked off with a summary and update by Ichiro Fujinaga.
Ichiro opened the session with a summary of all SIMSSA has become so far. It is like "Google scores…minus Google." (Well ok, a little bit of Google: Douglas Eck is on our advisory board.) Its main toolset still revolves around Optical Music Recognition, but it now also offers sophisticated musical querying, in the form of ELVIS.
The main goal of SIMSSA is to provide access to digitized scores worldwide, from a single website. This involves not only the interlinking of libraries (see our Partners to see which libraries are on board so far), but also finding a wealth of music in books that have already been digitized by Google Books, Hathi Trust or other online resources.
SIMSSA is redefining what "access" to scores means. Right…Read the rest
Posted by ich on August 29th, 2014
Press Release by Schulich School of Music.
News Release in McGill Reporter.Read the rest
Posted by Mike Winters on October 1st, 2013
I’ve been working on the ELVIS project for a little bit over a month now. Though I began working with Christopher on the web-interface and back-end, in the past few weeks, I’ve been focused mostly on sonification.
Sonification, in case you don’t know, is a systematic process in which data is transformed into sound. At the moment, I’ve been listening to the extracted intervals of Dufay, Josquin, Palestrina, and Beethoven at speeds up to 10,000 intervals per second. Though I predicted that sonification could be used to differentiate these corpora, it turns out that it can probably do much more. For the Digging into Data Conference, I’ve generated some recordings in which sonification exposes dramatic changes in the temporal evolution of the data, and helps identify patterns at local and global time scales.
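A minimal sketch of the idea, assuming a simple (invented) mapping from interval size to sine-tone pitch; the data are placeholders, and the rate used here is far slower than the 10,000 intervals per second mentioned above:

```python
# Toy sonification: map a stream of interval sizes (in semitones) to
# frequencies and render each as a short sine tone. The mapping and
# the interval data are invented for illustration.
import numpy as np

def interval_to_freq(semitones, base=220.0):
    """Map an interval size to a pitch above a 220 Hz base tone."""
    return base * 2 ** (semitones / 12)

def sonify(intervals, rate=44100, tone_dur=0.01):
    """Render each interval as a 10 ms sine tone (~100 intervals/sec)."""
    tones = []
    t = np.arange(int(rate * tone_dur)) / rate
    for iv in intervals:
        tones.append(np.sin(2 * np.pi * interval_to_freq(iv) * t))
    return np.concatenate(tones)

audio = sonify([7, 4, 3, 7, 12])  # fifths, thirds, an octave
```

Writing `audio` to a WAV file (e.g. with the standard-library `wave` module) makes the interval stream audible; pushing `tone_dur` down is what lets thousands of intervals per second become texture rather than melody.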
To give you an idea of what each interval sounds like, imagine a pianist lightly striking…Read the rest
Posted by Christopher Antila on September 27th, 2013
Summary: I discuss the basic implementation details of a music-analysis computer program I’m helping to write. Our goal is to find musical patterns that will help us describe musical style more precisely. You can visit our (temporary) website at http://elvis.music.mcgill.ca or if you’re reading this in the future and that doesn’t work, you can view our new website at http://elvisproject.ca. You can view our code (AGPLv3+) on GitHub here: https://github.com/ELVIS-project
One of the most useful things I’ve done over the past couple of years, at least in terms of learning about computers and programming, is to read the blog posts written by members of various free software communities. Is it about software I don’t use? Is it too technical for me? Is it not even about software, but the larger ideas of the community? Is it written in barely-comprehensible English? Doesn’t matter—everything is useful and interesting, and…Read the rest
Posted by Catherine Motuz on September 17th, 2013
The afternoon session of the CIRMMT Workshop focused on the ELVIS project, which will continue to be a part of SIMSSA after the Digging into Data challenge officially ends in January, 2014.
Julie Cumming: Introduction to ELVIS (Electronic Locator of Vertical Interval Sequences)
Julie began the session with an introduction to what ELVIS is and how it all got started. Ultimately, ELVIS has the same goal as Rodan: to provide an open-source tool set for the online analysis of music, which can be operated not only by computer programmers, but also by musicologists who would not consider themselves to be tech-savvy.
The idea behind ELVIS dates from a 15th-century music treatise by Johannes Tinctoris, the Liber de arte contrapuncti of 1477. This treatise hashes out the pairs of successive intervals allowed in Renaissance counterpoint, showing not only all possibilities but also judging some better than others. 536 years later, these…Read the rest
Posted by Catherine Motuz on September 14th, 2013
Laurent Pugin: Looking at printed music anthologies in the context of digitization
Laurent Pugin, a former McGill postdoc now working for RISM Switzerland, focused on the challenges of using OMR software on large collections of music. Using Aruspix, a programme that uses machine learning and a graphical user interface (GUI) to perform OMR on early printed music, Laurent has been processing the resources of Early Music Online. This resource, the product of a JISC rapid digitization grant involving Royal Holloway, the British Library, and RISM UK, includes 324 anthologies of printed music from the sixteenth century, or around 10,000 pieces. The pieces are rich in metadata, but because the images are produced from microfilms, the image quality is variable.
The variable quality, along with book formats, notation types and printing methods, reduced the selection of pieces that Aruspix could process. Partbooks and choirbooks were machine-readable, but…Read the rest
Posted by Catherine Motuz on September 13th, 2013
Andrew Hankinson & Ryan Bannon: An introduction to Rodan
Andrew began the session by outlining the deficiencies of commercially-available optical music recognition software, as these provided the impetus for him to bring the process back to the drawing board. In most cases, OMR is so unreliable that it is faster for an advanced user to enter notation by hand than to correct all the errors made by the system. It is also impossible to improve the system because all the (numerous) processes involved in OMR are carried out in a "black box" process: you see what goes in and what comes out but none of what happens in the middle. It is difficult for two users to carry out work on a single source simultaneously, and each user is limited to the processing power of their desktop computer or laptop. OMR systems don’t learn, which has two implications: first, one…Read the rest
Posted by Catherine Motuz on September 12th, 2013
Last Saturday, the DDMAL Lab hosted a workshop outlining the many dimensions of the ever-expanding SIMSSA project. The morning began with Prof. Ichiro Fujinaga extending a warm welcome to all participants, stating that this workshop represents the fifth or so in a series, and is being held partially in preparation for an upcoming grant application due October 1st.
Ichiro Fujinaga: Introduction
As an introduction to SIMSSA for those new to the project, Ichiro gave his elevator pitch: "SIMSSA is Google Scores, minus Google," going on to clarify that this doesn’t mean that Google will never be involved, but simply that they aren’t as the project presently exists. The many dimensions of SIMSSA involve developing and improving systems for Optical Music Recognition (OMR), learning how to construct sophisticated queries on musical documents, and amassing OMR, musical texts, and search & analysis tools onto a single website. A tangential…Read the rest
Posted by Catherine Motuz on July 7th, 2013
On May 11th, after a successful and well-attended series of talks the afternoon before, the ELVIS team met in the CIRMMT seminar room for a kind of "Hack Day." This involved getting out our laptops and sitting around the room, each with a set of specific tasks to address that day, alone or in groups, taking advantage of a lively but focused atmosphere and the resources of each other’s presence. For example, having Myke Cuthbert around made it possible to see with certainty which issues in VIS stemmed from the VIS programme itself, and which stemmed from Music21, which runs in conjunction with it, and to fix as many as possible. The presence of both musicologists and programmers in the room made it possible for concerns about music-analysis functionality to be addressed immediately. In some cases, teams were able to consult each other for advice on making queries, such as…Read the rest
Posted by Catherine Motuz on July 7th, 2013
On Friday, May 10th and Saturday, May 11th, the ELVIS team met here at McGill to exchange ideas and to work together on pushing ELVIS forward.
The session began on Friday with a series of talks by each of the teams to a crowded seminar room, showing what directions they have been following in their research. Julie Cumming, Catherine Motuz and Christopher Antila spoke on behalf of McGill, showing the ELVIS database and especially the VIS software that we have been using to analyze vertical interval successions. The team from Yale, including Chris Whyte (who, replacing his advisor who couldn’t make it for the conference, dubbed himself "the poor man’s Ian Quinn") and Kirill Zikanov, explained the projects that they are working on. These involved the classical-archives.com database of over four thousand pieces in MIDI format, including Byrd, Vivaldi, and major composers from Bach to Wagner. The main metadata…Read the rest
Posted by Catherine Motuz on April 3rd, 2013
So far, assembling the ELVIS database has been a manual process, with each file being converted (if necessary), uploaded, and catalogued with metadata by students. (Ichiro calls this method—I presume affectionately—"gradsourcing".) This is not a bad way to begin: by having to think about every file we put on the site, we now have a good idea of what kind of database we have, what kind of metadata it’s appropriate for us to collect, and problems that might come up with certain filetypes (MIDI files have been an adventure which I will relate in another post!). In order to expand our collection to the one we hoped for, we have to start automating processes. We are exploring how to automate uploading pieces that share the same metadata, and will hopefully have news on this soon. In the meantime, Ryan, a recent addition to our team, has learned a clever way…Read the rest
Posted by Catherine Motuz on December 10th, 2012
Christmas has arrived early. With a good selection of pieces in the ELVIS database and the VIS/music21 interface up and running, it is time to come up with queries to run.
One of the main goals of music theory and musicology since they began has been to try to pin down what gives a composer their personal style. Palestrina and Bach sound different even if they are both writing for a 4-part choir, as do Palestrina and Josquin (to slightly specialized ears), but where exactly do the differences lie? After all, these composers all follow many of the same contrapuntal rules, avoiding parallel 5ths and using many of the same formulations to start and end phrases. One of the main types of queries will be to see if analyzing the interaction of melodic and vertical intervals might provide musical fingerprints for composers. We will start with the above-named three…Read the rest
Posted by Catherine Motuz on November 14th, 2012
Seven members of McGill's ELVIS team have just returned from New Orleans, where ELVIS was presented alongside the SIMSSA project at the AMS/SMT conference. On November 1st, in a joint AMS/SMT session entitled "New Digital Projects for the Study and Dissemination of Medieval and Renaissance Music," Julie Cumming presented ELVIS as part of a panel discussion and also delivered a paper on collaborative research in the digital humanities. The members of the panel, as well as some of the discussions which ensued, can be found here as well as in the program book, available for download on the AMS site. The crowd of around seventy scholars was very enthusiastic, and we were especially touched by Susan Boynton's (Columbia University) public appreciation of both the Liber and Salzinnes projects as she described their usefulness as pedagogic tools.
That evening, the ELVIS members from different universities took advantage of being in the…Read the rest
Posted by Catherine Motuz on November 7th, 2012
Nate Silver wowed the world last week with his statistical prowess, proving triumphantly that for all the value we place on our gut instincts, they don't compare with mathematics when it comes to, say, predicting election results. Keeping this in mind, as the first strings of numbers pour out of VIS, our current task is learning how to interpret them. Following Ichiro Fujinaga's maxim "Never reinvent the wheel," the ELVIS team is happy to have among us Jamie Klassen, a BA student in mathematics and statistics, as well as Alex Morgan, a music theory graduate student with math proficiency, and Prof. Jon Wild, who did his bachelor's degrees at McGill in both music and physics.
The numbers produced by VIS queries are called n-grams, where N is the number of successive vertical intervals involved. The most basic n-gram in ELVIS is the 2-gram, made up of two…Read the rest
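The sliding-window idea behind n-grams can be sketched in a few lines of Python (an illustrative sketch, not the actual VIS code; the interval figures below are made-up examples):

```python
# Illustrative sketch, not the VIS implementation: build overlapping
# n-grams from a sequence of vertical intervals between two voices.
def ngrams(intervals, n):
    """Return successive overlapping n-grams as tuples."""
    return [tuple(intervals[i:i + n]) for i in range(len(intervals) - n + 1)]

# Hypothetical succession of vertical intervals (3rd, 5th, 6th, octave):
vertical_intervals = [3, 5, 6, 8]
print(ngrams(vertical_intervals, 2))  # [(3, 5), (5, 6), (6, 8)]
```

Counting how often each 2-gram occurs in a corpus is then a matter of tallying these tuples, which is where the statistical fingerprinting begins.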
Posted by Catherine Motuz on September 17th, 2012
In a collaborative project between musicologists and computer programmers, there is a decision to be made about who learns what. Should programmers learn enough about music that they can translate the questions of musicologists into code, or should musicologists and theorists learn how to use analytic software such as music21 (which involves some basic programming skills) in order to ask their own questions? The time commitment is enormous on both sides.
Even Humdrum, whose music encoding notation is so intuitive that it is possible to sing or play directly from it, starts to look technical when it comes to asking questions. For instance, this is how one would search a piece for French 6th chords:
solfa file | extract -i '**solfa' | ditto | grep '6-.4+' | grep 2
The advantage to this system is that one can ask basically anything, but the disadvantage is that it requires a…Read the rest
Posted by Catherine Motuz on September 3rd, 2012
A necessary requirement for any kind of corpus research is, obviously, a corpus. It is one thing to have access to libraries of the last thousand years of music, but quite another to have it in a format readable by a machine. Construction of the ELVIS corpus is not complete, but as the time approaches to run queries, it seems like an excellent moment to thank some of the people who have contributed the notation files that make it up. Some of the files provided come from public sites, such as cpdl.org (known as the "Choral Wikipedia"), and others have been donated by other research projects and even individuals. Files are uploaded individually in order to make sure that a minimum amount of metadata is attached to each entry. Because of the time this takes (1-2 minutes per entry in general), not all donated pieces are up on the database yet.…Read the rest
Posted by Catherine Motuz on August 28th, 2012
There has been a lot of coming and going in the lab over the summer months, people heading off in all directions for their weeks off. Some visiting family, some taking a holiday, and one touring the Americas with their highly successful rock band. That would be Gabriel. Gabriel is the Ph.D. student in the lab responsible for making OMR work: He does the nitty gritty of turning black-and-white images into sources of information. His screen is constantly filled with staff-finders and note shape processors which he writes, tests, and tweaks, unassumingly and patiently developing some of the most cutting-edge technology in the world.
But this year, Gabriel has been whisked away by his band of a former life, Lucybell, onto a reunion tour that marks their foundation at the Universidad de Chile 21 years ago. He has already played to packed houses all over Chile, and will be…Read the rest
Posted by cmotuz on August 6th, 2012
Yesterday morning a few dozen librarians attending the IAML conference in Montreal attended a CIRMMT workshop exhibiting SIMSSA. Ichiro opened the presentation with an overview of what SIMSSA is and a reminder to the specialized audience that the project needs librarians on board in order to become a true "partnership" project and receive maximum funding.
Eleanor Selfridge-Field gave a stunning keynote presentation called "Between an Analog Past and a Digital Future." Her talk gave a whirlwind tour of the exciting state of digital music collections today, the issues surrounding them and the many possible paths for the future. She showed us images of recovered music and watermarks from faded documents, discussed projects using optical recognition systems for differentiating the handwriting of scribes, and of course talked through the very practical implications for musicologists of not having to travel halfway across the world to see sources. "I’m wondering," she…Read the rest
Posted by Catherine Motuz on July 6th, 2012
The ELVIS project is booming, with over 4000 entries. Composers with over 100 entries include Chopin, Handel, Haydn, Obrecht, Palestrina, Scarlatti, and Schubert, while Bach alone accounts for more than 1000 pieces or movements.
At first we met regularly to discuss the best ways to organize our information, learning that in the field of information science, computers are changing the way we have to think about our data. Previously, we would put it into categories such as those provided by the Library of Congress, but now that keyword searches are so prevalent and the field of corpus linguistics is blossoming, tagging data is gaining ground. The downside to tagging is that you have to know what you are looking for so that you can type it into the search box. Meanwhile, categories offer easy browsing, but the problem with categories is that there has to be the right pre-defined category…Read the rest
Posted by Catherine Motuz on July 5th, 2012
The last few months have been hectic, and ironically a lot of my computer time has gone to inputting music into Finale, because OMR is in such an early stage that it’s still faster to manually input pieces, not only when working off of a dirty facsimile, as I so often do, but even when working from scores made in Finale in the first place. So I’ve edited about 45 pieces in the last month - at least they will go into the ELVIS database. The rest of the time, I’ve got to know a few of the ins and outs of Drupal, and have been managing the ELVIS website (both the technical side and helping to establish rules and vocabularies) as well as contributing to it. To practise installing a module without worrying about side-effects, I put in a tag cloud so that we can see the most common…Read the rest
Posted by Wendy Liu on July 4th, 2012
Posted by Alastair Porter on July 4th, 2012
As part of the SIMSSA project we have been developing Rodan, a software application for helping people convert images of music into a searchable symbolic format. A main aspect of the Rodan project that I have been working on is the integration of the software and supporting infrastructure. Rodan is a complex tool, with six different pieces of software working together in the background to perform the recognition process.
The main server application is written using the Django web framework. We use the PostgreSQL database server to store all data generated by the application. We use a Python library called Celery to perform long-running operations like music recognition. This helps us to keep the interactive webpage responsive. The main site sends a message to Celery that requests an action to be performed on an image by adding the message to a queue, managed by RabbitMQ. An…Read the rest
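In miniature, and using only the Python standard library as a stand-in for Celery and RabbitMQ (Rodan's actual code is of course more involved), the pattern looks like this: the "web" side enqueues a job message and returns immediately, while a worker pulls messages off the queue and does the slow work.

```python
# Stand-in sketch for the Celery/RabbitMQ pattern described above,
# using only the standard library. Not Rodan's actual code.
import queue
import threading

jobs = queue.Queue()   # stands in for the RabbitMQ message queue
results = {}

def worker():
    while True:
        image_id = jobs.get()   # blocks until a message arrives
        if image_id is None:    # sentinel value: shut the worker down
            break
        results[image_id] = "recognized"  # long-running OMR would go here
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()
jobs.put(42)    # the "web view" returns immediately after enqueueing
jobs.put(None)  # tell the worker to stop once the queue is drained
t.join()
print(results)  # {42: 'recognized'}
```

The key design point is the same in both the sketch and the real system: the interactive front end never waits on the recognition job, it only waits on the (fast) enqueue.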
Posted by Brian Stern on July 3rd, 2012
Posted by Gregory Burlet on June 19th, 2012
Over the past few weeks I've added some new functionality and goodies to the Neon.js neume notation editor. Ornamentation: dots can now be toggled on puncta (neumes with one note). Clefs: shift clefs vertically on the staff, insert new clefs anywhere on the staff, delete clefs, and update the shape of a clef. Divisions: insert all four types of divisions, delete divisions, and move divisions. Custos: the custos is now tied to the first note on the next staff. The next tasks on the burner are more ornamentation handling (drawing, inserting, updating, and deleting episemata) as well as drawing more complex neume structures in the editor.
You can see a demo of Neon.js in action here.
Also, the Neon.js paper was accepted to ISMIR 2012! See you in Porto!Read the rest
Posted by cmotuz on March 19th, 2012
First of all, the whole lab congratulates Dr. John Ashley Burgoyne, who has now officially received his Ph.D. Ashley is getting ready to move to the Netherlands in June to start a Post-Doctoral position at the University of Amsterdam to study what makes tunes stick in our ears. We also congratulate Remi Chiu, the senior musicologist on the SIMSSA project, who will head into his thesis defence next week having just accepted a position for next academic year at Loyola University in Baltimore, Maryland. Bravo to both and all the best with future endeavours. The lab also bid a fond farewell to Mathieu Bergeron this week, who just finished his post-doctoral position with us and will stay in Montreal and continue to work on his own projects.
The lab is abuzz with preparing paper proposals for the ISMIR 2012 conference, to be held in Porto this October. Jason has…Read the rest
Posted by cmotuz on March 12th, 2012
The ELVIS project is in full swing, after a very short ramp-up. Our team of Research Assistants has been collecting public domain music notation files and uploading them into what will be a vast database of music spanning 1300 to 1900. McGill will focus on collecting early music, that is, up until 1700 or so, while researchers at other universities will focus on later repertoire.
What allowed us to get going so quickly with our data website is a content management platform called Drupal. We live in a world where hundreds of different versions of essentially the same software are written to give users this bell but not that whistle, or, worse yet, where companies use software that comes with all the bells and whistles possible, using only a small percentage of its capability while the massive program weighs down the whole operating system.
Needless to say Drupal is…Read the rest
Posted by cmotuz on March 5th, 2012
As spring has coyly come and harshly vanished again, the ELVIS project has started to show its first shoots. Right before the start of Reading Week, around 40 students and researchers assembled for a mini-conference hosted by CIRMMT’s Research Axis 3 (RA3). Specialists in Music Information Retrieval (MIR) congregated in Montreal to share their ideas and projects on Thursday and Friday February 16th and 17th, while Saturday morning was devoted to the ELVIS project.
The event began with a talk by J. Stephen Downie as part of CIRMMT’s Distinguished Lecture Series. His talk ranged from the broad question of whether MIR is part of science or part of musicology, to the nitty gritty details of adapting speech recognition technology to detect musical features. He also addressed one of the major criticisms by musicologists of technology - that the results of computer-operated queries (whether audio feature…Read the rest
Posted by cmotuz on February 27th, 2012
Thanks to the tireless efforts of Gregory Burlet, SIMSSA has a new web application! It's called the Neume Editor ONline (NEON), and it's making a splash. Right now, NEON renders MEI files of square-note notation into graphic form. In the above example from the demo, you can see how the application has rendered our file of the Salve Regina from the Liber Usualis into the neume shapes and pitches specified by the code. The shapes are modelled on those in the Finale medieval plug-in, to which Greg has added dots and is working on adding the more unusual shapes from the Liber Usualis.
But rendering is not all! The main goal of this application is to make it work backwards too. If you go to the demo and click on a neume, you’ll see that you can drag it around. Eventually, these edits will alter…Read the rest
Posted by cmotuz on February 9th, 2012
In my last post I mentioned that I estimated that there were around nine million different sources of printed music in the world. How did I come up with that number?
Most of my information comes from RISM, the international organization which works with over 7000 libraries worldwide to try to document all of the written and printed music in the world, from Anthems and Allemandes to Zajal and Zarzuela. The RISM website makes its own estimate of the number of music sources in the world, with the usual disclaimer that even they, the world's collectors of music, can only attempt to hit somewhere in the ballpark. RISM reckons:
about 1.8 million music manuscripts between 1600 and 1800
at least 2 million music manuscripts between 1800 and 1950
around 140 000 printed music books before 1800
Posted by cmotuz on January 23rd, 2012
It's been very interesting this past week to compare and contrast all the excitement about SOPA/PIPA with the world of academia. Traditionally it's universities that have been accused of holing up in the ivory tower, but in stark contrast to corporations that benefit from the commercialization of information (to the point of imprisoning people for uploading copyrighted material), those of us publicly funded for high-quality research—whatever fruit it may bear—are now keen to send all the ideas we come up with into the world as soon as they might be of use to anybody at all.
That’s why it’s important to have competitions around like the Digging into Data challenge, and why I’m especially pleased to report that the ELVIS project, an international team of researchers led by our own Julie Cumming, has won one of the prizes on offer. As is usual when…Read the rest
Posted by cmotuz on January 20th, 2012
So far, the SIMSSA project has focused on developing OMR search systems and testing them on relatively small documents. Even the very large Liber Usualis, with its 2340 pages, however, is very small when compared to an entire collection of digitized music scores. IMSLP has 157,490 scores and counting, and many libraries from around the world are joining the trend of digitizing their collections. Based on the RISM catalogue and logical extensions, I would estimate there to be around 9 million scores in existence, ranging from 1-1000 pages each, for a total of 100-200 million pages. As our ultimate goal is to be able to do such large-scale searches, we have spent some time in the past weeks considering the problems of processing power that these searches are going to pose.
Most of us are used to search times measured in hundredths of a second—Google just showed…Read the rest
Posted by cmotuz on January 11th, 2012
Happy New Year!
It’s been a lot of fun coming to grips with OMR and discovering for myself how brilliant ideas get put together to develop a cutting edge piece of technology. There’s no denying it though, every so often we hear:
"That’s all very clever. What’s it for?"
My job as musicologist on the team is to make sure that the answer to this question guides our research (not to mention our user interfaces), and to try to see the applications of new developments the moment they enter the horizon of technical possibility. That's why Remi and I sat down with pen and paper and asked "If computers can learn to read music and analyze it directly from scores, how can this help musicologists, theorists, and performers?" Here are some of the things we came up with:
The most obvious application is…Read the rest
Posted by cmotuz on November 30th, 2011
The Salzinnes project has gone public! I’m writing from Austria now, here to play in some concerts, yet I got the news right away in the form of a Facebook notification. But of course.
After feeding the first 60 or so pages of Salzinnes into the computer's OMR system, Gamera, we decided it was time to see how efficiently it was actually learning. Gamera learns by making a library of images or "glyphs," then recognizing new ones by comparing them to its library. It doesn't just recognize images by their shape and size - after all, sizes can vary drastically depending on the resolution of an image - but by looking at elements such as how symmetrical they are, their black-to-white pixel ratio, and the patterns produced by their pixels when projected onto the x or y axis.
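As a rough illustration of two of those features (a toy example, not Gamera's actual feature code), here is the black-pixel ratio and the x/y projections of a tiny made-up binary glyph:

```python
# Toy illustration of glyph features like those described above.
# 1 = black pixel, 0 = white pixel; the glyph itself is invented.
glyph = [
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 1, 1, 1],
]

total = sum(len(row) for row in glyph)
black = sum(sum(row) for row in glyph)
black_ratio = black / total                       # 6 / 12 = 0.5

y_projection = [sum(row) for row in glyph]        # black pixels per row
x_projection = [sum(col) for col in zip(*glyph)]  # black pixels per column

print(black_ratio)    # 0.5
print(y_projection)   # [2, 1, 3]
print(x_projection)   # [0, 3, 2, 1]
```

Features like these are useful precisely because they are fairly stable under changes of scan resolution, unlike raw width and height.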
Here you can see that a clivis and a podatus…Read the rest
Posted by cmotuz on November 17th, 2011
Last week the Salzinnes project made its first public appearance in a series of presentations at the AMS Conference in San Francisco. The responses have been encouraging, ranging from appreciation of the Diva interface as a tool for working with manuscripts to offers to connect us with other libraries and collections that could provide sources for us to incorporate into the project.
Last week the lab was in a flurry of activity getting the demo ready, working out bugs and making it more pleasing to the eye. How it all came together made me wonder how five programmers can coordinate to get a piece of code up and functioning. There’s already a bit of banter across the lab floor, but with complex files, the need to concentrate in a relatively quiet lab, and busy schedules where some things wind up being done on weekends or…Read the rest
Posted by gvigliensoni on November 10th, 2011
We are working on the creation of fully searchable manuscripts and music documents. To extract the pitches from these documents we need the positions of the glyphs on the page as well as the position of the staves. The only staff-finding algorithm that considers non-parallel staff lines is the Miyao algorithm. Although it works very well in most cases, I have had some problems with it on our latest manuscript, the Salzinnes Antiphonal.
The following image shows the staff lines being recognized by the Miyao processing. It can be seen that the algorithm does a good job at the points it recognizes, i.e., the points and staff lines are properly localized, but there are large portions of the staff lines that are not recognized at all. On top of that, there are overlapping staff zones, as in the second staff, where the green and light blue zones coincide.…Read the rest
Posted by cmotuz on November 7th, 2011
The premise of DIVA is that it's a user-friendly interface that allows the browsing of high-resolution, multi-page documents. High-quality images take a long time to download on an HTML website, so normally image quality is sacrificed to allow each image to be loaded before the site is browsable. For a book like Salzinnes with over 400 pages, even low-quality images would take a long time, so until now the usual strategy has been to present the book on different pages as thumbnails, with each image expandable (making browsing very time-consuming indeed), or to force the user to download a bulky .pdf in order to browse the whole book at once. DIVA is different: it's optimized to load only what is in the viewing window at any given moment, making it possible to browse through high-quality images with minimal load time and no need to download a file.…Read the rest
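The core idea can be sketched as follows (hypothetical numbers and function name, not DIVA's actual code): given the current viewport, work out which image tiles intersect it, and fetch only those.

```python
# Sketch of viewport-driven tile loading, the idea behind DIVA's
# minimal load times. Function name and numbers are hypothetical.
def visible_tiles(viewport, tile_size):
    """Return (row, col) indices of tiles intersecting the viewport."""
    x, y, w, h = viewport
    first_col, first_row = x // tile_size, y // tile_size
    last_col = (x + w - 1) // tile_size
    last_row = (y + h - 1) // tile_size
    return [(r, c)
            for r in range(first_row, last_row + 1)
            for c in range(first_col, last_col + 1)]

# A 500x300 viewport at pixel (1000, 2000) over 256-pixel tiles
# needs only 6 tiles, not the whole 400-page book:
print(visible_tiles((1000, 2000, 500, 300), 256))
# [(7, 3), (7, 4), (7, 5), (8, 3), (8, 4), (8, 5)]
```

Scrolling simply changes the viewport rectangle, so each movement requests only the handful of tiles that newly enter view.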
Posted by cmotuz on November 4th, 2011
It’s been a busy week here in the lab as we all prepare for the presentation of the SIMSSA project at the AMS Conference next week.
Our .tiff files have arrived and Remi and Laura have been back in Gamera, once again teaching the computer to distinguish between a podatus and a c-clef, a custos and a punctum inclinatum, and pitches in general from the random specks of dirt and discolouration on each page of music. I had a go on the program too at last, only to find myself proceeding at a snail's pace in comparison to the other two. When I complained, Laura came in and showed me a myriad of tricks and shortcuts she had found to speed things up, in particular how to tell the computer to approve the classification of many instances of the same object at once instead of sorting through one…Read the rest
Posted by ahankins on October 25th, 2011
As part of our ongoing development of tools necessary to support the OMR of musical materials, we are launching the first beta version of LibMEI, a C++ application library that supports the reading and writing of MEI music notation files.
This launch is timed to coincide with a talk given at the 2011 International Society for Music Information Retrieval (ISMIR) conference in Miami, FL. This talk, titled "The Music Encoding Initiative as a document-encoding framework," describes how MEI can be used to rapidly develop support for new notation encoding systems.
The LibMEI website has more information about this software, including how to get and use the software, documentation, and an issue tracker.Read the rest
Posted by cmotuz on October 18th, 2011
This past week, while we wait for our .tiffs to be ready for OMR, we’ve been combing and parsing the plethora of information available in the CANTUS database. And what a fantastic resource it is! For each Latin ecclesiastical chant in each manuscript, CANTUS gives a page of information. The information for the first chant set to music in Salzinnes, for instance, looks like this:
As the first two letters of SIMSSA stand for "Single Interface," we’d like to make this information available on the same system as the searchable images of the Salzinnes antiphonal. As space is not an issue, we’d also like to automatically expand the abbreviations to their full names without having to refer to the legend, so I’ve been getting my head around reading the above, especially the liturgical use information, thanks to the legends provided on the CANTUS website. The feast, Dom.…Read the rest
Posted by ahankins on October 13th, 2011
We’ve been working on getting our new book, the Salzinnes Antiphonal, prepped and ready for the OMR process. Part of this process involves separating the different components of the images into different files, minimizing clutter and reducing the amount of information that the computer will have to work with.
We start with a fairly handsome page. Meet Salzinnes f. 9r:
The first thing we need to do is to convert this image to black-and-white through a process called "binarization." This process makes it very easy for a computer to decide what parts of the image are important and what aren’t. If it’s a black pixel, the computer needs to do something about it; if it’s a white pixel, it can ignore it. The binarization process cleans up some of the foxing, page creases, and other bits of the image that the computer might otherwise eventually confuse with…Read the rest
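As a minimal sketch of the principle (real pipelines like ours use more sophisticated, often adaptive, thresholding), binarization by a single global threshold looks like this:

```python
# Minimal sketch of binarization by global thresholding on a toy
# grayscale image (0 = black, 255 = white). Pixels darker than the
# threshold become "ink" (1); everything else becomes background (0).
def binarize(image, threshold=128):
    return [[1 if px < threshold else 0 for px in row] for row in image]

page = [
    [250, 40, 245],  # hypothetical pixel values: mostly white, one dark
    [30, 25, 240],
]
print(binarize(page))  # [[0, 1, 0], [1, 1, 0]]
```

A single fixed threshold struggles exactly where Salzinnes struggles: foxing and creases sit between ink and paper in brightness, which is why the choice of binarization method matters so much for OMR.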
Posted by cmotuz on October 11th, 2011
First of all, a hearty congratulations to SIMSSA’s Wendy Liu and lab-partner Saining Li for their good citizenship award at the Montreal Hackathon, for improving their wikinotes.ca site to make note sharing more accessible to students.
The question of the week is: How does the quality of an image affect a computer's success at Optical Music Recognition (OMR)? Most of the images of music put up on the Internet are .jpegs, or .pdfs that began their existence as .jpeg files. The problem with .jpeg is that it involves "lossy compression," or compression by discarding bits of data. JPEGs are clever because they discard data that the eye tends not to notice at a distance or in web viewing, but when we try to manipulate the file or blow it up to many times its size, this loss of data becomes apparent: little halos appear around notes…Read the rest
Posted by cmotuz on September 30th, 2011
Welcome to the new blog for the project "Single Interface for Music Score Searching and Analysis" (SIMSSA). Here you can learn about the ins and outs of this cutting-edge project, which involves developing Optical Music Recognition (OMR) to allow computers to read music in many different notational styles, making digital scores searchable. Just as a computer uses Optical Character Recognition (OCR) to convert scanned text into letters, OMR converts notated music into a form where it can be separated into its components and analyzed. You can already browse the results of last year's project, where the lab taught a computer how to read the Liber Usualis.
Now our exciting piece of news is that we’ve received and put online a digital copy of the Salzinnes Antiphonal. This is the manuscript which we’ll use for the next step of our project: teaching a computer…Read the rest