About me

I am an assistant professor (MS-3.1) at the School of Electrical and Computer Engineering, University of Campinas, Brazil. I also collaborate with the Interdisciplinary Nucleus for Sound Studies (NICS/Unicamp), where I carry out interdisciplinary research on sound, music, and engineering.

Research interests

Music Information Retrieval: techniques to extract useful information from music, and their related applications.

New Interfaces for Musical Expression: new musical instruments and other devices that enable novel ways of interacting with music.

I am always open to collaborations and prospective students. If you want to see what I am currently working on, keep reading the blog (below) or check my publications.

Contact

 

Identifying musical genres by finding palettes of sounds

Western contemporary popular music is commonly divided into genres that cater to different niches: some listeners appreciate loud, distorted guitars, whereas others prefer harmonious symphonies, and most people search for different styles at different moments of their lives. Genres are linked to social aspects of music listening, but are also related to typical compositional choices involving instrumentation and playing technique. Because of this, it is possible to identify short (around 1 s to 3 s) sound excerpts, called textures, whose auditory characteristics are typical of each genre. In other words, each music genre is related to a palette of typical sound textures.

This work by my PhD student Juliano Foleiss proposes a new method to identify the music genre of a track by finding its texture palette. The idea is to compute a vector representation for every texture in the track and then perform clustering to identify groups of similar vectors. The system then identifies the genre related to each cluster (or: the genre that typically uses each element of the sound palette). The genre that best explains the clusters is chosen as the genre for the whole track. The study was recently published in Elsevier’s Applied Soft Computing:

  • [DOI] J. H. Foleiss and T. F. Tavares, “Texture selection for automatic music genre classification,” Applied Soft Computing, p. 106127, 2020.
    [Bibtex]
    @Article{Foleiss2020,
    author = {Juliano Henrique Foleiss and Tiago Fernandes Tavares},
    title = {Texture selection for automatic music genre classification},
    journal = {Applied Soft Computing},
    year = {2020},
    pages = {106127},
    month = {feb},
    doi = {10.1016/j.asoc.2020.106127},
    publisher = {Elsevier {BV}},
    }
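
For intuition, here is a minimal sketch of the texture-palette idea in Python, assuming that librosa and scikit-learn are available and that clf is a hypothetical classifier already trained on texture-level feature vectors with genre labels. It is not the paper's exact pipeline (which also includes a texture selection step), just the core idea of clustering textures and voting over the clusters.

    # A minimal sketch, not the paper's exact pipeline: summarize textures,
    # cluster them into a palette, classify each palette element, and vote.
    import numpy as np
    import librosa
    from sklearn.cluster import KMeans

    def texture_features(y, sr, win_seconds=3.0):
        """Summarize each ~3-second excerpt (texture) as its mean MFCC vector."""
        hop = int(win_seconds * sr)
        feats = []
        for start in range(0, len(y) - hop + 1, hop):
            mfcc = librosa.feature.mfcc(y=y[start:start + hop], sr=sr, n_mfcc=20)
            feats.append(mfcc.mean(axis=1))
        return np.array(feats)

    def classify_track(path, clf, n_clusters=5):
        """Cluster the track's textures and take a majority vote over the genres
        predicted for each cluster centroid (each palette element)."""
        y, sr = librosa.load(path, sr=22050, mono=True)
        textures = texture_features(y, sr)
        km = KMeans(n_clusters=min(n_clusters, len(textures)), n_init=10)
        km.fit(textures)
        genres = clf.predict(km.cluster_centers_)  # clf: hypothetical trained classifier
        values, counts = np.unique(genres, return_counts=True)
        return values[np.argmax(counts)]           # most frequent genre wins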

The article shows that the classification accuracy improvement was due to the system detecting the diversity of sounds used throughout each track. Consequently, classifying short audio excerpts causes a noticeable performance drop when compared to classifying full audio tracks.

An important question for the future is how to make this type of system work with the huge diversity of new genres appearing in quickly-growing, quickly-changing online communities and social networks. In that setting, genres are only loosely defined, so genre labels have little meaning. Maybe, in this situation, we have more of a clustering problem than a classification problem?

Exploring cultures guided by their music

There is a lot of music being made and shared nowadays. Musicians can use the Internet to sell music, to become famous, or simply to showcase their art. They can be easily seen by their friends, their friends’ friends, and so on. At first glance, this seems to be the general purpose of music-making, but there is music that is not necessarily made for an artistic reason. There is music sung at birthday parties, lullabies to make babies sleep, and children’s songs that guide their games and play. Some cultures have songs that are sung at funerals, at weddings, or during battles. This type of music, which is only meaningful within its context, is called ethnic music.

Many people are eager to record and broadcast their music as a form of art, but ethnic music rarely reaches music stores. This is because its meaning is typically linked to local rituals, hence it simply makes little sense to record it. As a consequence, ethnic music is rarely listened to outside of its original culture.

Listening to ethnic music usually involves a previous step of studying that culture: first you learn about, say, the vocal songs of a culture, then you search for them within the collection. There are many tools that help musicologists analyze ethnic music datasets. However, this approach does not help the non-musicologist public find other, similar music within that same collection, and they can easily miss interesting content that is just a few steps (or clicks?) away.

This is what inspired the Macunaíma project. In this project, we devised an automatic recommendation system that could quickly find what type of songs you like within an ethnic music collection. This allowed users to pleasantly listen to ethnic music through a web-radio interface, that is, without any previous study.
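
As a rough illustration of the idea (not the actual Macunaíma implementation), the sketch below assumes that every track in the collection is already represented by a feature vector and adapts the recommendations from simple like/skip feedback:

    # A toy preference model, not the actual Macunaíma system.
    import numpy as np

    class WebRadio:
        def __init__(self, catalog):
            self.catalog = np.asarray(catalog, dtype=float)  # n_tracks x n_features
            self.played = set()
            self.liked = []                                  # vectors of liked tracks

        def next_track(self):
            """Play the unplayed track closest to the user's preference profile."""
            if self.liked:
                profile = np.mean(self.liked, axis=0)
            else:                                            # no feedback yet: random start
                profile = self.catalog[np.random.randint(len(self.catalog))]
            dists = np.linalg.norm(self.catalog - profile, axis=1)
            dists[list(self.played)] = np.inf                # never repeat a track
            choice = int(np.argmin(dists))
            self.played.add(choice)
            return choice

        def feedback(self, track_id, liked):
            """Record whether the user liked the track that was just played."""
            if liked:
                self.liked.append(self.catalog[track_id])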

We tested our idea in a user study with the Música das Cachoeiras collection, an ethnic music collection recorded during a sailing expedition up the Rio Negro. It contains a diversity of ritualistic music, local artists, and some music groups that are clearly influenced by contemporary Western traditions. The source code for our demo is available on GitHub, and the complete study was recently published in this article:

  • [DOI] T. F. Tavares and L. Collares, “Ethnic music exploration guided by personalized recommendations: system design and evaluation,” SN Applied Sciences, vol. 2, iss. 4, 2020.
    [Bibtex]
    @Article{Tavares2020,
    author = {Tiago Fernandes Tavares and Leandro Collares},
    title = {Ethnic music exploration guided by personalized recommendations: system design and evaluation},
    journal = {{SN} Applied Sciences},
    year = {2020},
    volume = {2},
    number = {4},
    month = {mar},
    doi = {10.1007/s42452-020-2318-y},
    publisher = {Springer Science and Business Media {LLC}},
    }

The Macunaíma project is very special to me. It was first imagined around 2012-2013, when I was finishing my PhD. I still have its first draft, which uses Macunaíma (the name of the anti-hero protagonist of the novel Macunaíma) as an acronym for MApas CUlturais NAvegando na Identidade Musical e Artística (Cultural Maps Navigating Through Musical and Artistic Identities). In 2015, the project received a grant from the São Paulo Research Foundation (FAPESP). It ultimately converged to a simpler web-radio interface (that is, not a map), and music is, in fact, the only form of culture being used. Perhaps this could point to interesting work in the future?

Identifying song lyric narratives with machine learning

Song lyrics can convey stories with different narrative contexts, such as discovering love, enjoying life, or contemplating the universe. These narratives are linked to how we relate to songs and how we listen to them. This work by my former undergraduate student André Dalmora applied machine learning algorithms to identify narrative contexts in Brazilian Popular Music lyrics.

He tested several variations of sparse topic models and machine learning algorithms and found that they are still not very good at interpreting lyrics. However, he also found that humans tend to disagree with each other about the narrative contexts of lyrics. Maybe our initial question, “can machine learning reliably identify the narrative of a song?”, could be switched to another one: “can anyone reliably identify the narrative of a song?”.
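
A minimal stand-in pipeline for this kind of experiment is sketched below. It assumes a list of lyric strings with one annotated narrative label per lyric, and uses NMF topics over TF-IDF features as a simple proxy for a sparse topic model; André's actual models and data are in the repository linked below.

    # A stand-in pipeline: NMF "topics" over TF-IDF lyrics, then a linear classifier.
    from sklearn.pipeline import Pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import NMF
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    def narrative_classifier(lyrics, labels, n_topics=10):
        """lyrics: list of strings; labels: one narrative context per lyric."""
        pipeline = Pipeline([
            ("tfidf", TfidfVectorizer(max_features=5000)),
            ("topics", NMF(n_components=n_topics, init="nndsvd", max_iter=400)),
            ("clf", LogisticRegression(max_iter=1000)),
        ])
        # Cross-validated accuracy: a rough measure of how well the topics
        # separate the annotated narrative contexts.
        scores = cross_val_score(pipeline, lyrics, labels, cv=5)
        return pipeline.fit(lyrics, labels), scores.mean()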

His paper was presented at the 17th Brazilian Symposium on Computer Music and received great feedback from the community. I am very excited to keep working on this theme, which seems very promising and full of unanswered questions.

Source code and dataset: http://www.github.com/aldalmora/NC_Music_Brazil

  • A. Dalmora and T. F. Tavares, “Identifying Narrative Contexts in Brazilian Popular Music Lyrics Using Sparse Topic Models: A Comparison Between Human-Based and Machine-Based Classification,” in XVII Brazilian Symposium on Computer Music, São João del Rei, MG, Brazil, 2019.
    [Bibtex]
    @INPROCEEDINGS{Dalmora2019,
    AUTHOR={Andre Dalmora and Tiago Fernandes Tavares},
    TITLE={Identifying Narrative Contexts in Brazilian Popular Music Lyrics Using
    Sparse Topic Models: A Comparison Between Human-Based and Machine-Based
    Classification},
    BOOKTITLE={XVII Brazilian Symposium on Computer Music},
    ADDRESS={São João del Rei, MG, Brazil},
    url={http://compmus.ime.usp.br/sbcm/2019/assets/proceedings.pdf#page=20},
    MONTH={sep},
    YEAR={2019},
    }

Low-latency pitch detection for the electric bass guitar

MIDI instruments allow musicians to control synthesizers and explore a vast universe of sounds. More recently, instruments based on electronic sensors and video cameras have been widely explored by the NIME community. My student Christhian Fonseca took some steps in this direction in his undergraduate final project some years ago, when he used a pitch detection algorithm to let his bass guitar control a digital synthesizer in real time. The problem he found is that the lowest notes take too long to be detected by standard algorithms such as YIN and autocorrelation, which causes a disturbing latency between playing a note on the bass and hearing the synthesizer’s response.
Later, Christhian started his MSc research focusing on mitigating this issue. He used a plucked-string physical model to predict played notes from characteristics of their transient state, that is, before they become a steady wave. He was able to reduce the detection latency by almost 50%, which is an important step towards using the electric bass guitar as a real-time MIDI controller.
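
To make the latency problem concrete, here is a baseline difference-function f0 estimator (YIN-like), shown to illustrate why standard methods are slow for low notes rather than Christhian's predictive method: the analysis window must span at least one full period, so the open E string of a bass (about 41 Hz) needs roughly 24 ms of audio before the period is even visible.

    # Baseline estimator: find the lag that minimizes the absolute difference
    # between the frame and a delayed copy of itself (this is not the paper's
    # predictive, transient-based method).
    import numpy as np

    def f0_difference_function(frame, sr, f_min=40.0, f_max=400.0):
        lag_min = int(sr / f_max)
        lag_max = int(sr / f_min)
        if len(frame) <= 2 * lag_max:
            raise ValueError("frame too short to contain one period of f_min")
        lags = np.arange(lag_min, lag_max)
        diffs = [np.mean(np.abs(frame[:-lag] - frame[lag:])) for lag in lags]
        return sr / lags[int(np.argmin(diffs))]

    # Example: a 41 Hz sine needs about 50 ms of audio at sr = 44100 Hz here,
    # which is already a noticeable delay for a live performer.
    sr = 44100
    t = np.arange(int(0.05 * sr)) / sr
    print(f0_difference_function(np.sin(2 * np.pi * 41 * t), sr))
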
  • C. Fonseca and T. F. Tavares, “Low-Latency f0 Estimation for the Finger Plucked Electric Bass Guitar Using the Absolute Difference Function,” in XVII Brazilian Symposium on Computer Music, São João del Rei, MG, Brazil, 2019.
    [Bibtex]
    @INPROCEEDINGS{Fonseca2019,
    AUTHOR={Christhian Fonseca and Tiago Fernandes Tavares},
    TITLE={Low-Latency f0 Estimation for the Finger Plucked Electric Bass Guitar
    Using the Absolute Difference Function},
    BOOKTITLE={XVII Brazilian Symposium on Computer Music},
    ADDRESS={São João del Rei, MG, Brazil},
    url={http://compmus.ime.usp.br/sbcm/2019/assets/proceedings.pdf#page=128},
    MONTH={sep},
    YEAR={2019}
    }

Teaching music to Deaf persons

Music can extend far beyond sound. It is also about dancing, communicating, and being together as a group. This idea guided my former MSc student Erivan Duarte towards teaching music to Deaf persons. He developed a mobile app that converts sounds to vibrations and colors on the screen, and used it as a teaching device in lessons aimed at persons with severe to profound hearing impairment. His dissertation is in the university’s repository.
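
As a purely hypothetical illustration of this kind of mapping (not Erivan's implementation, which is described in his dissertation), one could drive vibration intensity with loudness and screen color with the spectral centroid:

    # Hypothetical mapping, for illustration only: loudness -> vibration strength,
    # spectral centroid -> hue. Both outputs are normalized to [0, 1].
    import numpy as np

    def frame_to_feedback(frame, sr):
        rms = np.sqrt(np.mean(frame ** 2))
        vibration = min(1.0, rms / 0.1)                     # clip loudness to [0, 1]

        spectrum = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
        centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-9)
        hue = np.clip(np.log2(centroid / 100.0 + 1e-9) / 6.0, 0.0, 1.0)  # ~100 Hz to 6.4 kHz
        return vibration, hue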

Erivan’s work focused on engineering and music education. It was only possible due to his hard work and his kindness in interacting with the people involved in this research.

  • E. G. Duarte and T. F. Tavares, “Perspectivas para Educação Musical dos Surdos,” in Encontro de Educação Musical da Unicamp 2017, Campinas, SP, Brazil, 2017.
    [Bibtex]
    @misc{Duarte2017a,
    AUTHOR={Erivan Gonçalves Duarte and Tiago Fernandes Tavares},
    TITLE={Perspectivas para Educação Musical dos Surdos},
    BOOKTITLE={Encontro de Educação Musical da Unicamp 2017},
    ADDRESS={Campinas, SP, Brazil},
    MONTH={apr},
    YEAR={2017},
    }
  • E. G. Duarte and T. F. Tavares, “A Tool for the Musical Education of Deaf People,” in Proceedings of the 16th Brazilian Symposium on Computer Music, 2017.
    [Bibtex]
    @InProceedings{Duarte2017,
    author = {Erivan Gonçalves Duarte and Tiago Fernandes Tavares},
    title = {A Tool for the Musical Education of Deaf People},
    booktitle = {Proceedings of the 16th Brazilian Symposium on Computer Music},
    year = {2017},
    month = {sep},
    owner = {tiago},
    timestamp = {2017.08.17},
    url =
    {http://compmus.ime.usp.br/sbcm/2017/assets/SBCM2017Proceedings.pdf#page=164},
    }

Content-based audio search using query-by-multiple-examples

Contemporary musicians frequently resort to short audio recordings called “samples”. Often, they get access to new, large sample collections, and browsing through them can be time-consuming. One possible way to browse a collection is to use some sort of content-based similarity measure. This usually consists of mapping samples to a content-meaningful vector space and then using the Euclidean distance as a measure of dissimilarity.

The problem with this approach is that audio samples can be similar to each other for different reasons: maybe a waterfall sounds similar to a cymbal because of its noisy texture, but is more similar to a choir in the sense that it is a sustained sound? Back in 2013/2014, in my post-doc at the Interdisciplinary Nucleus for Sound Studies, I proposed a method to mitigate this problem. In this method, the musician provides two or more sounds containing the desired characteristic. The search algorithm quickly adapts, infers which dimensions of the timbre space span that characteristic, and then sorts the collection items according to this new similarity measure.
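
The sketch below illustrates the principle, assuming that every sample in the collection already has a timbre feature vector: dimensions in which the query examples agree are emphasized, and the collection is re-ranked by a weighted distance to the query centroid. The exact weighting is an illustration, not necessarily the published formulation.

    # Illustration of the principle, not necessarily the published formulation.
    import numpy as np

    def rank_by_query(collection, query_examples, eps=1e-6):
        """Sort collection rows by weighted distance to the query centroid,
        weighting each timbre dimension by how consistent the examples are."""
        collection = np.asarray(collection, dtype=float)
        query = np.asarray(query_examples, dtype=float)

        centroid = query.mean(axis=0)
        weights = 1.0 / (query.var(axis=0) + eps)  # low variance -> relevant dimension
        weights /= weights.sum()

        dists = np.sqrt(((collection - centroid) ** 2 * weights).sum(axis=1))
        return np.argsort(dists)                   # indices, most similar first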

  • [DOI] T. F. Tavares and J. Manzoli, “Query-by-Multiple-Examples: Content-Based Search in Computer-Assisted Sound-Based Musical Composition,” in Sound and Music Computing 2014, 2014.
    [Bibtex]
    @InProceedings{Tavares2014a,
    author = {Tiago Fernandes Tavares AND Jônatas Manzoli},
    title = {Query-by-Multiple-Examples: Content-Based Search in Computer-Assisted Sound-Based Musical Composition},
    booktitle = {Sound and Music Computing 2014},
    year = {2014},
    month = {sep},
    doi = {10.5281/zenodo.850561},
    owner = {tiagoft},
    }

Web-based, motion-controlled musical interaction

In 2013, I was fascinated by motion-triggered interactive installations and wanted to create my own. At that time, I only had my own laptop to use as hardware, and I wanted the result to be incredibly easy to use: as easy as browsing a website. After experimenting with WebKit, video processing, octatonic scales, Markov chains, and piano samples, the first JavaScript-based version of Motus was born. It was presented live in the form of an interactive musical installation, and the public seemed to have a lot of fun interacting with it. I also received very positive feedback from people using it at their own homes.
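
As a toy sketch of those ingredients (frame differencing, a Markov chain over an octatonic scale), here is roughly how motion can trigger notes; the real Motus algorithm is the one described in the article and repository below.

    # Toy sketch only; the real Motus algorithm is described in the article below.
    import numpy as np

    OCTATONIC = [60, 62, 63, 65, 66, 68, 69, 71]   # MIDI notes: whole-half scale on C
    N = len(OCTATONIC)
    # Transition matrix favoring steps to neighboring scale degrees.
    TRANSITIONS = np.array([[1.0 / (1 + abs(i - j)) for j in range(N)] for i in range(N)])
    TRANSITIONS /= TRANSITIONS.sum(axis=1, keepdims=True)

    def motion_energy(prev_frame, frame):
        """Mean absolute pixel difference between two grayscale frames."""
        return np.mean(np.abs(frame.astype(float) - prev_frame.astype(float)))

    def step(prev_frame, frame, current_degree, threshold=10.0):
        """If enough motion is detected, walk the Markov chain to the next scale
        degree and return the MIDI note to trigger; otherwise stay silent."""
        if motion_energy(prev_frame, frame) < threshold:
            return current_degree, None
        next_degree = np.random.choice(N, p=TRANSITIONS[current_degree])
        return next_degree, OCTATONIC[next_degree]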

The source code for Motus is available on GitHub, and the algorithm is thoroughly explained in an article (see reference below). It is also deployed online, so anyone with a Chrome browser can use it. This is one of my first art-driven projects, and I wonder whether I will have time to get back to it in the future.

  • [DOI] T. F. Tavares, “An interactive audio-visual installation using ubiquitous hardware and web-based software deployment,” PeerJ Computer Science, vol. 1, p. e5, 2015.
    [Bibtex]
    @Article{Tavares2015,
    Title = {An interactive audio-visual installation using ubiquitous hardware and web-based software deployment},
    Author = {Tavares, Tiago Fernandes},
    Journal = {PeerJ Computer Science},
    Year = {2015},
    Month = {5},
    Pages = {e5},
    Volume = {1},
    Abstract = {This paper describes an interactive audio-visual musical installation, namely MOTUS, that aims at being deployed using low-cost hardware and software. This was achieved by writing the software as a web application and using only hardware pieces that are built-in most modern personal computers. This scenario implies in specific technical restrictions, which leads to solutions combining both technical and artistic aspects of the installation. The resulting system is versatile and can be freely used from any computer with Internet access. Spontaneous feedback from the audience has shown that the provided experience is interesting and engaging, regardless of the use of minimal hardware.},
    Doi = {10.7717/peerj-cs.5},
    ISSN = {2376-5992},
    Keywords = {Audio-visual interaction, Computer music, Webcam},
    }