About me

I am an assistant professor (MS-3.1) at the School of Electrical and Computer Engineering, University of Campinas, Brazil. I also collaborate with the Interdisciplinary Nucleus for Sound Studies (NICS/Unicamp), where I carry out interdisciplinary research on sound, music, and engineering.

Contact

Research interests

Music Information Retrieval: techniques to extract useful information from music and related applications.

New Interfaces for Musical Expression: new musical instruments and other musical devices that allow interesting interactions with music.

I am always open to collaborations and prospective students. If you want to see what I am currently working on, please keep reading the blog (below) or check my publications.

Identifying song lyric narratives with machine learning

Song lyrics can convey stories with different narrative contexts, such as discovering love, enjoying life, or contemplating the universe. These narratives are linked to how we relate to and listen to the songs. This work by my former undergraduate student André Dalmora applied machine learning algorithms to identify narrative contexts in Brazilian Popular Music lyrics.

He tested several variations of sparse topic models and machine learning algorithms and found that they are still not very good at interpreting lyrics. However, he also found that humans tend to disagree with each other about the narrative contexts of lyrics. Maybe our initial question, "can machine learning reliably identify the narrative of a song?", could be switched to another one: "can anyone reliably identify the narrative of a song?".
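
To give an idea of what such a pipeline looks like, here is a minimal sketch using scikit-learn: TF-IDF features, a non-negative (sparse) topic decomposition, and a simple classifier on top. The lyric fragments and narrative labels below are made up for illustration, and the models and features in André's paper differ in their details.

    # Minimal sketch: sparse topics over lyric text feeding a classifier of
    # narrative context. Texts and labels are illustrative, not from the paper.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import NMF
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    lyrics = [
        "eu encontrei o amor da minha vida",          # hypothetical lyric fragments
        "vamos dançar e aproveitar a noite",
        "olho para as estrelas e penso no universo",
    ]
    labels = ["love", "enjoying life", "contemplation"]   # hypothetical narrative contexts

    pipeline = make_pipeline(
        TfidfVectorizer(),                      # bag-of-words with TF-IDF weighting
        NMF(n_components=2, init="nndsvda"),    # non-negative, sparse topic space
        LogisticRegression(max_iter=1000),      # maps topic activations to a context label
    )
    pipeline.fit(lyrics, labels)
    print(pipeline.predict(["uma canção sobre o amor"]))  # predicted context (toy example)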

His paper was presented at the 17th Brazilian Symposium on Computer Music and received great feedback from the community. I am very excited to keep working on this theme, which seems very promising and full of unanswered questions.

Source code and dataset: http://www.github.com/aldalmora/NC_Music_Brazil

  • [PDF] A. Dalmora and T. F. Tavares, “Identifying narrative contexts in Brazilian popular music lyrics using sparse topic models: a comparison between human-based and machine-based classification,” in XVII Brazilian Symposium on Computer Music, São João del Rei, MG, Brazil, 2019.
    [Bibtex]
    @INPROCEEDINGS{Dalmora2019,
    AUTHOR={Andre Dalmora and Tiago Fernandes Tavares},
    TITLE={Identifying Narrative Contexts in Brazilian Popular Music Lyrics Using
    Sparse Topic Models: A Comparison Between Human-Based and Machine-Based
    Classification},
    BOOKTITLE={XVII Brazilian Symposium on Computer Music},
    ADDRESS={São João del Rei, MG, Brazil},
    url={http://compmus.ime.usp.br/sbcm/2019/assets/proceedings.pdf#page=20},
    MONTH={sep},
    YEAR={2019},
    }

Low-latency pitch detection for the electric bass guitar

MIDI instruments allow musicians to control synthesizers and explore a vast universe of sounds. More recently, instruments based on electronic sensors and video cameras have been widely explored by the NIME community. My student Christhian Fonseca took some steps in this direction in his undergraduate final project some years ago, when he used a pitch detection algorithm to allow his bass guitar to control a digital synthesizer in real time. The problem he found is that the lowest notes take too long to be detected by standard algorithms such as YIN and autocorrelation, which causes a disturbing latency between playing a note on the bass and hearing the synthesizer’s response.
Later, Christhian started his MSc research focusing on mitigating this issue. He used a plucked-string physical model to predict played notes from characteristics of their transient state, that is, before they become a steady oscillation. He was able to reduce the detection latency by almost 50%, which is an important step towards using the electric bass guitar as a real-time MIDI controller.
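
To illustrate why the low notes are the problem, here is a rough sketch of lag-based f0 estimation using an absolute difference function, in the spirit of the paper's title. This is not Christhian's transient-prediction method; it only shows that detecting the open low E string (about 41 Hz) requires an analysis window covering roughly two periods of the lowest detectable pitch, which already implies tens of milliseconds of latency.

    # Rough sketch of f0 estimation by minimizing an absolute difference
    # function over candidate lags (not the paper's transient-based method).
    import numpy as np

    def f0_absolute_difference(frame, sr, fmin=30.0, fmax=400.0):
        """Estimate f0 as the lag that minimizes the absolute difference."""
        max_lag = int(sr / fmin)          # longest period we want to detect
        min_lag = int(sr / fmax)
        if len(frame) < 2 * max_lag:
            raise ValueError("frame too short to cover two periods of fmin")
        diffs = [
            np.mean(np.abs(frame[:max_lag] - frame[lag:lag + max_lag]))
            for lag in range(min_lag, max_lag)
        ]
        best_lag = min_lag + int(np.argmin(diffs))
        return sr / best_lag

    sr = 44100
    t = np.arange(int(0.07 * sr)) / sr        # 70 ms frame: ~two periods of a 30 Hz bound
    low_e = np.sin(2 * np.pi * 41.2 * t)      # synthetic open low E of a bass guitar
    print(f0_absolute_difference(low_e, sr))  # ~41 Hz, but only after ~70 ms of signal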
  • [PDF] C. Fonseca and T. F. Tavares, “Low-latency f0 estimation for the finger plucked electric bass guitar using the absolute difference function,” in XVII Brazilian Symposium on Computer Music, São João del Rei, MG, Brazil, 2019.
    [Bibtex]
    @INPROCEEDINGS{Fonseca2019,
    AUTHOR={Christhian Fonseca and Tiago Fernandes Tavares},
    TITLE={Low-Latency f0 Estimation for the Finger Plucked Electric Bass Guitar
    Using the Absolute Difference Function},
    BOOKTITLE={XVII Brazilian Symposium on Computer Music},
    ADDRESS={São João del Rei, MG, Brazil},
    url={http://compmus.ime.usp.br/sbcm/2019/assets/proceedings.pdf#page=128},
    MONTH={sep},
    YEAR={2019}
    }

Teaching music to Deaf persons

Music extends far beyond sound. It is also about dancing, communicating, and being together as a group. This idea guided my former MSc student Erivan Duarte towards teaching music to Deaf persons. He developed a mobile app that converts sounds into vibrations and colors on the screen, and used it as a teaching device in lessons aimed at persons with severe to profound hearing impairment. His dissertation is in the university’s repository.
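
As a rough idea of the kind of sound-to-feedback mapping such an app can use (the actual mappings in Erivan's app may well differ), one could drive the vibration intensity from the loudness of each audio frame and the screen color from its spectral centroid:

    # Illustrative mapping from an audio frame to vibration strength and hue.
    # The specific features and scaling factors here are assumptions.
    import numpy as np

    def frame_to_feedback(frame, sr):
        """Map one audio frame to a vibration intensity (0-1) and a hue (0-360)."""
        rms = np.sqrt(np.mean(frame ** 2))
        vibration = float(np.clip(rms * 10, 0.0, 1.0))       # louder -> stronger vibration
        spectrum = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
        centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-9)
        hue = float(np.clip(centroid / 5000.0, 0.0, 1.0)) * 360.0  # brighter sound -> different hue
        return vibration, hue

    sr = 44100
    t = np.arange(sr // 10) / sr                              # 100 ms test frame
    print(frame_to_feedback(0.3 * np.sin(2 * np.pi * 440 * t), sr))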

Erivan’s work was focused on engineering and music education. It was only possible due to his hard work and to his kindness in interacting with the people involved in this research.

  • E. G. Duarte and T. F. Tavares, “Perspectivas para educação musical dos surdos,” Encontro de Educação Musical da Unicamp 2017, Campinas, SP, Brazil, 2017.
    [Bibtex]
    @misc{Duarte2017a,
    AUTHOR={Erivan Gonçalves Duarte and Tiago Fernandes Tavares},
    TITLE={Perspectivas para Educação Musical dos Surdos},
    BOOKTITLE={Encontro de Educação Musical da Unicamp 2017},
    ADDRESS={Campinas, SP, Brazil},
    MONTH={apr},
    YEAR={2017},
    }
  • [PDF] E. G. Duarte and T. F. Tavares, “A tool for the musical education of deaf people,” in Proceedings of the 16th Brazilian Symposium on Computer Music, 2017.
    [Bibtex]
    @InProceedings{Duarte2017,
    author = {Erivan Gonçalves Duarte and Tiago Fernandes Tavares},
    title = {A Tool for the Musical Education of Deaf People},
    booktitle = {Proceedings of the 16th Brazilian Symposium on Computer Music},
    year = {2017},
    month = {sep},
    owner = {tiago},
    timestamp = {2017.08.17},
    url =
    {http://compmus.ime.usp.br/sbcm/2017/assets/SBCM2017Proceedings.pdf#page=164},
    }

Content-based audio search using query-by-multiple-examples

Contemporary musicians frequently work with short audio recordings called “samples”. Often, they get access to a new, large sample collection, and browsing through it can be time-consuming. One possible way to browse such collections is to use some sort of content-based similarity measure. This usually consists of mapping samples to a content-meaningful vector space and then using the Euclidean distance as a measure of dissimilarity.

The problem with this approach is that audio samples can be similar to each other for different reasons: maybe a waterfall sounds similar to a cymbal because of its noisy texture, but more similar to a choir in the sense that both are sustained sounds. Back in 2013/2014, during my post-doc at the Interdisciplinary Nucleus for Sound Studies, I proposed a method to mitigate this problem. In this method, the musician provides two or more sounds containing the desired characteristic. The search algorithm quickly adapts, guessing which dimensions of the timbre space span that characteristic, and then sorts the collection items according to this new similarity measure.
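
One simple way to implement this idea, sketched below, is to weight each feature dimension by how much the query examples agree on it, and then rank the collection by weighted distance to the query centroid. The weighting scheme and the three-dimensional “timbre features” are illustrative; the exact formulation in the paper differs.

    # Sketch of query-by-multiple-examples: dimensions on which the query
    # sounds agree get higher weight. One plausible weighting, not the paper's.
    import numpy as np

    def rank_collection(collection, query):
        """Rank collection items (rows of features) by weighted distance to the query centroid."""
        query = np.asarray(query, dtype=float)
        collection = np.asarray(collection, dtype=float)
        centroid = query.mean(axis=0)
        weights = 1.0 / (query.var(axis=0) + 1e-6)   # low variance -> dimension matters more
        weights /= weights.sum()
        dists = np.sqrt(((collection - centroid) ** 2 * weights).sum(axis=1))
        return np.argsort(dists)

    # Hypothetical 3-dimensional timbre features (e.g. brightness, noisiness, sustain).
    query = [[0.2, 0.9, 0.8], [0.7, 0.9, 0.8]]       # examples agree on the last two dimensions
    collection = [[0.5, 0.1, 0.2], [0.1, 0.9, 0.8], [0.9, 0.5, 0.5]]
    print(rank_collection(collection, query))        # the item at index 1 ranks first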

  • [DOI] T. F. Tavares and J. Manzoli, “Query-by-multiple-examples: content-based search in computer-assisted sound-based musical composition,” in Sound and Music Computing 2014, 2014.
    [Bibtex]
    @InProceedings{Tavares2014a,
    author = {Tiago Fernandes Tavares AND Jônatas Manzoli},
    title = {Query-by-Multiple-Examples: Content-Based Search in Computer-Assisted Sound-Based Musical Composition},
    booktitle = {Sound and Music Computing 2014},
    year = {2014},
    month = {sep},
    doi = {10.5281/zenodo.850561},
    owner = {tiagoft},
    }

Web-based, motion-controlled musical interaction

In 2013, I was fascinated by motion-triggered interactive installations and wanted to create my own. At that time, I only had my own laptop to use as hardware, and I wanted the installation to be incredibly easy to use, as easy as browsing a website. After experimenting with WebKit, video processing, octatonic scales, Markov chains, and piano samples, the first JavaScript-based version of Motus was born. It was presented live as an interactive musical installation, and the public seemed to have a lot of fun interacting with it. I also received very positive feedback from people using it in their own homes.
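
As a toy illustration of that mapping (Motus itself runs in the browser as JavaScript; the threshold, scale root, and transition matrix below are made up), webcam motion can be estimated by frame differencing and used to trigger notes of an octatonic scale chosen by a Markov chain:

    # Toy sketch: frame differencing triggers notes from an octatonic scale
    # via a Markov chain. Values here are illustrative, not Motus' own.
    import numpy as np

    octatonic = [60, 62, 63, 65, 66, 68, 69, 71]      # MIDI notes: C octatonic (whole-half)
    transition = np.full((8, 8), 1.0 / 8)             # uniform Markov chain over scale degrees
    rng = np.random.default_rng(0)

    def step(prev_frame, frame, prev_degree, threshold=10.0):
        """Return the next scale degree if enough motion is detected, else None."""
        motion = np.mean(np.abs(frame.astype(float) - prev_frame.astype(float)))
        if motion < threshold:
            return None                               # not enough movement: stay silent
        return rng.choice(8, p=transition[prev_degree])

    prev = rng.integers(0, 256, (120, 160))           # two fake grayscale webcam frames
    curr = rng.integers(0, 256, (120, 160))
    degree = step(prev, curr, prev_degree=0)
    if degree is not None:
        print("play MIDI note", octatonic[degree])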

The source code for Motus is available on GitHub, and the algorithm is thoroughly explained in an article (see reference below). It is also deployed online, so anyone with a Chrome browser can use it. This is one of my first art-driven projects, and I wonder if I will have time to get back to it in the future.

  • [DOI] T. F. Tavares, “An interactive audio-visual installation using ubiquitous hardware and web-based software deployment,” PeerJ Computer Science, vol. 1, p. e5, 2015.
    [Bibtex]
    @Article{Tavares2015,
    Title = {An interactive audio-visual installation using ubiquitous hardware and web-based software deployment},
    Author = {Tavares, Tiago Fernandes},
    Journal = {PeerJ Computer Science},
    Year = {2015},
    Month = {5},
    Pages = {e5},
    Volume = {1},
    Abstract = {This paper describes an interactive audio-visual musical installation, namely MOTUS, that aims at being deployed using low-cost hardware and software. This was achieved by writing the software as a web application and using only hardware pieces that are built-in most modern personal computers. This scenario implies in specific technical restrictions, which leads to solutions combining both technical and artistic aspects of the installation. The resulting system is versatile and can be freely used from any computer with Internet access. Spontaneous feedback from the audience has shown that the provided experience is interesting and engaging, regardless of the use of minimal hardware.},
    Doi = {10.7717/peerj-cs.5},
    ISSN = {2376-5992},
    Keywords = {Audio-visual interaction, Computer music, Webcam},
    }