On 5-7 August 2019, three of our lab members (Annaliese Micallef Grimaud, Scott Bannister, Kelly Jakubowski) gave talks at the Society for Music Perception and Cognition (SMPC) conference in New York City, hosted by New York University. It was a very full programme (156 talks, 164 posters, 7 symposia, 1 keynote), with presentations spanning a broad range of topics in our field. Below we present a summary of some of the highlights of this event, from both Kelly's and Annaliese's perspectives.
Attending SMPC 2019 was a great opportunity to catch up with some American colleagues I hadn't seen in years, or had only previously 'met' over Skype or email. One of the main highlights for me was chairing a symposium on music-evoked autobiographical memories (MEAMs), in which I also presented some of my ongoing research on the topic. A primary goal of the symposium was to give an overview of the state of the art in MEAMs research from varying methodological perspectives. Amy Belfi began by presenting results from lab experiments designed to cue MEAMs, and also gave an overview of new work in which she has compared different methods for coding verbal descriptions of MEAMs. I built on some of these experimental findings to talk about my recent research on MEAMs in everyday life using survey and diary methods. The results of these studies aligned very well with previous lab experiments, with a few minor exceptions; for instance, the MEAMs I captured tended to be more specific than those in previous lab studies, likely due to the more personalised nature of the music cues people encounter in everyday life (as compared to experimenter-selected music). Petr Janata then presented ongoing work using a novel and intensive method for tracking MEAMs in the brain: he collected fMRI data from the same participant in response to music for several hours, and then again one year later, and was able to reveal replicable patterns of brain activation corresponding to the participant's MEAM experiences. Finally, Amee Baird gave a detailed overview of research on MEAMs in people with neurological conditions.
As a clinical psychologist, she has particular insight into working with a variety of such populations. She presented evidence of preserved MEAMs in people with Alzheimer's disease and acquired brain injury, whereas people with behavioural-variant frontotemporal dementia showed greater impairments in the frequency and specificity of MEAMs in comparison to healthy controls. Beyond this symposium there were a couple of other presentations on the subject; for instance, I saw a very interesting poster presented by D. Gregory Springer in which positive and negative autobiographical memories related to music were coded using automated software (Linguistic Inquiry and Word Count) and a method from Gabrielsson's studies of strong experiences with music. This aligns quite clearly with the work of Amy Belfi described above and highlights the fact that there are still many underexplored questions in this research area regarding the best methods for analysing and categorising rich, textual reports of autobiographical memory experiences.
Post-conference drinks featuring MSL team/MSL alumni enjoying NYC
It was also nice to see a range of research that balanced scientific control with ecological methods and/or the use of existing big-data sources. For example, Megan Curtis and her students presented a couple of studies that made use of listeners' Top 100 song lists as generated by Spotify; one showed that independent raters were able to predict some of the listeners' personality traits on the basis of such playlists. A couple of presentations also made use of the MuPsych app developed by Will Randall to collect data on music listening and responses to music in everyday life (e.g. work by Elizabeth Kinghorn on emotional motivations for music listening, and work by Georgia Floridou on everyday earworm experiences across the lifespan). Anna Kasdan spoke very openly about the challenges of recording mobile EEG data during a chamber music festival in a remote, woodland location. The LIVELab symposium chaired by Laurel Trainor highlighted new methodological advances for capturing audience responses to live music performances in real time, facilitated by the specially designed concert hall at McMaster University. Although I did not attend the full symposium, I did see one very interesting talk in this session by Molly Henry, who compared EEG recordings of 20 audience members watching a live concert with those of 20 audience members watching the same concert in recorded format. She found that the neural networks linking audience members were more densely connected when the performance was live, indicating that audience members' brain rhythms were more synchronised in a live performance setting.
In line with the idea of enabling other researchers to make use of datasets and open science practices, Amy Belfi presented a poster on the Famous Melodies Stimulus Set of 109 melodies selected to be highly familiar to US participants, and Sarah Sauvé presented the Melody Annotated String Quartet dataset, developed to provide a ground truth for melody extraction from string quartets in music information retrieval (MIR).
There were a variety of initiatives for students at the conference, which I thought was a nice way to integrate relative newcomers into the field and encourage them to converse with more senior researchers. These activities ranged from a bingo game that encouraged conversation at the opening reception, to an opportunity for students to go for lunch with an academic of their choice, to several lunchtime panel discussions. I served on a panel about applying to graduate school, and it was great to see so many enthusiastic students, many of whom were still undergraduates, attending this session and expressing a keen interest in becoming researchers in our field. One undergraduate student I talked to individually said she did not know anyone else at the conference, but was surprised by how welcoming and helpful essentially everyone she had met at SMPC had been. I told her this is certainly not something that can be said of every academic field, and I am very lucky and proud to work in one where colleagues are generally open to new ideas from new people who are keen to become the next generation of music science researchers.
SMPC 2019 was my first ever SMPC conference, and hence I was very much looking forward to it. Needless to say, it exceeded all my expectations and proved to be a fantastic experience. It was great being immersed in such a rich and diverse research environment and engaging with the music science community in the beautiful setting of New York City.
Research topics varied from cross-cultural studies to embodiment, music and language, and neuroscience, to name a few. Other presentations took a closer look at musical features such as beat, meter, and timbre. As my research focusses on musical features in relation to emotion perception, it was rather interesting to attend presentations that looked at musical and acoustical features from other perspectives. One feature that was extensively explored from different angles was timbre. For example, Caitlyn Trevor compared scary film soundtracks (referred to as scream-like music) to human screams, investigating the notion that scream-like musical excerpts mimic actual screams by exploiting an acoustic feature distinctive of human screams: the narrow range occupied by a scream's modulation power spectrum. A comparison of the mean modulation power spectrum (MPS) amplitudes of 50 human screams and 50 scream-like musical excerpts (such as the well-known excerpt from Psycho) supported the idea that scream-like music does indeed mimic human screams by utilising acoustic features present in them. Renee Timmers compared synthesised film music sampled from sound libraries to recordings of the same film music played by a live orchestra, and investigated whether listeners could detect the difference, whether they preferred a particular source, and whether the source affected their emotional response to the music. Results indicated that the source did not affect listeners' emotional responses. Furthermore, there was no significant difference in the rank order of source preference, which demonstrates the high quality of the sound libraries currently available. Sharmila Sreetharan investigated the role of timbre in the recognisability of different alarm sounds, hypothesising that a timbrally richer sound (reminiscent of the acoustic structure of a musical instrument) would increase sound recognisability and recall.
This was tested by comparing two versions of the same three-note alarm melodies: one with flat amplitude envelopes and one with percussive amplitude envelopes. Results showed that melodies with percussive amplitude envelopes were recognised more often than those with flat amplitude envelopes. Additionally, sequences with flat amplitude envelopes were rated as more annoying-sounding than those with the richer acoustic structure, indicating that timbre can be useful in different settings and is a feature that should be given more consideration.
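For readers unfamiliar with the measure used in Trevor's scream study: a modulation power spectrum is, roughly, the 2-D power spectrum of a spectrogram, showing how much energy the sound carries at each temporal and spectral modulation rate. The snippet below is a minimal numpy sketch of that idea on a toy spectrogram, not the actual analysis pipeline from the talk; the function name and the toy signal are our own illustration.

```python
import numpy as np

def modulation_power_spectrum(spec: np.ndarray) -> np.ndarray:
    """2-D power spectrum of a spectrogram: energy at each combination of
    temporal and spectral modulation rate. spec has shape
    (n_freq_bins, n_time_frames)."""
    # Remove the mean so the origin (DC) does not dominate the result.
    centred = spec - spec.mean()
    # 2-D FFT over the frequency and time axes, shifted so zero
    # modulation sits at the centre of the array, then power.
    return np.abs(np.fft.fftshift(np.fft.fft2(centred))) ** 2

# Toy spectrogram: every frequency band pulses once every 8 frames,
# a crude stand-in for the fast temporal modulations of rough sounds.
t = np.arange(128)
spec = np.tile(np.sin(2 * np.pi * t / 8), (64, 1))
mps = modulation_power_spectrum(spec)
print(mps.shape)  # (64, 128)
```

In this toy case the energy concentrates in a narrow band of temporal modulation rates (±1/8 cycles per frame), which is the kind of concentration the study compared between screams and scream-like music.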
A personal highlight for me was discovering the different methodologies presented across research projects, such as using motion capture to investigate emotion embodiment and recording participants' dance movements in response to music. Suvi Saarikallio explained how music is a forum for embodied emotional expression and investigated whether music serves as a medium of emotion embodiment for adolescents. This was investigated by instructing adolescents to represent different emotions by drumming on a djembe. Using motion capture, their hand movements were observed, and distinct differences between movements expressing high-arousal and low-arousal emotions could be clearly seen. María Marchiano investigated how musical meter is encoded in the foot movements of people dancing at an EDM party by analysing audiovisual recordings of the party, discovering that all participants at the party used exclusively the same two foot motion patterns.
Music and movement was also one of the themes of a symposium titled 'Musical Expression in the Eye of the Beholder', chaired by Jonna Vuoskoski. The symposium encapsulated research on music, movement, and multimodal music perception. Jonna Vuoskoski explained how emotional expression emerges from the interplay between the structural features of the music and the expressive efforts of the performer, which include body movement. The main aim of the study was to investigate how visual and auditory cues affected the conveying of emotion during a musical performance, and in particular whether visual kinematic information from a performer's movements plays a role in communicating perceived emotions. This was investigated by recording the body movements of a pianist and a violinist whilst they played musical passages in different ways to convey different emotions. Participants then had to identify which emotion was being conveyed from one of four formats: silent videos of the movements, audiovisual recordings, audio-only clips, or time-warped audiovisual recordings. Results indicated that in most cases participants could accurately recognise different emotional expressive intentions based on visual information alone. Marc Thompson followed by talking about the impact of gestures and mannerisms on the observer's experience of the music, for the same experiment. The performers' gestures and mannerisms were tracked using optical motion capture, the recordings were turned into stick-figure animations, and participants had to rate which emotion was being expressed based on the movements of the stick figures. Results showed that high-activity movements were likely to be rated as conveying high-arousal emotions such as happiness and anger, whilst low-activity movements were associated with low-arousal emotions.
Birgitta Burger presented a study investigating which movement characteristics support emotion perception in music-induced movement. Stick-figure animations created from motion-capture recordings of dancers moving to musical stimuli were used, and participants rated the emotions they perceived in the animations. Results indicated that each rated emotion had distinct movement characteristics attributed to it, suggesting a relationship between emotions perceived in dance movements and specific movement features. Finally, Petri Toiviainen presented a study on the kinematics of perceived dyadic interaction in music-induced movement. Optical motion capture was used to record dancers' movements as they moved to music in pairs. Stick-figure animations were then compiled from the recordings, and participants provided perceptual ratings of how similarly the pairs of dancers were moving and of their level of interaction. Synchrony in dance movements was quantified in terms of temporal coupling, spatial coupling, and torso orientation. Results indicated that torso orientation is a strong predictor of perceived interaction between the dancers, whilst spatial and temporal coupling served as better predictors of perceived synchrony.
As Kelly mentioned, there were multiple activities aimed at students during the conference. As a student myself, I signed up for the faculty-student lunch programme, which gave us the opportunity to meet an academic in a small group, pick their brains about their research, get advice on our own work, and discuss the world of academia and anything else really. I had the honour of meeting Steve Keller, the Sonic Strategy Director of Pandora, the largest streaming music platform in the US, and he generously took the whole group to lunch. Steve Keller is an audio alchemist whose field of expertise is audio branding. He works on blending art and science together, exploring how sound can enhance experiences and influence individuals' perceptions and behaviours. It was truly fascinating listening to an expert talk about the multiple uses of sound within different environments and experiences, such as altering the perceived spiciness of food via the music accompanying the dining experience. Furthermore, we had the opportunity to ask Steve how he balances and merges his professional and academic careers. It was great to hear about Steve's personal experiences, career development, and experience in academia, and to get tips on our own development.
During the opening reception, we got to play conference bingo, which proved to be a great ice-breaker and a good way to engage with multiple attendees and learn something new about them in the process. As in any bingo game, the aim was to complete a pattern in the grid. Instead of numbers, each square in the grid contained information related to the conference and its attendees, such as 'will be presenting data during my presentation', along with some more general ones, such as 'jet-lagged'. Each person you talked to could only tick one of the boxes, which meant that to complete a line you had to talk to at least five different people. This was a fantastic idea, as the informal game pushed people to engage with one another and, especially, to approach individuals they had never spoken to before.
A final highlight (open to all conference attendees) was the dinner cruise, where attendees were able to socialise in a more casual setting whilst cruising down the Hudson River and taking in breath-taking views of the New York skyline, the Statue of Liberty, the Brooklyn Bridge, and more. It was a truly memorable evening.
SMPC 2019 was a superbly organised conference, with a fantastic programme of talks, posters, and symposia showcasing the different areas of music science research. It is safe to say that my first experience of an SMPC conference has been unforgettable, and I look forward to the next one.