-
Christodoulou, Anna-Maria; Arnim, Hugh Alexander von & Jensenius, Alexander Refsum
(2025).
Supporting Narrative Comprehension in Programmatic Music through Music and Light.
-
Snaprud, Per; Jensenius, Alexander Refsum; Endestad, Tor & Wøien, Randi
(2025).
Concientia.
doi:
https://happeningnext.com/event/samtale-om-bevissthet-og-det-%C3%A5-skape-eid3a0d68lba3.
Summary:
In connection with the exhibition Conscientia at Gamle Munch, a conversation about consciousness and the act of creating is being arranged.
The starting point for the exhibition's theme is consciousness and matters related to the act of creating.
Part of the exhibition is based on a series of self-portraits by Randi Wøien, based on MRI images of her own head. The images represent a set of intricate patterns in different sizes and combinations and can be related to various organs, other beings, and pure nature. At the same time, they are a representation of the artist as she is put together in her own head. Ceramicist Jorid Krosse makes objects whose forms and patterns take inspiration from nature and can be related to organic structures, heads, and other beings. In interplay with the paintings, the ceramic objects are placed in relation to the bodily.
The conversation about consciousness and creating uses the exhibition as a starting point to shed light on what the creative process can mean for our own development, and on how the brain works and responds to creative processes. Topics for the conversation will be the relationship between art and consciousness, the relation between painting and object, and how creating art can affect our understanding of ourselves and our surroundings.
A lot of research is currently being done on what actually happens in the brain when we create something. We have brought in two of the foremost researchers on the topic from the University of Oslo.
Alexander Refsum Jensenius is Professor of Music Technology at the University of Oslo, where he also directs RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion and MishMash Centre for AI and Creativity. He researches how sound and music affect body and mind, consciously and unconsciously.
Tor Endestad is Associate Professor of Cognitive and Neuropsychology at the University of Oslo and is affiliated with RITMO. He heads the FRONT neurolab and researches cognitive psychology and cognitive neuroscience with a focus on brain-imaging methodology. Ongoing research projects include studies of basic mechanisms in the perception of rhythm and time, attention, and memory.
The conversation will be moderated by Per Snaprud. He is a science journalist and former brain researcher. He works at the Stockholm-based magazine Forskning & Framsteg and has previously worked at the science desks of Dagens Nyheter and Sveriges Radio. He is also the author of the book «Medvetandets återkomst, om hjärnan, kroppen och universum».
Victoria Johnson is a violinist, teaches at the Department of Musicology, and takes part in various research projects at UiO. She has given solo concerts at the Bergen International Festival, Ultima, the Borealis festival, and Soundwaves in London, among others. Her passion for contemporary music has resulted in several commissioned works and recordings. On this occasion, she will perform music composed specifically for the images and objects in the exhibition.
-
Jensenius, Alexander Refsum
(2025).
Kan musikk skape fred?
[Internet].
Ungdomsavisa.
doi:
https://ungdomsavisa.com/index.php?artID=763&navB=1.
Summary:
Tonight, a number of personalities from the music scene gathered for a debate at the University of Oslo. The topic was «Kan musikk skape fred?» (Can music create peace?). Among the participants were Birgitte Grimstad and Lars Klevstrand, who have entertained with music for decades. The debate was led by, among others, Alexander Refsum Jensenius, Professor of Musicology at UiO.
-
Jensenius, Alexander Refsum; Sandvik, Kristin Bergtora; Grimstad, Birgitte; Røysum, Andreas Hoem & Klevstrand, Lars
(2025).
Kan musikk skape fred?
doi:
https://www.oslopeacedays.no/program/2025/fred-og-musikk.
Summary:
Research shows that at concerts we feel a sense of community with strangers. For a brief moment, the music brings us together. How can music also unite us in troubled times? Several artists work for peace, in different ways. Meet some of them at Scene Domus Bibliotheca! What is it about music in particular that unites us? Join a conversation about music with the artists Birgitte Grimstad, Lars Klevstrand, and Andreas Røysum. You will also meet peace researcher Kristin Bergtora Sandvik. Music professor Alexander Refsum Jensenius will lead the conversation with questions related to the theme – perhaps they will answer your question too? The conversation is intended for an audience without an academic background in the topic.
-
Jensenius, Alexander Refsum
(2025).
Are we still needed?
doi:
https://filmskolen.no/artikler/2025/ki-i-filmbransjen-2-0.
Summary:
The advance of artificial intelligence continues to be the greatest cultural and societal upheaval since the industrial revolution. An enormous amount has happened since last year's conference, so we are once again inviting you to a day filled with international key figures, groundbreaking projects, and new perspectives on how AI is changing the way we develop, produce, and experience film.
-
Jensenius, Alexander Refsum
(2025).
What is the role of AI in creative activities?
-
Marin-Bucio, Diego
(2025).
Machinic Movement Matrix: A framework and tool for human-AI dance-creation.
Summary:
The Machinic Movement Matrix (MMM) is a conceptual framework for analyzing and designing human–machine creative interaction in dance. Developed through practice-led research combining dance anthropology, posthumanist theory, and AI design, the MMM offers a structured method for identifying how artificial systems participate in—or fall short of—genuine creative collaboration. Rather than focusing on the aesthetic outcomes of human–machine interaction, the MMM examines the underlying creative roles, levels of interactivity, and power dynamics that shape these processes.
-
Blenkmann, Alejandro Omar
(2025).
iElectrodes Toolbox: Fast, Robust, and Open-Source Localization of Intracranial Electrodes.
doi:
https://cuttingeeg.org/practicalmeeg2025/bouquet/.
Summary:
Precise anatomical localization of intracranial electrodes is crucial for interpreting invasive recordings in clinical and cognitive neuroscience research. The open-source iElectrodes toolbox offers a fast, semi-automated, and robust solution for localizing subdural grids, depth electrodes, and strips from MRI and CT images, supporting automatic anatomical labeling. iElectrodes was initially introduced in Blenkmann et al. (2017), and has been updated with major methodological innovations in Blenkmann et al. (2024). To date, it has >2000 downloads.
In this 90-minute session, I will first provide an introductory lecture on the core functionalities of iElectrodes, including image pre-processing steps, semi-automatic electrode localization, brain shift compensation, and standardized anatomical registration. We will cover the recent major upgrades to the toolbox: the GridFit algorithm for robust localization of SEEG and ECoG electrodes under challenging conditions (e.g., noise, overlaps, and high-density implants), and CEPA (Combined Electrode Projection Algorithm), which provides smooth brain-shift compensation for grids based on mechanical modeling principles. These developments significantly enhanced the robustness and precision of intracranial electrode localization.
In the second part of the session, we will move into a hands-on tutorial, where participants will learn how to use the toolbox through practical exercises. Using real patient datasets (anonymized), we will cover:
- Preprocessing MRI and CT images.
- Semi-automatic detection and localization of electrode coordinates using clustering and GridFit algorithms.
- Brain shift correction using CEPA.
- Automatic anatomical labeling of electrodes.
- Generation of an iElectrodes localization project file.
- Exporting electrode coordinates into formats compatible with FieldTrip, EEGLAB, and text reports.
- Integration with further analysis workflows.
This session is intended for both clinical and cognitive neuroscience research users working with SEEG or ECoG. Attendees will leave with practical skills for reliable and reproducible electrode localization, ready to apply to their own datasets.
-
Blenkmann, Alejandro Omar
(2025).
Auditory Prediction and Its Neural Correlates.
doi:
https://csan2025.saneurociencias.org.ar/symposia/neurophysiological-bases-of-memory-consciousness-and-interoception-in-humans-from-the-neuron-to-neural-networks/.
Summary:
This panel presents research on the neural mechanisms underlying auditory prediction—a fundamental process for the perception of sound, language, and music. Evidence is presented on how the brain anticipates and processes auditory stimuli through neural networks that integrate prior sensory information, thereby facilitating the interpretation of speech and complex musical structures.
Using neurophysiological and neuroimaging studies, researchers have identified the neural correlates of auditory prediction in regions such as the auditory cortex and areas involved in memory and attention. Additionally, computational models are analyzed to explain how the brain dynamically adjusts its expectations in response to variations in auditory stimuli.
-
Guo, Jinyue; Tørresen, Jim & Jensenius, Alexander Refsum
(2025).
Cross-modal Analysis of Spatial-Temporal Auditory Stimuli and Human Micromotion when Standing Still in Indoor Environments (poster).
doi:
10.5281/zenodo.17502603.
-
Duch, Michael Francis; Furunes, Alexander Eriksson; Jensenius, Alexander Refsum & Olsen, Cecilie Sachs
(2025).
Kunstnerisk forskning for en kompleks verden.
doi:
https://akademietforyngreforskere.no/wp-content/uploads/2025/11/Jubileumsbok-digital-3.pdf.
Summary:
The arts play an important role in expanding how we work with and understand complex societal problems. Nevertheless, the arts are repeatedly overlooked and deprioritized in research policy. We therefore ask: what is, should, and can the role of the arts be in the Norwegian research landscape?
-
Arnim, Hugh Alexander von; Christodoulou, Anna-Maria; Burnim, Kayla; Upham, Finn; Kelkar, Tejaswinee & Jensenius, Alexander Refsum
(2025).
LightHearted—A Framework for Mapping ECG Signals to Light Parameters in Performing Arts.
-
Lindblom, Diana Saplacan & Murashova, Natalia
(2025).
AI in Society: Virtual and Physical AI.
doi:
https://www.uio.no/om/澳门葡京手机版app下载/skole/fagped-dag/program.html.
Summary:
This talk explores the evolving landscape of artificial intelligence in various societal contexts, focusing on the integration and implications of virtual AI tools such as ChatGPT and Microsoft Copilot, but also of "physical AI", such as social robots. It showcases practical cases, presenting insights from our initial fieldwork applying the Ethical Risk Assessment of AI in practice (ENACT) in various private and public (learning) organizations, as well as insights from our work on human-robot interaction and social robots in the Vulnerability in Robot Societies (VIROS) and Predictive and Intuitive Robot Companion (PIRC) research projects, and in our recently funded ROBOts as Welfare Technologies and Actors for ELderLy Care: A Nordic Model for Integration of Advanced Assistive Technologies (ROBOWELL) project.
-
Laczko, Balint; Rognes, Marie Elisabeth & Jensenius, Alexander Refsum
(2025).
Poster for "Image Sonification as Unsupervised Domain Transfer".
doi:
10.5281/zenodo.17513165.
Summary:
The process of image sonification maps visual features into perceived auditory features. Most established sonification methods rely on identifying salient visual features in the input data and then mapping their distribution to a proportional distribution of auditory features. However, this approach requires both domain expertise and manual feature engineering. Here, we propose a new method of image sonification, leveraging recent advances in representation learning and domain transfer. Our approach introduces a pair of variational auto-encoder models that learn disentangled latent representations of the images and sounds, respectively, and a separate network that maps between these representations. The resulting sonification system encodes images into the latent space and then decodes them as sounds. Both representations and their mapping are learned in an entirely unsupervised manner. When evaluating the system in an interactive real-time setting, we observed that the model successfully learned disentangled representations of image and sound factors in our synthetic datasets.
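The encode-map-decode pipeline described in the summary can be sketched structurally as follows. The stand-in functions below are trivial placeholders for the trained variational auto-encoders and mapping network of the actual system; they are only meant to show how the three learned components compose into a sonification function.

```python
def image_encoder(image):
    """Stand-in for the image VAE encoder: image -> latent vector.

    Here the 1-D "latent" is just mean brightness; the real model
    learns a disentangled multi-dimensional representation.
    """
    flat = [v for row in image for v in row]
    return [sum(flat) / len(flat) / 255.0]

def latent_mapper(z_img):
    """Stand-in for the learned mapping between the two latent spaces."""
    return [0.5 + 0.5 * z_img[0]]

def sound_decoder(z_snd):
    """Stand-in for the sound VAE decoder: latent vector -> synth parameters."""
    return {"frequency_hz": 110.0 * (2 ** (3 * z_snd[0]))}

def sonify(image):
    # Encode the image, map into the sound latent space, decode to sound.
    return sound_decoder(latent_mapper(image_encoder(image)))

print(sonify([[0, 0], [0, 0]]))          # darkest image -> lowest frequency
print(sonify([[255, 255], [255, 255]]))  # brightest image -> highest frequency
```

Because each stage only consumes the previous stage's latent vector, the image and sound models can be trained independently and connected afterwards, which is what makes the fully unsupervised setup possible.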
-
Jensenius, Alexander Refsum
(2025).
Musikk og KI - Utfordringer og muligheter.
Summary:
Artificial intelligence currently affects most things, including musical life and the music industry. But what actually is AI, and what are the challenges and opportunities within art and culture? The presentation discusses various pedagogical approaches and gives examples of how the new AI centre MishMash will tackle these issues.
-
Wallace, Benedikte
(2025).
PØNSJ: Kunstig Intelligens i musikk.
[Radio].
NRK.
-
Tørresen, Jim
(2025).
Keynote: Techno-Ethical Considerations when Applying Machine Learning in Real-world Systems.
doi:
https://www.icmlt.org/2025.html.
-
Tørresen, Jim
(2025).
Invited talk: Intelligent Robotics in Healthcare.
-
Tørresen, Jim
(2025).
Guest lecture: Intelligent Robotics in (Home) Healthcare.
-
Lindblom, Diana Saplacan
(2025).
Overview of my research: background, results, experiences, publications and future directions.
-
Jensenius, Alexander Refsum
(2025).
Technologies supporting research on music-related body motion.
doi:
https://www.liser.lu/events/EXPAR2025-09-18.
Summary:
As researchers, we are increasingly using emerging technologies, such as multiple mobile eye tracking, virtual reality, and physiological indicators (e.g., heart rate and respiration) to study professionals’ individual and collaborative work practices. In this workshop, we will demonstrate how these technologies can be provided to professionals in various fields (e.g., education, healthcare, business, engineering, the arts) as a resource for self-reflection, enabling them to study and improve their own practices.
The goal of this workshop is to introduce novel approaches that use these emerging technologies and tools, and to let participants experience how they can help practitioners study their own skills and understand their learning processes. We will also show how focus groups and stimulated recall interviews can encourage and guide professionals to discover ways to incorporate these new technologies into their practice as resources for reflection and growth.
The workshop’s theme is educational practice and research, with a focus on showing how we can offer teachers theoretically driven and empirically validated methodologies for witnessing the micro-processes of collaborative mathematics learning. We will show and discuss how multiple mobile eye-tracking and virtual reality can be used in educational practice and for teacher training and professional development.
This approach and these emerging technologies are applicable not only in education, but also in all other fields of research that aim to study individual and collective practices, as well as professional learning, during the process of acquiring new skills or improving existing ones.
-
Lindblom, Diana Saplacan
(2025).
"The Wooden Gripper Was Warmer and Made the Robot Less Threatening"– a Study on Perceived Safety Based on Robot Gripper’s Visual and Tactile Properties.
-
Jensenius, Alexander Refsum
(2025).
Video Visualization - Learn to use MG Toolbox.
doi:
https://www.liser.lu/events/EXPAR2025-09-19.
Summary:
This workshop is targeted at students and researchers working with video recordings. You will learn to use MG Toolbox, a Python package with numerous tools for visualizing and analyzing video files. This includes visualization techniques such as motion videos, motion history images, and motiongrams – techniques that, in different ways, allow for looking at video recordings from different temporal and spatial perspectives. It also includes some basic computer vision analysis, such as extracting the quantity and centroid of motion, and using such features in analysis. MG Toolbox for Python is a collection of high-level modules that generate all of the above-mentioned visualizations. The toolbox is relevant for everyone working with video recordings of humans, such as in linguistics, psychology, medicine, human-computer interaction, and educational sciences.
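As a library-free illustration of the idea behind these visualization techniques (this is not MG Toolbox's actual API), frame differencing reduces a video to its changing pixels, from which a feature such as the quantity of motion can be derived:

```python
def motion_frames(frames):
    """Absolute pixel-wise difference between consecutive frames.

    Each frame is a 2D list of grayscale values; the result keeps only
    the pixels that changed, which is the core idea behind motion videos
    and motiongrams.
    """
    motion = []
    for prev, curr in zip(frames, frames[1:]):
        diff = [[abs(c - p) for p, c in zip(prow, crow)]
                for prow, crow in zip(prev, curr)]
        motion.append(diff)
    return motion

def quantity_of_motion(motion_frame, threshold=10):
    """Fraction of pixels whose change exceeds a noise threshold."""
    changed = sum(v > threshold for row in motion_frame for v in row)
    total = sum(len(row) for row in motion_frame)
    return changed / total

# Two tiny 2x2 "frames" in which one bright pixel moves.
f0 = [[0, 0], [0, 255]]
f1 = [[0, 255], [0, 0]]
m = motion_frames([f0, f1])
print(m[0])                      # [[0, 255], [0, 255]]
print(quantity_of_motion(m[0]))  # 0.5
```

Averaging such a motion frame along one axis per time step is what collapses a video into a motiongram, a single image summarizing motion over time.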
-
Jensenius, Alexander Refsum
(2025).
RITMO, MishMash and the fourMs Lab.
Show summary
A presentation for the Ukrainian Research Council.
-
Jensenius, Alexander Refsum
(2025).
Noen resultater fra tre år med forskningssamarbeid med tre orkestre.
Summary:
This presentation summarizes results from research studies on and with three Scandinavian symphony orchestras. In all cases, both qualitative and quantitative data were collected at rehearsals and concerts in concert halls.
-
Lartillot, Olivier
(2025).
Computational music analysis.
-
Wosch, Thomas; Vobig, Bastian & Lartillot, Olivier
(2025).
Human Interaction assessment and Generative segmentation in Health & Music.
doi:
https://www.youtube.com/watch?v=I4jaZIzX0wg.
Summary:
Improvisation in music therapy has been shown to be an effective technique for engaging clients in emotionally rooted (inter)action to treat affective disorders such as major depression (Aalbers et al., 2017; Erkkilä et al., 2011). During improvisation, however, a variety of musical information is exchanged, resulting in a highly complex musical and interpersonal situation. While traditional models of music therapy analysis emphasise aural analysis and assessment of single sessions (Bruscia, 1987), more recent and elaborated methods, such as microanalysis, focus on the detailed development of improvisation sessions (Wosch, 2021; Wosch & Erkkilä, 2016), which comes at the cost of a more time-consuming application process. Digital processing, as in music information retrieval and machine learning, seems promising for accelerating the analysis process, but requires considerable preliminary work in data preprocessing and in formalising the high-level concepts used in music therapy to develop a suitable dataset for model training. Moreover, additional benefits of digital processing include a more detailed and precise analysis of musical data.
-
Sudo, Marina; Ziegler, Michelle; Akkermann, Miriam & Lartillot, Olivier
(2025).
Towards Collaborative Analysis: Kaija Saariaho’s Io (1986–87).
-
Sudo, Marina & Lartillot, Olivier
(2025).
Contemporary Music Analysis and Auditory Memory: The Use of Computational Tools as an Aid for Listening.
doi:
https://fabricadesites.fcsh.unl.pt/ncmm/ncmm-2025-program/.
Summary:
Music analysis involves categorising and interpreting sonic elements to uncover the structure and meaning of a work. In contemporary music studies, analysts often face methodological challenges in this process, especially when dealing with works that contain high degrees of complexity and ambiguity in terms of timbre, texture and temporal structure. This paper proposes a methodological model for analysing spatiotemporal complexities commonly observed in contemporary repertoires, utilising computational tools to enhance auditory memory and expand interpretative possibilities.
Auditory memory plays a pivotal role in aural analysis, an approach that serves as a valuable alternative or complement to traditional score-based analysis. Rooted in Pierre Schaeffer’s typomorphology of objets sonores and the work of other analysts in electroacoustic music studies, the general principles of aural analysis can be outlined in a three-step process: 1) attentive listening to the acoustic properties of sounds, 2) describing and categorising their sonic variations, and 3) assessing their functions within a large-scale formal structure. Computational sound visualisation tools are frequently employed in this process to assist in transcribing and retaining musical events that are either absent from the score or difficult to interpret aurally due to textural complexities and/or timbral elusiveness. Despite their increasing use, however, the full potential of these tools remains largely unexplored in contemporary music studies. By digitally decomposing the transformation processes of ambiguous musical flows and supporting the organisation and structuring of auditory memory, computational analysis of audio data and various visualisation methods can deepen our understanding of both local sonic morphology and large-scale formal trajectory.
In line with these considerations, the paper investigates how specialised computer interfaces can facilitate music analytical processes. Two research questions guide this investigation: 1) How can we analyse a stream of sonic textures; and 2) How can we outline the formal structure of a work that embraces extremes of sonic energy and polyrhythmic intricacy? To explore these questions, we have developed muScope, a new computer program that enables users to browse within high-resolution sonograms in tandem with a range of graphical representations capturing audio, timbral, rhythmic and structural descriptions. The analysis of spectral “fluctuations” allows for the identification of rapid pulsations at the middle ground between rhythm and timbre. Self-similarity matrix representations can serve as a tool for outlining the structural division of the audio data based on various sonic attributes. We integrate these visual representations into an analytical workflow designed to support the construction of a composition’s formal structure.
Our methods are demonstrated through an analysis of excerpts from Kaija Saariaho’s Io for large ensemble and electronics (1986–87) and Raphaël Cendo’s Corps for piano and ensemble (2015). This integrated analytical approach offers new insights into the interplay between musical perception, memory and analytical interpretation using digital tools.
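As an illustration of one of the representations mentioned above, a self-similarity matrix can be computed from any sequence of per-frame feature vectors. This minimal sketch (plain Python with cosine similarity, not the muScope implementation) shows how repeated material and section boundaries become visible as blocks in the matrix:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def self_similarity(features):
    """Pairwise similarity between all analysis frames.

    Frames from the same section form bright blocks along the diagonal;
    section boundaries appear as the edges between blocks.
    """
    return [[cosine(fi, fj) for fj in features] for fi in features]

# Toy feature sequence with two contrasting sections (A A B B).
feats = [[1.0, 0.0], [1.0, 0.1], [0.0, 1.0], [0.1, 1.0]]
ssm = self_similarity(feats)
print(round(ssm[0][1], 2))  # ~1.0: two frames within section A
print(ssm[0][2])            # 0.0: a frame of A against a frame of B
```

In practice the feature vectors would be timbral or rhythmic descriptors extracted per analysis window, and the matrix would be rendered as an image alongside the sonogram.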
-
Lartillot, Olivier
(2025).
Computational Music Analysis: Toolbox and application to music psychology & therapy.
-
Laczko, Balint
(2025).
Presentation of PhD project: Perceptually Aligned Deep Image Sonification.
doi:
https://comma.eecs.qmul.ac.uk/creative-audio-synthesis-and-interfaces-workshop/.
Summary:
Imaging technology has dramatically expanded our understanding of biological systems. This overabundance of images has come with unique problems, such as visual overload, which can potentially obscure data relationships and induce eye fatigue or divert vision from important tasks. Image sonification offers potential solutions to these problems by channeling data into the auditory domain, leveraging our natural pattern recognition skills through hearing. In my PhD project I have been exploring the potential of Machine Learning in solving the two fundamental challenges of image sonification: the perceptually aligned representations of images and sounds, and the cross-modal mapping between them. In this talk I will present my journey through timbre spaces, Differentiable DSP, and cross-modal domain transfer in search of new methods for image sonification.
-
Laczko, Balint
(2025).
C2HO Workshop on Image Sonification with Pixasonics.
doi:
https://www.hf.uio.no/imv/english/research/networks/creative-computing-hub-oslo/pages/c2ho-workshops/image-sonification-workshop.html.
Summary:
How can we turn images into sound? And what can we learn about those images by listening? Why listen instead of looking? These are some questions Bálint Laczkó's research on image sonification aims to find answers to. Bálint has been researching applications of image sonification in biology and medical research and started working on a toolbox for Python, which he will be presenting during this workshop.
The toolbox originates from bio-medical research, but it can also be used creatively for sound design. Bring your laptop with Python (or Anaconda) installed, and some images you'd like to squeeze some sounds out of!
-
Laczko, Balint
(2025).
Oral presentation at ICAD 2025 about Pixasonics: An Image Sonification Toolbox for Python.
Summary:
Pixasonics is a new Python library for interactive image analysis and exploration through image sonification. It uses real-time audio and visualization to help uncover patterns in image data. With Pixasonics, users can launch one or more small web applications (running in a Jupyter Notebook), probe image data using various feature extraction methods, and map those feature vectors to synthesis parameters. The target users are researchers interested in exploring image and volumetric data and creative users who want an intuitive tool for experimental sound design. Pixasonics’ design aims to strike a balance between an easy-to-use web application with minimal boilerplate code necessary and a library that can be integrated into more advanced workflows. Real-time exploration is at the heart, but it can also be used to script non-real-time sonifications of large datasets. This paper presents Pixasonics, its structure, interface, and advanced features, and discusses preliminary feedback from biology researchers and music technologists.
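The probe-extract-map pipeline described above can be illustrated with a minimal, library-free sketch (this is not Pixasonics' actual API): a feature (mean intensity of an image column) is extracted and mapped to a synthesis parameter (the frequency of a sine tone), so that scanning the image left to right produces a pitch contour:

```python
import math

def sonify_column(column, base_freq=220.0, max_freq=880.0):
    """Map the mean pixel intensity of one image column to a frequency."""
    mean = sum(column) / len(column)                       # feature extraction
    return base_freq + (mean / 255.0) * (max_freq - base_freq)  # mapping

def sonify_image(image, sr=8000, col_dur=0.01):
    """Scan an image left to right, rendering one sine tone per column."""
    samples = []
    n = int(sr * col_dur)  # samples per column
    for x in range(len(image[0])):
        col = [row[x] for row in image]
        freq = sonify_column(col)
        samples.extend(math.sin(2 * math.pi * freq * i / sr) for i in range(n))
    return samples

# A dark-to-bright gradient becomes a rising pitch sweep.
img = [[0, 128, 255]] * 4        # 4 rows x 3 columns
audio = sonify_image(img)
print(sonify_column([0, 0]))     # 220.0 (dark column -> lowest pitch)
print(sonify_column([255, 255])) # 880.0 (bright column -> highest pitch)
```

Pixasonics generalizes this idea with interchangeable feature extractors and synthesizers, real-time interaction, and visualization in the notebook; the sketch only shows the underlying feature-to-parameter mapping.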
-
Hübenette, Saira Jameela
(2025).
Spatial and temporal processing of auditory stimuli.
Summary:
Tracking a moving sound in space requires continuous prediction of its next location. This implies that we must combine information about the time and location of the sound, to accurately predict its trajectory. Using EEG, auditory motion processing has previously been found in frontal, central, and parietal areas of the brain.
Prior studies have suggested that space and time for auditory stimuli are processed separately in the brain. Despite the seemingly distinct processing pathways of spatial and temporal sound information, they must integrate at some stage to enable the prediction of movement. It remains unknown if these processes are initially working in parallel and then converging at some point, or if there are several instances of convergence throughout.
The goal of this study is to delineate the neural correlates of spatial and temporal processing of moving sounds, and to assess how and when they converge to aid in the tracking of the sound movement.
-
Oddekalv, Kjell Andreas; Bjørkheim, Terje; Ugstad, Magnus; Walderhaug, Bendik; Hole, Erik & Sørli, Anders Ruud
(2025).
Sinsenfist på ROSA-konferansen 2025.
-
Hübenette, Saira Jameela; Solbakk, Anne-Kristin; Danielsen, Anne; Endestad, Tor & Blenkmann, Alejandro Omar
(2025).
Hearing in motion: Spatial and temporal processing of auditory stimuli.
doi:
https://www.dropbox.com/scl/fi/ijya3kj1w1dl2e7vyjo5y/ICAC25-poster.pdf?rlkey=2516eohjz9vyx3g7qj7rjsedc&st=wm335i35&dl=0.
Summary:
Tracking a moving sound in space requires continuous prediction of its next location. This implies that we must combine information about the time and location of the sound, to accurately predict its trajectory. Using EEG, auditory motion processing has previously been found in frontal, central, and parietal areas of the brain. Prior studies have suggested that space and time for auditory stimuli are processed separately in the brain. Despite the seemingly distinct processing pathways of spatial and temporal sound information, they must integrate at some stage to enable the prediction of movement. It remains unknown if these processes are initially working in parallel and then converging at some point, or if there are several instances of convergence throughout.
The goal of this study was to delineate the neural correlates of spatial and temporal processing of moving sounds, and to assess how and when they converge to aid in the tracking of the sound movement.
-
Oddekalv, Kjell Andreas; Walderhaug, Bendik; Bjørkheim, Terje; Ugstad, Magnus; Sørli, Anders Ruud & Hole, Erik
(2025).
Sinsenfist på Bilkollektivets 30-års-jubileum.
-
Oddekalv, Kjell Andreas; Bjørkheim, Terje; Hole, Erik; Walderhaug, Bendik; Sørli, Anders Ruud & Ugstad, Magnus
(2025).
Sinsenfist på Audunbakken 2025.
-
Oddekalv, Kjell Andreas; Walderhaug, Bendik; Hole, Erik; Bjørkheim, Terje; Ugstad, Magnus & Sørli, Anders Ruud
(2025).
Sinsenfist på Festival Bohem 2025.
-
Oddekalv, Kjell Andreas
(2025).
Manifesting the Rapping Chimera.
Summary:
In Auditory Scene Analysis, Albert Bregman describes “chimeric percepts” – auditory streams we hear as “one” but which in reality originate from multiple different sound sources. In music, however, this apparent failure of our perceptual system is treated as a feature rather than a bug.
In hip-hop music, there has been a tradition of emphasising key words and syllables either live – via “hypepersons” doubling the lead vocalist – or in the studio, by stacking vocal tracks in various ways. These doublings – the chimericity, or degree to which a stream is perceived as one or many – can be transparent or opaque to the listener, and are yet another manipulable parameter for aesthetic effects in the modern producer’s arsenal.
In this presentation, different techniques and approaches for summoning and taming the rapping chimera will be showcased through an original track produced and recorded especially for AR@K. What is the truth of the chimera? Is it one beast or an amalgamation of multiple? Or is it really both, and is that the creature’s very reason to exist?
-
Oddekalv, Kjell Andreas
(2025).
Chimericity in the Orchestration of Rap Vocals.
Summary:
In hip-hop music, the layering and sequencing of vocal tracks is an integral part of the compositional process. Ranging from simple “backtracks” adding subtle emphasis to key words and syllables to intricate interplay between emcees and/or effects-processed supernatural simulacra – these techniques can be understood, differentiated, and recreated using the conceptual framework of chimericity.
Albert Bregman describes chimeric percepts as an error in our auditory scene analysis – a failure to identify that a sound originates from multiple sources rather than a single one. He also notes that in music such chimeric passages are considered a feature rather than a bug. Nowhere is this as evident as in hip-hop music.
This paper will introduce the concept of chimericity. How it can be more or less transparent or opaque. How it can be layered and/or sequential (vertical/horizontal). How it can be influenced via mode and voice. How it relates to central aspects of Black American aesthetics such as Tricia Rose’s (via Artur Jafa) concept of flow, layering and rupture(s) in line, Olly Wilson’s heterogeneous sound ideal and more broadly to Signifyin(g) as theorized by Henry Louis Gates, Jr.
Eileen Southern urges us to perform and create Black music to understand and analyse it. As analysts we should take inspiration from practitioners’ workflows. Using audio waveforms across multiple tracks in a DAW arrangement/timeline view is one intuitive visualization of chimericity (among others) this paper will showcase.
-
Vuoskoski, Jonna Katariina; Perik, Lieke; Foldal, Christian Dyhre & Stupacher, Jan
(2025).
Rhythmic complexity and trait empathy modulate internal motor simulation in response to music.
-
D'Amario, Sara; Løve, Andreas; Foldal, Maja Dyhre & Jensenius, Alexander Refsum
(2025).
Functional Near Infrared Spectroscopy (fNIRS) Responses of Professional Violinists during Orchestra Performances.
-
D'Amario, Sara
(2025).
Well-being Experiences of Professional Women Musicians during Live Concert Performances.
-
Rolfsjord, Sigmund Johannes Ljosvoll; Arnim, Hugh Alexander von; Fatima, Safia & Baselizadeh, Adel
(2025).
Multimodal Transfer Learning for Privacy in Human Activity Recognition.
-
Danielsen, Anne
(2025).
Intervju om "Take on Me".
[Radio].
NRK-P2.
-
Blenkmann, Alejandro Omar
(2025).
The role of the orbitofrontal cortex in building predictions and detecting violations.
-
-
Danielsen, Anne; Langerød, Martin Torvik & London, Justin
(2025).
Where is the Beat in that Complex Note? Effects of Instrument Asynchrony and Attack on the Perceived Timing of Compound Musical Sounds.
doi:
https://www.jyu.fi/en/file/rppw20programmeandabstracts-5pdf.
Show summary
In musical ensembles most notes/chords are sounded by more than one instrument at the same time, and we hear them as simultaneous even when their onsets are not precisely simultaneous. Here we obtain estimates for the perceptual centers of such compound sounds when there are microtiming asynchronies between the instruments. In Exp. 1, three combinations of fast-attack instruments (acoustic kick drum/synthetic kick, kick/hi-hat, kick/bass) were presented with five levels of instrument asynchrony relative to the kick (-40, -20, 0, 20, and 40 ms); the ISI was 600 ms (100 bpm), and the task was to align a click with the compound sound. An RMANOVA shows main effects (p<.001) of Asynchrony and Instrument combination, and a U-shaped relationship between Asynchrony and P-center, such that asynchrony in both directions relative to the kick (kick early and kick late) delays the P-center of the compound sound. In Exp. 2 we used combinations of fast- and slow-attack instruments. Ten combinations (three fast-attack–fast-attack, three slow-attack–slow-attack, four fast-attack–slow-attack) were presented with seven levels of instrument asynchrony: -80, -40, -20, 0, 20, 40, and 80 ms. RMANOVAs of the results revealed different relationships between asynchrony and P-center (p<.001): fast/fast-attack combinations replicated the U-shape of Exp. 1; in fast/slow combinations, the P-center followed the fast-attack instrument linearly; and in slow/slow combinations, P-centers followed the higher-pitched instrument. P-centers of compound sounds thus depend on both the asynchrony between the instruments and the shape of their attacks. Combinations of fast-attack instruments with extreme asynchrony produce bimodal distributions, indicating perceptual segregation of the two sounds. The findings align with studies showing that sharp sounds are used as landmarks for segmentation and timing in speech and music.
-
-
London, Justin; Paulsrud, Thea S?rli & Danielsen, Anne
(2025).
Musical expertise affects the rhythmic perception of sung and spoken speech syllables: The effect of top-down motor representations.
doi:
https://www.jyu.fi/en/file/rppw20programmeandabstracts-5pdf.
Show summary
Previous research (Danielsen et al. 2022) has shown that musical expertise affects the perception of the temporal location (i.e., P-center) of an instrumental sound. Here we extend this research to the context of vocal music. In two experiments, expert singers in jazz and classical genres were presented with a range of stimuli, including neutral stimuli (e.g., noise bursts, clicks), vowel sounds sung by jazz and classical singers, and spoken versions of the vowel sounds. As in our previous study, neutral stimuli produced largely the same responses in both participant groups, while a linear mixed model showed that jazz participants placed their P-centers earlier (22 ms; p=.044) and with lower variability (21 ms; p=.025) than classical participants. Contra our hypothesis, the between-group differences in P-center location and variability persisted in the context of spoken sounds. Why should this be so? Expert musicians develop highly specific motor representations of their own actions and use them when singing and playing. For singers, these models overlap with speech production more generally, which could explain the carry-over to speech stimuli. Likewise, the vocal stimuli presented our participants with not only acoustic cues for the P-center location of the sounds themselves, but also cues for synchronizing individual actions in performance (coordinating the behaviors that produce the sounds with others). This indicates that the vestiges of joint action that remained in our experimental context were enough to engage their top-down sensorimotor models, as would be used in an actual singing or speaking context.
-
Danielsen, Anne
(2025).
Rytme og kognisjon - sansning, struktur og samspill.
-
Polak, Rainer
(2025).
The musical beat is multimodal.
-
Polak, Rainer
(2025).
Music is Multimodal: A Multi-Data Corpus of Music and Dance Performance.
-
Barbero, Francesca; Lenc, Tomas & Polak, Rainer
(2025).
Rhythm categorization is present in human newborns and further shaped across the lifespan.
-
Lenc, Tomas; Barbero, Francesca & Polak, Rainer
(2025).
Revealing rhythm categorization in human brain activity.
-
Guérin, Ségolène; Coulon, Emmanuel; Lenc, Tomas; Polak, Rainer; Keller, Peter & Nozaradan, Sylvie
(2025).
Culture-Driven Plasticity and Imprints of Body-Movement Pace on Musical Rhythm Processing.
-
Polak, Rainer; Dutta, Sagar; Psaroudakis, Georgios; London, Justin & Jacoby, Nori
(2025).
The musical beat is multimodal.
-
Polak, Rainer; Dutta, Sagar; Psaroudakis, Giorgos; London, Justin & Jacoby, Nori
(2025).
Music is multimodal: introducing a multi-data corpus of music and dance performance.
-
Bacot, Baptiste
(2025).
Mr. Bill: a case study of the entangled platforms of electronic music production.
Show summary
Mr. Bill is a professional producer and educator who has been publishing content for over fifteen years. As an expert Ableton Live user, he regularly livestreams his music production sessions on Twitch. This content—the video files, the Ableton Live sets, and other resources—is available on his website (mrbillstunes.com). Additionally, Mr. Bill is the founder of Billegal Beats, a music and sample-pack label that publishes material on music streaming platforms and on Splice, respectively. These activities, along with music releases, touring, and merchandise sales, are promoted and discussed on a dedicated Discord server. Mr. Bill therefore serves as a perfect case for questioning the entangled platforms that support electronic music production on various levels.
By exploring the material Mr. Bill has generated over the last decade, this paper addresses two questions regarding research methodology and musical creativity. Firstly, it examines the issues surrounding the value of music platformization data: how can the wealth of digital information generated be leveraged for empirical musicology and popular music studies? Secondly, it engages with the impacts of platformization on musical creativity: whereas music production has traditionally been a covert activity, it is now conducted in semi-public, interactive formats where viewers can watch and comment. What weight does this new setting carry for producers' creativity and, ultimately, for the final product? Concluding remarks will highlight the layered and interdependent structure of platforms as a key factor in building a successful career, while also reassessing the so-called "democratization" that companies claim to bring through their digital products.
-
Bacot, Baptiste
(2025).
Sampling and resampling in Bass music. A musicological perspective from the DAW.
Show summary
This presentation explores sampling practices in EDM production. Sampling involves extracting audio fragments from preexisting records and using them in new musical contexts. Though rooted in post-WW2 avant-garde music, sampling is now ubiquitous across numerous musical genres. Despite its longstanding use, it is most often examined from a legal and copyright perspective (Demers 2006; Brøvig 2023), whereas musicological aspects—more complex than they appear at first glance—are often overlooked.
Although detailed accounts of sampling are scarce, they do exist (Butler 2014; Déon 2020; Harkins 2020) and encourage conceptualization at the intersection of creativity and technology. Following this route, and leveraging Ratcliffe’s (2014) typology of sampled material within EDM, as well as drawing inspiration from sketch studies (Sallis 2015), this paper examines a collection of Ableton Live projects (.als files) and video live streams by senior producer, DJ, and educator Mr. Bill. Analyzing this material reveals the sampling stages clearly, along with the digital processing that follows the import of sounds into the project.
This investigation highlights various sampling techniques, particularly resampling, which involves sampling sounds from within the DAW. This self-sampling approach allows producers to create new sounds from existing materials within the current project through the signal routing capabilities of the software. Consequently, sampling becomes an internal process for generating new sonic material without needing external sounds. This study contributes to a better understanding of the technological practices in hyper-contemporary popular music by shedding light on the micro-level implications of recording as a technology.
-
Marin-Bucio, Diego
(2025).
Matriz de Movimiento Maquínico (MMM): Una herramienta conceptual sobre la creación de danza humano-máquina.
doi:
https://masters.filescat.uab.cat/muet/classe-magistral-amb-diego-marin-bucio/.
Show summary
This lecture introduces a conceptual framework for mapping the ontological roles of technology in choreographic creation. Drawing on case studies of AI-integrated works, it examines human–machine interaction and the redistribution of agency across dancers, choreographers, and computational systems. The resulting framework offers a practical tool for distinguishing when AI functions as an instrument and when it participates as an active co-creator in dance.
-
Marin-Bucio, Diego
(2025).
Co-creation of rhythm in Djembe Music-Dance.
Show summary
This talk examines the intricate and reciprocal relationship between music and dance in the context of Djembe rhythms. At the heart of these celebrations lies a unique interplay where rhythm serves as both a foundation and a catalyst for collective expression. Alongside Djembe playing, dancers and musicians engage in a dynamic process of mutual influence, creating a multimodal dialogue that evolves in real time. The discussion explores how rhythm emerges not as a static entity but as a shared negotiation where movement and sound coalesce to generate meaning and aesthetic congruence. Central to this process is the interplay between music and dance through structured improvisational elements, which provide a double-encoded catalyst that works both as a framework and as a creative stimulus. Through video recordings and a comparative analysis of Djembe music and dance structures, I illustrate how this rhythmic catalyst takes shape through elastic sound and movement phrases that influence the decision-making of both dancers and musicians.
This study reveals that Djembe rhythms transcend unimodal agency to become a collective, multimodal phenomenon. The insights gained underscore the transformative power of rhythm in co-creative processes, offering remarkable perspectives on performance, dialogue, and the interplay between sound and movement.
-
Arnim, Hugh Alexander von; Erdem, Cagri; Côté-Allard, Ulysse Teller Masao & Jensenius, Alexander Refsum
(2025).
A Sensor is not a Sensor: Diffracting the Preservation of Sonic Microinteraction with the SiFiBand.
-
Câmara, Guilherme Schmidt
(2025).
Just Noticeable Difference Thresholds of Musical Microrhythm (Asynchrony and Non-isochrony) in Multi-Instrumental Groove-based Performance.
Show summary
Musicians can convey different 'timing feels' in performance by manipulating asynchronies between instrument onsets ('behind'/'ahead'/'on-beat') as well as the degree of non-isochrony within metrical subdivisions ('straight'/'swung'). The extent to which we can perceive such microtiming nuances has so far only been examined in non-/quasi-rhythmic contexts involving monotonic and single-layered stimuli, with mixed results regarding the effect of musicianship. Studies have also found that pupil size increases linearly with asynchrony magnitude, but have not yet examined non-isochrony. We measured the just noticeable difference (JND) thresholds of asynchrony (Exp. 1) and non-isochrony (Exp. 2) in a naturalistic, multi-layered groove (funk pattern, IOI 143 ms) with 5 instruments (guitar/bass/kick/snare/hi-hat). Using a 1IFC staircase and global displacements of individual instrument layers (asynchrony: 1-100 ms [early/late], non-isochrony: +1-71.5 ms [late]), we tasked participants (N=64; musicians N=32; non-musicians N=32) to determine whether instruments were playing "together" with or "before/after" other instruments (Exp. 1), and with "straight/even" or "swung/uneven" rhythm (Exp. 2). Pupil response was also measured. As expected, JND thresholds were higher than reported in previous literature (+4%/+2% of IOI for asynchrony and non-isochrony, respectively), likely due to greater attentional 'noise' from additional simultaneously playing instruments, and lower for musicians (14%/16%) than for non-musicians (22%/24%), due to greater training in the perception/production of musical microrhythm. For the first time, we also demonstrate an effect of both instrument and timing displacement: onset displacements were harder to detect in string (22%/24%) than in drum (15%/18%) instruments, likely due to perceptually 'fuzzier' acoustic attack profiles, and in late (20%) rather than early (16%) displacements, likely due to forward acoustic masking effects. We also found a linear relationship between pupil size and both asynchrony and non-isochrony, further indicating that the pupil indexes mental effort in auditory processing of microrhythm more generally.
-
Jensenius, Alexander Refsum
(2025).
MishMash, musikk og kunstig intelligens.
-
-
Jensenius, Alexander Refsum
(2025).
What happens in the body when you stand still?
Show summary
Professor Alexander Refsum Jensenius will talk about his decade-long exploration of human micromotion. Motion data from the 365 standstill sessions he carried out during 2023 reveals lots of biomechanical noise, but also some interesting signals.
-
-
Bishop, Laura
(2025).
Individuality and collectivity in professional orchestra string sections: Gauging the strength of coordination in body motion.
-
Miles, Oliver; Hazzard, Adrian; Moroz, Solomiya; Bishop, Laura & Vear, Craig
(2025).
Meaningful interactions in human-AI musicking.
-
Bishop, Laura
(2025).
Bodies in Concert: Assessing group coordination in live concert settings.
-
-
Jensenius, Alexander Refsum
(2025).
CoARA principles in practice. Insights from a crossdisciplinary Centre of Excellence.
Show summary
The presentation discusses the practical application of the NOR-CAM framework and CoARA principles at RITMO, a cross-disciplinary research centre. It highlights the challenges and strategies involved in promoting comprehensive and transparent research assessment, especially in interdisciplinary settings where values and evaluation criteria differ across fields. The author emphasizes the importance of redefining openness in research, professionalizing hiring committees, implementing structured career development programs, and fostering a culture of sharing and caring. These efforts aim to create a more equitable, supportive, and effective academic environment that values diverse contributions and supports researchers' well-being.
-
-
-
Riaz, Maham
(2025).
Where is That Bird? The Impact of Artificial Birdsong in Public Indoor Environments.
Show summary
This paper explores the effects of nature sounds, specifically bird sounds, on human experience and behavior in indoor public environments. We report on an intervention study where we introduced an interactive sound device to alter the soundscape. Phenomenological observations and a survey showed that participants noticed and engaged with the bird sounds primarily through causal listening; that is, they attempted to identify the sound source. Participants generally responded positively to the bird sounds, appreciating the calmness and surprise they brought to the environment. The analyses revealed that relative loudness was a key factor influencing the experience. A too-high sound level may feel unpleasant, while a too-low sound level renders the sound unnoticeable against background noise. These findings highlight the importance of automatic level adjustments and of considering acoustic conditions in soundscape interventions. Our study contributes to a broader discourse on sound perception, human interaction with sonic spaces, and the potential of auditory design in public indoor environments.
-
Riaz, Maham
(2025).
VentHackz: Exploring the Musicality of Ventilation Systems.
Show summary
Ventilation systems can be seen as huge interfaces for musical expression, with the potential to merge sound, space, and human interaction. This paper examines conceptual similarities between ventilation systems and wind instruments and explores approaches to "hacking" ventilation systems with components that produce and modify sound. These systems enable the creation of unique sonic and visual experiences by manipulating airflow and making mechanical adjustments. Users can treat ventilation systems as musical interfaces by altering shape, material, and texture or by augmenting vents. We call for heightened attention to the sound-making properties of ventilation systems and for action (#VentHackz) to playfully improve the soundscapes of our indoor environments.
-
Sveen, Henrik; Bishop, Laura & Jensenius, Alexander Refsum
(2025).
Cyclic Patterns and Spatial Orientations in Artificial Impulsive Autonomous Sensory Meridian Response (ASMR) Sounds.
Show summary
Autonomous Sensory Meridian Response (ASMR) is a tingling sensation in the neck and spine often triggered by specific sounds. This paper reports a study on the impact of different cyclic patterns and spatial orientations—defined here as the perceived directionality and motion of sound sources in a three-dimensional auditory space—on inducing ASMR experiences. The results demonstrate that both the type of cyclic pattern and the spatial orientation significantly influence the intensity and nature of ASMR experiences. Furthermore, the research explores synthesizing ASMR-inducing sounds while preserving key audio characteristics from acoustically recorded ASMR content. Through survey data analysis and regression modeling, distinct patterns emerge regarding the relationship between personality traits and ASMR experience. The findings contribute to a deeper understanding of ASMR as a sensory phenomenon and provide insights into the potential applications of artificially generated ASMR stimuli. Additionally, the research sheds light on the role of spatiality in ASMR experiences and the synthesis of ASMR-inducing sounds for future studies and practical applications.
-
Jensenius, Alexander Refsum
(2025).
What happens in the body when you stand still?
Show summary
Professor Alexander Refsum Jensenius will talk about his decade-long exploration of human micromotion. Motion data from the 365 standstill sessions he carried out during 2023 reveals lots of biomechanical noise, but also some interesting signals.
-
Jensenius, Alexander Refsum
(2025).
KI og musikkens fremtid.
Show summary
Alexander Refsum Jensenius directs RITMO, the Centre for Interdisciplinary Studies in Rhythm, Time and Motion, with 60 employees. He is currently exploring, systematically, the opportunities and perspectives that AI brings. AI is a disruptive technology that disrupts established business models. Where are we headed? Many of the perspectives can seem overwhelming. It is important to remember, however, that even though machines now take part in developing themselves, it is primarily humans who will develop tomorrow's technologies as well. He believes it is essential that Norway takes part in this development, and that the arts and culture field has a unique opportunity to contribute through experimental exploration and critical reflection. He expects to see more systems that focus on continuous interaction between humans and machines, as when musicians improvise, but argues that AI will not advance further without bodies that can sense and act, and that AI systems may become more empathetic, which will improve human–machine communication but also raises many ethical questions.
-
-
-
Lerdahl, Erik; Buene, Eivind; Berg, Anna & Jensenius, Alexander Refsum
(2025).
Musikk, stillhet og kreativitet.
Show summary
Alexander Refsum Jensenius has stood still for 10 minutes every day for a year. He is called Professor Standstill. He directs a centre with 60 employees who research rhythm, time and motion, and he wants to understand more deeply how the sounds and impressions of our surroundings affect us. His first realization is that he believes the world could become a better place if everyone stood still for 10 minutes every day. What do music and silence do to us, our well-being, and our creativity?
-
Vuoskoski, Jonna Katariina
(2025).
The role of empathy in interpersonal coordination.
-
Jensenius, Alexander Refsum; Watne, ?shild; Maas?, Arnt & Agledahl, Vetle
(2025).
Musikksnakk: Allsang.
Show summary
Research shows that we have never consumed more music than we do now. Yet fewer of us take an active part in making music than before. What do we lose when we do not take part in, or know how to make, music? We will also test what it is like to sing together. What happens to us then?
-
-
Vuoskoski, Jonna Katariina; Perik, Lieke; Foldal, Christian Dyhre & Stupacher, Jan
(2025).
Groove and rhythmic complexity modulate internal motor simulation in response to music.
-
-
Jensenius, Alexander Refsum
(2025).
NOR-CAM og RITMO. Erfaringer fra et interdisiplin?rt senter.
Show summary
This talk summarizes experiences with implementing the CoARA and NOR-CAM principles at RITMO. The author shares reflections on open research practices, the challenges and opportunities of interdisciplinary collaboration, and concrete measures for recruitment, career development and a supportive academic environment. The focus is on professionalizing hiring processes, developing career programmes, and building a culture of sharing and caring in the research community.
-
Jensenius, Alexander Refsum
(2025).
Tale under Konkurransen Unge Forskere 2025.
Show summary
In this lecture I describe how a childhood fascination with the strength of paper cylinders led to a physics project in upper secondary school, which in turn led to a place in the competition final and participation in Nobel events in Stockholm. The experience gave insight into the research world and motivation for further studies and research.
-
Vuoskoski, Jonna Katariina; Foldal, Christian Dyhre; Perik, Lieke & Stupacher, Jan
(2025).
Trait empathy modulates internal motor simulation in response to rhythmic stimuli.
-
-
Schau, Kristopher & Jensenius, Alexander Refsum
(2025).
Nysgjerrige på: rytmens hemmeligheter.
[Internet].
Nysgjerrige Norge.
Show summary
In this episode, Kristopher visits the RITMO research centre at the University of Oslo, where researchers study everything from drumming robots and micro-musical questions to how we are affected by ventilation noise. He meets centre director Alexander Refsum Jensenius, who talks about research at the intersection of music, motion, psychology and robotics.
-
Marin-Bucio, Diego
(2025).
Matriz de Movimiento Maquínico (MMM).
Show summary
This lecture introduces a conceptual framework for mapping the ontological roles of technology in choreographic creation. Drawing on case studies of AI-integrated works, it examines human–machine interaction and the redistribution of agency across dancers, choreographers, and computational systems. The resulting framework offers a practical tool for distinguishing when AI functions as an instrument and when it participates as an active co-creator in dance.
-
Leske, Sabine Liliana; Støver, Isak Elling August; Solbakk, Anne-Kristin; Endestad, Tor; Kam, Julia & Grane, Venke Arntsberg
(2025).
Behavioral, electromyographic and electrophysiological indicators of altered deviance detection and mind wandering in adult ADHD.
-
Leske, Sabine Liliana; Blenkmann, Alejandro Omar; Lubell, Jamie; Llorens, Anaïs; Larsson, Pål Gunnar & Funderud, Ingrid
[Show all 12 contributors for this article]
(2025).
Intracranial correlates of action-based auditory prediction errors in humans.
-
Riaz, Maham
(2025).
The Art and Science of Immersive Sound Design in Games - What's the Secret?
Show summary
In modern games, sound design is far more than mere background noise—it conveys a story and shapes entire worlds. We will explore how gamification principles—interaction, feedback, progression, challenge, exploration, and motivation—integrate with sound design techniques such as spatial audio, adaptive mixing, and procedural audio to create responsive audio environments. Practical aspects of implementing game audio will be discussed within Unity (and Wwise).
-
Jensenius, Alexander Refsum
(2025).
Video Visualization - Learn to use MG Toolbox.
Show summary
This workshop is designed for students and researchers who work with video recordings. You will learn to use MG Toolbox, a Python package with numerous tools for visualizing and analyzing video files. This includes visualization techniques such as motion videos, motion history images, and motiongrams, which allow for viewing video recordings from different temporal and spatial perspectives in various ways. It also includes some fundamental computer vision analysis, such as extracting the quantity and centroid of motion, and using such features in analysis. MG Toolbox for Python is a collection of high-level modules that generate all of the visualizations mentioned above. The toolbox is relevant for everyone working with video recordings of humans, including linguists, psychologists, medical professionals, human-computer interaction specialists, and educators in the educational sciences.
-
Jensenius, Alexander Refsum
(2025).
Music, RITMO and AI.
Show summary
An introduction to RITMO and ongoing research on the topic of music and AI for a workshop between researchers from the University of Oslo, Queen Mary University of London, and KTH Royal Institute of Technology.
-
Danielsen, Anne
(2025).
RITMO. Forhistorien, idéen og prosessen.
-
Glette, Kyrre
(2025).
Biologically-inspired AI: A framework for designing adaptive robots.
-
-
Lindblom, Diana Saplacan
(2024).
Healthcare Professionals’ Attitudes Towards Caregiving Through Teleoperation of Robots in Elderly Care. Seminar at RITMO Centre of Excellence for Time, Rhythm, and Motion.
Show summary
This week's Food and Paper will be given by Diana Saplacan.
-
Göksülük, Bilge Serdar & Tidemann, Aleksander
(2024).
Digital Collaboration in Dance and Music: Remote Interaction and Improvisation with Zoom and LoLa.
doi:
https://www.ultima.no/en/ulysses-online-session-remote-connections-co-creation-across-distances.
Show summary
This digital forum explores artistic collaboration through online platforms, examining how technology can facilitate sustainable and eco-friendly ways of creating music collectively. Participants will gain insights into artist-driven initiatives and practical tools for creative co-creation, regardless of geographical distance. Following three short presentations from invited contributors, there will be an open dialogue and exchange of experiences, allowing ULYSSES artists to share their own projects.
-
Esterhazy, Rachelle; Arnim, Hugh Alexander von & Damsa, Crina I.
(2024).
Multimodal learning analytics to explore key moments of interdisciplinary knowledge-construction.
-
Solli, Sandra; Danielsen, Anne; Leske, Sabine Liliana; Blenkmann, Alejandro Omar; Doelling, Keith & Solbakk, Anne-Kristin
[Show all 7 contributors for this article]
(2024).
Rhythm-based temporal expectations: Unique contributions of predictability and periodicity.
Show summary
Flexibly adapting to our dynamic surroundings requires anticipating upcoming events and focusing our attention accordingly. Rhythmic patterns of sensory input offer valuable cues for these temporal expectations and facilitate perceptual processing. However, a gap in understanding persists regarding how rhythms outside of periodic structures influence perception.
Our study aimed to delineate the distinct roles of predictability and periodicity in rhythm-based expectations. Participants completed a pitch-identification task preceded by different rhythm types: periodic predictable, aperiodic predictable, and aperiodic unpredictable. By manipulating the timing of the target sound, we observed how auditory sensitivity was modulated by the target position in the different rhythm conditions.
The results revealed a clear behavioral benefit of predictable rhythms, regardless of their periodicity. Interestingly, we also observed an additional effect of periodicity. While both periodic and aperiodic predictable rhythms improved overall sensitivity, only the periodic rhythm seemed to induce an entrained sensitivity pattern, wherein sensitivity peaked in synchrony with the expected continuation of the rhythm.
The recorded event-related brain potentials further supported these findings. The target-evoked P3b, possibly a neural marker of attention allocation, mirrored the sensitivity patterns. This supports our hypothesis that perceptual sensitivity is modulated by temporal attention guided by rhythm-based expectations. Furthermore, the effect of rhythm predictability seems to operate through climbing neural activity (similar to the CNV), reflecting preparation for the target. The effect of periodicity is likely related to more precise temporal expectations and could possibly involve neural entrainment. Our findings suggest that predictability and periodicity influence perception via distinct mechanisms.
-
Quiroga-Martinez, David R.; Blenkmann, Alejandro O.; Endestad, Tor; Solbakk, Anne-Kristin; Kim-McManus, Olivia & Willie, John T.
[Show all 10 contributors for this article]
(2024).
Enhanced frontotemporal theta and alpha connectivity during the mental manipulation of musical sounds.
-
Arnim, Hugh Alexander von & Kelkar, Tejaswinee
(2024).
The Shapeshifter: Motion Capture and Interactive Dance for Co-constructing the Body.
-
-
Saplacan, Diana
(2024).
User Studies in HRI: A Qualitative Research Perspective.
-
Saplacan, Diana
(2024).
Social Robots and Socially Assistive Robots (SARs) within Elderly Care: Lessons Learned So Far.
-
Danielsen, Anne
(2024).
Interdisciplinary music research: gains and challenges.
Show summary
Recent years have seen a steady increase in calls for interdisciplinary approaches to research from politicians, university administrators, and public and private funding agencies alike. Interdisciplinary research is needed, it is claimed, to solve many of the foundational crises faced by societies today. While interdisciplinary research holds great promise for large-scale problem-solving, it is also bedeviled by obstacles at the institutional and individual level that monodisciplinary research does not face to the same extent, such as insufficient infrastructure, organizational barriers, lower employability, and few well-established publication channels. Sometimes even more challenging, however, are the different research traditions of the disciplines involved, which might adhere to profoundly different methodological traditions, lack shared criteria for quality assessment, and even disagree about what counts as science.
In this talk, I will address the gains and challenges of working across radically different disciplines in music research, sharing my experience from three highly interdisciplinary projects: the RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion; the MusicLab Copenhagen research concert; and the TIME project on musical microrhythm.
-
Bishop, Laura; Hadjidaki-Marder, Elpida; Ledas, Sarunas & Liestøl, Gunnar
(2024).
Motion capture for augmented reality storytelling in archaeology and cultural heritage dissemination: Simulating an animal sacrifice at Ancient Phalasarna.
-
Jensenius, Alexander Refsum & Laczko, Balint
(2024).
Video Visualization.
This workshop is targeted at students and researchers working with video recordings. You will learn to use MG Toolbox, a Python package with numerous tools for visualizing and analyzing video recordings. This includes visualization techniques such as motion videos, motion history images, and motiongrams; techniques that, in different ways, allow for looking at video recordings from different temporal and spatial perspectives. It also includes some basic computer vision analysis, such as extracting quantity and centroid of motion, and using such features in analysis. MG Toolbox for Python is a collection of high-level modules for generating all of the above-mentioned visualizations and analyses. This toolbox was initially developed to analyze music-related body motion but is equally helpful for other disciplines working with video recordings of humans, such as linguistics, psychology, medicine, and educational sciences.
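The techniques mentioned above are well defined even without the toolbox: a motion image is a (thresholded) frame difference, the quantity of motion sums its pixels, the centroid of motion is its intensity-weighted center, and a motiongram collapses each motion image along one axis and stacks the results over time. A minimal NumPy sketch of these ideas (not the MG Toolbox API; all names here are illustrative):

```python
import numpy as np

def motion_features(frames, threshold=0.05):
    """Motion images, quantity of motion (QoM), centroid of motion (CoM),
    and a horizontal motiongram from a grayscale video.

    frames: float array of shape (T, H, W) with values in [0, 1].
    """
    # Motion image: thresholded absolute difference between adjacent frames.
    diff = np.abs(np.diff(frames, axis=0))            # (T-1, H, W)
    motion = np.where(diff > threshold, diff, 0.0)

    # Quantity of motion: total active intensity per motion image.
    qom = motion.sum(axis=(1, 2))                     # (T-1,)

    # Centroid of motion: intensity-weighted mean pixel position (x, y).
    H, W = frames.shape[1:]
    ys, xs = np.mgrid[0:H, 0:W]
    total = np.where(qom > 0, qom, 1.0)               # avoid division by zero
    com = np.stack([(motion * xs).sum(axis=(1, 2)) / total,
                    (motion * ys).sum(axis=(1, 2)) / total], axis=1)

    # Horizontal motiongram: average each motion image over its width,
    # giving one column per frame (rows = vertical position, cols = time).
    mgram = motion.mean(axis=2).T                     # (H, T-1)
    return motion, qom, com, mgram
```

Running this on a synthetic clip of a moving block yields a motiongram in which only the rows the block passes through are active, tracing its vertical position over time.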
-
Fleckenstein, Abbigail Marie; Vuoskoski, Jonna Katariina & Saarikallio, Suvi
(2024).
Being Musically Moved.
-
Jensenius, Alexander Refsum; Wendt, Kaja Kathrine; Ski-Berg, Veronica & Slette, Aslaug Louise
(2024).
S1E3 Forskerkarrierer - i tall og matriser.
[Internet].
Podcast.
In the third episode of NIFU's podcast series Kunnskapsfloker, we talk about research careers. What exactly is a "research career", and where in society do we find researchers? Developing good research careers is high on the agenda both in Norway and in Europe. New frameworks for career development are currently being drafted, along with statistical indicators that document how research careers evolve over time. But how can the research system facilitate diverse research careers? The guests in this episode are Kaja Kathrine Wendt from SSB/NIFU and Alexander Refsum Jensenius from UiO. The hosts are Veronica Ski-Berg and Aslaug Louise Slette.
-
Swarbrick, Dana & Oddekalv, Kjell Andreas
(2024).
LAB.prat: RITMO x Popsenteret Live & Livestreamed Concert with Dana & The Monsters and Conversation with Dr. Dana Swarbrick.
-
Jensenius, Alexander Refsum
(2024).
Labprat #3: NM i stillstand.
Can you stand still to your favorite song? Try it yourself and win NOK 1000!
People often say that it is impossible not to move to music, but is that true?
On Wednesday 3 April you can put yourself to the test when Professor Alexander Refsum Jensenius, also known as "Professor Standstill", hosts the "Norwegian Championship of Standstill" here at Popsenteret.
The winner is announced the same evening at LAB.prat #3 with Alexander himself! There you will also learn more about what actually happens in the body when we listen to music.
As usual, the evening is led by facilitator and "MC" Dr. Kjell Andreas Oddekalv, also known as "Dr. Kjell" (or, as he likes to put it, all of Norway's "Kjelledegge") from the hip-hop orchestra Sinsenfist. Together with Alexander, he invites you to an informal conversation and Q&A about bodily rhythms and how they are affected by our surroundings.
Between the standstill competition and LAB.prat, Popsenteret is open, and you are welcome to visit our exhibition and everything it has to offer!
-
Jensenius, Alexander Refsum
(2024).
20 years of concert research at the University of Oslo.
In my talk I will give an overview of the concert research conducted in the fourMs Lab at the University of Oslo from the early 2000s to today. Over the years, we have explored and refined numerous data capture methods, from qualitative observation studies, interviews, and diaries to motion capture and physiological sensing. At the core has always been the attempt to shed light on the complexity of music performance. This includes understanding more about the subtleties of performers' sound-producing actions, sound-facilitating motion, and communicative and expressive gestures. It also includes the intricacies of inter-personal synchronization. Over the years, we have been able to expand from studying duos, trios, and quartets to full orchestras. Today, we have lots of data, some answers, and even more questions than when we started. An excellent starting point for future research.
-
Jensenius, Alexander Refsum
(2024).
Video Visualization and Analysis.
In this workshop, I will introduce video visualization as a method for understanding more about music-related body motion. Examples will be given of various methods implemented in the standalone application VideoAnalysis and the Musical Gestures Toolbox for Python.
-
Carvalho, Vinicius Rezende
(2024).
Da trajetória acadêmica à experiência no exterior (From academic trajectory to experience abroad).
-
Bishop, Laura
(2024).
Coordination and individuality in orchestral string sections.
-
Danielsen, Anne; Syvertsen, Tuva; Holm, Askil & Austlid, Alexander
(2024).
Trenger vi utdanning i låtskriving og musikkproduksjon?
-
Polak, Rainer
(2024).
Embedded Audiency: Performing as Audiencing at Music-Dance Circle Events in Mali.
-
Polak, Rainer & Jacoby, Nori
(2024).
Biological Constraints and Cultural Possibilities in Rhythm Perception.
-
Jensenius, Alexander Refsum
(2024).
The Ambient project at RITMO.
The AMBIENT project studies how elements in our surroundings influence people's bodily behaviors and how they feel about the rhythms in an environment. This is done by studying how different auditory and visual stimuli combine to create rhythms in various settings.
-
Jensenius, Alexander Refsum
(2024).
From air guitar to self-playing guitars.
What can air guitar performance tell us about people's musical experience, and how does it relate to real guitar performance? Alexander Refsum Jensenius will talk about his decade-long research into the music-related body motion of both performers and perceivers. He will also describe how this has informed new performance paradigms, including the self-playing guitars that will be showcased at the festival.
Alexander Refsum Jensenius is a professor of music technology at the University of Oslo and Director of RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion. He studies how and why people move to music and uses this knowledge to create new music with untraditional instruments. He is widely published, including the books Sound Actions and A NIME Reader.
-
Jensenius, Alexander Refsum
(2024).
Embodied music-related design.
Abrahamson et al. (2022) recently called for a merging of Embodied Design-Based Research and Learning Analytics to establish a coherent and integrated focus on Multimodal Learning Analytics of Embodied Design. In Spring 2022, members of EDRL and selected international collaborators of the lab participated in “Rhythm Rising,” a workshop week hosted at University of Oslo’s RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion. The workshop featured activities for graduate students to learn the scientific research methodologies of gathering physical, physiological, and neurobiological data from study participants engaged in interactive learning of STEM content. The activities combined the respective expertise of Abrahamson (learning sciences) and Jensenius (embodied music cognition and technology) to investigate sensorimotor micro-processes hypothesized to form the cognitive basis of conceptual understandings, such as hand- and eye actions leading to the emergence of mathematical insight. Whereas the Oslo workshop spurred great enthusiasm among the graduate students, its duration only allowed time for initial data collection. Therefore, we would like to regather in Spring 2024 to continue our collaborative work and to share insights about data analysis, visualization, and interpretation. Concurrently, we’ll develop ideas for future joint research projects.
-
Saplacan, Diana
(2024).
Human Robot Interaction: Studies with Users.
-
Talseth, Thomas & Brøvig, Ragnhild
(2024).
Kommentar til innlegget "Refser NRK for å la Gåte-låt konkurrere i Melodi Grand Prix".
[Journal].
VG.
-
Blenkmann, Alejandro Omar; Volehaugen, Vegard Akselsson; Carvalho, Vinicius Rezende; Leske, Sabine Liliana; Llorens, Anaïs & Funderud, Ingrid
[Show all 14 contributors for this article]
(2024).
An intracranial EEG study on auditory deviance detection.
-
Saplacan, Diana
(2024).
Qualitative Observational Video-Based Study on Perceived Privacy in Social Robots’ Based on Robots Appearances.
-
Jensenius, Alexander Refsum & Danielsen, Anne
(2024).
Tverrfaglighet: 40-grupper til besvær.
We are positive toward multi- and interdisciplinary study programs and think the 40-credit course groups are a good idea. The structure is in place, but the implementation is lacking. At times it is hard to believe that we work at the same institution.
-
Solli, Sandra; Danielsen, Anne; Leske, Sabine Liliana; Blenkmann, Alejandro Omar; Doelling, Keith & Solbakk, Anne-Kristin
[Show all 7 contributors for this article]
(2024).
Rhythm-based temporal expectations: Unique contributions of predictability and periodicity.
-
Glette, Kyrre
(2024).
Evolutionary design of morphology and control for simulated and real-world robots.
-
Wallace, Benedikte
(2024).
Imitation or Innovation? Translating Features of Expressive Motion from Humans to Robots.
-
Bucio, Diego Antonio Marin
(2024).
Dancing Embryo: Danza y co-creatividad humano-IA.
-
Tørresen, Jim
(2024).
Ethical, Legal and Technical Challenges and Considerations.
-
Bucio, Diego Antonio Marin
(2024).
Dance in the unequal world of High Tech.
-
Bucio, Diego Antonio Marin
(2024).
Embodying the artificial: a multimodal human-machine performance.
-
Bucio, Diego Antonio Marin
(2024).
¿Puede una IA bailar?
The reflections and results presented in this article originate in the experience of designing an artificial intelligence dancer and the subsequent human-AI co-creation of dance in the project "Dancing Embryo" (Marin, Wallace and 6A9). The research revolves around three ethnographies that capture the process of collaboration between humans and machines to create and perform dance. This research transcends mere technological innovation to become a profound philosophical inquiry that questions the nature and limits of the art of dance.
-
Basiński, Krzysztof; Domżalski, Tomasz & Blenkmann, Alejandro Omar
(2024).
The effect of harmonicity on mismatch negativity responses to different auditory features.
-
Carvalho, Vinicius Rezende; Collavini, Santiago; Kochen, Silvia; Solbakk, Anne-Kristin & Blenkmann, Alejandro Omar
(2024).
Human single-neuron responses to a local-global oddball paradigm.
-
Solbakk, Anne-Kristin
(2024).
Inhibitory control and impulsive actions in ADHD.
-
Oddekalv, Kjell Andreas
(2024).
“I’m sorry y’all, I often drift – I’m talking gift” Microrhythmic analysis of rap – categorization, malleability and structural bothness.
-
Tørresen, Jim
(2024).
Invitert foredrag: Vil vi ha roboter til å hjelpe oss når vi trenger hjelp?
-
Asko, Olgerta; Volehaugen, Vegard Akselsson; Leske, Sabine Liliana; Funderud, Ingrid; Llorens, Anaïs & Ivanovic, Jugoslav
[Show all 12 contributors for this article]
(2024).
Predictive encoding of deviant tone sequences in the human prefrontal cortex.
-
Oddekalv, Kjell Andreas
(2024).
Dr. Kjell: Hiphop 40 år i Norge.
-
Asko, Olgerta; Volehaugen, Vegard Akselsson; Leske, Sabine Liliana; Funderud, Ingrid; Llorens, Anaïs & Ivanovic, Jugoslav
[Show all 12 contributors for this article]
(2024).
Predictive encoding of deviant tone sequences in the human prefrontal cortex.
The ability to use predictive information to guide perception and action relies heavily on the prefrontal cortex (PFC), yet the involvement of its subregions in predictive processes remains unclear. Recent perspectives propose that the orbitofrontal cortex (OFC) generates predictions about perceptual events, actions, and their outcomes while the lateral prefrontal cortex (LPFC) is involved in prospective functions, which support predictive processes, such as selective attention, working memory, response preparation or inhibition. To further delineate the roles of these PFC areas in predictive processing, we investigated whether lesions would impair the ability to build predictions of future events and detect deviations from expected regularities. We used an auditory deviance detection task, in which the structural regularities of played tones were controlled at two hierarchical levels by rules defined at a local (i.e., between tones within sequences) and global (i.e., between sequences) level.
We have recently shown that OFC lesions affect detecting prediction violations at two hierarchical levels of rule abstraction, i.e., altered MMN and P3a to local and simultaneous local + global prediction violations (https://doi.org/10.7554/eLife.86386). Now, we focus on the task's predictive aspect and present the latest results showing the involvement of PFC subregions in anticipation of deviances informed by implicit predictive information.
Behavioral data shows that deviance expectancy induced faster deviance detection in healthy adults (n=22), suggesting that participants track a state space representation of the task and anticipate upcoming deviant sequences.
The analysis of EEG data from patients with focal lesions to the OFC (n = 12) or LPFC (n = 10), and SEEG from the same areas in patients with epilepsy (n = 7), revealed interesting differences. Healthy adults (n = 15) showed modulations of the Contingent Negative Variation (CNV), a marker of anticipatory activity, tracking the expectancy of deviant tone sequences. However, patients with OFC lesions lacked CNV sensitivity to the predictive context, while patients with LPFC lesions showed moderate sensitivity compared to healthy adults. These results were further supported by intracranial recordings, which revealed expectancy modulation of the high-frequency broadband signal from electrodes in OFC and LPFC, with an earlier latency of activity modulation for the OFC and a later one for the LPFC.
Altogether, the complementary approach from behavioral, intracerebral EEG, scalp EEG, and causal lesion data provides compelling evidence for the distinct engagement of the two prefrontal areas in predicting future events and signaling deviations.
-
Oddekalv, Kjell Andreas
(2024).
Panelsamtale: Fra mainstream til opprør til mainstream igjen – Hiphop 40 år i Norge.
-
Oddekalv, Kjell Andreas; Bjørkheim, Terje; Sørli, Anders Ruud; Ugstad, Magnus; Hole, Erik & Walderhaug, Bendik
(2024).
Sinsenfist julekonsert 2024.
-
Bernhardt, Emil
(2024).
The Beauty and/of the Beat: Expressive Regularity in Schubert.
-
Göksülük, Bilge Serdar
(2024).
Conducting Semi-Structured Dance Research in Motion Capture Labs.
-
Göksülük, Bilge Serdar
(2024).
Remote Intercorporeality Through Telematic Technologies.
-
Lartillot, Olivier
(2024).
Successes and challenges of computational approaches for audio and music analysis and for predicting music-evoked emotion.
Background
Decades of research in computational sound and music analysis has led to a large range of analysis tools offering rich and diverse description of music, although a large part of the subtlety of music remains out of reach. These descriptors are used to establish computational models predicting perceived or induced emotion directly from music. Although the models can predict a significant amount of variability of emotions experimentally measured (Panda et al., 2023), further progress seems hard to achieve, probably due to the subtlety of music and of the mechanisms underlying the evocation of emotion from music.
Aims
An extensive but synthetic panorama of computational research in sound and music analysis as well as emotion prediction from music is presented. Core challenges are highlighted and prospective ways forward are suggested.
Main contribution
For each separate music dimension (dynamics, timbre, rhythm, tonality and mode, motifs, phrasing, structure and form), a synthetic panorama of the state of the art is evoked, highlighting strengths and challenges as well as indicating how particular sound and music features have been found to correlate with rated emotions. The various strategies for modelling emotional reactions to audio and musical features are presented and discussed.
One common general analytical approach carries out a broad and approximate analysis of the audio recording based on simple mathematical models, describing individual audio or musical characteristics numerically. It is suggested that such a loose approach might tend to drift away from commonly understood musical processes and to generate artefacts. This vindicates a more traditional musicological approach based on a focus on the score or approximations of it – through automated transcription if necessary – and a reconstruction of the types of traditional representations commonly studied in musicology. I also argue for the need to closely reflect the way humans listen to and understand music, inspired by a cognitive perspective. Guided by these insights, I sketch the idea of a complex system made of interdependent modules, founded on sequential pattern inference and activation scores not based on statistical sampling.
I also suggest perspectives for the improvement of computational prediction of emotions evoked by music.
Discussion and conclusion
Further improvements of computational music analysis methods, as well as emotion prediction, seem to call for a change of modelling paradigm.
References
R. Panda, R. Malheiro, R. Paiva, "Audio Features for Music Emotion Recognition: A Survey", IEEE Transactions on Affective Computing, 14-1, 68-88, 2023.
-
Jensenius, Alexander Refsum; Danielsen, Anne; Kvammen, Daniel & Tollefsbøl, Sofie
(2024).
Musikksnakk: Musikk i urolige tider.
Research shows that at concerts we feel a sense of community with strangers. For a brief moment, the music brings us together. How can music also bring us together in troubled times? What is it about music, in particular, that unites us? Join a conversation about music with the artists Daniel Kvammen and Sofie Tollefsbøl, vocalist of FIEH, and music researcher Anne Danielsen. Music professor Alexander Refsum Jensenius will lead the conversation with questions related to the theme; perhaps they will answer your question too? The conversation is intended for an audience without an academic background in the topic.
-
Jensenius, Alexander Refsum
(2024).
The assessment of researchers is changing – how will it impact your career?
Changes are happening in the world of research assessment, for example by recognizing several competencies as merits and a better balance between quantitative and qualitative goals. In Norway, for example, Universities Norway presented the NOR-CAM report in 2021 which sparked a movement for reform. As an early career researcher, it's crucial to understand how these changes may impact your research career. In this talk, Jensenius will discuss the evolving landscape of research assessment and what it means for you.
-
Lartillot, Olivier
(2024).
Introduction to the MiningSuite toolbox.
-
Blenkmann, Alejandro Omar
(2024).
Current challenges in human EEG/iEEG/SUA.
-
Lartillot, Olivier
(2024).
KI-verktøy for håndtering, transkribering og analyse av musikkarkiver.
I present a series of tools developed in collaboration with the National Library of Norway. AudioSegmentor automatically splits tape recordings into individual pieces of music; this tool simplified the digitization of the Norwegian Folk Music Collection. We use advanced deep learning methods to create a groundbreaking automatic music transcription system, MusScribe, first fine-tuned for the Hardanger fiddle and now made available to music archive professionals for a wide range of music. I also discuss our ongoing progress in the automated musicological analysis of folk music pieces and of comprehensive collections.
-
Blenkmann, Alejandro Omar; Leske, Sabine Liliana; Llorens, Anaïs; Lin, Jack J.; Chang, Edward & Brunner, Peter
[Show all 12 contributors for this article]
(2024).
Novel tools for the anatomical registration of intracranial electrodes.
-
Göksülük, Bilge Serdar
(2024).
Remote Dance Improvisation Through Advanced Telematic Technologies.
-
Jensenius, Alexander Refsum
(2024).
Muskelmusikk.
What happens in the muscles when we try to stand still? How can we make music from the body? During the break of Forsker Grand Prix, I will entertain with a stage show in which I explore interactive muscle bands and a music glove.
-
Göksülük, Bilge Serdar
(2024).
Immersive Technologies in TYA: Bodily Concerns, Challenges and Opportunities.
-
Göksülük, Bilge Serdar
(2024).
Immersive Technologies and Their Implications in Theatre for Young Audiences.
-
Tørresen, Jim
(2024).
Kunstig intelligens og forskningsetiske vurderinger.
-
Ziegler, Michelle; Sudo, Marina; Akkermann, Miriam & Lartillot, Olivier
(2024).
Towards Collaborative Analysis: Kaija Saariaho’s IO.
-
Vestre, Katharina; Mossige, Joachim; Løvvik, Ole Martin & Jemterud, Torkild
(2024).
Abels tårn 12.1.2024.
[Radio].
Abels tårn NRK P2.
-
Polak, Rainer; Pearson, Lara & Horlor, Samuel
(2024).
Audiency Beyond the Concert Hall: An Interaction-Based, Music-Theoretical Approach.
-
Bucio, Diego Antonio Marín & Polak, Rainer
(2024).
Exploring motion capture systems in dance research: a case study of djembe dance from West Africa.
-
Polak, Rainer; Holzapfel, Andre & Paschalidou, Stella
(2024).
Motion capture in the field: three reports of hardships in data collection and processing.
-
Riaz, Maham
(2024).
Comparing Spatial Audio Recordings from Commercially Available 360-degree Video Cameras.
This paper investigates the spatial audio recording capabilities of various commercially available 360-degree cameras (GoPro MAX, Insta360 X3, Garmin VIRB 360, and Ricoh Theta S). A dedicated ambisonics audio recorder (Zoom H3VR) was used for comparison. Six action sequences were performed around the recording setup, including impulsive and continuous vocal and non-vocal stimuli. The audio streams were extracted from the videos and compared using spectrograms and anglegrams. The anglegrams show adequate localization in ambisonic recordings from the GoPro MAX and Zoom H3VR. All cameras feature undocumented noise reduction and audio enhancement algorithms, use different types of audio compression, and have limited audio export options. This makes it challenging to use the spatial audio data reliably for research purposes.
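The "anglegram" mentioned above plots estimated direction of arrival over time. For first-order ambisonics (B-format), a frame-wise azimuth estimate can be derived from the active sound intensity vector, i.e. the averaged products of the omnidirectional channel W with the X and Y channels. A rough NumPy sketch under assumed channel conventions (function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def anglegram_azimuths(w, x, y, frame_len=1024, hop=512):
    """Frame-wise azimuth estimates (radians) from first-order B-format
    ambisonic channels w, x, y (1-D arrays of equal length).

    Each frame's azimuth follows from the active intensity vector:
    I_x ~ mean(w * x), I_y ~ mean(w * y), azimuth = atan2(I_y, I_x).
    """
    azimuths = []
    for start in range(0, len(w) - frame_len + 1, hop):
        s = slice(start, start + frame_len)
        ix = np.mean(w[s] * x[s])   # intensity component toward the front
        iy = np.mean(w[s] * y[s])   # intensity component toward the left
        azimuths.append(np.arctan2(iy, ix))
    return np.array(azimuths)
```

For a synthetic plane wave encoded from a known azimuth, every frame estimate recovers that azimuth; stacking such estimates over time gives the anglegram-style view used to compare the devices.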
-
Riaz, Maham & Theodoridis, Ioannis
(2024).
Ventilation hacking.
We examine innovative approaches to mitigate the issue of unwanted ventilation noise, transforming it from a disruptive element into a source of ambient or musical sound. We propose a range of solutions, from mechanical adjustments to acoustic treatments and digital interventions.
-
Brøvig, Ragnhild & Aareskjold-Drecker, Jon Marius
(2024).
Hey Siri, what are the royalty splits of the song you wrote for me?
-
Carvalho, Vinicius Rezende; Collavini, Santiago; Kochen, Silvia; Solbakk, Anne-Kristin & Blenkmann, Alejandro Omar
(2024).
Single-neuron responses to a multifeature oddball paradigm.
-
Jensenius, Alexander Refsum
(2024).
Hvordan kan åpen forskning lede til åpen utdanning? Og omvendt?
Openness and academic freedom are cornerstones of a well-functioning research system. We must preserve them as we build a coherent national research system that also includes restricted and classified research. How can we build a research system that is as open as possible and as closed as necessary? Open research means that research is made available and shared by researchers, institutions, and sectors, and across national borders. Little attention has been paid to the positive effects open research can have on education. How can we encourage more research-based education while also raising the quality of the research?
-
Guo, Jinyue
(2024).
Comparing Four 360-Degree Cameras for Spatial Video Recording and Analysis.
This paper reports on a desktop investigation and a lab experiment comparing the video recording capabilities of four commercially available 360-degree cameras: GoPro MAX, Insta360 X3, Garmin VIRB 360, and Ricoh Theta S. The four cameras all use different recording formats and settings and have varying video quality and software support. This makes it difficult to conduct analyses and compare across devices. We have implemented new functions in the Musical Gestures Toolbox (MGT) for reading and merging files from the different platforms. Using the capabilities of FFmpeg, we have also made a new function for converting between different 360-degree video projections and formats. This allows (music) researchers to exploit 360-degree video recordings using regular video-based analysis pipelines.
-
Søyseth, Vegard; Bergflødt, Adrian Anderson; Glette, Kyrre; Watanabe, Shin & Otterdijk, Marieke van
(2024).
Utstilling av roboter på SENT @ Teknisk museum.
-
Brøvig, Ragnhild
(2024).
Exploring the Intersection of AI, Ethics, and Music.
-
Tørresen, Jim
(2024).
From Adaptation of the Robot Body and Control Using Rapid-Prototyping to Human–Robot Interaction with TIAGo.
-
Brøvig, Ragnhild & Grydeland, Ivar
(2024).
Love your Latency and the Glitching Spatiotemporal Condition.
-
Jensenius, Alexander Refsum
(2024).
Can doing nothing tell us everything?
Can doing nothing tell us everything? Meet Professor Alexander Refsum Jensenius, a music researcher exploring the deep connections between sound, space, and the human body. Through his fascinating studies on stillness and motion, Alexander has discovered surprising insights into how we interact with our environment.
-
Jensenius, Alexander Refsum; Edwards, Peter; Klungnes, Kristina Mariell Dulsrud; Berg, Anna & Jenssen, Kjell Runar
(2024).
Musikksnakk: Filmmusikk.
Music creates atmosphere in films. But how does film music manage to move us so much? And where does the story of why music is used to create a particular mood begin?
Imagine a shark swimming toward an unsuspecting bather, in silence. What about Frodo and Sam clawing their way up Mount Doom to the sound of... nothing? Or Katniss Everdeen riding through the Capitol in a flaming chariot, without pounding drums and majestic horns? A bit boring, right?
Music is important in film for creating a particular mood. But how did it get that way? Is it just there to make us feel, or is there a history behind film music?
-
Jensenius, Alexander Refsum & Jerve, Karoline Ruderaas
(2024).
Verdens største musikkeksperiment.
[Journal].
Ballade.
Tonight, NRK's popular science radio program Abels tårn, the Norwegian Radio Orchestra (KORK), and the research project MusicLab meet to measure what happens between musicians and audience when they are exposed to music.
-
Jensenius, Alexander Refsum; Rønning, Anne-Birgitte; Haug, Dag Trygve Truslew & Sæther, Steinar Andreas
(2024).
Frokostmøte: Humaniora og infrastruktur.
Not even a humanities researcher can manage entirely alone. But what infrastructure do we need for humanities research? Infrastructures come in many shapes and sizes, and we talk about them more and more, especially when the conversation turns to the digital shift. So we ask: what are the infrastructures of the humanities? What differences and similarities are there between the various disciplines at the Faculty of Humanities? How can we best ensure that the necessary infrastructure is in place? To tackle these questions, we have gathered a panel of experienced researchers and teachers from different humanities disciplines, all of whom also have experience from leadership roles and positions that shape how the Faculty of Humanities and UiO deal with infrastructure.
-
Jemterud, Torkild; Jensenius, Alexander Refsum; Løseth, Guro Engvig & Holthe, Kolbjørn
(2024).
ABELS KORK - Verdens største(?) musikkeksperiment.
[Radio].
NRK.
How does music affect us? What happens in our brain when we hear a melody we like, or dislike? Why do we react differently to different kinds of music? And how does an entire orchestra manage to play flawlessly together? And by the way, do they really need a conductor? Every Friday, the panel of Abels tårn answers all kinds of scientific questions, big and small, from the listeners. Some head far out into space, while others are more concerned with what happens on the kitchen counter. But music is something we all have a relationship to. It is around us all the time, and there is much to wonder about when it comes to music and how it speaks to us on a deeply personal level. That is why Abels tårn and KORK have joined forces with RITMO and the University Library to create a musical edition of the popular science program. Introducing: Abels KORK!
-
Jensenius, Alexander Refsum; Riaz, Maham; Oldfield, Thomas L & Juarez, Karenina
(2024).
RITMO-studenter presenterer nye installasjoner.
Students affiliated with RITMO exhibit their projects at Popsenteret: an interactive sewing machine from 1911, a listening and talking mirror, and an interactive painting. How can such objects provide musical experiences?
-
Jónsson, Bjørn Thór; Erdem, Çağrı; Fasciani, Stefano & Glette, Kyrre
(2024).
Towards Sound Innovation Engines Using Pattern-Producing Networks and Audio Graphs.
This study draws on the challenges that composers and sound designers face in creating and refining new tools to achieve their musical goals. Utilising evolutionary processes to promote diversity and foster serendipitous discoveries, we propose to automate the search through uncharted sonic spaces for sound discovery. We argue that such diversity promoting algorithms can bridge a technological gap between the theoretical realisation and practical accessibility of sounds. Specifically, in this paper we describe a system for generative sound synthesis using a combination of Quality Diversity (QD) algorithms and a discriminative model, inspired by the Innovation Engine algorithm. The study explores different configurations of the generative system and investigates the interplay between the chosen sound synthesis approach and the discriminative model. The results indicate that a combination of Compositional Pattern Producing Network (CPPN) + Digital Signal Processing (DSP) graphs coupled with Multi-dimensional Archive of Phenotypic Elites (MAP-Elites) and a deep learning classifier can generate a substantial variety of synthetic sounds. The study concludes by presenting the generated sound objects through an online explorer and as rendered sound files. Furthermore, in the context of music composition, we present an experimental application that showcases the creative potential of our discovered sounds.
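The core loop behind the Quality Diversity side of such a system is compact: MAP-Elites discretizes a behavior descriptor into cells and keeps only the best-performing solution ("elite") per cell. A toy, domain-agnostic sketch (not the authors' CPPN + DSP-graph system; the genome, descriptor, and all names here are illustrative):

```python
import random

def map_elites(random_genome, mutate, evaluate, n_bins=10, iters=2000, seed=1):
    """Minimal MAP-Elites with a 1-D behavior descriptor in [0, 1).

    evaluate(genome) -> (fitness, behavior); the archive maps each behavior
    bin to its current elite (the highest fitness seen in that bin).
    """
    rng = random.Random(seed)
    archive = {}  # bin index -> (fitness, genome)
    for _ in range(iters):
        if archive and rng.random() < 0.9:
            # Mutate a randomly chosen elite (exploit while keeping diversity).
            _, parent = rng.choice(list(archive.values()))
            genome = mutate(parent, rng)
        else:
            genome = random_genome(rng)   # occasional random restarts
        fitness, behavior = evaluate(genome)
        cell = min(int(behavior * n_bins), n_bins - 1)
        if cell not in archive or fitness > archive[cell][0]:
            archive[cell] = (fitness, genome)
    return archive

# Illustrative toy domain: genome = [behavior gene, quality gene] in [0, 1).
def random_genome(rng):
    return [rng.random(), rng.random()]

def mutate(g, rng):
    return [min(max(v + rng.gauss(0.0, 0.1), 0.0), 0.999) for v in g]

def evaluate(g):
    return g[1], g[0]   # fitness = second gene, behavior = first gene
```

After a few thousand iterations the archive typically covers most of the ten bins, each holding a high-quality elite. In the paper's setting the genome would instead be a CPPN + DSP graph, the descriptor a learned sound classification, and the fitness the classifier's confidence.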
-
Jensenius, Alexander Refsum
(2024).
Challenges and Possibilities of Open Music Data.
The Sempre Autumn conference was an online student study day, held on Friday 8th November 2024, with a combination of student presentations, research speed dating, and a special session on open research featuring Professor Iain Brennan (University of Hull), Professor Tuomas Eerola (Durham University), and Professor Alexander Refsum Jensenius (University of Oslo). The event was open to doctoral students at any stage of their research and those thinking of applying for doctoral study. We invited proposals for short presentations (10 minutes + 5 for Q&A) from doctoral students, on any aspect of music psychology or music education.
-
Glette, Kyrre
(2024).
Bio-inspiration and divergent search algorithms for robotics and sound exploration.
-
Solbakk, Anne-Kristin; Hope, Mikael; Solli, Sandra; Leske, Sabine Liliana; Foldal, Maja Dyhre & Blenkmann, Alejandro Omar
(2024).
Research seminar.
-
Jónsson, Björn Thór
(2024).
Phytobenthos 1.
Show summary
A playlist of livestream recordings during several nights of stochastic sequencing through sets of sounds found during runs with different configurations of quality diversity search.
-
Norstein, Emma Stensby; Yasui, Kotaro; Kano, Takeshi; Glette, Kyrre & Ishiguro, Akio
(2024).
A bio-inspired decentralized approach to multi-morphology control.
Show summary
Traditional robot controllers are usually optimized for a specific robot design and will often fail if the robot's morphology is changed. This can be a challenge for robustness to damage, for self-reconfiguring robots, or for morphology design search algorithms, where each new robot design requires re-learning the controller. Here, we take a strongly bio-inspired approach, drawing on the versatility of myriapod locomotion, to create a multi-morphology controller. We propose a simple decentralized controller model which can, without any change in parameters, adapt to various centipede-like morphologies and display different behaviors based on changes in morphology and environment. The approach shows potential for robot design and could be useful for understanding mechanisms of animal locomotion.
-
Câmara, Guilherme Schmidt
(2024).
Looking for the perfect JND: In search of more ecological thresholds for the perception of microrhythm in groove-based music.
Show summary
There is currently a gap in microrhythm research regarding to what extent we perceive nuances in the
timing of complex acoustic stimuli in realistic musical contexts. Classic studies tend to investigate the
so-called just-noticeable difference (JND) thresholds of timing discrimination in non- or quasi-rhythmic
contexts, and generally use non-musical sound stimuli such as clicks or sine waves. Findings from these
studies show that we can discriminate minute timing irregularities between such simple sounds with a
high degree of precision – as low as 2 milliseconds for onset asynchronies between tones (Hirsh 1959,
Zera & Green 1993). In more recent decades, studies have incorporated musical sounds into
discrimination experiments, though these are often still synthesized, and at best tend to resemble quasi-musical/rhythmic contexts. Even so, these have revealed similar impressive acuity results, as well as
further revealing important effects of tempo (IOI) and degree of musical training on timing JNDs (Frane
& Shams 2017; Friberg & Sundberg 1995). To our knowledge, however, none have yet focused attention
towards JND thresholds in more realistic musical contexts. As such, the extent to which results derived
from non- or quasi-musical experimental settings translate to our perception of microrhythmic nuances
in real groove-based music – that is, highly multilayered ensembles featuring a range of complex
instrumental sounds and rhythmic patterns – remains somewhat poorly understood.
In this talk, I will present an overview of some of the abovementioned salient literature on perceptual
thresholds of microtiming, with focus on asynchrony (beat delay/anticipation) and anisochrony (swing).
In addition, I will present some preliminary results from our own ongoing series of JND experiments
which seek to generate more ecologically valid perceptual heuristics for microrhythm in simple, yet
realistic, groove-based musical contexts. Results from pilot experiments on a standard funk pattern
(modelled on James Brown’s Soul Power) already indicate that JNDs for simple detection of asynchrony
in a given instrument layer (guitar, bass, drums [hi-hat, kick, snare]) exceed those predicted by the
previous literature. This suggests that perhaps we are not as sensitive to certain forms of microrhythmic
nuances in realistic musical contexts as previously thought. They also point to important differences in
JND thresholds between musicians and non-musicians – individuals without musical training appear to
be significantly less sensitive to microrhythmic nuances than musicians – as well as between percussive
and stringed instruments – we appear to be more sensitive to asynchronies produced by sharper,
impulsive drum sounds, as opposed to wider, smoother ones such as those of the electric bass and guitar.
These latter findings in particular add to the growing awareness in the field of microrhythm studies that
sound-related features related to timbre are fundamental to the perception and production of timing in
groove-based contexts (Câmara et al. 2020a; 2020b). Different methodological approaches and
challenges will also be discussed, with focus on how different procedures/tasks (e.g. directly comparing
two grooves – one with, and one without, asynchrony – then identifying the one with asynchrony
[2AFC], as opposed to simply listening to one groove, then identifying whether asynchronies were
present or not [Yes/No]) can further affect JND timing thresholds and ultimately provide quite different
answers as to what extent ‘microrhythm matters’ perceptually to us as listeners.
-
Nielsen, Nanette
(2024).
Enacting musical aesthetics: the embodied experience of live music.
-
Oddekalv, Kjell Andreas; Bjørkheim, Terje; Sørli, Anders Ruud; Ugstad, Magnus; Hole, Erik & Walderhaug, Bendik
(2024).
Sinsenfist på Carls.
-
Oddekalv, Kjell Andreas
(2024).
Humaniorafestivalen 2024, Inn i historien: Dr. Kjell presenterer: 40 år med norsk hiphop | Stovner.
-
Oddekalv, Kjell Andreas & Laeng, Bruno
(2024).
LAB.prat #2: Professor Bruno Laeng - Vi lytter med øynene.
-
Oddekalv, Kjell Andreas & Swarbrick, Dana
(2024).
LAB.prat #1: Dr. Dana Swarbrick || Dana & The Monsters.
-
Blenkmann, Alejandro Omar
(2024).
The role of the Orbitofrontal Cortex in building predictions and detecting violations.
-
Vuoskoski, Jonna Katariina; Treider, John Melvin Gudnyson & Huron, David
(2024).
The attribution of virtual agency to music predicts liking.
-
Thedens, Hans-Hinrich & Lartillot, Olivier
(2024).
The Norwegian Catalogue of Folk Music Online.
-
Monstad, Lars L?berg & Lartillot, Olivier
(2024).
muScribe: a new transcription service for music professionals.
-
Oddekalv, Kjell Andreas & Jensenius, Alexander Refsum
(2024).
LAB.prat #3 og "NM i stillstand": Kan man stå stille til musikk?
-
Oddekalv, Kjell Andreas
(2024).
Vi skriv på tog, og vi skriv på tog - pitch.
-
Johansson, Mats Sigvard & Lartillot, Olivier
(2024).
Automated transcription of Hardanger fiddle music: Tracking the beats.
-
Oddekalv, Kjell Andreas
(2024).
The Sound of the crew in rap: Rapping chimeras, illusory posses and other fantastical creatures summoned in the studio and cipher.
-
Oddekalv, Kjell Andreas
(2024).
Humaniorafestivalen 2024, Inn i historien: Dr. Kjell presenterer: 40 år med norsk hiphop | Bjørnholt.
-
Grane, Venke Arntsberg; Endestad, Tor; Støver, Isak Elling August; Leske, Sabine Liliana & Solbakk, Anne-Kristin
(2024).
Executive Function in a Treatment-Naive ADHD Cohort Diagnosed in Adulthood.
Show summary
Background: Attention Deficit Hyperactivity Disorder (ADHD) is an early onset neurodevelopmental condition presenting with diverse cognitive/behavioral impairments that persist into adulthood for half of those affected.
Objective: To examine whether unmedicated adults show general- or specific reductions in core executive functions (EFs).
Method: Performance on EF-tasks was assessed in adult patients with ADHD, Combined type (n=36) and in healthy controls (n=34), matched on gender, age, and education level. The tasks tapped memory span/working memory (Digit Span), interference control/response inhibition (Color Word Interference Test; CWIT-Inhibition), set-shifting/switching (CWIT-Switching; Trail Making Test [TMT]), and abstract reasoning (Wisconsin Card Sorting Test; [WCST]).
Results: There was no group difference in immediate memory span, but the patients performed significantly worse than controls when there was an additional demand on working memory. Statistically controlling for individual differences in information processing speed (using an independent reaction time measure) did not alter the result. Patients performed inferiorly on basic psychomotor speed conditions of the TMT, but the most pronounced group difference appeared on the set-shifting condition. The CWIT-Inhibition condition did not distinguish the groups, but patients had a near-significant tendency to perform more poorly when a concurrent demand on rapid set-switching was introduced. They completed
fewer card-sorting categories on the WCST than controls, with more errors overall. There was no significant difference in perseverative errors, but patients committed more non-perseverative errors and failures to maintain set, indicating more random choices and/or losing track of the current sorting principle.
Conclusion: ADHD-related reductions of attention maintenance, switching, and working memory, but not inhibitory control, support the literature indicating that ADHD in adulthood is associated neither with specific deficits in inhibitory control nor with a general executive impairment. Accordingly, clinical assessment should span a range of EF tests, including the core control functions studied here.
-
Vuoskoski, Jonna Katariina & Stupacher, Jan
(2024).
Investigating internal motor simulation in response to music stimuli with varying degrees of rhythmic complexity.
-
Monstad, Lars L?berg & Lartillot, Olivier
(2024).
Automated transcription of Hardanger fiddle music: Detecting the notes.
-
Lartillot, Olivier
(2024).
MIRAGE Closing Seminar: Digitisation and computer-aided music analysis of folk music.
Show summary
One aim of the MIRAGE project is to develop new technologies that allow better access to, understanding of, and appreciation of music, with a particular focus on Norwegian folk music. This seminar presents what has been achieved during the four years of the project, leading in particular to the digital version of the Norwegian Catalogue of Folk Music. We are also conceiving tools to automatically transcribe audio recordings of folk music. More advanced musicological applications are discussed as well. To conclude, we introduce the new spinoff project, called muScribe, aimed at developing transcription services for a broad range of music beyond folk music, in a first stage tailored to professional organisations such as archives, publishers, and producers.
-
Lartillot, Olivier
(2024).
Overview of the MIRAGE project.
-
Jensenius, Alexander Refsum
(2024).
Mock PhD Interview.
Show summary
The objective of the interview mockup is to provide an example of what a PhD interview looks like. We want to provide a safe space to ask questions to an experienced interviewer and to understand how to better prepare for the interview if you're applying to PhD positions in other countries.
LatAm BISH Bash is a series of meetings and networking events that connect engineers, researchers, students, and companies working on speech, acoustics, and audio processing.
This time, we will have a PhD mockup interview conducted by Alexander Jensenius, who is a professor of music technology and Director of RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion.
-
Volehaugen, Vegard Akselsson; Leske, Sabine Liliana; Funderud, Ingerid; Carvalho, Vinicius Rezende; Endestad, Tor & Solbakk, Anne-Kristin
[Show all 7 contributors for this article]
(2024).
Unheard Surprises: Attention-Dependent Neocortical Dynamics Following Unexpected Omissions Revealed by Intracranial EEG.
-
Nielsen, Nanette & Martin, Remy Richard
(2024).
Affective framing, care, and (en)action in musical encounters.
-
Tørresen, Jim
(2024).
Ethical and Regulatory Perspectives of Robotics and Automation.
-
Jensenius, Alexander Refsum
(2024).
Some Challenges in Musical Artificial Intelligence.
Show summary
In this presentation, I highlight RITMO's interdisciplinary approach, combining musicology, psychology, and informatics to study rhythm as a fundamental human property. I emphasise the intersection of humans and machines in AI, advocating for a balanced approach that incorporates both rule-based and learning-based systems, especially in music. I also address critical aspects like code sharing, data accessibility (FAIR principles), privacy, copyright, and ethical considerations within the AI landscape. Finally, I call for the development of AI for creative use, considering its impact on knowledge, ethics, and human experience, while also examining policy and societal rights.
-
Solbakk, Anne-Kristin & Jensenius, Alexander Refsum
(2024).
Research Ethics and Legal Perspectives.
-
Marin-Bucio, Diego
(2024).
Aproximaciones a la inteligencia artificial en la creación de danza: la IA como herramienta, títere o colaborador.
Show summary
This presentation reported the research outcomes of incorporating generative AI in the creation of concert dance, evaluating its role as a tool, puppet, or collaborator. Employing an interdisciplinary theoretical framework encompassing 4E cognition, phenomenology of perception, Actor-network theory and Entrainment theory, human-AI interactions are examined. Ethnographic methodologies and phenomenological analysis reveal power dynamics and modes of creative interrelation in dance creation with AI and other technologies. The results present a taxonomy and conceptual model, highlighting how AI can transcend from a mere creative catalyst to an active agent in producing aesthetic works. This structured approach facilitates the understanding, differentiation and application of emerging technologies in dance, proposing new paradigms for human-machine creative collaboration.
-
Leske, Sabine Liliana; Endestad, Tor; Volehaugen, Vegard Akselsson; Foldal, Maja Dyhre; Blenkmann, Alejandro Omar & Solbakk, Anne-Kristin
[Show all 7 contributors for this article]
(2024).
Predicting the Beat Bin: Beta Oscillations Predict the Envelope Sharpness in a Rhythmic Sequence.
-
Leske, Sabine Liliana; Støver, Isak Elling August; Solbakk, Anne-Kristin; Endestad, Tor; Kam, Julia & Grane, Venke Arntsberg
(2024).
Behavioral and Electrophysiological Markers of Altered Mind Wandering and Sustained Attention in Adult ADHD.
Show summary
Background: Attention deficit hyperactivity disorder (ADHD) is a neurodevelopmental disorder that often persists into adulthood. Difficulties with executive control and sustained attention are key characteristics of the disorder that often lead to thoughts that are unrelated to the current task, i.e., mind wandering.
Objective: Investigate behavioral and neurophysiological correlates of sustained attention and mind wandering in adults with ADHD not on medication when examined.
Method: Sustained attention and the propensity to mind-wander were investigated in ADHD patients (n=17) and in healthy controls (n=17), matched on gender, age, and IQ. Participants performed an auditory oddball task (participant feedback: On or Off Task), which required continuous manual responses to standard and target tones, while electroencephalography (EEG) was measured. Sustained attentional control was additionally examined with the Test of Variables of Attention (T.O.V.A.). General IQ was estimated with a reduced version of the Wechsler Adult Intelligence Scale, 4th edition (WAIS-IV).
Results: ADHD patients reported significantly more episodes of mind wandering (Off Task), exhibited reduced target detection accuracy and more Off Task impulsive responses compared to controls.
Patients showed a significantly reduced P300 for the target-standard difference components, for both On and Off Task conditions. This target-standard P300 difference was less modulated between On versus Off Task conditions in patients compared to controls. In the T.O.V.A. test, patients committed significantly more Commission Errors, but did not differ from controls on other variables (Attention Comparison score, Reaction Time variability and latency, Omission Errors).
Conclusion: In comparison to controls, patients showed deteriorated behavioral performance, more episodes of mind wandering, and reduced P300 in a sustained attention task. The P300 decrease likely reflects impaired control of attentional resources allocated to the task, which has an impact on high-level cognitive processing abilities, while early sensory-related stimulus processing is intact.
-
Nielsen, Nanette
(2024).
Improvisation as praxis: music as a form-of-life.
-
Tørresen, Jim
(2024).
Invited talk: Sensing and Understanding Humans by a Robot – and vice versa.
-
Upham, Finn
(2024).
Heart Rate consistency and Heart Rate Variability constraints in orchestral musicians across performances.
-
Brøvig, Ragnhild & Aareskjold-Drecker, Jon Marius
(2024).
Hey Siri, can you write me a chipmunk soul track? A snapshot of AI tools currently used in music production.
-
Vuoskoski, Jonna Katariina
(2024).
Some of our favourite songs make us sad, which may be why we like them.
[Internet].
https://www.newscientist.com/article/2426284-some-of-our-fav.
-
Lartillot, Olivier
(2024).
Harmonizing Tradition with Technology: Enhancing Norwegian Folk Music through Computational Innovation.
Show summary
My work involves developing computational tools to safeguard and elevate the cultural significance of music repertoires, with a focus on a cooperative project with the National Library of Norway related to their collection of Norwegian folk music. Our first phase centered on transforming unstructured audio tapes into a systematic dataset of melodies while ensuring its access and longevity through efficient data management and linking with other catalogues.
Our core activity involves transcribing audio recordings into scores, comparing the traditional manual method with our modern attempts at automation. By providing detailed performance notation and a close alignment between scores and audio recordings, we aim to improve comprehension and overall accessibility, as well as enable a more advanced structuring of the collection.
Challenges arose when incorporating this music into the International Inventory of Musical Sources (RISM) database due to the 'incipit' concept, which does not fit genres like Hardanger fiddle folk music. We suggest generalisations of this concept. Moreover, we are creating techniques to digitally dissect the musical corpus, aiming to extract the key features of each tune. This initiative not only serves as an alternative to incipits but also provides novel metadata formats, increasing usability and connectivity both within the collection and with other databases.
-
Laczko, Balint & Jensenius, Alexander Refsum
(2024).
Poster for "Synth Maps: Mapping The Non-Proportional Relationships Between Synthesizer Parameters and Synthesized Sound".
Show summary
Parameter Mapping (PM) is probably the most used design approach in sonification. However, the relationship between a synthesizer's input parameters and the perceptual distribution of its output sounds might not be proportional, limiting its ability to convey relationships within the source data in the sound. This study evaluates a basic Frequency Modulation (FM) synthesis module with perceptually motivated descriptors, measures of spectral energy distribution, and latent embeddings of pre-trained audio representation models. We demonstrate how these metrics do not indicate straightforward relationships between synthesis parameters and perceived sound. This is done using interactive audiovisual scatter plots—Synth Maps—that can be used to explore the sound distribution of the synthesizer and qualitatively evaluate how well the different representations align with human perception. Links to the code and the interactive Synth Maps are available.
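The non-proportionality described above is easy to reproduce: sweeping one FM parameter linearly does not move a perceptual descriptor such as the spectral centroid linearly. A small sketch (a generic FM oscillator and a naive DFT centroid, not the authors' descriptors or code):

```python
import math

def fm_tone(carrier=220.0, ratio=1.0, index=0.0, sr=8000, n=512):
    """Basic two-operator FM: sin(2*pi*fc*t + index * sin(2*pi*fm*t))."""
    fm = carrier * ratio
    return [
        math.sin(2 * math.pi * carrier * t / sr
                 + index * math.sin(2 * math.pi * fm * t / sr))
        for t in range(n)
    ]

def spectral_centroid(signal, sr=8000):
    """Magnitude-weighted mean frequency, via a naive Hann-windowed DFT
    (slow but dependency-free; fine for a 512-sample illustration)."""
    n = len(signal)
    win = [x * (0.5 - 0.5 * math.cos(2 * math.pi * t / n))
           for t, x in enumerate(signal)]
    num = den = 0.0
    for k in range(1, n // 2):
        re = sum(x * math.cos(2 * math.pi * k * t / n) for t, x in enumerate(win))
        im = sum(x * math.sin(2 * math.pi * k * t / n) for t, x in enumerate(win))
        mag = math.hypot(re, im)
        num += (k * sr / n) * mag
        den += mag
    return num / den

# A linear sweep of the modulation index yields a non-linear centroid trajectory.
centroids = [spectral_centroid(fm_tone(index=i)) for i in (0.0, 1.0, 2.0, 3.0)]
```

Plotting `centroids` against the swept index values gives a one-dimensional version of the distortions the Synth Maps visualise in two dimensions.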
-
Jensenius, Alexander Refsum
(2024).
Sound Actions: Conceptualizing Musical Instruments.
Show summary
On Tuesday 6 August at 2:30 pm, Alexander Refsum Jensenius, Professor of Music Technology and Director of the RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion at the University of Oslo, will give the talk "Sound Actions: Conceptualizing Musical Instruments".
In the talk, held in English, he will present highlights from his book "Sound Actions: Conceptualizing Musical Instruments". This includes a discussion of the differences between acoustic and electro-acoustic instruments and of how today's instruments are not just "sound makers" but increasingly "music makers". He will illustrate this shift with several of his own new instruments for musical expression (NIMEs).
-
D'Amario, Sara & Bishop, Laura
(2024).
Self-Reported Experiences of Musical Togetherness in Music Ensembles.
-
Bravo, Pedro Pablo Lucas
(2024).
Self-Assembly and Synchronization: Crafting Music with Multi-Agent Embodied Oscillators.
Show summary
This paper proposes a self-assembly algorithm that generates rhythmic music. It uses multiple pulsed oscillators embedded in cube-shaped agents in a virtual 3D space. When these units connect with each other, their oscillators synchronize, triggering regular sound events that produce musical notes whose sound dynamics change based on the size of the structures formed. This study examines the synchronization time of these oscillators and the emergent properties of the structures formed during the algorithm's execution. Moreover, the resulting sound, determined by multiple interactions among agents, is analyzed in the time and frequency domains from its signal. The results show that the synchronization time slightly increases when more agents participate, although with high variability. Also, a quasi-regular pattern of increase and decrease in the number of structures over time is observed. Additionally, the signal analysis illustrates the effect of the self-assembly strategy in terms of rhythmical patterns and sound energy over time. We discuss these results and the potential applications of this multi-agent approach in the sound and music field.
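The oscillator synchronization described above can be illustrated with a classic Kuramoto-style phase model (a hedged sketch of the general mechanism only; the paper's embodied cube agents and sound mapping are not reproduced): connected units pull each other's phases together until they pulse in unison.

```python
import math

def kuramoto_step(phases, neighbours, coupling=0.5, dt=0.05, omega=2 * math.pi):
    """One Euler step of phase-coupled oscillators.

    phases: list of phases in radians; neighbours: adjacency list, so only
    connected agents (as in the self-assembled structures) influence each other.
    """
    new = []
    for i, theta in enumerate(phases):
        pull = sum(math.sin(phases[j] - theta) for j in neighbours[i])
        new.append(theta + dt * (omega + coupling * pull))
    return new

def spread(phases):
    """Max pairwise phase difference on the circle (0 = fully synchronized)."""
    return max(
        min(abs(a - b) % (2 * math.pi), 2 * math.pi - abs(a - b) % (2 * math.pi))
        for a in phases for b in phases
    )

# Three fully connected agents starting out of phase.
phases = [0.0, 1.5, 3.0]
adj = [[1, 2], [0, 2], [0, 1]]
for _ in range(500):
    phases = kuramoto_step(phases, adj)
print(spread(phases))  # shrinks toward 0 as the agents lock
```

In the paper's setting, each phase crossing would trigger a sound event, so phase locking is what produces the regular rhythmic pulses.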
-
Blenkmann, Alejandro Omar
(2024).
Electrophysiological correlates of auditory regularity expectations and violations at short and long temporal scales: Studies in intracranial EEG and prefrontal cortex lesion patients.
-
Glette, Kyrre; Ellefsen, Kai Olav; Norstein, Emma Stensby & Bruin, Ege de
(2024).
Automatic Design of Robot Bodies and Brains with Evolutionary Algorithms - Tutorial.
Show summary
The evolution of robot bodies and brains allows researchers to investigate which building blocks are interesting for evolving Artificial Life, and how controllers and morphologies can be shaped together for automated robot design. This tutorial aims to introduce evolution of robot body and control, and some of the key challenges one faces when doing experiments in Evolutionary Robotics. These include finding good ways to represent robots (genotypic encodings), challenges related to co-optimizing morphology and control, how environments shape body and control, and selecting the right physical substrate for evolving robots.
After introducing these challenges and showing relevant examples from our own and other labs’ research, we will present a short demo of how to run Evolutionary Robotics experiments in practice, with the Unity ML-Agents framework.
-
Göksülük, Bilge Serdar; Tidemann, Aleksander & Jensenius, Alexander Refsum
(2024).
Telematic Testing: One Performance in Three Locations.
-
Bravo, Pedro Pablo Lucas
(2024).
Csound vs. ChucK: Sound Generation for XR Multi-Agent Audio Systems in the Meta Quest 3 using the Unity Game Engine.
Show summary
Extended Reality (XR) technologies, particularly headsets like the Meta Quest 3, are revolutionizing the field of immersive sound and music applications by offering new depths of user experience. As such, the Unity game engine emerges as a preferred platform for building such auditory environments. As part of its capabilities, Unity allows the programming of sound generation through a low-level digital signal processing API, which requires specialized knowledge and significant effort for development. However, wrappers that integrate Unity with programming languages for sound synthesis can facilitate the implementation of this task. In this work, we focus on applications for the Meta Quest 3 involving multiple spatialized audio sources; such applications can be framed as XR multi-agent audio systems. We consider two wrappers, CsoundUnity and Chunity, featuring Csound and ChucK programming languages. We test and analyze these wrappers in a minimal XR application, varying the number of audio sources to measure the performance of both tools in two device environments: the development machine and the Meta Quest 3. We found that CsoundUnity performs better in the headset, but Chunity performs better in the development machine. We discuss the advantages, limitations, and computational issues found on both wrappers, as well as the criteria for choosing them to develop XR multi-agent audio applications in Unity.
-
Vrasdonk, Atilla Juliana; Keller, Peter E.; Endestad, Tor & Vuoskoski, Jonna Katariina
(2024).
The influence of improvisational freedom on flow in flamenco duos.
-
-
Bravo, Pedro Pablo Lucas
(2024).
Interactive Sonification of 3D Swarmalators.
Show summary
This paper explores the sound and music possibilities obtained from the sonification of a swarm of coupled oscillators moving in a virtual space called "Swarmalators". We describe the design and implementation of a Human-Swarm Interactive Music System based on the 3D version of the Swarmalator model, which is used for signal analysis of the overall sound output in terms of scalability; that is, the effect of varying the number of agents in a swarm system. We also study the behaviour of autonomous swarmalators in the presence of one user-controlled agent, which we call the interactive swarmalator. We observed that sound frequencies barely deviate from their initial values when there are few agents, but they diverge significantly in a highly dense swarm. Additionally, with the inclusion of the interactive swarmalator, the group's behaviour tends to adjust towards it. We use these results to explore the potential of swarmalators in music performance under various scenarios. Finally, we discuss opportunities and challenges to use the Swarmalator model for sound and music systems.
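The Swarmalator model underlying this work (O'Keeffe, Hong & Strogatz, 2017) couples spatial attraction to phase similarity and phase coupling to distance. A minimal 3D Euler step, written as a generic sketch (zero self-propulsion and natural frequency assumed; not the paper's interactive system), might look like this:

```python
import math

def swarmalator_step(pos, phase, J=0.5, K=0.5, dt=0.01):
    """One Euler step of a 3D Swarmalator: spatial attraction is modulated
    by phase similarity (J), while phase coupling decays with distance (K)."""
    n = len(pos)
    new_pos, new_phase = [], []
    for i in range(n):
        dx = [0.0, 0.0, 0.0]
        dtheta = 0.0
        for j in range(n):
            if i == j:
                continue
            d = [pos[j][k] - pos[i][k] for k in range(3)]
            r = math.sqrt(sum(c * c for c in d)) or 1e-9
            attract = (1 + J * math.cos(phase[j] - phase[i])) / r
            repel = 1 / (r * r)
            for k in range(3):
                dx[k] += d[k] * (attract - repel) / n
            dtheta += K * math.sin(phase[j] - phase[i]) / (r * n)
        new_pos.append([pos[i][k] + dt * dx[k] for k in range(3)])
        new_phase.append(phase[i] + dt * dtheta)
    return new_pos, new_phase

# Two in-phase agents repel at short range and settle near the
# equilibrium distance 1 / (1 + J).
pos = [[0.0, 0.0, 0.0], [0.3, 0.0, 0.0]]
phase = [0.0, 0.0]
for _ in range(2000):
    pos, phase = swarmalator_step(pos, phase)
```

A sonification along the lines of the paper would then map each agent's phase and position onto sound parameters such as oscillator frequency and spatial panning.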
-
D'Amario, Sara
(2024).
Cardiac coupling of orchestral musicians and audience members during orchestra performances.
-
Jensenius, Alexander Refsum; Vo, Synne; Kelkar, Tejaswinee & Kjus, Yngvar
(2024).
Musikksnakk: Musikk på Spotify - hvordan funker algoritmene?
Show summary
Why do record labels want artists to make TikToks to promote their music? What determines which music recommendations you get on Spotify? And how do the labels use your data to generate clicks and listens? Join a conversation about algorithms in apps like TikTok and Spotify - and how they shape your music taste!
Joining the discussion:
- Synne Vo. An artist who broke through on TikTok and uses the platform actively to promote her music. She joins the panel to share her experiences with the industry and the apps.
- Yngvar Kjus. Professor of music and media at the University of Oslo, with extensive research on popular music, music production, and the music industry.
- Tejaswinee Kelkar. A singer and researcher in music and motion. She has previously worked as a data analyst at Universal Music Norway and at the RITMO Centre of Excellence at the University of Oslo.
The conversation is led by Alexander Refsum Jensenius, Professor of Music at the University of Oslo and Director of RITMO - Centre for Interdisciplinary Studies in Rhythm, Time and Motion. He is constantly trying to understand more about how and why people move to music.
-
Danielsen, Anne
(2024).
There’s more to timing than time: P-centers, beat bins and groove in musical microrhythm.
Show summary
How does the dynamic shape of a sound affect its perceived microtiming? In the TIME project, we studied basic aspects of musical microrhythm, exploring both stimulus features and the participants’ enculturated expertise via perception experiments, observational studies of how musicians produce particular microrhythms, and ethnographic studies of musicians’ descriptions of microrhythm. Collectively, we show that altering the microstructure of a sound (“what” the sound is) changes its perceived temporal location (“when” it occurs). Specifically, there are systematic effects of core acoustic factors (duration, attack) on perceived timing. Microrhythmic features in longer and more complex sounds can also give rise to different perceptions of the same sound. Our results shed light on conflicting results regarding the effect of microtiming on the “grooviness” of a rhythm.
-
Glette, Kyrre
(2024).
Evolution of morphology and control - from simulation to reality.
-
Jensenius, Alexander Refsum
(2024).
Interdisiplinæritet - et musikkperspektiv.
Show summary
The talk covers the intersection of psychology, informatics, and music, and the work carried out at the centre he directs: RITMO, Centre for Interdisciplinary Studies in Rhythm, Time and Motion at the University of Oslo. Alexander is both a researcher and a musician. He has a composite background spanning music, informatics, physics, and mathematics, and his practice-oriented research has a broad reach. Digital tools developed at RITMO are now also used in medical research on ADHD and cerebral palsy.
-
Laczko, Balint
(2024).
Two-part guest lecture about spatial audio and Ambisonics for MCT students.
-
Blenkmann, Alejandro Omar
(2024).
Audiopred Project: Neurophysiological mechanisms of auditory predictions.
-
Jensenius, Alexander Refsum
(2024).
20 Years of Piano Research at the University of Oslo.
Show summary
In this lecture-recital, I will present piano-related research from the Department of Musicology over the last twenty years. I will also reflect on my role in this history, both as an artist and scientist. Finally, I will scrutinize the department's new Disklavier while performing various exploratory etudes.
-
Tørresen, Jim
(2024).
Invited talk: Interdisciplinary AI and robotics research spanning from psychology to law.
-
Jensenius, Alexander Refsum
(2024).
NOR-CAM as an enabler for flexible academic career paths in and out of Norway.
Show summary
This webinar is the fourth in the series and concerns the proposal on attractive careers in academia. The European Commission's starting point is that international educational cooperation and quality development in higher education are not supported and valued to the extent needed in academic careers, and that this hinders the development of European higher education.
What does this proposal entail, and how does it look from the perspective of European and Norwegian universities? How does the European process for developing academic careers relate to what is happening in Norway?
-
Jensenius, Alexander Refsum
(2024).
Musikk og kunstig intelligens.
-
Jensenius, Alexander Refsum
(2024).
Mock PhD Interview.
Show summary
The objective of the interview mockup is to provide an example of what a PhD interview looks like. We want to provide a safe space to ask questions to an experienced interviewer and to understand how to better prepare for the interview if you're applying to PhD positions in other countries.
-
Jensenius, Alexander Refsum
(2024).
Tverr faglighet? Muligheter og utfordringer med fler- og tverrfaglighet.
Show summary
Interdisciplinarity is often invoked in ceremonial speeches and grant applications, but what is the reality? In this presentation, Professor Alexander Refsum Jensenius discusses his own experiences with multi- and interdisciplinary research projects. He also presents how RITMO works to develop a research culture and project proposals across established disciplines.
-
Jensenius, Alexander Refsum & Bochynska, Agata
(2024).
Opphavsrettslige utfordringer ved overgangen til FAIR forskningsdata ved UiO.
-
Jensenius, Alexander Refsum
(2024).
Hjernen i sentrum: Kunst.
Show summary
Why are some people musical and others not? How is it that art can strike us so powerfully, and so differently? Artistic expressions such as music, painting, literature, dance, and theatre come without an answer key and are interpreted very differently from person to person. Is it the brain that governs this? It is evident that our brain is active, not passive, when we experience art. Why is that? Do artistic experiences provide good exercise for the brain? Is art important for brain health?
-
Jensenius, Alexander Refsum
(2024).
Vurderinger i akademiske karriereløp.
Show summary
In 2021, the UHR working group on open assessment developed a guide for assessment in academic career paths, NOR-CAM. There are also other initiatives for assessing academic careers, including the European Coalition for Advancing Research Assessment (CoARA). But what is the value of these assessment guides? Who are they for, and what are they meant to achieve? And which assessment guides will become important in the time to come?
-
Jensenius, Alexander Refsum
(2024).
Fostering the emergence of new research data careers.
Show summary
Equipping future graduates, researchers, and society at large with the skills needed to support the digital transition is becoming a priority on European, national, and institutional agendas. Research data management (RDM) and FAIR data are part of this skillset, and research data careers are increasingly in demand in both the public and private sectors. At the organisational level, the availability of staff with data competencies is crucial to support the implementation of FAIR RDM practices and, ultimately, to foster the transition towards Open Science. Data collected by the European University Association show, for example, how universities are creating dedicated research data support services and hiring specific support staff, but significant disparities exist between countries and institutions. RDM responsibilities still often fall to existing members of staff. In many cases, technical skills are only partially available and new dedicated staff are required. Universities that have hired specific research data support roles may still have problems meeting the growing demand for research data expertise. Within this context, a major challenge is the absence of a shared recognition and definition of research management professional profiles, despite recent progress at European level through ERA Action 17 on research management. This session will address needs, challenges, and opportunities related to the emergence of new research data careers, including the identification of key skills, clear career paths, and their integration into research assessment systems. It will do so by showcasing best practices and reflecting on ways forward with a panel of experts representing different actors: university leaders, research data practitioners, and policymakers.
-
Jensenius, Alexander Refsum
(2024).
Interdisciplinarity.
-
Jensenius, Alexander Refsum
(2024).
Musikk, Data og KI.
Show summary
Music is one of the most complex forms of human communication and is therefore well suited for exploring artificial intelligence. The presentation describes how music researchers, psychologists, and computer scientists work together at RITMO to understand more about rhythm, time, and motion in humans and machines.
-
Jónsson, Bjørn Thór; Erdem, Cagri; Fasciani, Stefano & Glette, Kyrre
(2024).
Cultivating Open-Earedness with Sound Objects discovered by Open-Ended Evolutionary Systems.
Show summary
Interaction with generative systems can face the choice of generalising towards a middle ground or diverging towards novelty. Efforts have been made in the domain of sounds to enable divergent exploration in search of interesting discoveries. Those efforts have been confined by pre-trained models and single environments. We are building on those efforts to enable autonomous discovery of sonic landscapes. Furthermore, we draw inspiration from research on open-ended evolution to continuously provide evolutionary processes with new opportunities for sonic discoveries. Exposure to autonomously discovered sound objects can elevate openness to sonic experiences, which in turn offers inspiring opportunities for creative work involving sounds.
-
Abrahamsson, Liv Merve Akca; Bishop, Laura; Vuoskoski, Jonna Katariina & Laeng, Bruno
(2024).
Are human voices ‘special’ in the way we attend to them?
-
Danielsen, Anne
(2024).
Musikalsk rytme, rytmeforskning og hva den kan brukes til.
-
Godøy, Rolf Inge
(2024).
Motormimetic cognition of sound-motion objects in music.
Show summary
The focus in my talk is on how experiences of body motion contribute to listeners’ perception of meaning in music. The term ‘meaning’ can here be understood as sensations of distinct and significant events evoked in our minds in listening to (or imagining) music, events that may range from the unremarkable and immediately forgotten everyday happenings to the highly remarkable and engaging, i.e. extending from basic sound features (e.g. identifying a sound as made by strumming on a guitar) to high-level affective and/or narrative associations (e.g. identifying a sound made on a guitar as the James Bond chord). However, the ambition of this talk is limited to some basic features of meaning perception, and to fragments of music in the approximately 0.5 to 5 seconds duration range, to what we call sound-motion objects, and correlated motor sensations in this perceptual process, what we call motormimetic cognition. The duration range of sound-motion objects is optimal for focus on significant features such as overall ‘sound’, style, sense of motion and affect, and is a compromise between local and more global occurrences of meaning in music.
There can be little doubt that music can make listeners move, or evoke motion sensations in the minds of listeners. In the past couple of decades, we have seen a surge of publications on music-related body motion, predominantly on whole-body motion such as in dance, walking, and sports, as well as on musicians’ communicative motion, but less on the smaller-scale sound-producing effector motion, e.g. that of fingers, hands, arms, and the vocal apparatus, in various contexts of performance, of expressivity, and of articulation. The contention in this talk is that such smaller-scale sound-producing effector motion is not only crucial in shaping the output sound, but is actually integral to our perceptions of music, and hence, focusing on such motion could help us understand some basic workings of meaning formation in music.
We may call the approach presented here concrete in the sense of focusing on actual sound-producing motion and on actual resultant output sound features, rather than on the abstract Western music notation concepts. The inherited notation-oriented conceptual apparatus posits discrete pitches and durations as the point of departure for meaning formation, whereas the motormimetic approach posits the more holistic sound-producing motion and the resultant holistic sound events as primordial. It means that any sound event will be embedded in some sound-producing motion trajectory, and also that such motion trajectories are integral to our images of the music (e.g. hearing a ferocious drum fill evoking imagery of energetic hand and mallet motion, or hearing soft and slow string music evoking imagery of slow bow motion). Using available technologies and methods for motion capture, motion analysis, and motion features representation, as well as means for analysis and representation of continuous, non-symbolic sound features, it is now possible to gain more detailed knowledge of the relationships between sound-producing motion and salient perceptual features of both sound and motion. It is also possible to make holistic representations of temporally distributed features such as dynamic, timbral, pitch-related, textural, and articulatory features as shapes, given the fact that shapes are holistic and concrete, whereas symbols are punctual and abstract.
-
Brøvig, Ragnhild & Stevenson, Alex
(2024).
Performing Experimental Hip-Hop: Abstract Orchestra's Cover of Madvillain's "Meat Grinder".
-
Brøvig, Ragnhild
(2024).
Boklansering: Parody in the Age of Remix: Mashups vs. the Takedown (MIT Press).
-
Brøvig, Ragnhild
(2024).
Boklansering: Parody in the Age of Remix: Mashups vs. the Takedown (MIT Press).
-
Christodoulou, Anna-Maria; Dutta, Sagar; Lartillot, Olivier; Glette, Kyrre & Jensenius, Alexander Refsum
(2024).
Exploring Convolutional Neural Network Models for Multimodal Classification of Expressive Piano Performance.
-
Christodoulou, Anna-Maria & Jensenius, Alexander Refsum
(2024).
Navigating Challenges in Multimodal Music Data Management for AI Systems.
Show summary
The responsible management of multimodal music datasets plays a crucial role in the development and evaluation of music processing systems. However, navigating the landscape of legal and ethical considerations can be a complex and challenging task due to the magnitude and diversity of such considerations. This paper clarifies these divergent legal and ethical considerations and highlights the challenges associated with multimodality and AI systems. Focusing on the most crucial stages of multimodal music data management, we provide recommendations for tackling legal and ethical challenges. We emphasize the importance of establishing an inclusive and accessible music data environment, encouraging researchers and data users to adopt responsible approaches towards managing multimodal music data collections.
-
Bishop, Laura & D'Amario, Sara
(2024).
Methods tracking four-hand piano performances.
-
Bishop, Laura & Kwak, Dongho
(2024).
Ignoring a noisy metronome during dyadic drumming.
-
Jensenius, Alexander Refsum
(2023).
Wishful thinking about CVs: Perspectives from a researcher.
-
Jensenius, Alexander Refsum
(2023).
Sound Actions: An Embodied Approach to a Digital Organology.
Show summary
What is an instrument in our increasingly electrified world? In this talk I will present a set of theoretical building blocks from my forthcoming book on "musicking in an electronic world". At the core of the argument is the observation that the introduction of new music technologies has led to an increased separation between action and sound in musical performance. This has happened gradually, with pianos and organs being important early examples of instruments that introduced mechanical components between the performer and resonating objects. Today's network-based instruments represent an extreme case of a spatiotemporal dislocation between action and sound. They challenge our ideas of what an instrument can be, who can perform on them, and how they should be analyzed. In the lecture I will explain how we can use the concepts of action-sound couplings and mappings to structure our thinking about such instruments. This will form the heart of a new organology that embraces the qualities of both acoustic and electroacoustic instruments.
-
Jensenius, Alexander Refsum
(2023).
Conceptualizing Musical Instruments.
Show summary
What is an instrument in our increasingly electrified world? In this talk I will present a set of theoretical building blocks from my recent book "Sound Actions". At the core of the argument is the observation that the introduction of new music technologies has led to an increased separation between action and sound in musical performance. This has happened gradually, with pianos and organs being important early examples of instruments that introduced mechanical components between the performer and resonating objects. Today's network-based instruments represent an extreme case of a spatiotemporal dislocation between action and sound. They challenge our ideas of what an instrument can be, who can perform on them, and how they should be analyzed. In the lecture I will explain how we can use the concepts of action-sound couplings and mappings to structure our thinking about such instruments.
-
Jensenius, Alexander Refsum
(2023).
Explorations of human micromotion through standing still.
Show summary
Throughout 2023, I will stand still for ten minutes around noon every day, in a different room each day. The aim is to collect data about my micromotion and compare it to the qualities of the environment. This project follows a decade-long exploration of human micromotion from both artistic and scientific perspectives. In the talk, I will present results from the annual Norwegian Championships of Standstill, where we have studied the influence of music on people's micromotion. I will also talk about how micromotion can be used in interactive music systems, allowing for conscious and unconscious control of musical sounds.
-
Jensenius, Alexander Refsum
(2023).
Conceptualizing Musical Instruments.
Show summary
What is an instrument in our increasingly electrified world? In this talk I will present a set of theoretical building blocks from my forthcoming book on "musicking in an electronic world". At the core of the argument is the observation that the introduction of new music technologies has led to an increased separation between action and sound in musical performance. This has happened gradually, with pianos and organs being important early examples of instruments that introduced mechanical components between the performer and resonating objects. Today's network-based instruments represent an extreme case of a spatiotemporal dislocation between action and sound. They challenge our ideas of what an instrument can be, who can perform on them, and how they should be analyzed. In the lecture I will explain how we can use the concepts of action-sound couplings and mappings to structure our thinking about such instruments. This will form the heart of a new organology that embraces the qualities of both acoustic and electroacoustic instruments.
-
Jensenius, Alexander Refsum
(2023).
Sound Actions: Conceptualizing Musical Instruments.
Show summary
How do new technologies change how we perform and perceive music? What happens when composers build instruments, performers write code, perceivers become producers, and instruments play themselves? These are questions addressed in the new book by Professor Alexander Refsum Jensenius, Sound Actions: Conceptualizing Musical Instruments, published by the MIT Press.
-
Jensenius, Alexander Refsum
(2023).
Explorations of human micromotion through standing still.
Show summary
Throughout 2023, I will stand still for ten minutes around noon every day, in a different room each day. The aim is to collect data about my micromotion and compare it to the qualities of the environment. This project follows a decade-long exploration of human micromotion from both artistic and scientific perspectives. In the talk, I will present results from the annual Norwegian Championships of Standstill, where we have studied the influence of music on people's micromotion. I will also talk about how micromotion can be used in interactive music systems, allowing for conscious and unconscious control of musical sounds.
-
Jensenius, Alexander Refsum
(2023).
Sound Actions - Conceptualizing Musical Instruments.
-
Danielsen, Anne; Brøvig, Ragnhild; Câmara, Guilherme Schmidt; Haugen, Mari Romarheim; Johansson, Mats Sigvard & London, Justin
(2023).
There’s more to timing than time: Investigating sound–timing interaction across disciplines and cultures.
-
Jensenius, Alexander Refsum
(2023).
Forskarperspektivet.
Show summary
This autumn, the draft strategy for Norwegian scholarly publishing after 2024 has been out for consultation. The strategy outlines recommendations for researchers, research-performing institutions, research funders, and the authorities. In this seminar, we invite one of those who drafted the strategy, Vidar Røeggen from Universitets- og høgskolerådet, to talk about the work on the report, the input that has come in, and how he envisions the future publishing landscape. The floor then goes to Alexander Jensenius (UiO, NOR-CAM), Johanne Raade (UiT), and Marte Qvenild (NFR) to discuss how they see the future of open publishing after 2024, from the perspective of a researcher, an institution, and a funder, respectively. Do they see other challenges than those the new strategy attempts to address?
-
Brøvig, Ragnhild
(2023).
Digitalisering i musikkutdanningen.
-
Brøvig, Ragnhild & Furunes, Marit Johanne
(2023).
Karriereløpsprogrammet på RITMO med mentorordning.
-
Jensenius, Alexander Refsum
(2023).
Exploring Human Micromotion Through Standing Still.
Show summary
Moving slowly likely puts us into a special state of mind. Subjective reports from various practices including dance, Tai Chi and walking meditation suggest that slow movements can bring participants into a special state involving increased relaxation and awareness. Interestingly, relatively little research has been performed specifically to understand the underlying mechanisms and the possible applications of human slow movement. One reason might be that slow movements are not common in day-to-day life: when we want to move, for example to pick up a cup of coffee, we usually want to do it now. Some evidence suggests that humans tend to avoid moving slowly in different tasks, for example, when improvising movements together. The goal of this meeting is to bring together scholars and practitioners interested in slow movement, and to foster interdisciplinary research on this somewhat neglected topic.
-
Jensenius, Alexander Refsum
(2023).
Tverrfaglig forskning på rytme, tid og bevegelse.
Show summary
RITMO is a unique Centre of Excellence (SFF) because of its radically interdisciplinary composition. How does that work in practice?
-
Asko, Olgerta; Solbakk, Anne-Kristin; Leske, Sabine Liliana; Meling, Torstein Ragnar; Knight, Robert T. & Endestad, Tor
(2023).
The orbitofrontal cortex (OFC) has a critical role in the generation of high-level expectations.
-
Brøvig, Ragnhild
(2023).
My way to becoming a full professor.
-
Brøvig, Ragnhild
(2023).
Crises affecting the economy, production, and consumption of music: Perspectives from remixers.
-
Brøvig, Ragnhild
(2023).
Presentation of the book Parody in the Age of Remix.
-
Brøvig, Ragnhild & Stevenson, Alex
(2023).
Machine Aesthetics: An Analytical Framework.
-
Brøvig, Ragnhild
(2023).
Crisis in the Flow of Remixes and in the Maintenance of Copyright Exceptions.
-
Brøvig, Ragnhild
(2023).
Users’ Freedom of Expression in the Digital Era.
-
Brøvig, Ragnhild
(2023).
You’re not supposed to sample and rely on copyright exceptions.
-
Brøvig, Ragnhild
(2023).
Publishing Panel (on the publishing of Parody in the Age of Remix).
-
Brøvig, Ragnhild
(2023).
Wakeful Sleep and Sleepy Wakefulness in EDM.
-
Danielsen, Anne
(2023).
Ain’t that a groove! Musicological, philosophical and psychological perspectives on groove (keynote).
Show summary
The notion of groove is key to both musicians’ and academics’ discourses on musical rhythm. In this keynote, I will present groove’s historical grounding in African American musical practices and explore its current implications by addressing three distinct understandings of groove: as pattern and performance; as pleasure and “wanting to move”; and as a state of being. I will point out some musical features that seem to be shared among a wide range of groove-based styles, including syncopation and counterrhythm, swing and subdivision, and microrhythmic qualities. Ultimately, I will look at the ways in which the groove experience has been approached in different disciplines, drawing on examples from musicology / ethnomusicology, philosophy, psychology and neuroscience.
-
Riaz, Maham
(2023).
Sound Design in Unity: Immersive Audio for Virtual Reality Storytelling.
Show summary
Research talk on sound design for games and immersive environments. The Unity game engine is used for environmental modeling. The Oculus Spatializer plugin provides control over binaural spatialization with native head-related transfer functions (HRTFs). Game scenes included C# scripts that handled intermittent emitters (randomly triggered sounds of nature, critters, and birds), crossfades, occlusion, and raycasting. In the mixing stage, mixer groups, mixer snapshots, snapshot triggers, SFX reverb sends, and low/high-pass filters were among the tools demonstrated.
-
Swarbrick, Dana
(2023).
Les Effets du Musique sur Grimper.
-
Upham, Finn
(2023).
Using Metrically-entrained Tapping to Align Mobile phone sensor measurements from In-person and Livestream Concert Attendees.
Show summary
Music is often made and enjoyed in large groups, but simultaneously capturing measurements from dozens or hundreds of people is technically difficult. When measurements are not constrained to wired or continuously connected wireless systems, we can record much bigger groups, potentially taking advantage of the wearable sensors in our phones, watches, and more dedicated devices. However, aligning measurements captured by independent devices is not always possible, particularly to a precision relevant for music research. Phone clocks differ and update sporadically, wearable device clocks drift, and for online broadcast performances, exposure times can vary by tens of seconds across the remote audience. Many measurement devices that are not open to digital synchronisation triggers still include accelerometers; with a suitable protocol, participant movement can be used to embed synchronisation cues in accelerometry measurements for alignment regardless of clock times. In this paper, we present a tapping synchronisation protocol that has been used to align measurements from phones worn by audience members and a variety of sensors worn by a symphony orchestra. Alignment with the embedded cues demonstrates the necessity of such a protocol, correcting offsets of more than 700 ms for devices supposedly initialised with the same computer clock, and over 10 s for online audience participants. Audience tapping improved cell phone measurement alignment to a median offset of 100 ms, and professional musicians' tapping improved alignment precision to around 40 ms. While the temporal precision achieved with entrained tapping is not quite good enough for some types of analyses, this improvement over uncorrected measurements opens a new range of group coordination measurement and analysis options.
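The entrainment-based alignment described above can be illustrated with a minimal sketch: turn each device's tap times into an impulse train and cross-correlate the trains to recover the relative clock offset. The function name, sampling rate, and tap data below are illustrative assumptions, not material from the paper.

```python
import numpy as np

def estimate_offset(ref_taps, dev_taps, fs=100):
    """Estimate the clock offset (seconds) between two devices from
    metrically entrained tap times, via cross-correlation of impulse
    trains. Names and parameters are illustrative, not from the paper."""
    dur = max(ref_taps.max(), dev_taps.max()) + 1.0
    n = int(dur * fs)
    a = np.zeros(n)  # reference impulse train
    b = np.zeros(n)  # unsynchronised-device impulse train
    a[np.rint(ref_taps * fs).astype(int)] = 1.0
    b[np.rint(dev_taps * fs).astype(int)] = 1.0
    corr = np.correlate(b, a, mode="full")  # correlation at all lags
    lag = corr.argmax() - (n - 1)           # best-matching lag, in samples
    return lag / fs                         # positive: device clock runs late

ref = np.array([1.0, 1.5, 2.0, 2.5, 3.0])  # taps on a shared metronome
dev = ref + 0.7                            # device timestamps 700 ms late
print(estimate_offset(ref, dev))           # prints 0.7
```

In practice the impulse trains would come from peak-picking the accelerometer signal around the tapping cue, but the offset-recovery step is the same cross-correlation shown here.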
-
Swarbrick, Dana
(2023).
The Effects of Music on Climbing.
-
Danielsen, Anne
(2023).
Decolonizing groove (panel discussion).
-
Monstad, Lars Løberg
(2023).
Kunstig Intelligens i kunst og kultur.
[TV].
NRK Dagsrevyen.
-
Monstad, Lars Løberg; Larsen, Borgan Silje & Waske, Vegard
(2023).
AI i musikken: konsekvenser og muligheter.
-
Glette, Kyrre
(2023).
Adaptive robots through evolutionary algorithms and machine learning.
-
Blenkmann, Alejandro Omar & Agrawal, Rahul Omprakash
(2023).
Intracranial Electrode Localization workshop.
-
Upham, Finn
(2023).
Breathing Together in Music, a RESPY Workshop.
Show summary
Respiration is a subtle but inescapable element of real-time musical experiences, sometimes casually accompanying whatever we are hearing, other times directly involved in the actions of sound generation. This workshop explores respiratory coordination in music listeners and ensemble musicians with respy, a new Python library for evaluating respiration information from single-belt chest stretch recordings. Following an introduction to the human respiratory system and breathing in music, the workshop demonstrates how the respy algorithms evaluate phase and breath type, and presents statistical tools for assessing shared information in these features of people listening to or making music together. Rather than using only aggregate statistics such as respiration rate, respy aims to elevate the details of the respiratory sequence to facilitate our exploration of how breathing is involved in musical experiences, second by second. Measurable coordination of the respiratory system to musical activities challenges our expectations for interacting oscillatory systems. This session will conclude with a discussion on the different categories of relationships possible between people breathing together in music.
-
Upham, Finn & Oddekalv, Kjell Andreas
(2023).
Fingers and Tongues: Appreciating Rap Flows through Proprioceptive Interaction in Rhythm Hive.
Show summary
Rhythm games have been studied for their potential to develop interest in music making (Cassidy and Paisley, 2013) and transferable musicianship skills (Richardson and Kim, 2011), but how might they influence players' appreciation of specific musical works? Proprioceptive interaction, a concept by game designer Matt Bloch (Miller, 2017), refers to changes in a game player's perception of music as they practice specific movements to it. By drawing attention to coincidental sounds, players can develop their hearing and appreciation for nuances of production and performance. Many fans of rap enjoy performances in languages they do not speak themselves. Without specific language skills, expertise in rap performance, and/or time to learn lyrics phonetically, their experience of a rap flow is hampered by an inability to imitate and imagine the generative action of performance. Rhythm Hive is a mobile rhythm game based on the music of BTS, ENHYPEN, and TXT, K-pop groups with substantial followings outside of Korea. Game play presents players with finger choreographies to these groups' hit songs, tapping sequences to the vocal performances across four to seven positions in a line. For these groups' many non-rapping and non-Korean-speaking fans, playing Rhythm Hive may offer a deeper understanding of performances by rappers like RM, Suga, and J-Hope. Through expert analysis of rap performance, transcriptions of game play, and reflections on the experience of playing Rhythm Hive, we consider shared structure between the prescribed finger choreographies and the rap flows they accompany. We studied rap verses from four BTS songs alongside their Easy and Hard level tapping sequences (vocal versions only) to identify parallels in rhythm, segmentation, repetition, and accents. Easy mode choreographies tend to mark their relationship to rap vocals by hitting the start of lines and then articulating structure with repeated contours tapped on quarter and eighth notes. Hard mode choreographies tend to hit every rapped syllable and incorporate more gestural flourishes to mark pitch changes, ending and internal rhymes, and interesting breaks from a steady 16th-note flow. Both Easy and Hard tapping sequences consistently follow the rap track when it deviates from a quantized beat. The finger choreographies of Rhythm Hive illuminate rap performances by directing and rewarding players' attention to details of flows that may otherwise be missed. Game feedback pushes players to replicate delivery microtiming, while spatial patterns underline linguistic and rhythmic structure. Hard mode tapping sequences articulate distinguishing characteristics of specific rap styles, giving players tangible sensitivity to degrees of technicality and nuances of genre. While fans may be motivated to play rhythm games like Rhythm Hive out of a preexisting love of the music and bands, tapping along offers them a chance to attend to, appreciate, and even rehearse key aspects of these rappers' expert performance choices, regardless of how well they might follow by ear.
-
Ellefsen, Kai Olav
(2023).
Hva er Kunstig Intelligens?
-
Bernhardt, Emil
(2023).
Hva er musikk?
-
Câmara, Guilherme Schmidt; Spiech, Connor & Danielsen, Anne
(2023).
To asynchrony and beyond: In search of more ecological perceptual heuristics for microrhythmic structures in groove-based music.
Show summary
There is currently a gap in rhythm and timing research regarding how we perceive complex acoustic stimuli in musical contexts. Many studies have investigated timing acuity in non-musical contexts involving simple rhythmic sequences comprised of clicks or sine waves. However, the extent to which these results transfer to our perception of microrhythmic nuances in multilayered musical contexts rife with complex instrumental sounds remains poorly understood. In this talk we will present an overview of a planned series of just-noticeable difference (JND) experiments that will generate ecologically valid perceptual heuristics regarding timing discrimination thresholds. The aim is to investigate the extent to which microrhythmic timing and sonic nuances are perceived in groove-based music and connect these heuristics to the pleasurable urge to move in groove-based contexts, as well as acoustic (e.g., intensity, duration, frequency) and musical features (e.g., tempo, genre), and listener factors (e.g., musical training, stylistic familiarity). Overall, we expect timing thresholds to be higher for polyphonic/musical than for monotonic/non-musical stimuli/contexts and higher for pulse attribution (whether one can perceive a "beat"; Madison & Merker 2002, Psychol Res) than for simple detection of asynchrony and anisochrony (whether one can perceive "rhythmic irregularities"). Thresholds will likely be modulated by intensity (Goebl & Parncutt 2002, ICMPC7), tempo (Friberg & Sundberg 1995, J Acous Soc Am), instrumentation (Danielsen et al. 2019, J Exp Psychol), and genre/stylistic conventions (Câmara & Danielsen 2019, Oxford). Musically trained/stylistically familiar listeners may also display style-typical sensitivity to microrhythmic manipulations (Danielsen et al. 2021, Atten Percept Psychophys; Jakubowski et al. 2022, Cogn).
In terms of subjective experience, we expect that onset asynchrony exaggerations will likely elicit lower pleasure and movement ratings compared to performances with idiomatic timing profiles (Senn et al. 2018, PLoS One). Higher ratings should also be biased in favor of familiar styles (Senn et al. 2021) and rhythmic patterns that do not engender excessive metrical ambiguity are likely to elicit higher ratings (Spiech et al. 2022, preprint; Witek et al. 2014, PLoS One).
-
Upham, Finn & Christophersen, Bjørn Morten
(2023).
Bodies in Concert: RITMO project with the Stavanger symfoniorkester.
-
Ellefsen, Kai Olav
(2023).
Evolutionary Robotics.
-
Upham, Finn
(2023).
Insight into human respiration through the study of orchestras and audiences.
-
Bishop, Laura & Upham, Finn
(2023).
Bodies in Concert.
Show summary
Increasingly, research on music performance is moving out of controlled laboratory settings and into concert halls, where there are opportunities to explore how performance unfolds in high-arousal conditions and how performers and audiences interact. In this session, we will present findings from a series of live research concerts that we carried out with the Stavanger Symphony Orchestra. The orchestra performed the same program of classical repertoire for four audiences of schoolchildren and an audience of families. Orchestra members wore sensors that collected cardiac activity, respiration, and body motion data, and the conductor additionally wore a full-body motion capture suit and eye-tracking glasses. Audience members in some of the concerts were invited to wear reflective wristbands, and wristband motion was captured using infrared video recording. We will begin the session with a discussion of the scientific and methodological challenges that arose during the project, in particular relating to the large scale of data capture (>50 musicians and hundreds of audience members), the visible nature of research that is carried out on a concert stage, and the development of procedures for aligning data from different recording modalities. Next, we will present findings from two lines of analysis that investigate different aspects of behavioural and physiological coordination within the orchestra. One analysis investigates the effects of audience noise and musical roles on coherence in (i) cardiac rate and variability and (ii) respiratory phase and rate. The second analysis investigates the effects of musical demands on synchronization of body sway, bowing, and respiration in string sections. We will conclude the session with an open discussion of how live concert research might be optimized.
-
Lindblom, Diana Saplacan; Tørresen, Jim & Hakimi, Nina
(2024).
Dynamic Dimensions of Safety - How robot height and velocity affect human-robot interaction: An explorative study on the concept of perceived safety.
University of Oslo.
-
Joachimiak, Grzegorz; Ahrendt, Rebekah & Lartillot, Olivier
(2024).
Endangered Musical Sources: Strategies for Safeguarding, Digitization, and International Collaboration. Report of Working Group 2 SOURCES, Wrocław, 22–24 May 2024.
Zenodo.
-
Lindblom, Diana Saplacan; Tørresen, Jim & Meijer, Frida
(2023).
In Safe Hands: Contribution of gripper material on HRI and perceived safety: An explorative study.
University of Oslo.
-
Kocan, Danielius & Ellefsen, Kai Olav
(2023).
Attention-Guided Explainable Reinforcement Learning: Key State Memorization and Experience-Based Prediction.
University of Oslo.
-
Taye, Eyosiyas Bisrat & Ellefsen, Kai Olav
(2023).
Accountability Module: Increasing Trust in Reinforcement Learning Agents.
University of Oslo.
Show summary
Artificial Intelligence requires users' trust to be fully utilised, and users need to feel safe while using it. Trust, and indirectly a sense of safety, has been overlooked in the pursuit of more accurate or better-performing black-box models. The field of Explainable Artificial Intelligence, together with current recommendations and regulations around Artificial Intelligence, demands more transparency and accountability from governmental and private institutions. Creating a self-explainable AI that can solve a problem while explaining its reasoning is challenging. Even so, such a system could not explain other AIs that lack self-explainable abilities, and it would likely not transfer to different problem domains and tasks without extensive knowledge about the model. The solution proposed in this thesis is the Accountability Module: an external explanatory module intended to work with different AI models in different problem domains. The prototype was inspired by accident investigations involving autonomous vehicles and was implemented for a simplified simulation of vehicles driving on a highway. Its goal was to assist an investigator in understanding why a vehicle crashed. The Accountability Module identified the main factors in the decision that resulted in an accident. It could also help answer whether the outcome was avoidable and whether there were inconsistencies in the agent's logic by examining different cases against each other. The prototype provided useful explanations and assisted investigators in understanding and troubleshooting agents. The thesis and the Accountability Module indicate that a similar explanatory module is a robust direction to explore further. The chosen explainability methods and techniques were closely tied to the problem domain and limited by the scope of the thesis. Therefore, a more extensive test of the prototype on different problems is needed to check the system's rigidity and versatility, as well as the significance of the results. Nevertheless, in a collaboration between an Accountability Module expert and a domain expert, I expect a modular explainability solution to create more insight into an AI model and its significant incidents.
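The "main factors in the decision" idea from the abstract can be illustrated with a perturbation-style probe: alter one state feature at a time and record which alterations flip the agent's action. Everything below (`decision_factors`, `toy_policy`, the feature names) is a hypothetical sketch, not the thesis's actual implementation:

```python
def decision_factors(policy, state, perturb):
    """List the state features whose perturbation flips the agent's
    action: a crude proxy for 'main factors behind a decision'."""
    base_action = policy(state)
    factors = []
    for name, value in state.items():
        altered = dict(state)
        # Flip booleans; nudge numeric features by `perturb`.
        altered[name] = (not value) if isinstance(value, bool) else value + perturb
        if policy(altered) != base_action:
            factors.append(name)
    return base_action, factors

# Hypothetical highway policy: brake when a lead vehicle is close.
def toy_policy(s):
    return "brake" if s["lead_present"] and s["lead_gap_m"] < 20.0 else "keep_lane"

state = {"lead_gap_m": 15.0, "lead_present": True, "speed_mps": 25.0}
action, factors = decision_factors(toy_policy, state, perturb=10.0)
print(action, factors)  # → brake ['lead_gap_m', 'lead_present']
```

Comparing the factor lists of a crash case against a near-identical non-crash case is one simple way to "examine different cases against each other," as the abstract describes.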