Publications
-
Kwak, Dongho; Krzyzaniak, Michael Joseph; Danielsen, Anne & Jensenius, Alexander Refsum
(2022).
A mini acoustic chamber for small-scale sound experiments.
In Iber, Michael & Enge, Kajetan (Eds.),
Audio Mostly 2022: What you hear is what you see? Perspectives on modalities in sound and music interaction.
ACM Publications.
ISBN 978-1-4503-9701-8.
p. 143–146.
doi: 10.1145/3561212.3561223.
Full text in Research Archive
This paper describes the design and construction of a mini acoustic chamber using low-cost materials. The primary purpose is to provide an acoustically treated environment for small-scale sound measurements and experiments using ≤ 10-inch speakers. Testing with different types of speakers showed frequency responses of < 10 dB peak-to-peak (except in the “boxiness” range below 900 Hz), and the acoustic insulation (soundproofing) of the chamber is highly efficient (approximately 20 dB SPL of reduction). The chamber therefore offers a significant advantage for experiments that require a small room with a consistent frequency response while blocking unwanted noise and preventing hearing damage. Additionally, a cost-effective and compact acoustic chamber gives flexibility when characterizing a small-scale setup and the sound stimuli used in experiments.
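To make the two headline numbers concrete, here is a minimal Python sketch (my own illustration, not from the paper) of how such figures can be computed from measurements; the response data, SPL values, and variable names below are placeholders.

```python
import numpy as np

# Hypothetical magnitude response (dB) measured inside the chamber at
# logarithmically spaced frequencies; real data would come from a sweep
# or noise measurement. All names and values here are placeholders.
freqs = np.logspace(np.log10(100.0), np.log10(20000.0), 512)  # Hz
response_db = 3.0 * np.sin(np.log(freqs) / 2.0)               # fake measurement

# Peak-to-peak variation above the "boxiness" range (below 900 Hz),
# the quantity the abstract reports as < 10 dB.
band = freqs >= 900.0
ptp_db = response_db[band].max() - response_db[band].min()
print(f"peak-to-peak variation above 900 Hz: {ptp_db:.1f} dB")

# Insulation: SPL at a fixed microphone with the source in the open,
# minus SPL with the source enclosed (the abstract reports roughly 20 dB SPL).
spl_open, spl_enclosed = 82.0, 61.5   # hypothetical dB SPL readings
print(f"insulation: {spl_open - spl_enclosed:.1f} dB SPL reduction")
```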
-
Krzyzaniak, Michael; Erdem, Cagri & Glette, Kyrre
(2022).
What Makes Interactive Art Engaging?
Frontiers in Computer Science.
ISSN 2624-9898.
4.
doi: 10.3389/fcomp.2022.859496.
Interactive art requires people to engage with it, and some works of interactive art are more intrinsically engaging than others. This article asks what properties of a work of interactive art promote engagement. More specifically, it examines four properties: (1) the number of controllable parameters in the interaction, (2) the use of fantasy in the work, (3) the timescale on which the work responds, and (4) the amount of agency ascribed to the work. Each of these is hypothesized to promote engagement, and each hypothesis is tested with a controlled user study in an ecologically valid setting on the Internet. In these studies, we found that more controllable parameters increase engagement; the use of fantasy increases engagement for some users but not others; the timescale surprisingly has no significant effect on engagement, but may relate to the style of interaction; and more ascribed agency is correlated with greater engagement, although the direction of causation is not known. This is not intended to be an exhaustive list of all properties that may promote engagement, but rather a starting point for more studies of this kind.
-
Bentsen, Lars Ødegaard; Simionato, Riccardo; Wallace, Benedikte & Krzyzaniak, Michael Joseph
(2022).
Transformer and LSTM Models for Automatic Counterpoint Generation using Raw Audio.
Proceedings of the SMC Conferences.
ISSN 2518-3672.
doi: 10.5281/zenodo.6572847.
Full text in Research Archive
This study investigates Transformer and LSTM models applied to raw audio for the automatic generation of counterpoint. In particular, the models learned to generate missing voices from an input melody, using a collection of raw audio waveforms of various pieces of Bach's work, played on different instruments. The research demonstrates the efficacy and behaviour of the two deep learning (DL) architectures when applied to raw audio data, which are typically characterised by much longer sequences than symbolic music representations such as MIDI. To date, the LSTM has been the quintessential DL model for sequence-based tasks, such as generative audio models, but the research conducted in this study shows that the Transformer model can achieve competitive results on a fairly complex raw audio task. The research therefore aims to spark further investigation into how Transformer models can be used for applications typically dominated by recurrent neural networks (RNNs). In general, both models yielded excellent results and generated sequences with temporal patterns similar to the input targets, both for songs that were not present in the training data and for a sample taken from a completely different dataset.
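For readers unfamiliar with the setup, the sketch below (my own illustration, not the authors' code) shows the basic shape of the task: a Transformer or an LSTM mapping frames of a raw-audio melody to frames of a missing voice. The frame length, model sizes, and frame-regression formulation are assumptions made for the example.

```python
import torch
import torch.nn as nn

# Illustrative sketch, not the authors' code: sequence models that map frames
# of a raw-audio melody to frames of a missing counterpoint voice.
FRAME = 64  # samples per frame; framing shortens the sequence the models see

class TransformerVoice(nn.Module):
    def __init__(self, d_model=128, nhead=4, layers=2):
        super().__init__()
        self.embed = nn.Linear(FRAME, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
        self.out = nn.Linear(d_model, FRAME)

    def forward(self, x):  # x: (batch, n_frames, FRAME)
        # A real model would add positional encodings; omitted for brevity.
        return self.out(self.encoder(self.embed(x)))

class LSTMVoice(nn.Module):
    def __init__(self, hidden=128, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(FRAME, hidden, num_layers=layers, batch_first=True)
        self.out = nn.Linear(hidden, FRAME)

    def forward(self, x):
        h, _ = self.lstm(x)
        return self.out(h)

melody = torch.randn(1, 256, FRAME)  # ~16k raw-audio samples of the input voice
for model in (TransformerVoice(), LSTMVoice()):
    print(type(model).__name__, model(melody).shape)  # same shape as the input
```

Framing the waveform is one simple way to tame the sequence lengths the abstract mentions: a Transformer attends over a few hundred frames rather than tens of thousands of individual samples.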
-
Karbasi, Seyed Mojtaba; Haug, Halvor Sogn; Kvalsund, Mia-Katrin; Krzyzaniak, Michael Joseph & Tørresen, Jim
(2021).
A Generative Model for Creating Musical Rhythms with Deep Reinforcement Learning.
In Gioti, Artemi-Maria (Ed.),
Proceedings of the 2nd Conference on AI Music Creativity.
AI Music Creativity (AIMC).
ISBN 978-3-200-08272-4.
doi: 10.5281/zenodo.5137900.
Full text in Research Archive
Musical rhythms can be modeled in different ways; usually the models rely on particular temporal divisions and time discretization. We propose a generative model based on Deep Reinforcement Learning (Deep RL) that can learn musical rhythmic patterns without temporal structures being defined in advance. In this work we used the Dr. Squiggles platform, an interactive robotic system that generates musical rhythms via interaction, to train a Deep RL agent. The goal of the agent is to learn rhythmic behavior from an environment with high temporal resolution, without any basic rhythmic pattern being defined for the agent. This means the agent must learn rhythmic behavior in an approximately continuous space purely through interaction with other rhythmic agents. The results show significant adaptability from the agent and great potential for RL-based models to be used as creative algorithms in musical and creative applications.
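As a toy illustration of the idea (this is not the Dr. Squiggles training code), the sketch below trains a tiny policy with REINFORCE to decide, at every tick of a high-resolution clock, whether to tap, rewarded for coinciding with a partner agent's taps. The state representation, reward, and all sizes are assumptions made for this example.

```python
import torch
import torch.nn as nn

# Toy illustration, not the Dr. Squiggles code: at every tick of a
# high-resolution clock the agent decides tap (1) or rest (0), and is
# rewarded for tapping together with a partner agent.
TICKS, HIST = 96, 12
partner = (torch.arange(TICKS) % 12 == 0).float()  # partner taps every 12 ticks

policy = nn.Sequential(nn.Linear(HIST, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

for episode in range(300):
    state = torch.zeros(HIST)  # the last HIST partner ticks the agent heard
    log_probs, rewards = [], []
    for t in range(TICKS):
        p_tap = torch.sigmoid(policy(state)).squeeze()
        dist = torch.distributions.Bernoulli(p_tap)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        if action.item() == 1.0:  # +1 for tapping with the partner, -1 otherwise
            rewards.append(1.0 if partner[t].item() == 1.0 else -1.0)
        else:
            rewards.append(0.0)
        state = torch.cat([state[1:], partner[t:t + 1]])  # slide the history
    # Crude REINFORCE update: scale the episode's log-probs by its total return.
    loss = -torch.stack(log_probs).sum() * sum(rewards)
    opt.zero_grad(); loss.backward(); opt.step()

print("reward in final episode:", sum(rewards))
```

Note that nothing in the loop encodes a meter or grid: the agent only sees a raw history of recent ticks, which mirrors the abstract's point about learning rhythm without predefined temporal structure.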
-
Erdem, Cagri; Jensenius, Alexander Refsum; Glette, Kyrre; Krzyzaniak, Michael Joseph & Veenstra, Frank
(2020).
Air-Guitar Control of Interactive Rhythmic Robots.
Proceedings of the International Conference on Live Interfaces (ICLI).
ISSN 2663-9041.
p. 208–210.
This paper describes an interactive art installation shown at ICLI in Trondheim in March 2020. The installation comprised three musical robots (Dr. Squiggles) that play rhythms by tapping. Visitors were invited to wear muscle-sensor armbands, through which they could control the robots by performing ‘air-guitar’-like gestures.
-
Krzyzaniak, Michael Joseph & Bishop, Laura
(2022).
Professor Plucky—Expressive body motion in human-robot musical ensembles.
-
Kwak, Dongho; Krzyzaniak, Michael Joseph; Danielsen, Anne & Jensenius, Alexander Refsum
(2022).
A mini acoustic chamber for small-scale sound experiments.
-
Krzyzaniak, Michael Joseph; Gerry, Jennifer; Kwak, Dongho; Erdem, Cagri; Lan, Qichao & Glette, Kyrre
(2021).
Fibres Out of Line.
Fibres Out of Line is an interactive art installation and performance for the 2021 Rhythm Perception and Production Workshop (RPPW). Visitors can watch the performance and subsequently interact with the installation, all remotely via Zoom.
-
Karbasi, Seyed Mojtaba; Haug, Halvor Sogn; Kvalsund, Mia-Katrin; Krzyzaniak, Michael Joseph & Tørresen, Jim
(2021).
A Generative Model for Creating Musical Rhythms with Deep Reinforcement Learning.
-
Krzyzaniak, Michael Joseph
(2021).
Dr. Squiggles AI Rhythm Robot.
In Senese, Mike (Ed.),
Make: Volume 76 (Behind New Eyes).
Make Community LLC.
ISBN 9781680457001.
p. 88–97.
-
Krzyzaniak, Michael Joseph
(2020).
Interactive Rhythmic Robots.