Publications
-
Krzyzaniak, Michael Joseph & Bishop, Laura
(2022).
Professor Plucky—Expressive body motion in human-robot musical ensembles.
In Carlson, Kristin (Ed.),
MOCO '22: Proceedings of the 8th International Conference on Movement and Computing.
ISBN 978-1-4503-8716-3.
-
Kwak, Dongho; Krzyzaniak, Michael Joseph; Danielsen, Anne & Jensenius, Alexander Refsum
(2022).
A mini acoustic chamber for small-scale sound experiments.
In Iber, Michael & Enge, Kajetan (Eds.),
Audio Mostly 2022: What you hear is what you see? Perspectives on modalities in sound and music interaction.
ISBN 978-1-4503-9701-8.
p. 143–146.
This paper describes the design and construction of a mini acoustic chamber using low-cost materials. The primary purpose is to provide an acoustically treated environment for small-scale sound measurements and experiments using ≤ 10-inch speakers. Testing with different types of speakers showed frequency responses of < 10 dB peak-to-peak (except the "boxiness" range below 900 Hz), and the acoustic insulation (soundproofing) of the chamber is highly efficient (approximately 20 dB SPL in reduction). Therefore, it provides a significant advantage in conducting experiments requiring a small room with consistent frequency response and preventing unwanted noise and hearing damage. Additionally, using a cost-effective and compact acoustic chamber gives flexibility when characterizing a small-scale setup and sound stimuli used in experiments.
-
Krzyzaniak, Michael; Erdem, Cagri & Glette, Kyrre
(2022).
What Makes Interactive Art Engaging?
Frontiers in Computer Science.
ISSN 2624-9898.
4.
Interactive art requires people to engage with it, and some works of interactive art are more intrinsically engaging than others. This article asks what properties of a work of interactive art promote engagement. More specifically, it examines four properties: (1) the number of controllable parameters in the interaction, (2) the use of fantasy in the work, (3) the timescale on which the work responds, and (4) the amount of agency ascribed to the work. Each of these is hypothesized to promote engagement, and each hypothesis is tested with a controlled user study in an ecologically valid setting on the Internet. In these studies, we found that more controllable parameters increase engagement; the use of fantasy increases engagement for some users and not others; the timescale surprisingly has no significant effect on engagement but may relate to the style of interaction; and more ascribed agency is correlated with greater engagement, although the direction of causation is not known. This is not intended to be an exhaustive list of all properties that may promote engagement, but rather a starting point for more studies of this kind.
-
Bentsen, Lars Ødegaard; Simionato, Riccardo; Wallace, Benedikte & Krzyzaniak, Michael Joseph
(2022).
Transformer and LSTM Models for Automatic Counterpoint Generation using Raw Audio.
Proceedings of the SMC Conferences.
ISSN 2518-3672.
A study investigating Transformer and LSTM models applied to raw audio for automatic generation of counterpoint was conducted. In particular, the models learned to generate missing voices from an input melody, using a collection of raw audio waveforms of various pieces of Bach's work, played on different instruments. The research demonstrated the efficacy and behaviour of the two deep learning (DL) architectures when applied to raw audio data, which are typically characterised by much longer sequences than symbolic music representations, such as MIDI. To date, the LSTM model has been the quintessential DL model for sequence-based tasks, such as generative audio models, but the research conducted in this study shows that the Transformer model can achieve competitive results on a fairly complex raw audio task. The research therefore aims to spark further research and investigation into how Transformer models can be used for applications typically dominated by recurrent neural networks (RNNs). In general, both models yielded excellent results and generated sequences with temporal patterns similar to the input targets for songs that were not present in the training data, as well as for a sample taken from a completely different dataset.
-
Karbasi, Seyed Mojtaba; Haug, Halvor Sogn; Kvalsund, Mia-Katrin; Krzyzaniak, Michael Joseph & Tørresen, Jim
(2021).
A Generative Model for Creating Musical Rhythms with Deep Reinforcement Learning.
In Gioti, Artemi-Maria (Ed.),
The Proceedings of 2nd Conference on AI Music Creativity.
ISBN 978-3-200-08272-4.
Musical rhythms can be modeled in different ways. Usually the models rely on certain temporal divisions and time discretization. We have proposed a generative model based on Deep Reinforcement Learning (Deep RL) that can learn musical rhythmic patterns without defining temporal structures in advance. In this work we have used the Dr. Squiggles platform, an interactive robotic system that generates musical rhythms via interaction, to train a Deep RL agent. The goal of the agent is to learn rhythmic behavior from an environment with high temporal resolution, without any basic rhythmic pattern being defined for the agent in advance. This means that the agent is supposed to learn rhythmic behavior in an approximately continuous space purely via interaction with other rhythmic agents. The results show significant adaptability from the agent and great potential for RL-based models to be used as creative algorithms in musical and creative applications.
-
Krzyzaniak, Michael Joseph
(2021).
Musical robot swarms, timing, and equilibria.
Journal of New Music Research.
ISSN 0929-8215.
50(3),
p. 279–297.
-
Erdem, Cagri; Jensenius, Alexander Refsum; Glette, Kyrre; Krzyzaniak, Michael Joseph & Veenstra, Frank
(2020).
Proceedings of the International Conference on Live Interfaces (Proceedings of ICLI).
ISSN 2663-9041.
p. 208–210.
This paper describes an interactive art installation shown at ICLI in Trondheim in March 2020. The installation comprised three musical robots (Dr. Squiggles) that play rhythms by tapping. Visitors were invited to wear muscle-sensor armbands, through which they could control the robots by performing 'air-guitar'-like gestures.
-
Krzyzaniak, Michael Joseph
(2020).
Words to Music Synthesis.
In Michon, Romain & Schroeder, Franziska (Eds.),
Proceedings of the International Conference on New Interfaces for Musical Expression.
ISBN 978-1-949373-99-8.
p. 29–34.
-
Krzyzaniak, Michael Joseph; Frohlich, David & Jackson, Philip JB
(2019).
AM'19: Proceedings of the 14th International Audio Mostly Conference: A Journey in Sound.
.
ISBN 9781450372978.
In this paper we examine how the term 'Audio Augmented Reality' (AAR) is used in the literature, and how the concept is used in practice. In particular, AAR seems to refer to a variety of closely related concepts. In order to gain a deeper understanding of disparate work surrounding AAR, we present a taxonomy of these concepts and highlight both canonical examples in each category and edge cases that help define the category boundaries.
-
Krzyzaniak, Michael Joseph; Gerry, Jennifer; Kwak, Dongho; Erdem, Cagri; Lan, Qichao & Glette, Kyrre
(2021).
Fibres Out of Line.
Fibres Out of Line is an interactive art installation and performance for the 2021 Rhythm Perception and Production Workshop (RPPW). Visitors can watch the performance, and subsequently interact with the installation, all remotely via Zoom.
-
Krzyzaniak, Michael Joseph
(2021).
Dr. Squiggles AI Rhythm Robot.
In Senese, Mike (Ed.),
Make: Volume 76 (Behind New Eyes).
Make Community LLC.
ISBN 9781680457001.
p. 88–97.
-
Krzyzaniak, Michael Joseph
(2020).
Interactive Rhythmic Robots.
-
Krzyzaniak, Michael Joseph; Veenstra, Frank; Erdem, Cagri; Glette, Kyrre & Jensenius, Alexander Refsum
(2020).
Interactive Rhythmic Robots.
-
Krzyzaniak, Michael Joseph; Kwak, Dongho Daniel; Veenstra, Frank; Erdem, Cagri; Wallace, Benedikte & Jensenius, Alexander Refsum
(2020).
Dr. Squiggles rhythmical robots.
Dr. Squiggles is an interactive musical robot that we designed, which plays rhythms by tapping. It listens for tapping produced by humans or other musical robots, and attempts to play along and improvise its own rhythms based on what it hears.