Project Statement/Goal
- This project is an experiment in using imitative processes as a means of developing aesthetic awareness in the domain of musical melody and phrasing.
- It remixes on two levels:
- firstly, it combines ideas from previous projects, most notably Imitation Lab, Melody Interpolation and Melody Conversation. The first explores using (web) video as a means of facilitating comparison between and imitation of ideas. The others use Google Magenta as a means of generating variations on a melody, and to some extent inviting musical interaction with them.
- secondly, it will aim to allow a user to remix musical phrases, based on the idea that being able to express ideas (with the body) is a significant part of the development of aesthetic awareness: when we do something, we are able to perceive it more richly than if we don’t. (Just as it’s illuminating to deconstruct somebody else’s remix, in music or otherwise, in order to understand it.)
- Conceptually, this work is based on the notion that imitation is one of the human species’ most important gifts. Imitation is not the same as copying identically, like a photocopier, but is a more subtle process which can involve appreciation of the goals of the other, and the adoption of a metaphorical or empathic stance, where we become part of the people and ideas we are imitating (and vice versa). Paul Pangaro promotes Conversation Theory, based on the work of Gordon Pask, as a means of understanding such interactions: an aim here is to build an artefact that more fully reflects the richness of this model.
- Where my previous work for this course has generally aimed at producing proofs of concept, here I want to produce what might be called (after Stephen Scrivener and others) an “Experimental System”: a system which is rich enough to produce unexpected outcomes. [^expsys]
- Ideally, I want this system to provoke the pleasant feeling of learning to perceive something that was present all along, but which one did not understand: like the magical moment in learning a foreign language when seemingly random sounds resolve into meaning. There’s a relation between acting and observing the actions of another which I want to explore further.
- Another theme is the interaction between artificial and human agents: just as a mirror, or a tape or video recorder, is an artefact which lets us see ourselves, other people and artificial intelligences such as Google Magenta’s can be mirrors which reflect back at us, perhaps adding new information from beyond the bounds of our own imaginations.
- There are clear links in this project to the Media Theory course - through the connection to cybernetics, which conversation theory stems from - and to the media law course: consider for example the question of ownership and copyright if a good melody is derived from others, and (some of) the others are subject to copyright.
[^expsys]: See Michael Schwab (ed.), “Experimental Systems: Future Knowledge in Artistic Research”, available at https://odradeksjourney.files.wordpress.com/2017/11/1-experimental-systems-future-knowledge-in-artistic-research.pdf
Motivation
This project is a continuation of an interest in imitative learning developed in my design masters thesis, Imitation Games. I am interested to see if imitative processes can be facilitated by design, and moreover if they can be absorbing (creating states of flow, where we don’t have to think too much about what we’re doing). I am also interested to see how the theoretical ideas can enrich such a process: I found that Conversation Theory offered a richer perspective than I’d had in the past, and I want to explore its consequences.
A reasonable formulation of my research question might be “how can imitation and Conversation Theory be used to create systems for improving musical awareness and expressive ability?” Or, more directly: “drawing on Conversation Theory, can Google Magenta be used in an interesting and interactive way to help musicians play and invent by ear?”
I have summarised my previous work in this domain at https://phhu.org/imitate/previous/. Most of these projects were executed in a few days, and they cover most of the technical issues I am likely to face: on this basis I would expect to be able to produce something interesting at least in response to the questions above.
I have chosen to work with music (rather than, say, graphic design, drawing, dance, poetry or furniture design, other domains I’ve touched on in this course) because music is relatively easy to treat mathematically or computationally; it has well-established forms of graphical representation; Google Magenta works well with it; and music (at least on a single instrument) is less high-dimensional than visuals are (or at least, it’s easier to limit its dimensionality).
Research Review
Much of my research has been done on previous courses: see https://phhu.org/imitate/previous. Notable points of reference include:
- Gordon Pask’s work is an obvious starting point for this project. If his theoretical writings are a little abstruse, projects such as Colloquy of Mobiles offer visual ideas.
- There are many example projects using Google Magenta at https://magenta.tensorflow.org/demos/. These had a clear influence on the decision to work with music: by and large they work, and they are often absorbing.
Since starting this project specifically, some additional references have been useful:
- I have found a few papers concerning the question of measuring similarity (or distance) between melodies (references at https://phhu.org/imitate/elements/). I hope to be able to use these to solve one of the key problems in this project: how to tell if one melody is similar to another.
- I read a paper called “Learning Music from Each Other: Synchronization, Turn-Taking, or Imitation?” (see https://phhu.org/imitate/blog/posts/07_whynotiterate/). This compared three methods of practicing music. It struck me as odd that the methods chosen were not based on an iterative process of improvement.
- I have found Iain McGilchrist’s work on the differences in perspective between the hemispheres of the brain illuminating: see https://phhu.org/imitate/blog/posts/02_divided-brain/. In crude terms, we can design to build contextual awareness (right brain) rather than abstract models and rules (left brain).
Target Audience
I am designing for myself in the first instance: both to help develop my own musical process, and to satisfy my curiosity. Beyond this I hope to produce a tool which might be of interest to musicians and musical educators in general, and to produce ideas for applications in domains other than music.
Ideally, I want people to experience a kind of flow when they interact: perhaps like the experience one has when playing imitative games with a young child. For myself, I would ideally produce a tool that I can use generatively in my own musical practice.
I would also like imitative processes to be seen as an alternative to the formulation of abstract rules as a means of developing a language within a domain: a sort of refutation of theory, or an insistence that theory be arrived at in an embodied, bottom-up way.
Generally, I think that in design it’s often best to design for ourselves, or at least to draw on our own experiences, believing that people are in many ways alike. If we design for an idealised other (personas…), we sometimes introduce simplifications of reality, and I don’t trust these: it’s too easy to assume that someone else will want something, rather than looking for the (magical) click when you realise something will work.
Resources
The project will be presented as a web application, similar to other projects I have done for this course (see above). I might use P5.js (Processing) for parts of it, but I will not work within the OpenProcessing platform as I did on the creative coding course. I will use React for the user interface (and possibly React Native, if using a smartphone’s sensors gives clear benefits). The project will be published on GitHub and my own web domain.
Google Magenta will be used for artificial intelligence and melody generation. I have used this in previous work, and found it easy to use.
Ideally, if time permits, I would want to test my project on some users (other than myself). This will require access to suitably trained people: I know a few.
Design / Aesthetic Ideas
There is an obvious connection to the idea that “everything is a remix”, one of the themes of this course. I’m emphasising the idea that by remixing, we learn to create; or that an artist should begin by copying, and trust that creation will follow from it - just as we learn words before we write poems.
I have used the notion of an experimental system (see above) in previous work (see “Imitation Games”, p.11, at https://phhu.org/imitationgames/MDes_Stage3report_ImitationGames_PhilipHughson_Aug2015.pdf).
The idea of the meme (in its original sense, from Dawkins, rather than the meaning of a picture with a caption) is involved here: however, I have found that imitation can be considered more broadly as a purposeful activity, rather than following the notion of blind, imperfect copying suggested by Dawkins.
Ideation processes involved in the project have included CROSS DOMAIN TRANSFER (e.g. asking how hairdressers learn their trade; practicing imitation in dance using video; trying chess.com to look for teaching methods; looking for examples of imitative processes in the urban landscape, https://phhu.org/imitate/transect/). I have also done some FREE WRITING (see https://phhu.org/imitate/elements/). And I have found the process of relaxing after absorbing a lot about the subject productive (see https://phhu.org/imitate/blog/posts/06_millefeuille/).
Project Milestones
- Review existing code
- Adapt imitation lab interface to music, and “eject” it from openprocessing.
- Find mechanisms for measuring melodic similarity (this is a serious question: see Risks & Challenges below)
- Review existing Google Magenta projects, and look for similarities and code which can be adapted.
- Build prototype, drawing on existing code, and incorporating (most of) the following improvements:
- allow specification of arbitrary melodies (existing projects use fixed starting points for simplicity)
- could allow these to be recorded
- allow specification or negotiation of a goal (e.g. a known melody, perhaps a complex one, which one wants to learn to play, or to vary).
- one might start with a simple melody and work towards a more complex one
- or work on variations of two melodies
- Allow selection of aspects of a melody to work on (e.g. rhythm, notes, harmonies, intonation)
- Use a no-touch interface similar to Imitation Lab’s where possible: i.e. the language of communication should be music (perhaps with certain notes used to switch things on / off), not buttons on an interface
- Allow both analogue (audio) and digital (MIDI) specification of melody (e.g. could we take examples of pianists playing a tune on YouTube?). Allow the combination of both.
- Experiment with visuals on music, as done with video in Imitation Lab. Does it help with perception, or does it distract from it (or both)? Cf. chess.com, which uses arrows to show lines of attack.
- Experiment with using graphical score and other visuals.
- Allow variation of phrase length, time signature and tempo
- introduce a metronome (hi-hat?) for time keeping, with adjustment
- submit the prototype (end of week 4)
- Test the prototype (with self and with musical friends)
- Identify improvements, implement and iterate this process
- Improve UI for final presentation, aiming for simplicity.
- Record video presentation of work
- ideally include footage of testing, as I did with “Imitation Games” M.Des. thesis.
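One small technical point behind the milestone on combining analogue (audio) and digital (MIDI) input: pitch detection on audio yields frequencies in Hz, while MIDI yields note numbers, so the two need converting into a common representation before melodies can be compared. A minimal sketch, assuming equal temperament and standard A4 = 440 Hz tuning (the function names are my own, not from any library):

```javascript
// Convert between MIDI note numbers and frequencies (equal temperament, A4 = 440 Hz).
// MIDI note 69 is A4; each semitone is a factor of 2^(1/12).
function midiToFreq(note) {
  return 440 * Math.pow(2, (note - 69) / 12);
}

// Map a detected frequency back to the nearest MIDI note, so audio
// and MIDI input can be compared in the same representation.
function freqToMidi(freq) {
  return Math.round(69 + 12 * Math.log2(freq / 440));
}

console.log(midiToFreq(69));     // 440 (A4)
console.log(freqToMidi(261.63)); // 60 (middle C)
```

The rounding in freqToMidi snaps a detected frequency to the nearest semitone; the residual (in cents) could later feed the intonation work mentioned above.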
Risks & Challenges
- I am confident in my coding abilities. I write JavaScript for a living at the moment: this project will allow some professional growth (e.g. React).
- The most significant difficulty, I think, lies in measuring similarity between melodies in a way which is musically sensitive. In previous work, I used crude measures (e.g. all notes in the correct order, regardless of timing), but here something more sophisticated will be required. It may be possible to measure similarity using Google Magenta itself. Another problem is that similarity is to some extent subjective: part of being a musician lies in finding similarities between disparate ideas.
- Time is an issue. I think, though, that there are enough ways of limiting scope to make it possible to adapt (e.g. working only with rhythm, and a single sound).
- the most basic adaptation would be to resubmit the Imitation Lab project, reworked for music rather than video.
- In previous work I have used an electric piano to play melodies on. This is clearly preferable: but for accessibility, it might be useful to find a QWERTY-keyboard-based MIDI controller (I couldn’t find one in the past).
- It might be difficult to assess the project without an instrument
- this could be mitigated by filming its use.
- I am not sure if the project will achieve its goal of being absorbing. The aim of an experimental system is, to some extent, to produce the unexpected: I trust my instinct that something interesting will emerge!
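As a first baseline for the similarity problem above, one crude but concrete option is edit (Levenshtein) distance over sequences of pitch intervals: comparing intervals rather than absolute pitches at least makes the measure transposition-invariant. This is only a sketch of a starting point, not a musically sensitive measure (it ignores rhythm entirely); anything better would draw on the papers cited in the Research Review.

```javascript
// Crude melodic similarity: Levenshtein distance over pitch-interval sequences.
// Using intervals (differences between successive MIDI notes) rather than
// absolute pitches makes the measure invariant under transposition.
function intervals(pitches) {
  return pitches.slice(1).map((p, i) => p - pitches[i]);
}

// Standard dynamic-programming Levenshtein distance over two arrays.
function editDistance(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                  // deletion
        dp[i][j - 1] + 1,                                  // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

function melodicDistance(melodyA, melodyB) {
  return editDistance(intervals(melodyA), intervals(melodyB));
}

// A melody and its transposition up a fifth are identical under this measure:
const tune = [60, 62, 64, 65, 67];       // C D E F G
const transposed = tune.map(p => p + 7); // G A B C D
console.log(melodicDistance(tune, transposed)); // 0
```

Distance 0 here means "same interval contour", which is a deliberately loose notion of sameness: it treats a melody and all its transpositions as one, which seems right for playing by ear.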
[Note: this text was roughly written!]