Alan Edward Williams

Composer/ cyfansoddwr



Collaborative composition with Paynter: a digital tablet-based platform

In 1990, when I was an undergraduate in Edinburgh, we had a visit from the Polish composer Witold Lutoslawski. Some of the students had been working with schoolchildren on a community music project with players from the Scottish Chamber Orchestra, creating music collaboratively based on one of his pieces, under the inspirational leadership of Nigel Osborne. “Very interesting”, said Lutoslawski, “but of course, this is improvisation, not composition”.

 

Lutoslawski was well known for techniques (described as “aleatoric” after the Latin for dice) which allowed musicians a certain degree of freedom within fixed parameters to play material at a pitch or time of their choosing, so it’s interesting he drew this clear line between composition and improvisation (or “not composition”). What he meant, I think, was that even where some freedom within given parameters is allowed, the composer should have sole authority over the structure of the music, and over the parameters set.

 

While Nigel Osborne was – and is – driven by a deep belief in the restorative power of music to inspire communities and to heal division, as he would go on to show in Bosnia and elsewhere, more generally the imperative to involve communities and schools in community arts projects related to the professional activity of an orchestra springs partly from a democratic principle – driven by public funders – partly from an urgent need to develop new audiences for classical music, and probably from the belief that the two things are one and the same.

 

Until relatively recently, the structure of community music projects tended to be hierarchical, with the composer at the top, whose work is performed by the professional ensemble. Some players from the ensemble go into a community setting, usually led by a professional community musician or composer-animateur, to collaboratively develop a piece or an artwork based on the professional piece, and that community piece or artwork is then shown or played in an event which is close to, but not identical with, the main ‘professional’ event. The concert audience may form part of the community group, and they may see the work produced by the community group – but this experience is peripheral to the professional performance event.

comm arts trad

In 2009, when my piece “Wonder: A Scientific Oratorio” was premiered, the BBC Philharmonic’s then projects director Martin Maris led an amazing project with multiple animateurs to create artwork and soundscapes with schools from around Salford, based on the ideas explored in the oratorio. The original plan was that music created by the community groups would feature in the same concert, but in the end it was shown in a nearby space which audiences at the main event could visit if they wanted.

 

But what if you could reverse that hierarchy, and make a project in which the final performed work came as a result of the community arts activity, rather than a spin-off? Could we create a structure and practice in which ideas were sourced from the community group and ended up in a professional performance, rather than the other way round?

 

Paynter, developed by Dr Adam Hart as part of his PhD research, enabled that to happen. Paynter is a tablet-based app which allows the user to draw lines and blobs, to position sound-generating icons on a drawing-style canvas, and to play back the sounds generated. It is designed so the user can use sounds and notes to tell a story. Its design allows certain compositional procedures, such as motivic repetition, transposition and imitation, to be used; it also allows sounds recorded from the environment, or from an ensemble, to be manipulated in certain ways.

Paynter on tablet

Adam and I went into St Peter and St John Primary School in Salford on five visits in March–May 2019. On our first visit we were accompanied by four professional musicians provided by the BBC Philharmonic orchestra: Gary Farr (trumpet), Gemma Bass (violin), Kathryn Williams (flute) and Elinor Gow (cello). All the musicians were accomplished improvisers and animateurs in their own right, and we were conscious that the tablet-based app Adam had developed might prove to be surplus to requirements. But we persisted, because we wanted to see if the app could be used to encourage the participants to think in specifically compositional ways.

 

In the first session, the musicians introduced their instruments, and developed improvised pieces based on two stories that the school had already been working on – one about the Titanic, and one about a cat and a bird. The whole session was recorded, and the sounds generated from this improvisation, including lots of extended techniques, were assigned to icons in the Paynter app.

 

In three further visits, the composer-technologists (Adam and I) worked on the narratives in two classes, one year 3 and one year 5. We got the kids to divide the story into scenes which could be relayed through sounds and musical ideas in the Paynter app on the tablets. The kids created and finalised graphic scores on the app, and I then transcribed these into traditional notation.

Cat in glue

“The cat gets stuck in the glue”

cat in glue score

 

 

I made great efforts to stick as closely to the graphic scores as I could: I was aware that all of the musicians could probably have played the pieces straight off the graphic scores. But Paynter’s design created a fixed relationship between the sounds created in the project and the graphic score – it would always play the score the same way. This is clearly the opposite of the standard use of graphic scores with human musicians, where they are used to initiate a more flexible relationship between the “work” and the sounding outcome.

 

But that fixed relationship meant that I could transcribe with some confidence for the ensemble of four musicians. Some challenges remained. Although much of the material created in the group was either pitched notes (like MIDI) or icon-triggered recordings of music created in the first session by the instrumentalists themselves, much was also sound effects (how do you make the sound of leaves rustling on a trumpet?). Most problematically, sounds created at pitch on the original ensemble of trumpet, violin, flute and cello were transformed when shifted to the bottom of the canvas. So a simple trumpet note became a ship’s horn, for example.

 

When we played the pieces back with the ensemble, the school participants also created mouth and percussion noises alongside the ensemble, on cue from the composer-animateur (me). One noise in particular was popular in the Titanic scene – one student was able to make very effective dolphin chattering noises with his own voice. Another member of the ensemble, BBC Philharmonic cellist Eli Gow, used a harmonic glissando to make a seagull sound, which most groups used. Because the sources of musical ideas were varied, and sometimes used in mediated form, a flowchart of the project’s ideas is much more complex, and less hierarchically organised, than the conventional model for community arts projects. It might look something like this:

coll comp structure

 

The aim here is to show the multiple relationships occurring in the project, in which no one person has overall authority over the final outcome.

 

The final phase of the project was the use of some of the ideas of the collaboratively composed score in a longer piece by me, The Rivet’s Tale. This performance had already been agreed, but the piece had to be written after the school project phase had finished, leaving me with little over three weeks to write a piece of approximately 9 minutes for 13 musicians. While the collaboratively written pieces – lasting 2 minutes each, and for only 4 players – were relatively quick to transcribe (I did both of these transcriptions in one long composing day), it proved difficult to simply expand these short pieces into a longer piece. The material created for the shorter pieces just wouldn’t work like that. I eventually abandoned the thought of “simply” scaling up the collaboratively composed pieces, and chose only the Titanic story, weaving a more complex texture out of carefully chosen themes that would work in the new context.

 

As a project, we showed that the Paynter app could encourage children of Key Stage 1 and 2 ages to think compositionally. The use of graphic scores generated by the kids themselves, which were then transcribed as carefully as possible, made the process of collaborative composition much more transparent. We didn’t test the proposition that, by composing a piece collaboratively “bottom up”, the participants would have more of a sense of ownership over the collaboratively composed piece, or gain more understanding of the “professionally” composed piece; future projects need to build that in.

 

I am left feeling unsure about Lutoslawski’s distinction between composition and improvisation. I often use improvisation to begin a piece, developing material through play. When I am improvising in a jazz context, I’m often thinking and planning in a motivic way. By anchoring much of the activity on the app in “play” – which is by its nature collaborative – the process could be viewed as a kind of improvisation. But it’s also very much like my process of composition. Perhaps this simply points to technology having a significant effect on the nature of the creative process, and to that process having altered significantly since 1990, when Lutoslawski made his remark.



The Arsonists – a Northern Opera?

It’s a year since Heritage Opera and an ensemble from the BBC Philharmonic premiered my and Ian McMillan’s The Arsonists. It received huge amounts of coverage in the week leading up to its performance, mainly because of the portrayal of working-class characters singing in an accent as close to South Yorkshire as we could manage. It wasn’t, of course, the first time working-class characters had appeared on the opera stage in the main roles, and not as ‘peasant’ colour – in, say, L’Elisir d’Amore – and Mark-Anthony Turnage’s Greek used East London working-class accents in 1988, but it was at least the first time that a Northern English identity had been portrayed in accent on the operatic stage.

I’m now looking to take it on tour, and will be contacting theatres, rather than opera houses and companies, to find venues. This may in fact mean abandoning the term ‘opera’ altogether. I don’t intend to alter the show at all, but it seems likely that the very term ‘opera’ might put off the very audience we’d intended to attract. In the end it’s a story, which is sung. It has about as much spoken (as opposed to sung) dialogue as a Sondheim musical, and some of the material is definitely song-like. But it demands operatic voices and training.

If you know, work in, or maybe even run a suitable 200–300-seat theatre, or would like to see it near you, get in touch!

Here’s a trailer:



Playing the Echo: a musical-acoustic collaboration

Every so often a project comes along which is so attractive that I want to drop everything to make it happen. Playing the Echo, part of the Manchester Science Festival, was one of these and proved to be an excellent example of what happens when science and the arts collaborate.
My colleague Trevor Cox, Professor of Acoustics at the University of Salford, had been asked to give a talk in the Manchester Science Festival about the extraordinary acoustic of Manchester Central Library Reading Room. As Trevor is also a musician (a rather fine saxophonist on the quiet) and we’d been talking about doing something together for a while, he asked me if I’d do something musical for the event.
I didn’t need asking twice. If you’ve ever been in the reading room, and in particular if you ever went in before its recent renovation (2008–12), you’ll remember the surreal acoustic effects created by any audible movement such as (typically) a pencil dropping, a cough or a sneeze. As a schoolboy and later as a student I would sometimes come and use the library, nearly always in the old Henry Watson Music Library, then tucked away on the third floor, but I would always find some excuse to go into the reading room, and unobtrusively drop something or cough extravagantly, just to hear the echo.
At that time, before acoustic treatment as part of the refurbishment, there were two really notable features. One was the extensive reverb, which Trevor tells me was formerly around 3 seconds. The other, more striking, was the repeated echo, most obvious at higher frequencies – which is why pencil dropping was so effective. A single source could be heard up to 5 or 6 times, reflected off the hard surfaces of the dome.
After acoustic treatment, both the reverb and echo were dramatically reduced, leaving only a single noticeable slapback echo around a fifth of a second after the initial source.
My research student Adam Hart (whose AHRC project is actually digital music learning interfaces in schools, but who is kind enough to tolerate my many demands on him as an all round technologist) and I went in to the reading room before it opened – Trevor had already been in with his sax, and advised us that short, sharp sounds would be more effective in the acoustic. So I asked Gravity Percussion duo to play, and it turned out they had already played once before in the room, and were able to offer some advice on the acoustic as well.
Adam and I were equipped with two pairs of claves and recording equipment. We wandered around the room for a bit, playing with the acoustic and recording it, and noted that towards the centre of the room the slap-back echo was very clear and focused, and that towards the edge of the room it became more diffuse. OK, we thought, that’ll do. I measured the echo and worked out what metronome mark I’d need to make the echo “slot in” to the gaps between notes: this turned out to be crotchet = 139 – if a crotchet is played at that speed, the echo ‘plays’ quavers in between. I reckoned that as guitarists often use digital delay in this way (the Edge of U2, for example), it shouldn’t present too many problems for the ensemble, even if the echo was generated by the building. I also set the tempi so that sometimes the echo would slot in as a compound-time quaver – in 12/8, at crotchet = 112, the echo is the second quaver in a dotted crotchet beat.
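The arithmetic behind that metronome mark is simple enough to sketch in a few lines. This is a minimal illustration, assuming a measured slapback delay of roughly 0.216 seconds (about a fifth of a second); the function names are mine, not part of any software used in the project:

```python
def quaver_gap(crotchet_bpm: float) -> float:
    """Seconds between a crotchet stroke and the off-beat quaver that follows it."""
    return 60.0 / crotchet_bpm / 2.0

def tempo_for_echo(delay_s: float) -> float:
    """Crotchet tempo at which an echo arriving delay_s after the stroke
    lands exactly on the off-beat quaver."""
    return 60.0 / (2.0 * delay_s)

# At crotchet = 139, the gap to the next quaver is about 0.216 s,
# which matches a slapback echo of roughly a fifth of a second.
print(round(quaver_gap(139), 3))     # 0.216
# Working backwards from the measured delay recovers the metronome mark.
print(round(tempo_for_echo(0.216)))  # 139
```

The same arithmetic extends to the compound-time case by treating the echo as one quaver of the dotted-crotchet beat rather than half a crotchet.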
My other main thought in writing the piece was also architectural – the room is ringed by a colonnade of 28 pillars in 4 segments, separated by doorways, and atop them is a quotation from Proverbs 4:7: ‘Wisdom is the principal thing; therefore get wisdom: and with all thy getting get understanding.’ This gave me a title for the piece – Pillars of Wisdom.
The four entrances can be disorientating, and counting the pillars initially made me think of another ring of upright stones I’d written a piece “about” – Bryn Cader Faner, near Talsarnau in Gwynedd. This Bronze Age circle is located at the junction of two ancient roadways linking important trade routes in North Wales. The two locations could hardly be more different – one on a bleak mountainside, one at the heart of a vibrant, modern city – but they are both collective expressions of the importance of culture, placed at significant junctions.
So I used the same procedures to write the piece as I did with the one about Bryn Cader Faner, Meini Hirion, which I wrote for my ensemble ACMG at Salford University. I also wanted to use the ‘keynote’ sound of the centre of Manchester – the D/A dyad played by trams when they are in the city centre. As it happens, that was the keynote of Meini Hirion as well – pure coincidence.
We knew that high frequencies weren’t as well reflected as lower frequencies in the reading room – or more accurately, the prominent high frequency reflections of the existing space had been most successfully damped by acoustic treatment. I wanted to show this in the piece, so I started it with 5 basic frequency bands moving from high to low – claves, high woodblock, low woodblock, bongos high and low. This opening passage is about establishing the tempo so that the echo slots neatly between the played strokes.
Then, in a fit of theatricality, one player walks to the perimeter of the room while the player in the middle plays a ‘holding pattern’. The perimeter player reads a series of notes which I’d blu-tacked to each pillar – and these notes are passed to the middle player, who plays chords derived from them on the marimba or vibraphone. While these chords are being played, the perimeter player walks to the next pillar for the next chord. All 28 pillars have a separate chord, and these will work in any order.
It was important at all times that eye contact was possible between players, because the acoustics of the hall made normal listening difficult, so I initially placed the marimba and vibraphone opposite each other, as close as possible to the central plinth-type thing that dominates the centre of the reading room (it used to be the librarians’ desk and hide a spiral staircase to the store underneath).
It turned out that this made it almost impossible for the musicians in the last section of the piece, where the perimeter player returns to the centre and they play various ‘hits’ in unison. We discovered that the room’s acoustic created an extremely strong focal point on the opposite side – exactly where the other instrument was placed initially – and this focal point was not only much louder than the sound leaking round the plinth but also slightly delayed. The musicians had to have sight of each other to play in sync so we had to slightly offset the instruments.
Prior to the performance, Trevor had given an introduction to the acoustics of curved spaces in a separate performance space. The audience then walked up to the reading room and were invited to wander round the space to experience the different qualities of sound created by the hall’s acoustics. In this accompanying 360 video made by Trevor and his team, half the notes you’re hearing are actually reflections from the room. The microphone was placed more or less at one of the focal points, so the hall is really acting as an amplifier to the signal.



Northern Voices Opera project – the survey

The Northern Voices Project

We’d be really interested in hearing how well you think singing operatically in a Northern accent works. Over the course of the project Ian McMillan and I wrote four songs using Ian’s own Barnsley accent. Here are links to videos of them being performed by singers Nick Sales (tenor), Zoe Milton Brown (soprano), Sarah Helsby Hughes (soprano) and Tom Eaglen (baritone), with John Wilson at the piano.

We’d love it if you could listen to one or more of the songs, and then let us know what you think with this short survey. The whole thing will take about 5 minutes if you listen to one song all the way through. We’ll be thinking about your responses when we write the full opera later this year and early 2016! Thanks so much!

1. Like Me Dad

Me mam said he had a lovely voice

‘like an angel in a cap’ she said…
