Adaptive Podcasting
I’m thrilled to have been invited to join a group of creatives put together by artist / academic / educator Penny Hay called the “Rabbit Holes Collective”. The name comes from Penny, as an invitation for young people to fall down a metaphorical rabbit hole and connect more deeply with nature through creativity. Read more from Penny HERE. As a group we have been introduced to Adaptive Podcasts by BBC R&D senior firestarter Ian Forrester. The idea is for a podcast to adapt and change based on external parameters: rather than the listener actively interacting, ambient measurements that a regular smartphone can sense alter the podcast, creating a more personalised encounter for each person. Here is a more detailed outline.
My initial idea was to use the ‘Deep Listening Walk’ format as a starting point, where I invite people to join me on a walk to listen to usually inaudible sounds, focussing on underwater sounds. I thought about creating a soundscape that would change depending on the weather, drawing on my own library of sound recordings of rain and wind, and considering how to relate some of my other field recordings to combinations of weather conditions and seasons. I began by following the tutorials for the adaptive podcast editor and explored what was possible within this context. After conversations with Ian, it transpired that there is currently no free, open source weather data provider, which throws up a few obstacles to using the weather forecast to change the soundscape: users would need to subscribe to a weather data provider, and the aim is for all these podcasts to be free and open source. At this point, I decided to look at what inputs are already available and shape some ideas around them, so I could get on with ‘making’ and exploring what it’s like to use… ie – get stuck in! I came up with three initial ‘sketches’ of adaptive podcasts:
1 – Battery-Biodiversity
After looking through which inputs were available to use on the online editor, I decided to use ‘phone battery level’ because it can be read as a value between 1 and 100%, providing a wide range of ‘states’ to respond to. I wanted to relate the battery level to the ‘biodiversity’ of a soundscape. This was inspired by Bernie Krause’s Great Animal Orchestra, based on his bio-acoustic research relating the richness of a soundscape (many frequencies present, from many animal calls) to levels of biodiversity. This first adaptive podcast plays a soundscape that combines recordings of many species when the phone battery level is high, and very few when the battery level is low. For this first iteration, I have focussed on underwater sounds, drawing on recordings I have made locally in Bristol and Bath alongside recordings made as far away as Mexico and Tasmania. I aim to create some more iterations with insect sounds and birdsong. I made the soundscape quite short, because the ‘input reading’ (of the phone battery level) is taken at the start of the soundscape playback and does not change dynamically. So the podcast can be re-listened to periodically as battery levels change – creating different versions for the listener.
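To give a rough sense of the logic, here is a small sketch – purely illustrative, not the Adaptive Podcasting editor’s own format; the file names and battery thresholds are invented placeholders:

```kotlin
// Illustrative sketch only: maps a battery reading (1–100) taken at playback
// start to a number of species layers. File names and thresholds are placeholders.
fun layersForBattery(batteryPercent: Int): List<String> {
    // Recordings ordered roughly by how early they drop out as 'biodiversity' falls.
    val species = listOf(
        "bristol_harbour.wav",
        "bath_river.wav",
        "mexico_reef.wav",
        "tasmania_shrimp.wav"
    )
    val keep = when {
        batteryPercent >= 75 -> species.size  // rich, full soundscape
        batteryPercent >= 50 -> 3
        batteryPercent >= 25 -> 2
        else -> 1                             // sparse soundscape
    }
    return species.take(keep)
}

fun main() {
    println(layersForBattery(82))  // all four layers
    println(layersForBattery(18))  // a single layer
}
```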
2 – Soundscapes for different times of day
The second podcast works with ‘time of day’, which is separated into ‘Morning’ – which plays the sound of the dawn chorus, ‘Afternoon’ – which plays the sound of a cosy camp-fire, ‘Evening’ – which plays the sound of crickets and cicadas chirping in the late afternoon, and ‘Night’ – which plays a soundscape of bats. I aim to develop this further and have different versions of this for different times of year too.
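Again purely as an illustration of the branching (the hour boundaries and file names are my own assumptions, not something fixed by the editor):

```kotlin
import java.time.LocalTime

// Illustrative sketch only: picks a soundscape for the current time of day.
// Hour boundaries and file names are assumed placeholders.
fun soundscapeFor(time: LocalTime): String = when (time.hour) {
    in 5..11 -> "dawn_chorus.wav"           // Morning
    in 12..16 -> "campfire.wav"             // Afternoon
    in 17..21 -> "crickets_and_cicadas.wav" // Evening
    else -> "bats.wav"                      // Night
}

fun main() {
    println(soundscapeFor(LocalTime.of(7, 30)))  // dawn_chorus.wav
    println(soundscapeFor(LocalTime.of(23, 0)))  // bats.wav
}
```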
3 – Light and dark
The third podcast plays with the idea of applying a frequency filter to a field recording based on whether it is light or dark. I have treated an underwater soundscape with a high-pass or a low-pass filter, so when it is light the higher frequencies are accentuated, and when it is dark the lower frequencies are accentuated. This function (or another ‘effects’ function) could be nested into another podcast design and applied to any of the included sounds, with the audio effect responding to light / dark status, or perhaps another input.
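The underlying idea, sketched very loosely outside the editor (the simple one-pole filter and its coefficient here are stand-ins for the editor’s own audio effects, not what the podcast actually uses):

```kotlin
// Illustrative sketch only: a simple one-pole filter, with light/dark choosing
// whether the high or low frequencies are kept. Parameters are placeholders.
fun onePoleLowpass(input: FloatArray, alpha: Float): FloatArray {
    val out = FloatArray(input.size)
    var prev = 0f
    for (i in input.indices) {
        prev += alpha * (input[i] - prev)  // smoothing = low-pass
        out[i] = prev
    }
    return out
}

fun filterForLight(input: FloatArray, isLight: Boolean): FloatArray {
    val low = onePoleLowpass(input, 0.1f)
    return if (isLight) {
        // Light: accentuate the higher frequencies (signal minus its low-passed copy).
        FloatArray(input.size) { i -> input[i] - low[i] }
    } else {
        // Dark: accentuate the lower frequencies.
        low
    }
}

fun main() {
    val toy = FloatArray(8) { i -> if (i % 2 == 0) 1f else -1f }  // toy buffer
    println(filterForLight(toy, isLight = true).joinToString())
    println(filterForLight(toy, isLight = false).joinToString())
}
```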
After making these three sketches, I started to come up with new ideas, including ways to nest one idea within another and to take multiple inputs into account, as I move forwards with the process.
Finally, I also wanted to consider how this system could be used to build a ‘framework’ for a creative workshop format, where young people could make their own sound recordings and create their own adaptive podcasts. Taking the example of the ‘biodiversity’ podcast, this could work as a framework to invite young people to make their own collection of sound recordings to produce a soundscape that shifts in biodiversity. This could work well with exercises in inventing musical motifs to represent species, imitating the sounds of other species, or using technology to synthesise sound motifs as ‘species’. This creative sound exercise could be combined with discussions and knowledge sharing around biodiversity, including what we might be able to do to actively improve the biodiversity of our own neighbourhoods.