
Images and Algorithmic Composition

[Image: grunge_numbers, by paul james design]

A few years ago I started some research work on translating video and images into music scores: a kind of algorithmic composition. I did not want results that sound random or computer-generated.

The first attempt was for an exhibition called Redefining Materials, 22 June to 6 July 2012, School of the Arts, Singapore. The brief of the exhibition was to explore materials and redefine what that meant. I interpreted that as a transformation from one form of material to another. I took two or three abstract paintings and shot macro photographs of their surfaces at an angle; they formed vistas, horizons and landscapes. I arranged 100 images into a timeline to play back as a video stream.

I went back to some algorithms that I had been writing in my spare time to generate a music score from a set of rules that produces ever-changing music. Changes in the score were triggered every few bars and were somewhat random. The style sat somewhere between Philip Glass and Steve Reich: Glass, for the arpeggios that systematically change over time and the mixture of quarter notes against triplets; Reich, for the phase shifting between parts over time and the addition and subtraction of notes in the phrases.
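The two Reich-like processes mentioned above can be sketched very simply. This is my own illustration, not the code used for the piece: one function rotates a second copy of a phrase by one note per cycle (phase shifting), the other reveals the phrase one note at a time (additive process). Pitches are MIDI note numbers.

```python
def phase_shift(phrase, cycles):
    """Two copies of `phrase`; the second rotates by one note each cycle."""
    bars = []
    for c in range(cycles):
        shift = c % len(phrase)
        part_a = list(phrase)
        part_b = phrase[shift:] + phrase[:shift]
        bars.append(list(zip(part_a, part_b)))  # simultaneous note pairs
    return bars

def additive(phrase):
    """Reveal the phrase one note at a time: [n1], [n1, n2], ..."""
    return [phrase[:i] for i in range(1, len(phrase) + 1)]

melody = [60, 62, 64, 67, 69]           # C D E G A as MIDI note numbers
print(additive(melody)[1])              # [60, 62]
print(phase_shift(melody, 2)[1][0])     # (60, 62): parts now one step apart
```

In cycle 0 the two parts are in unison; each later cycle they drift one note further apart, which is the audible essence of phasing.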

I thought how much more meaningful it would be if the changes to this music were based not on random events but on qualities of the series of images in the slide show, thereby redefining the image into another form. Changes to the musical score were made by analysing each image for colour and for the texture values across the image. The programming was done in a high-level mathematics programming language that generated a MIDI file; this was loaded into Logic X, assigned to instruments, and rendered to an MP3/WAV file. With a real-time language the soundtrack could be generated live from a video feed or webcam.
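As a hedged sketch of that kind of mapping (the actual work used a mathematics language and real macro photographs, and the chord table here is my own invention): the average colour of an image picks a chord, and the variance of brightness, a crude texture proxy, sets the velocity.

```python
def analyse(pixels):
    """Return (average RGB, brightness variance) for a list of RGB tuples."""
    n = len(pixels)
    avg = tuple(sum(p[i] for p in pixels) / n for i in range(3))
    lum = [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in pixels]
    mean = sum(lum) / n
    variance = sum((l - mean) ** 2 for l in lum) / n
    return avg, variance

# Hypothetical colour-to-chord table (MIDI note numbers), for illustration.
CHORDS = {"red": [60, 64, 67], "green": [62, 65, 69], "blue": [64, 67, 71]}

def to_chord_and_velocity(pixels):
    (r, g, b), var = analyse(pixels)
    dominant = max((("red", r), ("green", g), ("blue", b)),
                   key=lambda t: t[1])[0]
    velocity = min(127, 40 + int(var ** 0.5))  # rougher texture -> louder
    return CHORDS[dominant], velocity

pixels = [(200, 40, 30), (180, 60, 50), (220, 30, 20)]   # a reddish patch
chord, vel = to_chord_and_velocity(pixels)
print(chord, vel)   # red patch -> C major triad [60, 64, 67]
```

A real pipeline would then write these events to a MIDI file (e.g. with a MIDI library) rather than printing them.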

The resulting video artwork was shown in the exhibition and appears below:

I liked this method; it can be applied to any video or image stream to generate a music score. Perhaps it’s good for performance art or action painting, where the artist and soundtrack interact and feed back on each other … a riff in space and sound.

As the old adage goes, a picture tells a thousand words. It became apparent that if a single image were split into sub-images, a whole piece of music could be generated. Depending on the scale, maybe a whole symphony.

Sub-images were analysed to generate different musical phrases: the dominant colour corresponded to a chord choice, and the brightness and texture of the image corresponded to the sequence of notes and to the velocity. There are hundreds of ways to map image properties to notes, rhythm and velocity. I played with algorithms and images until something made sense. I ended up with a body of work that was shown at The Substation, Singapore, consisting of 14 images, digital in origin, printed on canvas, and 14 individual soundtracks.
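The sub-image idea can be illustrated on a toy greyscale image: split it into tiles and give each tile a short phrase whose starting scale degree follows the tile's mean brightness. This is only one of the "hundreds of ways" to map image to music, and much simpler than the exhibition pieces.

```python
def tiles(image, rows, cols):
    """Yield rows*cols rectangular tiles of a 2D greyscale image (0-255)."""
    h, w = len(image), len(image[0])
    th, tw = h // rows, w // cols
    for r in range(rows):
        for c in range(cols):
            yield [row[c * tw:(c + 1) * tw]
                   for row in image[r * th:(r + 1) * th]]

def phrase_for(tile, scale=(60, 62, 64, 65, 67, 69, 71)):  # C major, MIDI
    """Four stepwise notes starting at a degree set by mean brightness."""
    flat = [px for row in tile for px in row]
    mean = sum(flat) / len(flat)
    degree = int(mean / 256 * len(scale))       # brighter -> higher degree
    return [scale[(degree + i) % len(scale)] for i in range(4)]

# Synthetic 8x8 gradient image standing in for a photograph.
image = [[(x * 16 + y * 8) % 256 for x in range(8)] for y in range(8)]
phrases = [phrase_for(t) for t in tiles(image, 2, 2)]
print(len(phrases))   # 4 tiles -> 4 phrases
```

Stringing the phrases together in tile order, or layering them, gives the piece its large-scale shape directly from the image's composition.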

The work bridges the areas of art, music and mathematics. The challenge works both ways: to produce a musical soundtrack from an image,  or to produce an image that yields a musical soundtrack.
