Pattern and Sound Series - P5js and Tone.js

Using p5.js and Tone.js, I created a series of compositions that reconfigure in real time in response to input from a Kinect motion sensor, generating soothing and energizing synthetic sounds that match the visual elements of the piece. People in front of the projected visuals and sensor, in effect, create sonic compositions simply by moving, almost like a conductor in front of an orchestra, but with more leeway for passive interaction (if one simply walks by) or intentional interaction.

The sound is easy to ignore or pay attention to. It is ambient and subtle. 

The next steps are to create more nuanced layers of interactivity that allow for more variation in how the sound and the visuals can be manipulated. Right now I envision the piece becoming a subtle sound orchestra that one can conduct with various hand movements and positions on the screen.
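The interaction itself is a simple mapping from body position to sound parameters. A hypothetical sketch in Python (the piece actually runs in p5.js with Tone.js; the frequency range, the two-octave sweep, and the clamping here are my illustrative assumptions):

```python
# Sketch: map a normalized Kinect hand position (x, y in 0..1) to sound
# parameters. Left-to-right sweeps pitch over two octaves above A2 (110 Hz);
# raising the hand raises the volume.
def hand_to_params(x, y):
    freq = 110.0 * (2.0 ** (2.0 * x))   # 110 Hz at x=0, 440 Hz at x=1
    amp = max(0.0, min(1.0, y))         # clamp amplitude into 0..1
    return freq, amp
```

In the installation, numbers like these would drive a synth's frequency and gain.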

Composition 1: Ode to Leon's OCD

In the editor.

Composition 2: Allison's Use of Vim Editor

Mouseover version for testing on the computer.

In the editor using Kinect.

Composition 4: "Brother"

In the p5js editor.

For this syncopated piece I sampled sounds of tongue clicks and claps. No Tone.js was used in this piece.

Composition 5: "Free Food (in the Zen Garden)"

For this composition I used brown noise from Tone.js. 

In the p5js editor.

After using mouseOver for prototyping, I switched over to Kinect and placed the composition on a big screen to have the user control the pattern with the movement of the right hand instead of the mouse. 



I'm using Tone.js, written by Yotam Mann.


P5js and Adobe Illustrator.

Final Compositions



Initial Mockups


Sketches of ideas



The simple grid-like designs I am creating are inspired by Agnes Martin's paintings. 

Minimalist and sometimes intricate, but always geometric and symmetrical, her work has been described as serene and meditative. She believed that abstract designs can elicit abstract emotions from the viewer, such as happiness, love, and freedom.



Composition 1: "Ode to Leon's OCD" using mouseover code.

Composition 2: "Allison's Use of Vim Editor" with Kinectron code, using Shawn Van Every and Lisa Jamhoury's Kinectron app and adapting code from one of Aaron Montoya-Moraga's workshops.

Composition 2 with mouseover for testing on computer.

Composition 3: "Isobel" with mouseover.

Composition 4: "Brother" with mouseover.

Composition 5: "Free Food (in the Zen Garden)"


More prototypes for sound sculpture enclosures. More about this project here.

The sound playground is a group of kinetic sculptures whose sonic behavior changes over time and whose interactions cannot be predicted with certainty. It encourages play, exploration, calm, and the relinquishing of complete control in the creative process. Fascinated by the human ability to remain playful throughout life, I aim to build an experience that allows people of all ages to create subtle sound compositions by setting kinetic sculptures in motion and to delight in the ever-changing and sometimes surprising results.
In addition to play, I am examining how humans interact with things that they cannot control or fully learn, and that change over time. The objects absorb bits of the surrounding noise and insert these bits into the existing algorithmic composition, somewhat like new snippets of DNA. A sculpture's program also changes in response to how it was moved. The experiences of a sculpture are incorporated into its future behavior, but it changes alongside us using its own logic. The interactions invite users to examine what it means to them to be in control, especially in control of the creative process.

This project is a series of kinetic sculptures that emit subtle sounds when set into motion by people. The sounds of some of the sculptures will change over time as they absorb bits of the surrounding noise and insert them into their existing ambient soundscape, somewhat like new snippets of DNA. These chunks of sound sourced from the environment may not play back right away and may not manifest themselves for months. They will also be highly abstracted by the sculptures' code. The idea is that these objects behave quite independently in terms of their sonic properties and behaviors, and that the user can't learn or predict their behavior with certainty. Relinquishing control over the objects, we are met with ever-changing interactions that we can observe and delight in.

Explorations of enclosures

These are explorations of shapes for a sound object series, the "sound playground" project I am working on at ITP. Full project here

The sound playground is a meditative, interactive sculpture garden that encourages play, exploration, and calm. Fascinated by the human ability to remain playful throughout life, I want to create a unique experience that allows people of all ages to create sound compositions by setting kinetic sculptures in motion.

In addition to play, I am examining the concept of aging and changing over time. The sound-emitting kinetic objects absorb bits of the surrounding noise and insert these bits into the existing algorithmic composition, somewhat like new snippets of DNA. A sculpture's program also changes in response to how it was moved. In this way, the experiences of a sculpture are incorporated into an ever-changing song that it plays back to us, changing alongside us.


Video Projects Using VidPy and FFmpeg

These are videos that I created using VidPy and FFmpeg for Detourning the Web at ITP.



Video 1: A Species Goodbye

Maria Falconetti is sampled here from her role as Joan of Arc in Carl Theodor Dreyer's 1928 silent film, La Passion de Jeanne d'Arc, and represents our generation saying goodbye to the polar bear species in the wild. We see ourselves in Falconetti's anguish as we reflect on how negligent capitalist practices have set environmental decline into motion. The last scene shows a polar bear waving its paw at the viewer, but in the clearly artificial, jerky way that VidPy allows for. The viewer understands that the polar bear is not actually waving, but the effect is comically heart-wrenching.

VidPy is a Python video editing library developed by Sam Lavigne.

Other videos used:
 Boston Dynamics' BigDog on the beach, from YouTube,
 the fastest man to run on all fours, from YouTube,
and polar bear footage from YouTube


Video 2: Screens, Portals, Men, and Frodo

In this video piece I sampled video footage of Steve Jobs, Star Trek: TNG, The Lord of the Rings, and a nature video about summer that had poetry text. All of the videos were downloaded from YouTube using youtube-dl, fragmented with FFmpeg, and put together with jerky offsets using VidPy, a Python library developed by Sam Lavigne.

Themes explored: men as adventurers, technology as men's realm, legendary and real iconic figures and the grey area between, male as default gender in pop culture.


Code for Video 1, A Species Goodbye:



Code on my GitHub



ffmpeg -ss 00:06:18 -i jobs_640_480.mp4 -c:v copy -c:a copy -t 6 jobs_640_480_1.mp4

ffmpeg -ss 00:07:43 -i jobs_640_480.mp4 -c:v copy -c:a copy -t 5 jobs_640_480_2.mp4

ffmpeg -ss 00:07:51 -i jobs_640_480.mp4 -c:v copy -c:a copy -t 5 jobs_640_480_3.mp4

ffmpeg -ss 00:11:08 -i jobs_640_480.mp4 -c:v copy -c:a copy -t 3 jobs_640_480_4.mp4

# cutting out a portion from the Jobs interview that shows the screen

ffmpeg -ss 00:00:06 -i sheliak_636_480.mp4 -c:v copy -c:a copy -t 5 sheliak_636_480_1.mp4
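Since the trims above only differ in start time, duration, and output name, they can also be assembled in a small Python sketch (timestamps and filenames taken from the commands above; this only builds the strings rather than running ffmpeg):

```python
# Build the same ffmpeg stream-copy trim commands as above.
cuts = [
    ("00:06:18", 6, "jobs_640_480_1.mp4"),
    ("00:07:43", 5, "jobs_640_480_2.mp4"),
    ("00:07:51", 5, "jobs_640_480_3.mp4"),
    ("00:11:08", 3, "jobs_640_480_4.mp4"),
]
commands = [
    f"ffmpeg -ss {start} -i jobs_640_480.mp4 -c:v copy -c:a copy -t {dur} {out}"
    for start, dur, out in cuts
]
```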

I got an error in the Python file while running singletrack:

Katyas-MBP:screens crashplanuser$ python

objc[40458]: Class SDLTranslatorResponder is implemented in both /Applications/ (0x10870af98) and /Applications/ (0x108b6c2d8). One of the two will be used. Which one is undefined.
objc[40459]: Class SDLTranslatorResponder is implemented in both /Applications/ (0x110617f98) and /Applications/ (0x110a892d8). One of the two will be used. Which one is undefined.

# get alien

ffmpeg -ss 00:00:29 -i sheliak_636_480.mp4 -c:v copy -c:a copy -t 1.5 sheliak_636_480_p.mp4

# get riker

ffmpeg -ss 00:00:38 -i holodeck.mp4 -c:v copy -c:a copy -t 10 holodeck.mp4_1.mp4

# get lotr chunk

ffmpeg -ss 00:00:34 -i lotr.mp4 -c:v copy -c:a copy -t 3 lotr_late.mp4

# couldn't get this to make a sound


Bot That Books All The Office Hours

For my Detourning the Web final project at ITP I made a bot that books all the office hours with the instructor, Sam Lavigne.  If I hadn't figured this out, I would have left one office hour booked so I could do this.

Thanks to Aaron Montoya-Moraga for help with this.  

Photo courtesy of Sam Lavigne


Data Sonification - Earth's Near Deaths

Project for week 4 of Algorithmic Composition, by Nicolás Escarpentier, Camilla Padgitt-Coles, and Katya Rozanova.



Today our group met up to work on sonifying data using Csound. At first we were planning to build on the work we did on Sunday, where we created a Markov chain to algorithmically randomize the "voice" or lead flute sound from a MIDI file of "Norwegian Wood" over the guitar track using extracted MIDI notes and instruments created in Csound.
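A first-order Markov chain over a note sequence can be sketched like this (a simplified Python stand-in for what we built; the note numbers in the example are hypothetical MIDI pitches):

```python
import random

# Build a first-order Markov chain: each note maps to the list of notes
# that followed it in the source sequence.
def build_chain(notes):
    chain = {}
    for a, b in zip(notes, notes[1:]):
        chain.setdefault(a, []).append(b)
    return chain

# Walk the chain to generate a new, randomized melody.
def generate(chain, start, length, rng):
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return out
```

Fed the extracted flute-line notes, this reorders the melody while staying in the tune's idiom.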

Our plan for the data sonification part of the assignment was to also take the comments from a YouTube video of the song and turn them into abstract sounds which would play over the MIDI-fied song according to their timestamp, using sentiment analysis to also change the comments' sounds according to their positive, neutral or negative sentiments. However, upon trying to implement our ideas today we found out that the process of getting sentiment analysis to work is very complicated, and the documentation online consists of many forums and disorganized information on how to do it without clear directives that we could follow.

While we may tackle sentiment analysis later on, either together or in our own projects, we decided that for this assignment it would suffice, and also be interesting to us, to use another data set and start from scratch for the second part of our project together. We searched for free data sets and came across a list of asteroids and comets that flew close to Earth here (Source:

We built nine instruments and parsed the data so that each rock plays according to its classification (nine in all), its date and year of discovery, and its location over a 180-degree angle, with each sound recurring algorithmically at intervals according to its period of recurrence. We also experimented with layering the result over NASA's "Earth Song" as a way to sonify both the comets and asteroids (algorithmically, through Csound) and the Earth they were flying over. The result was cosmic, to say the least (pun intended!).

Here are the two versions below.


Python script

By Nicolás Escarpentier, found here.

For each asteroid or comet on the file, we extracted some common characteristics to set the sound parameters. The most important aspect is to portray how often they pass near the Earth, so the representation of the time has to be accurate. We set an equivalence of one month = 5 seconds and a year multiplier of 12 months, in case we wanted to make a longer year to introduce longer periods of silence on the score. The audio file starts on Jan 1, 2010 - the earliest year from the acquired data set. Each rock's discovery date sets its first occurrence on the score, and each occurrence repeats itself according to its period_yr (except for the 'Parabolic Comet', which doesn't have a return period).

month_interval = 5. # in sec
year_mult = 12 # multiplier (how many months in a year)

for a in aster_data:
    # get raw data
    datetime = dateparser.parse(a['discovery_date'])
    yea = datetime.year       # starting time
    mon = datetime.month      # starting time
    day =        # starting time

    # first occurrence (starting in 2010)
    start = ((yea - 2010) * year_mult + mon + day / 30.) * month_interval

    # recursion: recur is the rock's period_yr (return period in years)
    recur = float(a['period_yr'])
    start += recur * year_mult
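As a sanity check of that mapping, a hypothetical rock discovered on March 15, 2012 would first sound here in the score:

```python
# 1 month = 5 seconds, 12 months per year, timeline starting Jan 1, 2010
month_interval = 5.0
year_mult = 12
start = ((2012 - 2010) * year_mult + 3 + 15 / 30.0) * month_interval
# 27.5 "months" at 5 seconds each puts it 137.5 seconds into the piece
```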

For the other parameters, we selected characteristics that gave us some expressive possibilities. The pitch of each rock is based on the orbit's angle (i_deg), the instrument on its orbit_class, and the duration on q_au_1 (which we have no idea what it actually represents). For the scale of this score, we chose B-flat minor, in reference to the sound of a black hole and the "lowest note in the universe".
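The angle-to-pitch idea can be sketched in Python (an illustrative mapping, not our exact score math; the base octave and the two-octave range are my assumptions): quantize i_deg onto a B-flat minor scale, then convert the MIDI note to a frequency.

```python
# Semitone offsets of the natural minor scale, rooted at B flat.
BB_MINOR = [0, 2, 3, 5, 7, 8, 10]

def angle_to_freq(i_deg, base_midi=46):  # MIDI note 46 = B flat 2
    # spread 0-180 degrees over two octaves of the scale
    step = int(i_deg / 180.0 * (len(BB_MINOR) * 2))
    octave, degree = divmod(step, len(BB_MINOR))
    midi = base_midi + 12 * octave + BB_MINOR[degree]
    return 440.0 * 2.0 ** ((midi - 69) / 12.0)  # standard MIDI-to-Hz
```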


Camilla, Nicolás, and I created nine instruments using Csound.

The first three corresponded to the three most commonly occurring meteors and asteroids. These are subtle "pluck" sounds. The pluck opcode in Csound produces naturally decaying plucked-string sounds.

The last six instruments are louder, higher-frequency styles.
Instrument four is a simple oscillator.
Instruments five, six, and eight are VCOs (analog-modeled oscillators) with a sawtooth waveform.
Instrument seven is a VCO with a square waveform.
Instrument nine is a VCO with a triangle waveform.

linseg is an opcode we used to add some vibrato to instruments 6 - 9. It traces a series of line segments between specified points, generating control or audio signals whose values pass through two or more specified points.
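What linseg computes can be sketched in Python (the breakpoints in the example envelope are illustrative):

```python
# Piecewise-linear interpolation through (time, value) breakpoints,
# in the spirit of Csound's linseg: hold the end values outside the range.
def linseg(t, points):
    if t <= points[0][0]:
        return points[0][1]
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return points[-1][1]
```

A vibrato-depth envelope like linseg(t, [(0, 0.0), (1, 1.0), (2, 0.0)]) rises for a second, then falls back.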

Each instrument's a-rate signal takes variables p4, p5, and p6 (which we set to frequency, amplitude, and pan) that correspond to values found in the JSON file under each instance of a meteor/asteroid near Earth. The result is a series of plucking sounds with intermittent louder, higher-frequency sounds with some vibrato. The former represent the more common, smaller meteors and asteroids, and the latter represent the rarer asteroid and meteor types.


Meteor art by SANTTU MUSTONEN, which I manipulated using Photoshop. Accidental coding poetry by Nicolás Peña-Escarpentier. Photo by me.


Description of our code by Nicolás E. ~ See the full project on GitHub here


Frenemies in the Greek Gallery/Comoediae Agni

Museum Frenemies/Ram Bearer's Comeuppance/Comoediae Agni


Scene 1. "brevis victoria"

The original sketch was for Scene 1, "Lamb Bearer's Comeuppance." The title became "Brief Victory" when I added the final part of the scene, which dethrones the lamb from the man's body.


Additional end of scene 1, where the lamb realizes its victory was short.



Scene 2. "speranza. giocare"

Lamb grabs man's head and shakes it with satisfaction in its teeth, like a dog that caught a squirrel or a dog toy. Wags its tail with glee. This continues.

Scene 3. "exhalationem spiritus"

Lamb blows at man's head, "TBTBTBB," playfully. The head simply falls off. Possibly the bearer sculpture was never alive like the lamb sculpture is. Did the lamb ever have a friend or foe? It vomits in despair, or possibly lets out its spirit. At the end of the scene the wings and the trumpet are uplifting.




I am thinking of adding another few frames dedicated to the lamb vomiting up a strong stream of some sort of life force (kind of like Lynch's Garmonbozia, pain and suffering). It will come out in the form of mist that isn't affected by gravity and sort of floats out, before or after the lamb opens its eyes wide.


References and source materials

In making the video I used the following images: lamb bearer - Kriophoros, Nike of Samothrace (wing), and Deux chiens de Chantilly by Fanfareau e Brillador (tail).


Sound Effects

The sound will be slapstick and unsubtle, like in Terry Gilliam's animations.



Terry Gilliam's animations for Monty Python. 

Cyriak's "Baa"

And another animation that I have yet to find, shown to us in class. 

AI-generated voice reading every comment on a 2-hour Moonlight Sonata video on YouTube + reading Amazon reviews of Capital Vol. 1

Code is on GitHub

It was a pleasure performing this piece at Frequency Sweep #2 at Babycastles in New York, along with other fellow ITP students and alumni (see video below).

How I did it

In this project for Detourning the Web at ITP, I scraped YouTube comments from a 2-hour looped version of the first movement of the Moonlight Sonata and placed them into a JSON file. I then used Selenium, a Python library, to write a script that feeds the comments from the JSON file into Lyrebird, which reads each comment out loud in my own AI-generated voice. I had previously trained Lyrebird to sound like me, which adds to the unsettling nature of the project. I based my Selenium code on the code that Aaron Montoya-Moraga wrote for his automated emails project.
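The comment-feeding side of the script can be sketched like this (the JSON layout and field names are my assumptions; the Selenium calls that actually type each comment into the voice-generation page are only summarized in a comment):

```python
import json

# Load the scraped comments and return the non-empty texts in order.
# In the real script, each returned string is then typed into the
# voice-generation page via Selenium (locate the text field, send keys).
def load_comments(path):
    with open(path) as f:
        data = json.load(f)
    # assumed layout: a list of {"author": ..., "text": ...} objects
    return [c["text"] for c in data if c.get("text")]
```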


The concept

The work explores themes of loneliness, banality, and anonymity on the web. The comments read out loud give voice to those who comment on this video. The resulting portrait ranges from banal to uncomfortable to extremely personal to comical and even endearing. Online communities have been an interesting place to observe humanity. Often, it’s where people say things they refrain from discussing in the open.

The piece is meant to be performed live. The screen recording below shows what the experience is like. 





This is a separate but similar project that also uses  Selenium.

For Capital Volume 1, I had Lyrebird simply read its Amazon reviews one by one. I'm interested in exploring online communities and how they use products, art, or music as a jumping-off point for further discussion and as forums for expressing their feelings and views. Often people say things online that they cannot say anywhere else, and it's an interesting way to examine how people view themselves and their environment.

The piece is meant to be performed live. The screen recording below shows what the experience is like. 

A new addition to Sound and Pattern Series

This is a new design for the Sound and Pattern Series, an audiovisual piece. I have yet to find a sound and animation style for these shapes, but I'm happy with the progress so far. The minimal composition already seems to dance, and very subtle movements along with soothing ticking noises are in the works for this meditative addition.



Persistence Attempt

For the Dynamic Web Development course at ITP, I made servers that take in and output data. I have attempted to use MongoDB, a database, as well as JSON files, to store the values users entered into the text fields in the apps I created.


Approach one: MongoDB

While trying to run this server, I ran into an issue with the "save" property on line 114, "{"name":textvalue}, function(err, saved) {". This was confusing since I only had 123 lines in my code.

I tried to find other ways to write the code, but to no avail. Every MongoDB tutorial was slightly different: some suggested npm-installing mongoose, others instructed me to download Robo 3T, and nothing seemed to work.


Approach two: JSON files

I attempted to save data to a JSON file and simply render the data file in the EJS template. I watched these videos by Coding Rainbow and referenced this Stack Overflow page. However, that didn't seem to work: the names.JSON file in my "public" folder did not take any of the writes and remained unaltered. I have yet to make this work. For now the data collected lives only on the server in temporary memory, but the app is able to spit out some input as output in a story. For now, that's as dynamic as my web development gets.
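For reference, the pattern JSON-file persistence needs is read, append, rewrite the whole file. A Python sketch of the idea (the actual app is in Node; the file layout and field name here are assumptions):

```python
import json, os

# Append one record to a JSON file by rewriting the whole document;
# a JSON file can't be appended to in place the way a log file can.
def save_name(path, name):
    names = []
    if os.path.exists(path):
        with open(path) as f:
            names = json.load(f)
    names.append({"name": name})
    with open(path, "w") as f:
        json.dump(names, f)
    return names
```

One common cause of a JSON file that never changes is appending raw text to it instead of rewriting the whole document.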


Next Steps

I'll make more attempts at persistence soon. I need to figure out which way is better and easier: a database or JSON files. My hunch is that figuring out MongoDB will pay off, because it's much more elegant and private than simply collecting all the user input and letting it live unsecured on a public server.

The result is something like this


All the code can be found on my GitHub.