CDP: Acid Etching Prototyping Lab 1

We learned how to acid etch a circuit board. Using the Op Amp example, I designed a circuit in Eagle, and below is documentation of the process:

Printing onto the laminate surface, then running it through a laminating machine to transfer the design onto the copper plate:

photo 1 (28)

Water bath to remove laminate sheet:

photo 2 (27)

Once the laminate sheet is printed onto the copper surface, dip and agitate the board in the acid bath:



photo 3 (21)

Final Result:

photo 4 (13)

Code of Music Week 2

Week 2 Assignment:

  • Create a melody sequencer prototype, and post it to your class blog. It can be either a code or a design prototype, but it should be more than a sketch.

For this week’s assignment, I created a 16-step melodic sequencer using Pure Data (Pd-extended). The melody is quite dissonant, as I wanted to play around with how it would sound. In Pd, I created sliders to control the sequencer; each slider takes MIDI note numbers ranging from 48 (note name: C3) to 72 (note name: C5). In addition, I added a reverb with wet/dry/room-size parameters as well as a delay. A link to my GitHub file is here

Screenshot of a part of my code:

Screen Shot 2014-09-21 at 9.08.19 PM
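The patch itself is in the screenshot and the GitHub file above; as a rough illustration of the same logic, here is a minimal Python sketch of a 16-step loop over MIDI notes in that 48–72 range. The step values and tempo are made up for illustration, and the standard MIDI-to-frequency conversion stands in for Pd's mtof object:

```python
import time

# 16 steps of MIDI note numbers in the patch's range (48 = C3 ... 72 = C5).
# These particular notes are invented; the actual melody lives in the Pd sliders.
steps = [48, 51, 55, 60, 62, 48, 66, 72, 70, 48, 55, 61, 49, 58, 63, 72]

def midi_to_freq(note):
    """Standard MIDI-note-to-frequency conversion (A4 = MIDI 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12.0)

bpm = 120
step_dur = 60.0 / bpm / 4  # sixteenth notes

while True:  # loop forever, like a metro-driven counter in Pd
    for note in steps:
        # In the Pd patch this value drives an oscillator; here we just print it.
        print("step -> MIDI %d (%.1f Hz)" % (note, midi_to_freq(note)))
        time.sleep(step_dur)
```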

Code of Music Week 1

  • Design exercise: pick an online step sequencer; play with it for 10 minutes; take two random ‘Oblique Strategies’ cards; re-design the sequencer following them, or design a completely new one.

For the first day of class, we did an exercise where we played with an online 16-step sequencer app. My partner, Devyn Curry, and I used that app as well as Figure, the Propellerhead iPhone app, which I had on my phone. The first app was a bit awkward to work with, as it wasn’t quite intuitive. There were several button options that didn’t have labels on them, so it was hard to tell what the hell it was doing. I really hate it when interfaces are designed this way, as the user should be able to recognize what it is that they are doing. Why the ambiguity? Argh. It also tried to mimic a real hardware sequencer, which is OK I suppose, but it didn’t consider the interaction possibilities of playing on a computer or mobile device. On many levels, the Figure app is quite successful in its design, and fairly easy to learn to use, as the directions are quite clear. The sound is also pretty awesome, and it’s a good tool if you just want to sketch out some music tracks.

We were then instructed to randomly choose two phrases from the Oblique Strategies card deck created by Brian Eno and Peter Schmidt to inform our sequencer design. These were the two phrases we had to work with:

1.) Remove ambiguities and convert to specifics.
2.) Balance the consistency principle with the inconsistency principle.

This is what happened:

photo (17)

This is Devyn and me dorking out on the explanation:

We essentially designed a super verbose sequencer with multiple functions for each sound, including an ADSR, two EQ parameters, cutoff/resonance, a touch ribbon, etc. What is pictured here is just the kick sound; the idea is to build this out into some sort of matrix, so that each of these super verbose parameters is also included for, say, a snare, hi-hat, cymbal, etc. If this were ever built, the thing would probably be almost 5 feet tall, if not more, and weigh something like 100 lbs. Pretty ridiculous, but super fun to design! Week 1 of Code of Music is awesome.
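Since the ADSR is the backbone of each per-sound channel in the design, here is a quick Python sketch of what a linear ADSR envelope looks like numerically. All parameter values are invented for illustration; the real thing would live in hardware:

```python
def adsr(t, attack=0.05, decay=0.1, sustain=0.7, release=0.3, gate_time=1.0):
    """Linear ADSR amplitude (0..1) at time t, for a note held gate_time seconds.
    All times are in seconds; the values here are placeholders."""
    if t < attack:
        return t / attack                                   # rise to peak
    if t < attack + decay:
        return 1.0 - (1.0 - sustain) * (t - attack) / decay  # fall to sustain
    if t < gate_time:
        return sustain                                       # hold while key is down
    if t < gate_time + release:
        return sustain * (1.0 - (t - gate_time) / release)   # fade out
    return 0.0

# Sample the envelope every 100 ms:
print([round(adsr(i * 0.1), 2) for i in range(15)])
```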

CDP Homework Week 1

First circuit design with Eagle:

A piezo connected to an Op Amp and out to speakers.

Screen Shot 2014-09-11 at 5.35.16 PM

Looking through a microscope:

Items: lace, half a B-vitamin pill, a metal foot from a MIDI device, and a circuit board I found in the CDP lab.

I was taking pictures with the lens that came with the microscope, but found that it didn’t take photos very well; the images were not very clear.

Update: I realized I was using the microscope camera wrong. The hardware is itself a microscope; I had been using it as a lens to look through the standing microscope’s eyepiece. Duh.

Photo on 9-11-14 at 5.51 PM Photo on 9-11-14 at 5.51 PM #2


Photo on 9-11-14 at 5.55 PM Photo on 9-11-14 at 5.58 PM Photo on 9-11-14 at 5.59 PM

Governing Dynamics of Software Final: beatlang


The only requirement is that it be a programming language, by some definition of “programming language” that we discussed in class or that you can justify.

beatlang by Hellyn Teng & Eamon O’Connor

A unique way to explore programming language design by letting users physically and rhythmically build their own code.


beatlang is an experiment in programming language design, as discussed in the Governing Dynamics of Software course, exploring alternative possibilities for how code can be written. This project seeks to bring physical interactivity to programming. We naturally use our senses of touch, hearing, and sight, so why not explore the possibility of designing a language that utilizes them?

The idea of beatlang is to generate a type of rhythmic language, letting users create their program by physically tapping out a rhythm on multiple, modular MIDI drum-pad surfaces. The rhythm pattern represents the source code, with Max/MSP as the host language. One possibility is to have the data call on itself, generating recursive rhythmic output.
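The real implementation lives in the Max/MSP patch documented below. As a rough Python sketch of the core idea (the tap timestamps, threshold, and token names are all assumptions, not our actual patch), the inter-onset intervals of a tapped rhythm could be quantized into a token stream like this:

```python
# Hypothetical sketch of beatlang's core idea: a tapped rhythm becomes a
# token stream that a host language can then interpret as source code.

def tokens_from_taps(tap_times, short=0.25):
    """Quantize inter-onset intervals (seconds) into SHORT/LONG tokens."""
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
    return ["SHORT" if dt < short else "LONG" for dt in intervals]

# e.g. a tapped pattern (timestamps in seconds) read off a piezo/drum pad:
taps = [0.0, 0.2, 0.4, 1.0, 1.2, 2.0]
print(tokens_from_taps(taps))  # ['SHORT', 'SHORT', 'LONG', 'SHORT', 'LONG']
```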

We have submitted this project to the ITP 2014 Spring Show and will see if the submission gets accepted.

Here is a video of how beatlang works (not sure why it only shows the link right now; in the process of fixing this):

Documentation of the Max/MSP code (which is the host language). Right now it’s driving Live’s 808 drum pad, using the kick and snare. However, we’re still figuring out a way to store and retrieve data, and may have figured out a patch for this…still in testing mode…


Documentation of building the first module:
photo 1 (25)

Debugging the piezo. Very frustrating moment….dark space…Thank you Kina for all your help!

photo 3 (20)

And it works….at least it’s generating sound through the Max patch (a video documenting this will be posted soon).

photo (12) copy

Switch we will incorporate:

photo 4 (12)


RWET Final: Kepler-186


The final project documentation should consist of (a) the text of the piece the student plans to read (or equivalent documentation in the form of video, still images, audio recording, etc. depending on the shape the piece takes) along with (b) a discussion of methodology and (c) the python source code for the project.

For the final project, I generated a poem that mashed up texts from scientific journals and technical manuals. I was particularly drawn to an article I read about Kepler-186f, recently discovered and considered the most Earth-like planet found to date. From there I looked into other journal articles about stars and moons and blended them into a single text file. I then researched different manuals and found a Whirlpool washing machine manual, which meshed quite well with the other texts. I wanted to see what type of poem could be generated by meshing these two text sources, and what type of experience could be evoked.

In addition, I incorporated a sound element and generated a visual form for the text, similar to what I did for the midterm project. I was inspired by calligrams, which I discovered through a link on the syllabus under Poetic Forms. I wanted to experiment with text and form, to see if this would create an added effect for the reading/text interpretation. Here is a link for information on calligrams:

The sound synthesis was created using an audio library I found called Pyo. It’s a pretty slick library and pretty straightforward to use. It has a GUI with a live interpreter built in, so you can change/create a synthesis on the fly. For this piece, I didn’t perform any live synthesis and just ran the code I had written. Here is a link to Pyo:

Here is an image of what the Pyo GUI looks like:

Screen Shot 2014-05-11 at 3.16.22 PM
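To give a sense of the library, here is a minimal Pyo sketch (a basic modulated sine, not the actual synthesis code from my piece; the frequencies and amplitudes are placeholders):

```python
from pyo import *

s = Server().boot()
s.start()

# A simple sine tone with a slow LFO wobbling its frequency around 220 Hz.
lfo = Sine(freq=0.5, mul=20, add=220)
osc = Sine(freq=lfo, mul=0.3).out()

# Opens Pyo's GUI with the live interpreter mentioned above.
s.gui(locals())
```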

Here is the sound I generated, and the source code can be found here:


This is the initial pass of the poem, and the source code is here. I performed this piece in class with this visual text form, speaking through a mic with reverb and delay effects filtered through Ableton Live to further max out the intergalactic feel of the poem.

Screen Shot 2014-05-11 at 3.21.07 PM

I utilized the TextBlob library to parse the texts using part-of-speech tags, to see if I could generate some sort of meaning with the randomly chosen words. Although the output did not always make much sense, it sometimes seemed to generate a veil of meaning.
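Roughly, the approach looked like this (a hedged sketch; the file name and the adjective-noun-verb template are placeholders, not my actual code):

```python
import random
from textblob import TextBlob

text = open("kepler_mashup.txt").read()  # hypothetical blended source file
blob = TextBlob(text)

# Bucket words by their part-of-speech tag ('NN' = noun, 'JJ' = adjective, ...).
by_tag = {}
for word, tag in blob.tags:
    by_tag.setdefault(tag, []).append(word)

# Fill a simple adjective-noun-verb template with random matches.
line = " ".join([
    random.choice(by_tag.get("JJ", ["strange"])),
    random.choice(by_tag.get("NN", ["planet"])),
    random.choice(by_tag.get("VBZ", ["drifts"])),
])
print(line)
```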

For the Unmanned Poetry Drone public performance, I worked out a better text form, generating a circle with the text to emphasize the planetary/intergalactic/space visualization. Here is an image, and the source code is here (I need to give a shout-out to my friend Aaron for helping with this! Thank you!).

Screen Shot 2014-05-11 at 3.16.07 PM

This was not easy to figure out at all, and this code was written separately from my initial text mash-up code. I wasn’t able to get it to work with the TextBlob parsing code I had already written, so I had to generate the text with the first program, save it as a text file, and then regenerate it with the new code. In the process it lost the parsing I had wanted to use, as the new code only generates the text by randomly choosing words. I’m trying to get this working correctly and will update it.
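As a rough illustration of the circle layout (this is not Aaron’s code, just a minimal sketch that places the characters of a text around a circle in a character grid):

```python
import math

def circle_text(text, radius=12):
    """Place the characters of text evenly around a circle in an ASCII grid."""
    size = radius * 2 + 1
    # Double the horizontal resolution to compensate for tall terminal characters.
    grid = [[" "] * (size * 2) for _ in range(size)]
    for i, ch in enumerate(text):
        angle = 2 * math.pi * i / len(text)
        y = radius + int(round(radius * math.sin(angle)))
        x = radius + int(round(radius * math.cos(angle)))
        grid[y][x * 2] = ch
    return "\n".join("".join(row) for row in grid)

print(circle_text("the most earth-like planet found to date "))
```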



This project has two steps. You must:

  • Devise a new poetic form.
  • Create a computer program that generates texts that conform to the new poetic form you devised.

Your poetic form could be something as simple as “Each line must begin with the letter ‘A'” or something as sophisticated as Mac Low’s diastics.

Your presentation and documentation for this project should include the following:

  • The name of your poetic form, and a thorough description of how it works
  • The source code for the program you used to generate poems that follow your form
  • A number of “poems” that your program generated (at least three), one of which you will read aloud during your presentation

Consider the following when evaluating your work:

  • How well does the output of your computer program conform to your invented poetic form? Could a human do it better?

I think the output meshed quite well with this concept. A human could probably generate more interesting meaning with the words, but for the form, perhaps the computer won this time.

  • How does your choice of source text (your “raw material”) affect the character and quality of the poems that your program generates?

I think it emphasized the formal and sterile qualities of the text even more, through the form, the randomized word choice, and the sounds from the say command.


  • The name of your poetic form, and a thorough description of how it works

The poetic form I created is called Diverge Say Say Say.

Math, physics, chemistry, and science books of all variations, as well as manuals, are a source of inspiration for me. I like playing with the text: the words and phrases are usually quite dry, formal, and sterile, and it’s quite fun to take them out of their element and re-appropriate them in a different context. I’ve come up with titles for songs, names for music projects, etc. using this technique, pulling random words from random pages and sticking them together. Hence the moniker I use for one of my music projects: Less Effectives.

For the midterm project, I wanted to develop this concept and see if I could create a Python program for it. In addition, I wanted to incorporate a visual element for the form of the text. Earlier in the semester I had also discovered the say command in the OS X terminal, which is so effing rad, so I added a sound element to this project as well. The whole thing is a digital collage/montage of sorts.

For the source text, I searched for various math and physics topics on Wikipedia. The topics I chose were random number generators, the Monte Carlo algorithm, chaos theory, and non-linear systems. I took parts of the text, created a mash-up, and saved it as a single text file. In the code, I stripped out all the “a”, “the”, “to”, “it”, etc. from the text, so that each remaining word was highlighted, creating a more minimal impact.
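The filtering step amounted to something like this (the stopword list and file name here are abbreviated placeholders, not my full code):

```python
# Strip the little connective words so each remaining word stands alone.
stopwords = {"a", "the", "to", "it", "of", "and", "in", "is"}

with open("math_physics_mashup.txt") as f:  # hypothetical mashed-up source file
    words = f.read().split()

kept = [w for w in words if w.lower().strip(".,;:()") not in stopwords]
print(" ".join(kept))
```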

I then did some research on how to generate shapes with text in Python and found a number of sources that helped me get started with the code, mainly sites that show how to create games with Python, like this one (which is also pretty sweet; I want to delve into it at some point). I created four different designs, which are shown below.

Source code for the project can be found here

To perform this live, I opened three terminal windows and ran the say command with a different voice in each window. I played them sequentially, timing each window’s start to the beat of a word, a pause, etc. This can be done with the same text form, or you can generate multiple text forms and play them together. For the in-class live performance, I used the Cello, Whisper, and Zarvox voices, which actually blended together quite nicely. It further highlighted the minimal qualities of the text form and words.
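Scripting the three voices from Python instead of typing into three terminals would look something like this (say’s -v flag selects the voice; the poem file name and offset timing are guesses at what I did by hand):

```python
import subprocess
import time

text = open("poem.txt").read()  # hypothetical generated poem file

# Launch each voice slightly offset, the way I staggered the three
# terminal windows manually during the performance.
for voice in ["Cello", "Whisper", "Zarvox"]:
    subprocess.Popen(["say", "-v", voice, text])
    time.sleep(1.5)  # offset guessed for illustration
```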

I also wanted to add that I was able to sync this project up with some synthesizers at home; it synced quite well and made some pretty interesting experimental sounds. I had brought my Korg Monotribe to class and intended to perform with it, but unfortunately the cable I had didn’t work, so I left it out. Perhaps this is a concept I will build on for the final….

Screenshot of what I did live in my terminal:

Screen Shot 2014-03-27 at 8.48.34 PM



Screenshot of the different text forms:

Screen Shot 2014-03-27 at 5.15.10 PM         Screen Shot 2014-03-27 at 5.14.28 PM

Screen Shot 2014-03-27 at 5.13.59 PM         Screen Shot 2014-03-27 at 5.13.29 PM

Screen Shot 2014-03-27 at 5.13.00 PM