pineArt Box is an interactive installation that invites the viewer to create their own work of art using pine needles and digital color manipulation. After drawing their image in pine, they can adjust the RGB values and push a button to save their completed work. Made in collaboration with Kevin Stirnweis, it was exhibited in December 2015 at ITP’s Winter Show.
The experience was created using an Arduino microcontroller and p5.js.
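The Arduino side reads the three potentiometers and the button and hands those values to the p5.js sketch. A minimal sketch of how the browser side might parse one line of that serial stream, assuming a hypothetical comma-separated format we'd define ourselves (three pot readings plus the button state):

```javascript
// Hypothetical parser for one serial line from the Arduino.
// Assumed format (our own convention, not a standard): "r,g,b,button\n",
// e.g. "512,300,800,1" — three 10-bit pot readings and the button state.
function parseControls(line) {
  const [r, g, b, button] = line.trim().split(',').map(Number);
  return { r, g, b, pressed: button === 1 };
}

console.log(parseControls('512,300,800,1'));
// → { r: 512, g: 300, b: 800, pressed: true }
```

In the real sketch these values would arrive over a serial bridge and feed the color adjustment and capture logic.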
This is last week’s prototype for the design of the pineArt Box.
We have most of the components of the drawing box operating. There are two images on the canvas: the one on the left is the live stream from the webcam; the one on the right is captured when a user pushes a button. The user manipulates the image with the needles and three potentiometers, each adjusting the R, G, or B value – then “clicks” their finished drawing into being.
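The pot-to-color logic can be sketched in plain JavaScript (hypothetical helper names; the actual sketch does this inside p5.js with the live pixel array). Each potentiometer reading is a 10-bit value (0–1023) from the Arduino's analog pins, mapped to a 0–255 scale factor per channel:

```javascript
// Map a 10-bit potentiometer reading (0-1023) to a 0-255 channel scale.
function potToChannel(potValue) {
  return Math.round((potValue / 1023) * 255);
}

// Apply the three channel scales to one RGBA pixel.
// A scale of 255 leaves the channel unchanged; 0 zeroes it out.
function adjustPixel([r, g, b, a], rScale, gScale, bScale) {
  return [
    Math.round((r * rScale) / 255),
    Math.round((g * gScale) / 255),
    Math.round((b * bScale) / 255),
    a, // alpha untouched
  ];
}

// Example: roughly halve the red channel, keep green and blue.
console.log(adjustPixel([200, 100, 50, 255], 128, 255, 255));
// → [ 100, 100, 50, 255 ]
```

Pushing the button would freeze the current webcam frame, run every pixel through `adjustPixel`, and draw the result on the right side of the canvas.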
The following are images from play testing in class last week. Prototype:
Even without the tech element people seemed to respond positively to the interaction with the needles.
This week Kevin and I met a few times to play test the image element of PineBox (working title). On the first pass we used a webcam with a small box. I quickly learned it most likely cannot be a light box if we want to hide the camera under the drawing plane. I’m pretty wedded to the idea of hiding the camera, you know, for magic’s sake. I figure at the very least I’ll set up lights around the user to make a pseudo-lightbox in reverse. But for now we’ve just been playing with ambient light – it’s always lighter outside the box. We’ll return to this problem when we finalize the camera as well as the material for the box’s surface (frosted plexi?).
The webcam’s image quality isn’t good enough: the pixels are not differentiated enough in color value and contrast.
During last week’s play testing someone suggested I try pulling the RGB values from the photo-image of the pine needles to create a graphic translation in p5.js. Above is a quick Photoshop sketch I did to illustrate what I imagine could be translated from the actual material the user will be interacting with, at the very simplest visual and code level.
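At that simplest level, the translation could be a brightness threshold: sample each pixel, and treat dark needle pixels as drawn marks while the lighter background drops out. A hypothetical sketch in plain JavaScript (the real version would read p5's `pixels` array after `loadPixels()`):

```javascript
// Hypothetical sketch of the pixel-to-graphic translation.
// pixels: flat RGBA array, as p5's loadPixels() provides.
// Returns one boolean per pixel: true where a pine needle
// (a dark region) is detected against the lighter background.
function needleMask(pixels, threshold) {
  const mask = [];
  for (let i = 0; i < pixels.length; i += 4) {
    // Simple brightness estimate: average of R, G, B.
    const brightness = (pixels[i] + pixels[i + 1] + pixels[i + 2]) / 3;
    mask.push(brightness < threshold);
  }
  return mask;
}

// Two pixels: a dark "needle" pixel, then a bright background pixel.
const pixels = [30, 40, 20, 255, 230, 225, 240, 255];
console.log(needleMask(pixels, 100));
// → [ true, false ]
```

The mask could then be redrawn as strokes, dots, or any other graphic mark – the translation layer is where the visual language of the piece gets decided.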
A few days later, after talking with Moon, we set it up again with a Kinect. The photo image quality is certainly better, but because the whole point is that the physical drawing sits on a two-dimensional plane, I’m not so sure we need it. The Kinect is all about sensing distance/depth from the lens – but I’m more interested in flat. The next test: a Lumix or other still camera. Need to work on the p5.js aspect, for sure.
We’ve added the sound of walking in the woods, which will be triggered while someone is drawing. I’m thinking headphones. Usually I really don’t like headphones in gallery settings, but I think here it makes sense. It isolates the person and makes the act of drawing with the needles more focused and meditative. The smell of pine and the sounds of leaves cracking underfoot.
For my Physical Computing final I would like to make a device that enables a user to create an image using natural elements – probably pine needles. The user will first “draw” with the needles, then push a button, and a translation of their drawing will appear as an image in p5.js.
First pass at a prototype. The canvas is too small.