I have tested the first draft of my idea to check that the programming works correctly. As I am still unsure which objects to use, I decided to test them all to see which works best. Here are the clips showing my tests:
From the tests, I found that the object-oriented programming all works as expected: when the objects are on a white pixel they descend, and when they reach a black pixel they ascend. I did, however, find that in the software icons example they tended to fall in groups at times, which is something I will have to resolve if I choose those objects. I think the bright colours on the white background work well, as they stand out and make the piece brighter and more appealing. As you can see from the tests I have carried out, the brightness thresholding in each example varied slightly. The threshold level in each is the same, so this is due to the different time of day and the brightness of the room. From this I have learnt that when displaying my piece, I need to adjust the threshold to the level that best suits the brightness of the public space in Weymouth House. Next, I will show my pieces to users to gain feedback on which object they think works best and any other improvements they may suggest.
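The descend/ascend rule from these tests can be restated as a small check. This is a plain-Java sketch of the idea rather than my actual Processing code; the luminance weights, threshold value and step size are just illustrative:

```java
public class ThresholdDemo {
    // Approximate brightness of an RGB pixel (0-255 per channel),
    // using the common Rec. 601 luminance weights (an assumption --
    // Processing offers its own brightness() function).
    static float brightness(int r, int g, int b) {
        return 0.299f * r + 0.587f * g + 0.114f * b;
    }

    // An object descends while the pixel under it reads as "white"
    // (brightness above the threshold) and ascends otherwise.
    // Returns the new y position; the step of 1 is a placeholder.
    static int step(int y, int r, int g, int b, float threshold) {
        return brightness(r, g, b) > threshold ? y + 1 : y - 1;
    }

    public static void main(String[] args) {
        // White pixel: descend. Black pixel: ascend.
        System.out.println(step(100, 255, 255, 255, 128)); // 101
        System.out.println(step(100, 0, 0, 0, 128));       // 99
    }
}
```

Raising or lowering the threshold argument is exactly the adjustment needed to suit the room's brightness.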
Today, I learnt about the map function, which 're-maps a number from one range to another'. I used existing knowledge of importing images and the 2D pixel array formula, x+(y*width), and mapped the pixel colours of the image to a height location on the screen determined by the mouse. Here is the code:

Learning the map function was useful as it advances my programming knowledge; however, I don't plan on using this function in my piece. If I were to relate this mapping idea more to the brief, I would set up a video capture so that the colour pixels of the video are mapped.
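For reference, the calculation behind Processing's map() is simple to restate. This is a plain-Java version with illustrative numbers:

```java
public class MapDemo {
    // Re-maps value from the range [start1, stop1] to [start2, stop2],
    // which is what Processing's map() computes.
    static float map(float value, float start1, float stop1,
                     float start2, float stop2) {
        return start2 + (stop2 - start2) * ((value - start1) / (stop1 - start1));
    }

    public static void main(String[] args) {
        // Halfway through 0..10 is halfway through 0..100.
        System.out.println(map(5, 0, 10, 0, 100)); // 50.0
        // e.g. mapping a pixel brightness (0..255) to a y position (0..480):
        System.out.println(map(255, 0, 255, 0, 480)); // 480.0
    }
}
```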
Processing (2014). Processing: Map [online]. Available from: https://www.processing.org/reference/map_.html [Accessed 28 November 2014].
In order to help me to further develop ideas for my piece, I have been looking at existing interactive motion pieces that interest me. These are the main two examples that stood out to me when carrying out my research. I will analyse what I like about each one and will reflect on how they have inspired me.
I like the fact that this installation is based on pixels, using black and white circles to create silhouette-like figures. It captures people's movements, but I feel it is more interesting than a simple video capture. I like how circles are used to create the figures on screen and think it creates quite a good effect. The use of black and white contrasts the figures against the background, which I think works really well.
I really like the use of colour in this piece; the black background with bright luminous colours works well and stands out a lot to me. When someone moves, the multiple lines all move randomly, changing size and colour. This piece is more abstract and artistic, and the movement of the person creates a nice abstract visual.
Similarly to these examples, I would like my interactive piece to be based on motion and movement. Carrying out this research has been very beneficial and has really helped me to gain more ideas for my own piece. From this, I have decided that I would like my piece to capture people as silhouettes rather than in full detail. Hopefully this will encourage more people to engage with the piece, as I'm aware that some people don't like the idea of being on camera. I would also like to use bright colours, possibly on a black background, to make it eye-catching and appealing to users. Relating back to my previous Processing workshop, I would like to incorporate objects and vectors into my piece, so that the motion of the people can obstruct and alter the direction of the objects. Next, I will produce a post finalising my idea as well as looking at some motion tutorials online.
Hype, 2012. Iris [online]. Available from: https://www.youtube.com/watch?v=qhdG7OltXnU [Accessed 17 November 2014].
Universal Everything, 2013. Nike Flyknit [online]. Available from: http://www.universaleverything.com/projects/nike-flyknit/ [Accessed 17 November 2014].
I have been looking at some Processing video examples that I find interesting in order to help me develop ideas for my interactive piece. I have also been watching some of Daniel Shiffman's Processing tutorials to improve my understanding of the Processing skills I have already learnt, as well as to develop new ones. I have learnt more about the Capture class and the steps required to set up video capturing. I am going to analyse some video capturing examples and explore how I could make them interactive within public spaces.
- Video Pixelation – As with images, you can get the pixels of a video capture. You still use nested for loops and the pixel array formula (x+(y*width)), but you load the video capture rather than an image. An idea relating to video pixelation could be that if people are moving at a certain speed they become pixelated, whereas if they stand still in the space they come into focus.
- Motion Detection – This is something I would like to incorporate into my interactive piece, but I need to be aware that at busy periods there will be lots of movement within the public space. An idea related to motion is that when someone walks into the public space, a new class/object is created which then follows the fastest movement within the space (motion tracking).
- Colour Tracking – Instead of tracking motion within the space, I could set the camera to track a certain colour. For example, when someone walks past wearing red, an object could follow them on the public screen. However, this would become complex if there were lots of people wearing the same colours.
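The pixelation idea boils down to averaging blocks of pixels with the x+(y*width) formula. A plain-Java sketch, using a tiny array of grey values in place of a real video frame:

```java
public class PixelateDemo {
    // Average the grey values inside one s-by-s block of a frame stored
    // as a 1D array, using the index formula x + y * width. Drawing that
    // average as one large square gives the pixelated look.
    static int blockAverage(int[] pixels, int width, int bx, int by, int s) {
        int sum = 0;
        for (int y = by; y < by + s; y++)
            for (int x = bx; x < bx + s; x++)
                sum += pixels[x + y * width];
        return sum / (s * s);
    }

    public static void main(String[] args) {
        int[] frame = {0, 0, 100, 100,
                       0, 0, 100, 100}; // a 4x2 "frame" of grey values
        System.out.println(blockAverage(frame, 4, 0, 0, 2)); // 0
        System.out.println(blockAverage(frame, 4, 2, 0, 2)); // 100
    }
}
```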
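One common way to detect motion is frame differencing: comparing each pixel of the current frame with the previous one. This plain-Java sketch works on grey values; a real sketch would compare full colour frames from the Capture class:

```java
public class MotionDemo {
    // Total absolute difference between two frames of grey values;
    // a large sum means lots of movement in the space, so a threshold
    // on this sum could decide when the piece reacts.
    static int motionAmount(int[] previous, int[] current) {
        int sum = 0;
        for (int i = 0; i < current.length; i++)
            sum += Math.abs(current[i] - previous[i]);
        return sum;
    }

    public static void main(String[] args) {
        int[] prev = {10, 10, 10, 10};
        int[] curr = {10, 10, 200, 10}; // one pixel changed a lot
        System.out.println(motionAmount(prev, curr)); // 190
    }
}
```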
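Colour tracking can be sketched as finding the pixel whose colour is closest to a target colour; an object would then follow that location. A plain-Java illustration with a three-pixel "frame" (real tracking would run this over every frame of the capture):

```java
public class ColourTrackDemo {
    // Squared distance between two RGB colours.
    static int distSq(int r1, int g1, int b1, int r2, int g2, int b2) {
        int dr = r1 - r2, dg = g1 - g2, db = b1 - b2;
        return dr * dr + dg * dg + db * db;
    }

    // rgb holds flattened {r,g,b} triples; returns the index of the
    // pixel closest to the target colour.
    static int closestPixel(int[] rgb, int tr, int tg, int tb) {
        int best = 0, bestD = Integer.MAX_VALUE;
        for (int i = 0; i < rgb.length; i += 3) {
            int d = distSq(rgb[i], rgb[i + 1], rgb[i + 2], tr, tg, tb);
            if (d < bestD) { bestD = d; best = i / 3; }
        }
        return best;
    }

    public static void main(String[] args) {
        int[] frame = {0, 0, 255,  250, 10, 10,  0, 255, 0}; // blue, red-ish, green
        System.out.println(closestPixel(frame, 255, 0, 0)); // 1 (the red-ish pixel)
    }
}
```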
Looking at these examples has helped me to improve my Processing knowledge of video capturing and motion detection/tracking. From this, I would like to incorporate motion in order to make my piece interactive, but I still need to fully develop an idea. To do this, I am next going to look at some existing interactive pieces that I find interesting.
Processing (2014). Processing [online]. Available from: http://www.processing.org/ [Accessed 11 November 2014].
Throughout Reading Week, I have been experimenting with processing and have been looking at several different processing examples. This has enabled me to gain more ideas for my piece and possible concepts that I can explore. I have edited the examples used and will briefly analyse each one to explain how I could possibly make it interactive within the public space.
- Image Layering (Transparency)
This idea relates back to the previous post about Palimpsest and the idea of layers. In the above example, I loaded 3 images into Processing and altered their opacity so that all 3 images remain visible layered upon one another. I could make this idea interactive by setting it so that every few minutes the camera takes an image of the public space; the images are then layered on top of one another with reduced opacity so that previous images are still visible. This idea could relate to the concept of representation, as the layers blur what each image represents by merging the images together. Also, different people would see different layered images on the displays depending on when they are in Weymouth House, and individuals may interpret the displays differently. Some may notice the past images whereas others may not, hence the representation of the piece will be open to arbitrary decodings.
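The layering effect comes down to alpha blending: each layer's channels are mixed with what is already on screen. A plain-Java sketch of the blend for one colour channel (the alpha values are illustrative):

```java
public class LayerDemo {
    // Blend one channel of a new image over what is already on screen.
    // alpha is 0-1; with a low alpha, previous layers stay visible.
    static int blend(int top, int bottom, float alpha) {
        return Math.round(top * alpha + bottom * (1 - alpha));
    }

    public static void main(String[] args) {
        // Half-transparent white over black lands in the middle.
        System.out.println(blend(255, 0, 0.5f)); // 128
    }
}
```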
- Image Pixel Distortion
This idea comes from one of the Processing examples – Explode – where the mouse's horizontal location controls the breaking apart of the image, mapping pixels from a 2D image into 3D space. I could make this idea interactive so that each time someone walks past the camera the pixels move; hence the busier the public space is, the more distorted the image becomes.
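The core of the effect can be sketched as the mouse position scaling how far each pixel is displaced. This plain-Java fragment is an illustration of that idea, not the Explode example's actual code:

```java
public class ExplodeDemo {
    // The horizontal mouse position (0..width) controls how far a pixel
    // is pushed out of its grid position; here brighter pixels travel
    // further, which is an assumption about the effect's look.
    static float displacement(int brightness, int mouseX, int width) {
        float amount = mouseX / (float) width; // 0..1
        return brightness * amount;
    }

    public static void main(String[] args) {
        System.out.println(displacement(200, 0, 400));   // 0.0 -- mouse at left, image intact
        System.out.println(displacement(200, 400, 400)); // 200.0 -- fully broken apart
    }
}
```

For the interactive version, the motion in the room would replace mouseX as the scaling input.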
This idea comes from another Processing example – Pointillism – where the mouse's horizontal location controls the size of the dots, creating a simple pointillist effect using ellipses coloured according to pixels in an image. I experimented with editing the code so that it used rectangles instead of ellipses, creating a different effect. To make this idea interactive within the public space, I could capture an image of Weymouth House and then, each time someone walks past the camera, have a certain number of pixel ellipses/rectangles appear.
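The pointillist loop can be restated as two small steps: sample a random pixel's colour using x+(y*width), and size the dot from the mouse position. A plain-Java sketch with a tiny placeholder image:

```java
import java.util.Random;

public class PointillismDemo {
    // Pick a random pixel and read its colour with x + y * width;
    // in the real sketch an ellipse (or rectangle) of that colour
    // is then drawn at (x, y).
    static int samplePixel(int[] pixels, int width, int height, Random rng) {
        int x = rng.nextInt(width);
        int y = rng.nextInt(height);
        return pixels[x + y * width];
    }

    // Dot size scales linearly with the horizontal mouse position.
    static float dotSize(int mouseX, int width, float minSize, float maxSize) {
        return minSize + (maxSize - minSize) * (mouseX / (float) width);
    }

    public static void main(String[] args) {
        int[] img = {1, 2, 3, 4, 5, 6}; // a 3x2 "image" of grey values
        System.out.println(dotSize(200, 400, 2, 20)); // 11.0
        System.out.println(samplePixel(img, 3, 2, new Random()) > 0); // true
    }
}
```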
From looking at these examples, I may decide to do something that relates to the concept of representation and the idea of how contemporary audiences decode multiple different meanings and messages from pieces. I would definitely like to make it interactive by basing it around motion and movement within the public space. I still need to decide what images to use, or whether to base it on video capturing. Next, I am going to look at some video capture examples to help me develop my ideas and experiment further with Processing, learning more about how to capture video and track motion within a public space.
Today, I learnt how to add images into Processing, and how to scale and tint them. To begin, I learnt the basics of how to add an image to the sketch and load it. I used existing knowledge of mouseX and mouseY so that the image followed the mouse around the screen, and I applied a blue tint to the image…
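As I understand it, Processing's tint() multiplies each channel of the image by the tint colour scaled to 0–1, so tint(0, 0, 255) keeps only the blue channel. A plain-Java sketch of that per-channel multiply:

```java
public class TintDemo {
    // One channel of the tint: the image value is scaled by the
    // tint value, with 255 meaning "leave this channel unchanged".
    static int tintChannel(int imageValue, int tintValue) {
        return imageValue * tintValue / 255;
    }

    public static void main(String[] args) {
        // A grey pixel (200, 200, 200) under a blue tint(0, 0, 255):
        System.out.println(tintChannel(200, 0));   // 0   -- red removed
        System.out.println(tintChannel(200, 255)); // 200 -- blue kept
    }
}
```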
I then learnt about image pixels and how to work with them in two dimensions, as pixels are normally stored in a one-dimensional linear sequence. I used the Processing tutorial in order to understand how pixels work, and the diagram there is particularly useful.
To get the pixels of my image, first I had to load them. Then, I set up a nested for loop so that it continuously loops through all the pixels. I then used the formula x+(y*width) to calculate the one-dimensional location of each pixel in the two-dimensional grid, and coloured each pixel according to the image. I set the size of the pixels as an integer so I can easily change the value of 's' and the pixel size will change.
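The formula itself is worth pinning down: pixels are stored row by row, so the pixel at (x, y) lives at index x+(y*width) in the one-dimensional array. A plain-Java illustration:

```java
public class PixelIndexDemo {
    // One-dimensional index of the pixel at (x, y) in an image
    // stored row by row.
    static int index(int x, int y, int width) {
        return x + y * width;
    }

    public static void main(String[] args) {
        // In a 5-pixel-wide image: (0,0) is index 0, the second row
        // starts at index 5, and (4,2) is the last pixel of row 2.
        System.out.println(index(0, 1, 5)); // 5
        System.out.println(index(4, 2, 5)); // 14
    }
}
```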
I loaded another image into the sketch and used a conditional statement so that when the mouse is pressed, the image changes. I also made the pixels shake using the random() function. I added text to each of the images, setting the fill colour and positioning.
Gaining this knowledge of pixels was useful and it is something I could incorporate into my piece. The use of text is also a good skill and something I would possibly like to explore further.
Processing (2014). Processing: Images And Pixels [online]. Available from: https://processing.org/tutorials/pixels/ [Accessed 24 October 2014].