This week I’ve been doing a lot more thinking than making, which is good but means it’s been slightly less productive than I hoped, Portals-wise (mostly due to some outside projects taking over much of the week). I’m hoping that reflecting on what I’ve done so far this week will help me plan the next few days a little better. Hardware-wise, I finally got up the courage to break open the webcam mounting arms in order to figure out a way to mount them in the box more solidly and consistently. I was really surprised at how well made these little webcams are, especially the design of the little joints, which I’m re-using as mounting hardware for my specialized setup. The pic below shows the first “cam-pop” design; the latest version is a bit different.
Having designed the camera mounts, I’ve started to lay out the production-ready designs in Illustrator to be cut out on the laser. I also drafted up the production-ready design of the light boxes in Illustrator. I tried to go up to Hillside to get them cut but ran into a bunch of issues due to my lack of laser-cutting experience. Long story short, nothing got cut, but I learned a lot this time around. I’m hoping to get to cutting tomorrow morning, since I’m constrained by the laser lab hours.
In the meantime I’ve also begun to think more about the ability of the Avatar characters to be both inputs and outputs. While the last full-scale prototype was the little accordion player, I’ve been thinking about the affordances of having the photographer avatar in the real space and how it could become an input device for location-specific Portal content. In the image above you see a rough photo-robo (lacking any sort of character design). My little point-and-shoot camera happens to have a silly little feature that automatically takes a picture when it detects a smile. So I set it to that mode, turned on the Roomba, and set about trying to smile at it as it ran. It was a quick test, but enough to get a sense of the fun of it. In theory, a potential network could look something like this:
mini-avatar in Portal moves > large-avatar in Real World moves > camera captures images of smiling people in the location > using Eye-fi, geo-tagged images are instantly uploaded to Flickr > Flash app pulls in the images from Flickr > tagged images are displayed / tracked above the mini-avatar in Portals.
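The “Flash app pulls in the images from Flickr” leg of that chain can be roughed out pretty quickly. This is just a Python sketch of the API shape, not the actual Flash code — the API key, the tag name, and the sample response record are all placeholders:

```python
# Sketch of pulling geo-tagged photos from Flickr via flickr.photos.search.
# The real Portals app would be ActionScript; this only shows the request
# shape and the documented static-image URL scheme. API key and tag are fake.
import urllib.parse

API_ENDPOINT = "https://api.flickr.com/services/rest/"

def search_request_url(api_key, tag):
    """Build a flickr.photos.search request restricted to geo-tagged photos."""
    params = {
        "method": "flickr.photos.search",
        "api_key": api_key,
        "tags": tag,
        "has_geo": 1,          # only geo-tagged uploads (the Eye-fi adds the geo data)
        "format": "json",
        "nojsoncallback": 1,
    }
    return API_ENDPOINT + "?" + urllib.parse.urlencode(params)

def photo_url(p):
    """Map one photo record from the search response to its static image URL."""
    return "https://farm{farm}.staticflickr.com/{server}/{id}_{secret}.jpg".format(**p)

# A trimmed, made-up example of what one record in the response looks like:
sample_photos = [
    {"id": "12345", "secret": "abcdef", "server": "65535", "farm": 1},
]

if __name__ == "__main__":
    print(search_request_url("YOUR_API_KEY", "portals-avatar"))
    for p in sample_photos:
        print(photo_url(p))
```

The Flash side would then just loop over those URLs and load each image above the mini-avatar.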
This sort of helps complete the loop from physical to virtual, a few times over, I think. I’m still not sure how much of this system I actually want to flesh out, but I think prototyping it a bit further could prove interesting. John and Andrew have been playing around with trying to figure out how to hack the Roomba for their own enjoyment/experiments, so that possibility is a little closer than before.
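For reference, hacking the Roomba mostly means writing a few bytes over its serial port per iRobot’s published Serial Command Interface (SCI). A minimal sketch of the Drive packet, with byte values taken from that spec — untested against an actual unit, and the port name in the comments is a guess:

```python
# Sketch of Roomba Serial Command Interface (SCI) packets, per iRobot's
# public SCI spec. Velocity is in mm/s, turn radius in mm; both are signed
# 16-bit big-endian values following the Drive opcode.
import struct

START, CONTROL = 128, 130   # mode opcodes: passive, then safe mode
DRIVE = 137                 # Drive opcode: 4 data bytes follow

# Special radius value the SCI spec uses for "drive straight" (0x8000 signed):
STRAIGHT = -0x8000

def drive_packet(velocity_mm_s, radius_mm):
    """Encode a Drive command: opcode byte + velocity + radius (big-endian)."""
    return struct.pack(">Bhh", DRIVE, velocity_mm_s, radius_mm)

if __name__ == "__main__":
    pkt = drive_packet(200, STRAIGHT)   # 200 mm/s, straight ahead
    print(list(pkt))
    # To actually send it (pyserial; port name and baud are assumptions):
    # import serial
    # port = serial.Serial("/dev/ttyUSB0", 57600)
    # port.write(bytes([START, CONTROL]))  # wake the SCI, enter safe mode
    # port.write(pkt)
```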
Overall, though, I feel like I’ve hit a bit of a plateau this week. Maybe I’m just having some trouble figuring out my priorities for the project. I feel like I want to be making diagrams, making scenarios, designing characters, creating animations, building the interactive systems, hacking hardware, and hacking code, all at once. I also feel like I need to get into serious production mode next week before the big week 9 review.
Side Note Reference
My sister’s friend sent me a link to this project from 2009, which I hadn’t seen, but which seems pretty related to Portals, although different:
Digitie is a real-time communication channel between two different places.
These are linked by two apparatus, which enable communication by gesture.
To use them, participants put one of their hands into the device. The users’ hands are displayed together in a single combined image on a screen. Strangers of different languages, ages, cultures, and far-distant homes are able to get in touch with each other.