This week I’ve been doing a lot more thinking than making, which is good, but it means things have been slightly less productive than I hoped Portals-wise (mostly due to some outside projects taking over much of the week). But I’m hoping that reflecting on what I’ve done so far this week will help me plan the next few days a little better. Hardware-wise, I finally got the courage to break open the webcam mounting arms in order to figure out a way to mount them in the box more solidly and consistently. I was really surprised at how well made these little webcams are, and at the design of the little joints, which I’m re-using as mounting hardware for my specialized setup. The pic below shows the first “cam-pop” design; the latest version is a bit different.
Having designed the camera mounts, I’ve started to lay out the production-ready designs in Illustrator to be cut out on the laser. I also drafted up the production-ready design of the light boxes in Illustrator. I tried to go up to Hillside to get them cut but ran into a bunch of issues due to my lack of experience with laser cutting. Long story short, nothing got cut, but I learned a lot this time around. Hoping to get to cutting tomorrow morning, since I’m constrained by the laser lab hours.
In the meantime I’ve also begun to think more about the ability of the Avatar characters to be both inputs and outputs. While the last full-scale prototype was the little accordion player, I’ve been thinking about the affordances of having the photographer avatar in the real space and how that could become an input device for location-specific Portal content. In the image above you see a rough photorobo (lacking any sort of character design). My little point-and-shoot camera happens to have a silly little feature that will automatically take a picture when it detects a smile. So I set it on that mode, turned on the Roomba, and set about trying to smile at it as it ran around. It was a quick test, but enough to sort of enjoy it. In theory a potential network could look something like this:
mini-avatar in Portal moves > large-avatar in Real World moves > Camera captures images of smiling people in the location > Using Eye-Fi, geo-tagged images are instantly uploaded to Flickr > Flash app pulls in the images from Flickr > Tagged images are displayed / tracked above the mini-avatar in Portals.
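As a sketch of the Flickr step in that chain, here’s roughly how a script could ask for geotagged photos near a location and turn the results into image URLs. The API key and tag name are placeholders, and this assumes Flickr’s documented `flickr.photos.search` REST method and static-image URL scheme:

```python
import urllib.parse

FLICKR_REST = "https://api.flickr.com/services/rest/"

def build_search_url(api_key, tag, lat, lon, radius_km=1):
    """Build a flickr.photos.search request for geotagged photos
    near a spot. api_key and tag are placeholders to fill in."""
    params = {
        "method": "flickr.photos.search",
        "api_key": api_key,
        "tags": tag,
        "lat": lat,
        "lon": lon,
        "radius": radius_km,
        "has_geo": 1,
        "format": "json",
        "nojsoncallback": 1,
    }
    return FLICKR_REST + "?" + urllib.parse.urlencode(params)

def photo_urls(search_response):
    """Turn a parsed search response into static image URLs
    (Flickr's live.staticflickr.com scheme, medium size)."""
    return [
        "https://live.staticflickr.com/{server}/{id}_{secret}_m.jpg".format(**p)
        for p in search_response["photos"]["photo"]
    ]
```

The Flash app (or anything else) could then poll that search URL and drop the resulting image URLs above the mini-avatar.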
This sort of helps complete the loop from physical to virtual, a few times over I think. I’m still not sure how much of this system I actually want to flesh out, but I think prototyping it a bit further could prove interesting. John and Andrew have been playing around with trying to figure out how to hack the Roomba for their own enjoyment/experiments, so that possibility is a little closer than before.
I feel like I’ve hit a bit of a plateau this week, though, overall. Maybe I’m just having some trouble figuring out what my priorities are for the project. I feel like I want to be making diagrams, making scenarios, designing characters, creating animations, creating the interactive systems, hacking hardware, hacking code, and a bunch of other things. All at once. I also feel like I need to get into serious production mode next week before the big week 9 review.
Side Note Reference
My sister’s friend sent me a link to this project from 2009, which I hadn’t seen before but which seems pretty related to Portals, although different:
Digitie is a real-time communication channel between two different places.
These are linked by two apparatus, which enable communication by gesture.
To use them, participants have to put one of their hands into the device. The hands of the users are displayed together in a united image on a screen. Strangers of different languages, ages, cultures, and far-distant living places are able to get in touch with each other.
I met with Tim yesterday and we talked for a while about how to make the Street View element less like just a backdrop and more tied into the actual Street View interface. We also talked about transitions between the “normal” version and the portal version, and about the idea of something somewhat participatory (like getting invited to join a play date). So, the video above is sort of an attempt at mocking that up. The way it’s set up, there’s a bit of a glitch where you get a peek at the Portal world for a split second first, but I actually sort of like that effect. It’s sort of like Tyler Durden’s subliminal flashes in Fight Club.
Also, thinking about how to tie the output back into the interface, I started looking into Panoramio, the user-submitted geotagged photos that can appear on Google Earth and sometimes Google Maps. I uploaded the picture of my accordion player, which apparently meets all the acceptance criteria since it now says “This photo is now selected for Google Earth,” but I don’t know what the criteria are for acceptance to Google Maps. I guess at least this is sort of one step closer…
I’m starting to feel like the physical/virtual circle is starting to come together a little bit closer now.
And a sort of tangential side note, I read this article today about how magicians like Teller manipulate the human mind and thought it was super relevant to my work, considering a lot of it is about illusion and, as Tim called it, Constructed Confusion. Teller explains a few of the principles magicians employ when they want to alter your perception:
Exploit pattern recognition.
Make the secret a lot more trouble than the trick seems worth.
It’s hard to think critically if you’re laughing.
Keep the trickery outside the frame.
To fool the mind, combine at least two tricks.
Nothing fools you better than the lie you tell yourself.
If you are given a choice, you believe you have acted freely.
I think these are all really useful principles for designing engaging and uncanny hybrid experiences, especially ones that play in that space between the real and imaginary. So much of the experience is actually being completed in the person’s own mind. One of the things I’ve noticed when testing the teleportaling is that even I sometimes forget which object is in which box and reach for the virtual one instead of the real one. It may not be “user friendly,” but I think that sort of constructed confusion is really fascinating because it reveals just how much you’re subconsciously immersed in the experience.
We went to the USC immersive/mixed reality lab thing on Friday. It was interesting to check out since it’s super related to what I’m working on, but at the same time I felt a little underwhelmed. Why do we need more military-simulation VR headsets? I guess I just feel like I’ve been seeing these sorts of “mixed reality” applications for the past few decades, so it doesn’t seem terribly new. I’m also pretty tired of people referencing the Star Trek holodeck, as if that’s the holy grail of a mixed reality experience.
I think the most interesting part of visiting is that I now have more things to compare to my version of mixed reality, which I feel is quite different. So at least it helps in positioning my project. The other big-picture thing I took away from the trip was that most of these projects tend to be rather huge and overpowering, and I realized I don’t want that sort of scale. I like small things. I’d like my experience to feel more intimate and human-scale. And preferably cute.
Testing the animation. I set up a mini open portal for easier desktop testing for now. It almost works but you can see the seams of the animation because my duct tape mounting job allows the camera to droop a little bit. But the real test was just getting it to activate the animation. Right now it’s just using the light sensor, which isn’t exactly ideal since it breaks the ground plane.
I spent a frustratingly long time trying to get code working for a timer to delay the playback (i.e., if it’s on the spot for 3 seconds, then play the animation), but I realized it was easier to just render out a video with 3 seconds of blank at the beginning.
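For reference, the dwell-timer logic I was after could look something like this. A minimal sketch, assuming a boolean “on the spot” reading from the light sensor polled every frame; the class name and the injectable clock are my own inventions for testing:

```python
import time

class DwellTrigger:
    """Fire once after the sensor has read 'on the spot'
    continuously for the dwell time."""

    def __init__(self, dwell=3.0, clock=time.monotonic):
        self.dwell = dwell
        self.clock = clock  # injectable so it can be tested with a fake clock
        self.since = None   # when the object first landed on the spot
        self.fired = False

    def update(self, on_spot):
        """Call every frame with the current sensor reading.
        Returns True exactly once, when the dwell time elapses."""
        if not on_spot:
            # object lifted off: reset so it can fire again later
            self.since = None
            self.fired = False
            return False
        if self.since is None:
            self.since = self.clock()
        if not self.fired and self.clock() - self.since >= self.dwell:
            self.fired = True
            return True
        return False
```

Rendering 3 seconds of blank into the video is definitely less code, though; the class only wins if the dwell time needs to change per object.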
I was going to try messing with RFID tags to swap out backgrounds, but I fried my reader. (First time I’ve seen such immediate smoke!) I ordered another, but it’ll be a few days. New webcams should be coming in tomorrow, so hopefully I can start working on a slightly updated box for Monday that incorporates the animation and a changing background image.
I’m also really into the idea of having a round window. Partially inspired by these lovely dioramas by Patrick Jacobs.
I met with Jon Rafman (who does the 9eyes project) on Friday before colloquium. He seemed pretty into the experiments I was working on, though he was more interested in the idea of an app of some kind that would scale and that everyone would have access to, instead of the specialized boxes that only a few people could experience. I told him I was interested in the scalability of an online application too, and that it sort of runs parallel to the tangible interface.
A really interesting reference he gave me was this Escape the Map campaign for Mercedes. The “interactive” version is better than this little preview of it, although it’s more of a film with some barely interactive elements. But I do love the idea that all these blurred people sort of just wander the streets like zombies. And the idea of wearing a blur to fit in…
I sort of wish it was actually just a longer movie instead of trying to be this not-quite interactive thing. Though I do appreciate the part where it asks for your phone number. I’m assuming if I lived in the UK it would actually call me which is pretty awesome. I like the idea of getting a call from The Map. Sort of relates to an earlier thought I had about getting mail from The Map.
Streetview Freeze Tag
So, in the middle of writing this post just now I got inspired to make up an Alternate Reality Game based on this idea of getting blurred and frozen in time. Like modern Medusas, the street(view) team runs around tagging people. Touching a person activates the camera, thus capturing them. The person then freezes in place and puts on a mask that blurs their face. I’m not sure if they should be able to get un-frozen like in regular freeze tag… Maybe it’s just a timed thing, so you try to tag as many people as possible in 10 minutes and then the teams switch.
Hmm… I really like the idea of running a game like this in a really public place, allowing bystanders to take part by default. And I feel like I’ve been working in the screen world too much and need to get projects out that happen in the real world.
I also like the fact that Google sells these shirts, so the “drivers” could look all official-like.
Traditional business teleconferencing & video chat is too boringly real. There was a time when people thought everyone was going to have virtual meetings in Second Life but Second Life is too fake. Surely there is something between the two?
Something like this?
As a side note, remember Nick Arcade? It was a short-lived game show on Nickelodeon in the early ’90s where, after a series of mini games, the winning team would play a “real life” video game within the “video zone.” I used to think this was one of the coolest shows ever.
Here’s an initial prototype of the “touch screen” interface. Right now it just uses the motion tracker widget to track motion, though that’s sort of finicky. I’m wondering if it might be better to use something like an IR camera so it tracks more precisely. The monitor in front is also way too big. But I do like that if you were sitting in front of the monitor and put your arm around the back side it would make you basically hug the screen and force you to get pretty intimate with it.
Also, while I was trying to figure out how to get another camera to work as a webcam, I came across this tutorial. I wasn’t able to get my Canon Rebel working because I haven’t been able to find the EOS Utility software yet. But I did get CamTwist going and got pretty excited about the prospects of that alone.
Basically it lets you draw a box around an area of your screen, which then becomes another “camera” source. So in the Netlab motion tracker widget you can use it as a camera, meaning you’re no longer bound to the camera on your laptop or one plugged in over USB. I could even use a bunch of those IP cameras we have from the show.
In the tests above I tried it with a YouTube video, a Street View, and a live chat, and they all worked just fine, although Street View is too low-contrast I think. The most exciting part was the live chat, because it means I could actually have people’s webcams control something on my computer, or, if I hook up some servos to the motion tracker, something in a physical location. It’s also interesting that it lets me use both my laptop camera and the CamTwist camera at the same time. I feel like there’s some nugget of an interesting idea in here, but I haven’t quite sorted it out yet. It’s a little confusing thinking about all the different camera feeds…
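I don’t know exactly what the motion tracker widget does internally, but the basic idea of motion tracking across any of these feeds (laptop cam, a CamTwist screen region, an IP camera) can be sketched as frame differencing: compare two grayscale frames and find where they changed. A minimal version with NumPy, with made-up function names:

```python
import numpy as np

def motion_amount(prev_frame, frame, threshold=30):
    """Fraction of pixels that changed noticeably between two
    grayscale frames (2-D uint8 arrays from any camera feed)."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float(np.mean(diff > threshold))

def motion_centroid(prev_frame, frame, threshold=30):
    """(row, col) center of the changed pixels, or None if
    nothing moved; this point could drive a servo."""
    changed = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)) > threshold
    if not changed.any():
        return None
    rows, cols = np.nonzero(changed)
    return (float(rows.mean()), float(cols.mean()))
```

The low-contrast Street View problem shows up here as the `threshold`: too high and subtle changes in a washed-out feed never register.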
This showed up in my Facebook ads today… a case of successfully targeted advertising. These dotspots allow your iPhone to take panoramic video without any stitching.
So in relation to the other stuff I’ve been looking at, how does something like this fit into things? I think it’s pretty awesome (and I would totally want one if I had an iPhone) but maybe still doesn’t quite have the same feeling as those little cinemagraphs or timesculptures. I think it’s the fact that video footage is just a little too accurate at capturing what actually happened, not what could have happened, and doesn’t quite leave enough mystery…
“Sometimes real life doesn’t give you all the right material,” I said. “So you have to invent something that’s true to the feeling you had, the feeling you’re trying to get across, even if the thing you invent didn’t actually happen. It’s an idea from Werner Herzog. He calls it the Ecstatic Truth.”
“I think you told me,” she said.
“A lot of people don’t understand that,” I said. “They get all hung up on details. Did this happen, did that happen. But it’s not always about what happened. Sometimes it’s more about how you felt when you were in it, what it made you think of, what it could’ve been, or what it almost was. It’s less about what happened, and more about how it really was, which is something else and something more.”
A very rough initial prototype of what an interface for these manufactured landscapes might be. At this point it sort of just seems like a fancy knob project. I’m thinking I could embed some of these little RFID tags into each of the tiles so the car knows which tile it’s on… The only bummer is that the car, which would be the reader, would have to be considerably larger to fit the RFID reader. Plus a USB cable sticking out the side…
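The software side of the tile idea is just a lookup: whenever the reader in the car sees a tag, map its ID to the scene that tile represents. A tiny sketch with hypothetical tag IDs and scene names:

```python
# Hypothetical mapping from RFID tag IDs (one embedded in each tile)
# to what should appear on screen when the car drives over that tile.
TILE_SCENES = {
    "0x04A1": "forest",
    "0x04A2": "beach",
    "0x04A3": "city",
}

def scene_for_tag(tag_id, current_scene=None):
    """Return the scene to show for the tag the reader just saw.
    An unknown or missing tag leaves the current scene alone, so the
    view doesn't flicker between tiles."""
    return TILE_SCENES.get(tag_id, current_scene)
```

Keeping the “unknown tag” case sticky matters because the reader will drop reads in the gaps between tiles.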
I’m thinking it would actually be easiest to just get a clock motor to power the arm so I don’t need to rig up a servo. Plus, if it’s just a clock motor, it would in theory be easy to coordinate what’s happening between the tangible device and what you see on screen. But on the other hand, a more fluid notion of time is also kind of interesting…
For the Romans, the length of an hour varied by the season. Twelve hours of daylight in December required shorter hours than twelve hours of daylight in June. The length of the hour varied between two extremes: 45 minutes at the winter solstice and 75 minutes at the summer solstice. Our hour divided (no matter what the season) into 60 minutes and 3,600 seconds is a creation of the mechanical clock, which of course was unknown to the Romans. The Romans based the time of day on their observation of the sun and the shadows it created.
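The arithmetic behind those two extremes is just daylight divided into twelve equal parts. The daylight figures below (roughly 9 hours at the winter solstice, 15 at the summer solstice) are my own assumption; they vary by latitude, but these values reproduce the 45- and 75-minute hours quoted above:

```python
def roman_hour_minutes(daylight_hours):
    """Length of one Roman daytime hour, in minutes: the daylight
    period is always divided into twelve equal hours."""
    return daylight_hours * 60 / 12

winter = roman_hour_minutes(9)   # ~45-minute hours at the winter solstice
summer = roman_hour_minutes(15)  # ~75-minute hours at the summer solstice
```

So a fluid clock for the tiles could do the same thing: stretch or squeeze the “hour” to fit whatever span you want it to fill.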
Also, a side note: maybe there is a separate tangible interface for adding things to a tile, so it’s not all in one device.
Oh! And thanks to Sal for helping me re-find that reference I was looking for..