Having to describe the project to a public audience (both in the video and the descriptions) has really helped me think through the implications of the project, what I’m trying to do, and the next steps involved. It also at least got me to come up with a name for the things other than “magic black box.”
I also feel like going through the process of making a Kickstarter campaign is itself part of the greater project, since I’m looking at the effects of our network culture, like the little experiments with Amazon’s Mechanical Turk. I also really hope I can reach the funding goal, not just for the funding but for the opportunity to make the rewards for people, as I think collectively those could also be really interesting projects in themselves.
And now for the next 9 days I really need to focus on writing the first draft of my thesis paper…
I met with Jon Rafman (who does the 9eyes project) on Friday before colloquium. He seemed pretty into the experiments I was working on, though more interested in the idea of an app of some kind that would scale and that everyone would have access to, instead of the specialized boxes that only a few people could experience. I told him I was interested in the scalability of an online application too, and that sort of runs parallel to the tangible interface.
A really interesting reference he gave me was this Escape the Map campaign for Mercedes. The “interactive” version is better than this little preview of it, although it’s more of a film with some barely interactive elements. But I do love the idea that all these blurred people sort of just wander the streets like zombies. And the idea of wearing a blur to fit in…
I sort of wish it was actually just a longer movie instead of trying to be this not-quite-interactive thing. Though I do appreciate the part where it asks for your phone number. I’m assuming if I lived in the UK it would actually call me, which is pretty awesome. I like the idea of getting a call from The Map. It sort of relates to an earlier thought I had about getting mail from The Map.
Streetview Freeze Tag
So… in the middle of writing this post just now I got inspired to make up an Alternate Reality Game based on this idea of getting blurred and frozen in time. Like modern Medusas, the street(view) team runs around tagging people. Touching a person activates the camera, thus capturing them. The person then freezes in place and puts on a mask that blurs their face. I’m not sure if they should be able to get un-frozen like in regular freeze tag… Maybe it’s just a timed thing, so you try to tag as many people as possible in 10 minutes and then the teams switch.
Hmm… I really like the idea of running a game like this in a really public place, allowing bystanders to take part by default. And I feel like I’ve still been working in the screen world too much and need to get projects out that happen in the real world.
I also like the fact that Google sells these shirts so the “drivers” could look all official like.
A slightly more refined (foam core & duct tape) version of the box, which I’ve decided to start calling a Portal. Also a couple prototype videos showing what it could be like to have two people in this virtual layer. I was able to get a very rough version of it working (see example with Dustin & Rubina). But the chroma key is pretty crappy and not really working the way I’d like. I think I just need much better lighting and higher res cameras. So the video below is sort of a half real, half fake prototype. I’ve also been thinking about what your hand could do while inside the box, for instance rubbing a surface to navigate the space like a giant lazy susan. Or having push buttons in the space that could trigger other things, so you’re actually interfacing with something inside the box?
The timer goes off and automatically logs me out of Viewland. I pull my hand out of The Black Box and take off my glovatar. It’s time to FocusSwitch again, but I forget which mode I should be in. Did the last 30 minutes count as a work-unit or a play-unit?
In the telepuppet meeting with Sasha and Quinn we had been collaborating on building a new structure for the space. But since the generatabot was chugging along just fine on its own we mostly just gestured about the things we were going to do in Viewland the next time we had a free Viewland play-unit.
Quinn mentioned San Francisco’s imagery had just been updated to include a new Marcade. Maybe we could all go over to tackle some Accounting Quests. Sasha and I agreed it could be worthwhile, and planned our strategy for later. So I guess that counted as a work-unit after all.
I move over to the Co-Journalist Box, and start to brainstorm what I could submit today. I decide to write a story about the new Marcade and submit my 500 word article to the system. The editrons parse my submission, pay me 5 acti-points, and grant me access to today’s articles.
A story that was growing in popularity finds me. A hacker group figured out a way to both fool the skin detection algorithm and enable two-way audio transmission in The Black Box. They were able to enter Viewland without wearing a glovatar and speak to one another with their own voice! The system registers my excitement level and lets me read more.
Don’t get me wrong, the androgynous and ethnically neutral glovatars were great. They had created objective and non-discriminatory activity environments. And, as advertised, Gesteranto had successfully enabled cross-cultural and politically neutral communication for all.
But sometimes I just wonder what the other users are actually like under those gloves. Sometimes I miss the sound of people talking. Sometimes I long to reach out and just touch my collaborators. But I know that might trigger an HR-harassment script.
I quickly pluslike the article just before the timer goes off again. My glovatar hand goes back in The Black Box, but I can’t stop fantasizing about that hack.
Super quick little test to see if I could get a “telepresence” system working. You can see Rubina & Dustin’s hands in the same space on screen even though they’re in separate locations. Obviously the chroma key is pretty awful cause the second “set” isn’t lit properly at all. But I think it at least gets the idea across as a proof of concept. I also don’t mind the crappy resolution all that much cause it helps add a layer of abstraction to the people.
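For my own notes, the basic chroma key operation I keep fighting with boils down to something like this. A minimal numpy sketch of the idea, not CamTwist’s actual code, and the threshold number here is made up:

```python
import numpy as np

def chroma_key(foreground, background, threshold=60):
    """Composite foreground over background wherever the
    foreground pixel is "green enough" to count as backdrop.

    Frames are HxWx3 uint8 RGB arrays of the same shape.
    threshold is how much greener than red/blue a pixel must be.
    """
    fg = foreground.astype(int)
    r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
    # A pixel is backdrop if green dominates both other channels.
    is_backdrop = (g - r > threshold) & (g - b > threshold)
    out = foreground.copy()
    out[is_backdrop] = background[is_backdrop]
    return out

# Two fake 2x2 frames: a green screen with one "hand" pixel.
fg = np.zeros((2, 2, 3), dtype=np.uint8)
fg[...] = (0, 255, 0)          # pure green backdrop
fg[0, 0] = (200, 150, 120)     # skin-ish pixel, should survive
bg = np.full((2, 2, 3), 50, dtype=np.uint8)  # flat gray "other set"

result = chroma_key(fg, bg)
```

It also suggests why my badly lit second set keys so poorly: the darker, muddier green never clears the threshold, so those pixels survive as artifacts.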
I didn’t make a second box since I don’t have another monitor right now. I bought a little 7 inch USB monitor (see it face down on the floor) but I’ve been having trouble getting the full screen video working on it. I was thinking the larger monitor was too big before, but I sort of feel like it’s actually not bad. I was hoping the little one would work just so I could have a shoebox sized box, but maybe it’s too small now that I look at it.
I feel like it’s a super simple little system but it has lots and lots of possibilities. I can imagine magnets on the people so you could move them around from below. Or a system that can identify the objects being brought into the world. Or.. lots of other things! Exciting!
Traditional business teleconferencing & video chat is too boringly real. There was a time when people thought everyone was going to have virtual meetings in Second Life but Second Life is too fake. Surely there is something between the two?
Something like this?
As a side note, remember Nick Arcade? It was a short-lived game show on Nickelodeon in the early ’90s where, after a series of mini games, the winning team would play a “real life” video game within the “video zone.” I used to think this was one of the coolest shows ever.
First, a couple of things that didn’t really work this morning… I was trying to use our IP cameras to create a panoramic source for the Motion Tracker widget. Unfortunately the refresh rate on those things is just WAY too slow to work properly. So I got frustrated and bought a $30 webcam from Staples, which works wonderfully, although you can’t place them side by side to make a long panoramic.

Next test was getting things onto the iPad to use it as a front display. I tried Air Display, Live View, and Ustream. Unfortunately all three of those also had a lag time that was not going to work for me.

I was also playing around with CamTwist’s various effects. At first I thought they were all just cheesy filters, but the ability to dynamically chroma key and layer video on top of each other was really exciting…
And I was finally ready to start making a box! I found the smallest screen in the studio and the smaller end table fit together pretty well. I’m imagining a more refined version would be both smaller and fully enclosed except for a place to put in your hand. But I think this version actually works pretty well for now..
Thanks to Andrew for both lending me his little people and modeling : ) At first I was hoping the camera wouldn’t be so zoomed in, but I think it’s actually kind of nice. I love the scale shift between the human, the interface, and the mini people. What I like about going behind the screen to interact with tangible things is that it plays with the layers of virtual/physical. It also starts to get at interaction beyond “Pictures Under Glass,” cause the thing with touch screens is they’re very much still separated by that barrier.
I definitely need a cleaner background and better lighting to get a cleaner chroma key. But I sort of like the chunky artifacts. They stay true to the loss of fidelity in going from a physical thing to a digital thing.
I’m feeling pretty good about where this is going. I also feel like there’s lots of other ways this could be even cooler. For instance, if I could somehow track the position of objects and then have them appear to animate on the screen after you’ve placed them into the space. Or if the box knew what you were adding and could add additional things related to that specific object. I ordered some RFID stuff to try out, so hopefully I can use that to identify objects as they pass through the opening of the box. Another element I could add would be “anonymous hands” that also add to your scene while you’re using it. In theory they would be from other networked boxes; in reality they might just be videos overlaid on top.
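As a sketch of how the RFID identification could work once the parts arrive. Everything here is hypothetical: the tag IDs, the object names, and the idea that a reader at the box’s opening feeds IDs into this lookup:

```python
# Hypothetical mapping from RFID tag IDs to scene objects.
# A real reader (e.g. over serial) would supply the IDs as
# objects pass through the opening of the box.
TAG_TO_OBJECT = {
    "04A2B1": "tiny tree",
    "04A2B2": "mini person",
    "04A2B3": "toy car",
}

# Related extras the box could auto-add alongside an object.
RELATED = {
    "tiny tree": ["birds", "falling leaves"],
    "mini person": ["shadow", "speech bubble"],
}

def object_entered(tag_id, scene):
    """Look up the tag, then add the object plus anything related."""
    obj = TAG_TO_OBJECT.get(tag_id)
    if obj is None:
        return scene  # unknown tag: ignore it
    scene.append(obj)
    scene.extend(RELATED.get(obj, []))
    return scene

scene = []
object_entered("04A2B1", scene)
```

The nice part of keeping it a dumb lookup table is that adding a new object to the box is just adding a line, no retraining or computer vision required.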
Here’s an initial prototype of the “touch screen” interface. Right now it just uses the motion tracker widget to track motion, though that’s sort of finicky. I’m wondering if it might be better to use something like an IR camera so it tracks more precisely. The monitor in front is also way too big. But I do like that if you were sitting in front of the monitor and put your arm around the back side it would make you basically hug the screen and force you to get pretty intimate with it.
Also, while I was trying to figure out how to get another camera to work as a webcam I came across this tutorial. I wasn’t able to get my Canon Rebel working cause I haven’t been able to find the EOS Utility software yet. But I did get CamTwist going and got pretty excited about the prospects of that alone.
Basically it lets you make a box around an area of your screen, which then becomes another “camera” source. So then in the Netlab motion tracker widget you can use it as a camera, meaning it’s no longer bound by the camera on your laptop or one plugged in over USB. So I could even use a bunch of those IP cameras we have from the show.
In the tests above I tried it with a YouTube video, a street view, and a live chat, and they all worked just fine, although street view is too low contrast I think. The most exciting part was using the live chat, because this means I could actually have people’s webcams control something on my computer, or if I hook up some servos to the motion tracker, something in a physical location. Also interesting that it lets me use both my laptop camera and the CamTwist camera at the same time. I feel like there’s some nugget of an interesting idea in here but I haven’t quite sorted it out yet. It’s a little confusing thinking about all the different camera feeds…
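I assume the motion tracker widget does something like frame differencing under the hood. Here’s my own toy guess at the approach in numpy, not Netlab’s actual code, with a made-up threshold:

```python
import numpy as np

def motion_center(prev_frame, frame, threshold=30):
    """Return the (row, col) centroid of pixels that changed
    by more than `threshold` between frames, or None if
    nothing moved. Frames are HxW grayscale uint8 arrays."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    moved = np.argwhere(diff > threshold)
    if len(moved) == 0:
        return None
    return tuple(moved.mean(axis=0).round().astype(int))

# Fake 8x8 frames: a bright blob appears near row 2, col 5.
prev = np.zeros((8, 8), dtype=np.uint8)
curr = prev.copy()
curr[2, 5] = 255

center = motion_center(prev, curr)
```

If the widget works anything like this, it would also explain why street view fails: low-contrast footage barely changes from frame to frame, so the diff never clears the threshold.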
Thinking about some alternate interfaces for editing the views for the roundabout. I had an initial sketch that looked like a diorama, but I’m thinking a better one would be one where the screen is in front of your hand, so you can’t actually see the objects you’re holding. Maybe being abstracted makes the objects you’re holding more open to imagination. Super quick test of what it might look like:
I’m also thinking the collaborative aspect of it might be even more interesting by having to stick your hand into this thing you can’t see. Perhaps when other people are in the same view you can feel their hand in there moving around too. I’m thinking the glove thing would be important as a way to make it more anonymous, but also to simplify the representation of the hand so it wouldn’t have to be actual video but perhaps images, stop-motion-like. Here’s the rough sketch of what I think I’ll start trying to make this afternoon.
Also, some existing similar interfaces… there’s the glove boxes people use for super clean work, and then there’s the fact that many surgeons look at screens and not the actual body when doing their work these days…