Here’s an initial prototype of the “touch screen” interface. Right now it just uses the motion tracker widget to track motion, though that’s sort of finicky. I’m wondering if it might be better to use something like an IR camera so it tracks more precisely. The monitor in front is also way too big. But I do like that if you were sitting in front of the monitor and put your arm around the back side, it would basically make you hug the screen and force you to get pretty intimate with it.
Also, while I was trying to figure out how to get another camera to work as a webcam I came across this tutorial. I wasn’t able to get my Canon Rebel working because I haven’t been able to find the EOS Utility software yet. But I did get CamTwist going and got pretty excited about the prospects of that alone.
Basically it lets you draw a box around an area of your screen, which then becomes another “camera” source. So in the Netlab motion tracker widget you can use it as a camera, meaning you’re no longer bound to the camera on your laptop or one plugged into the USB port. I could even use a bunch of those IP cameras we have from the show.
In the tests above I tried it with a YouTube video, a Street View, and a live chat, and they all worked just fine, although Street View is too low contrast, I think. The most exciting part was using the live chat, because this means I could actually have people’s webcams control something on my computer, or, if I hook up some servos to the motion tracker, something in a physical location. It’s also interesting that it lets me use both my laptop camera and the CamTwist camera at the same time. I feel like there’s some nugget of an interesting idea in here but I haven’t quite sorted it out yet. It’s a little confusing thinking about all the different camera feeds…
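Just for my own clarity, the basic motion-tracking idea can be sketched in a few lines. I’m assuming simple frame differencing here, which is one common approach — I don’t actually know what the Netlab widget does internally. Frames are just grids of grayscale pixel values:

```python
def motion_center(prev_frame, curr_frame, threshold=30):
    """Centroid (row, col) of pixels that changed between two grayscale
    frames (lists of pixel rows), or None if nothing moved."""
    moved = [(r, c)
             for r, row in enumerate(curr_frame)
             for c, v in enumerate(row)
             if abs(v - prev_frame[r][c]) > threshold]
    if not moved:
        return None
    rows = [r for r, _ in moved]
    cols = [c for _, c in moved]
    return sum(rows) / len(rows), sum(cols) / len(cols)

# Two tiny synthetic frames: a bright blob appears near the middle.
prev = [[0] * 10 for _ in range(10)]
curr = [[0] * 10 for _ in range(10)]
for r in (4, 5):
    for c in (4, 5):
        curr[r][c] = 255
print(motion_center(prev, curr))  # (4.5, 4.5)
```

It wouldn’t matter to this code whether the frames came from the laptop camera, CamTwist, or an IP camera — which is sort of the point.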
Thinking about some alternate interfaces for editing the views for the roundabout. I had an initial sketch that looked like a diorama, but I’m thinking a better one would be one where the screen is in front of your hand, so you can’t actually see the objects you’re holding. Maybe being abstracted makes the objects you’re holding more open to imagination. Super quick test of what it might look like:
I’m also thinking about how the collaborative aspect of it might be even more interesting by having to stick your hand into this thing you can’t see. Perhaps when other people are in the same view you can feel their hands in there moving around too. I’m thinking the glove thing would be important as a way to make it more anonymous, but also to simplify the representation of the hand, so it wouldn’t have to be actual video but perhaps images, stop-motion-like. Here’s the rough sketch of what I think I’ll start trying to make this afternoon.
Also, some existing similar interfaces… there are the glove boxes people use for super clean work, and then there’s the fact that many surgeons look at screens and not the actual body when doing their work these days…
So one of the things that came up during my meeting was to think about a bunch of different ways I could have something like my Rolling in the Streetview appear in the world.
So this is a quick initial experiment with a very low-fi “lenticular” animation, although there’s no lens. It’s just two frames, cut up into strips, interlaced, glued down, and folded like a fan.
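In case I forget how the interlacing worked: alternate vertical strips from the two frames, even-numbered strips from one and odd-numbered from the other. A quick sketch of that, assuming frames as lists of pixel rows and a chosen strip width:

```python
def interlace(frame_a, frame_b, strip_width):
    """Alternate vertical strips of two equal-size frames (lists of rows):
    even-numbered strips come from frame_a, odd-numbered from frame_b."""
    out = []
    for row_a, row_b in zip(frame_a, frame_b):
        row = []
        for x, (a, b) in enumerate(zip(row_a, row_b)):
            strip = x // strip_width  # which strip this column falls in
            row.append(a if strip % 2 == 0 else b)
        out.append(row)
    return out

black = [[0] * 8 for _ in range(2)]     # frame 1: all black
white = [[255] * 8 for _ in range(2)]   # frame 2: all white
mix = interlace(black, white, strip_width=2)
print(mix[0])  # [0, 0, 255, 255, 0, 0, 255, 255]
```

Fold along the strip boundaries and each frame only reads from its own side — same logic, just on paper.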
Advertisers use this sort of thing for billboards and posters and direct mailers all the time, but I think the interesting part is using it for non-advertising purposes, and having it relate to the space contextually.
I think the end result is kind of fun. I just realized afterwards that I could have printed a photo of the background behind the kitty instead of just having white, which would have made it seem sort of “transparent,” and I could have used a clear acrylic stick instead of the wooden one. Perhaps those are things for version 2. Putting it on a stick also makes me think I’d like to make an animated protester sign. From one view it can say “FOR” and from the other, “AGAINST,” for the flip-floppers out there.
Another thought that came up while making this was using the surface of blinds to toggle between two frames. The added bonus of blinds is that in theory you could see through to the real world, and maybe it would appear to be “augmenting” it. Also, if it was on some sort of mobile (on wheels, not a phone) device, it could be placed anywhere.
I think what I really like about this sort of low-fi version is that it doesn’t require any sort of screen or projection, so it works in the daylight and doesn’t really need any technology, so people could potentially print these out and take them outside themselves.
This showed up in my Facebook ads today… a case of successfully targeted advertising. These dotspots allow your iPhone to take panoramic video without any stitching.
So in relation to the other stuff I’ve been looking at, how does something like this fit into things? I think it’s pretty awesome (and I would totally want one if I had an iPhone) but maybe still doesn’t quite have the same feeling as those little cinemagraphs or timesculptures. I think it’s the fact that video footage is just a little too accurate at capturing what actually happened, not what could have happened, and doesn’t quite leave enough mystery…
“Sometimes real life doesn’t give you all the right material,” I said. “So you have to invent something that’s true to the feeling you had, the feeling you’re trying to get across, even if the thing you invent didn’t actually happen. It’s an idea from Werner Herzog. He calls it the Ecstatic Truth.”
“I think you told me,” she said.
“A lot of people don’t understand that,” I said. “They get all hung up on details. Did this happen, did that happen. But it’s not always about what happened. Sometimes it’s more about how you felt when you were in it, what it made you think of, what it could’ve been, or what it almost was. It’s less about what happened, and more about how it really was, which is something else and something more.”
A very rough initial prototype of what an interface for these manufactured landscapes might be. At this point it sort of just seems like a fancy knob project. I’m thinking that I could embed some of these little RFIDs into each of the tiles so the car knows which tile it’s on… The only bummer is that the car, which would be the reader, would have to be considerably larger to fit the RFID reader. Plus a USB cable sticking out the side…
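The software side of that would be almost nothing — just a lookup from each embedded tag’s ID to the tile it sits in, so whatever the reader scans tells you where the car is. A sketch, with completely made-up tag IDs and tile names:

```python
# Hypothetical mapping from an embedded RFID tag's ID to its tile.
# These IDs and tile names are invented placeholders.
TILE_BY_TAG = {
    "04A3F2": "forest",
    "09B1C7": "intersection",
    "0E55D0": "parking lot",
}

def tile_for(tag_id):
    """Return the tile name for a scanned tag, or None for an unknown tag."""
    return TILE_BY_TAG.get(tag_id)

print(tile_for("09B1C7"))  # intersection
```

So the hardware (fitting the reader in the car) is really the whole problem, not the logic.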
I’m thinking it would actually be easiest to just get a clock motor to power the arm so I don’t need to make a servo rig. Plus, if it’s just a clock motor, it would in theory be easy to coordinate what’s happening between the tangible device and what you see on screen. But on the other hand, a more fluid idea of time is also kind of interesting…
For the Romans, the length of an hour varied by the season. Twelve hours of daylight in December required shorter hours than twelve hours of daylight in June. The length of the hour varied between two extremes: 45 minutes long at the winter solstice and 75 minutes long at the summer solstice. Our hour, divided (no matter what the season) into 60 minutes and 3,600 seconds, is a creation of the mechanical clock, which of course was unknown to the Romans. The Romans based the time of day on their observation of the sun and the shadows it created.
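The arithmetic behind those numbers is simple: the daylight always divides into twelve hours, so the hour itself stretches and shrinks with the season. A quick check, assuming roughly 9 hours of daylight at the winter solstice and 15 at the summer solstice:

```python
def roman_hour_minutes(daylight_minutes):
    """Length of one Roman daytime hour in minutes:
    the day's daylight is always split into twelve equal hours."""
    return daylight_minutes / 12

# Winter solstice: ~9 hours of daylight -> 45-minute hours.
print(roman_hour_minutes(9 * 60))   # 45.0
# Summer solstice: ~15 hours of daylight -> 75-minute hours.
print(roman_hour_minutes(15 * 60))  # 75.0
```

That “elastic hour” is basically the fluid-time idea already worked out — the clock motor would just need its rate scaled by the season.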
Also, as a side note: maybe there is a separate tangible interface for adding things to a tile, so it’s not all in one thing.
Oh! And thanks to Sal for helping me re-find that reference I was looking for…
Here’s the slide deck I showed at my first Thesis Committee meeting today with Tim, Elise, and Mike. It generally covers what I’ve learned from the experiments I’ve been working on so far, what type of feedback I got from Science Fair, how I’m reframing my project, and what I’m currently working on and plan to work on going forward. I think the helpful part was the end, where I sort of lay out the things I’d like to focus on making:
A way to “spawn” things in the physical realm from an online realm.
A way to enable collaborative creation that affects the real world.
A tangible interface for interacting with our intangible representation of the real world.
I seem to be operating within a space between the constraints of a game and a totally open world like Second Life. How can I be more clear about how game-like or not game-like my project is?
What makes it different from Second Life is the potential flow between the real and virtual space. This potential cycle is what makes it more interesting.
It may be good to do a sort of survey/catalogue of existing mixed-reality-type things (e.g. Foursquare) and diagram out the differences between them as a way to situate my work.
I mentioned my instinct to want to include animation, which is good, but what is it about animation that makes it fascinating in this context?
How does animation – which lives in a very frame-based virtual world – relate to the real world?
What type of things do we do better in the virtual world?
A traditional virtual world like Second Life is like a dead end. This new real-virtual world is different because it’s not a complete escape from reality.
What does the virtual world offer? What does the physical world offer?
Maintaining the qualities of the real or virtual world when they pass that barrier makes sense. How can I be more explicit about those qualities or characteristics? What is an example of mis-fidelity or glitches?
How does the kitty example, through the animation, poke at the existing structure?
The studies are interesting, maybe it’s good to dive deeper and ask what makes them interesting.
There are elements of subversiveness + play + interaction beyond function.
How can it be more meaningful (not necessarily practical) in the real world?
How can I complete the loop?
Maybe I could catalogue precedents of artifacts from the virtual world in the real world (pixels, polygons, etc.).
Maybe it doesn’t need to be a thing. Could it be information? Or opportunities?
How could I explore iterations & variations on things like the kitty in real life? A spectrum from nothing, to printouts, to projections?
Can I catalogue this range of experiments?
Does it have to be within a context of a static image? What if the view was a live video feed?
Part of it seems to be about the tension between a static and animated thing.
We live in a world of interfaces that span a spectrum from utilitarian to pure fantasy; this is somewhere in between.