One of the things I’m really still trying to figure out is how to track the objects that go in and out of the box so that other stuff can appear based on the objects. I have a couple of issues though trying to get this thing working like I’d like. First is the fact that my cameras are constantly auto focusing and exposing so it’s hard to keep a consistent image. I sort of realized this earlier but didn’t think it was going to be a huge issue…but it kind of is. I’ve ordered another camera to play with and try out. It’s lower resolution but gives more control so… we’ll see. Another issue is the size of my “interaction space.” I think compared to “normal” computer vision type things my mini-set is pretty small, so things like trackers and stuff don’t do so well at that scale. Also it’s hard to use my cameras for multiple purposes simultaneously (i.e. layering on the display, streaming a feed, performing computer vision calculations). Also, having depth makes things difficult, esp. when I’m not using something like the Kinect.
In this first test I shot some stop motion of the cows milling about and tried using the RFID as the object identifier as a way to activate the animation specific to the object. This sort of works I guess but the object can’t really relate to the animation in any way live, since it’s not tracked by the camera.
I then tried some actual computer vision type stuff with OpenCV. Found a little blob detecting Processing demo and was able to sort of distinguish objects by their area, which is kind of cool. But of course this only works when things are very different in size. And when I pick up an object it gets un-detected. And the area of an object is variable depending on where it’s placed in the space. ALSO big issue is that for some reason using the webcam feed directly into Processing makes it really slow, whereas if I use the CamTwist feed it goes real time. But I need to use CamTwist to layer the various elements. And I can’t capture the Processing sketch to feed it back into the CamTwist collage cause then it would just be stuck in this weird footage loop. I’m discovering these things are all quite easy and possible when done separately but become quite difficult when trying to do them simultaneously.
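The Processing demo itself isn’t reproduced here, but the core of “distinguish objects by their area” is just connected-component labeling on a thresholded frame. Here’s a minimal, dependency-free Python sketch of the idea; the hard-coded binary grid stands in for a thresholded webcam image, and all the names are my own.

```python
# Minimal stand-in for blob detection by area: label each 4-connected
# region of 1s in a binary "frame" and report its pixel area. A real
# version would threshold a live webcam frame first; here the frame
# is hard-coded.

def blob_areas(frame):
    """Return the pixel area of each 4-connected blob of 1s in `frame`."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    areas = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] and not seen[r][c]:
                # Flood fill outward from this unvisited foreground pixel.
                stack, area = [(r, c)], 0
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    area += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and frame[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                areas.append(area)
    return areas

# Two "objects" of different sizes in a tiny 6x8 frame.
frame = [
    [0, 0, 0, 0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0, 1, 1, 1],
    [0, 0, 0, 0, 0, 1, 1, 1],
    [0, 0, 0, 0, 0, 1, 1, 1],
    [0, 0, 0, 0, 0, 0, 0, 0],
]
print(sorted(blob_areas(frame)))  # two blobs: areas 4 and 9
```

This also makes the perspective problem above concrete: the same physical object produces a different pixel area depending on where it sits in the space, so any fixed area threshold is fragile.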
Also quite difficult because I really know very little about real time computer vision.
So I’m kind of frustrated trying to figure out what to do about this stuff. I really don’t want to use any sort of AR marker type thing because I think they’re really ugly and totally break the experience. So if I track anything it would probably have to be color. But I can’t really do that until my camera isn’t getting all auto mode on me. I’ve thought about trying to track markers on the bottoms of things, like maybe having a camera inside the light box looking upwards (like the reactable). But I’m not totally into the idea of having to deal with even MORE cameras. Plus I’m not sure if it could even see through the mylar and if the size of the marker would make a difference.
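For what color tracking could look like once the camera’s auto exposure and white balance are locked: the usual approach is to convert to HSV and find pixels in a fixed hue band. A toy stdlib-only Python sketch (the hard-coded RGB grid stands in for a captured frame, and the hue band values are illustrative):

```python
import colorsys

# Sketch of marker-free color tracking: find the centroid of all pixels
# whose hue falls in a target band. This only works reliably once the
# camera's auto exposure / white balance is locked, since the hue band
# is fixed ahead of time.

def track_color(frame, hue_lo, hue_hi, min_sat=0.4):
    """Return the (row, col) centroid of pixels in the hue band, or None."""
    hits = []
    for r, row in enumerate(frame):
        for c, (R, G, B) in enumerate(row):
            h, s, v = colorsys.rgb_to_hsv(R / 255, G / 255, B / 255)
            if hue_lo <= h <= hue_hi and s >= min_sat:
                hits.append((r, c))
    if not hits:
        return None
    return (sum(r for r, _ in hits) / len(hits),
            sum(c for _, c in hits) / len(hits))

# A mostly-grey frame with a red object in the upper right.
grey, red = (128, 128, 128), (220, 40, 40)
frame = [
    [grey, grey, grey, grey],
    [grey, grey, red,  red ],
    [grey, grey, red,  red ],
    [grey, grey, grey, grey],
]
print(track_color(frame, 0.0, 0.05))  # centroid of the red patch
```

The `min_sat` cutoff is what keeps grey/white pixels out of the match even when their hue happens to land in the band, which is exactly why the auto-white-balance drift is such a problem.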
One of my kickstarter rewards for people who pitched in $25 or more was a personalized thank you video from me. Of course this was an opportunity to play with the format and make something fun.
Since I have people’s addresses from mailing their postcards earlier I was able to “visit” all my backers at their homes on Street View. But I didn’t want to just make a static video so I figured out how to layer an html5 video on top of an embedded street view location.
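The actual page isn’t reproduced here, but the layering trick amounts to absolutely positioning an HTML5 `<video>` over the Street View embed inside a shared container. A rough sketch, with placeholder `src` URLs:

```html
<!-- Rough sketch of the layering (not the actual page): a Street View
     embed with an HTML5 video absolutely positioned on top of it.
     Both src values are placeholders. -->
<div style="position: relative; width: 640px; height: 360px;">
  <iframe src="https://example.com/street-view-embed"
          style="position: absolute; top: 0; left: 0;
                 width: 100%; height: 100%; border: 0;"></iframe>
  <video src="thankyou.webm" autoplay muted
         style="position: absolute; top: 0; left: 0;
                width: 100%; height: 100%;
                pointer-events: none;"></video>
</div>
```

The `pointer-events: none` rule lets clicks fall through the video to the Street View controls underneath, so the embed stays navigable while the video plays.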
This sample is live here. (Unfortunately it only works in Chrome and Safari and not Firefox)
I don’t know why. But as a fallback I can send people a regular screen captured video of it if it doesn’t work for them I suppose… I haven’t sent them out yet so I’ll see how people respond…
Spent the past few days hooking up the magic/technology to start working on what I’m calling Teleportaling.
In the first few tests I was working on just getting things set up and working.
Get it to work just using my laptop camera, using Google video chat. This worked with just my regular camera but ran into a lot of problems when I was trying to use CamTwist and the portal cameras.
Get it to work with the portal camera (being held by duct tape), layered on some footage. The duct tape failed me. As it got warm in the box the adhesive would loosen and the camera would slowly drift downwards, and the lights would just fall off. So I had to figure out some alternate mounting options.
Camera secured with Velcro, which satisfies my needs of being both strong and removable. I layered pre-shot footage + live portal feed + a random live broadcast from UStream just to test the collaging of multiple feeds.
Right portal real time + teleportaling / (Left portal is wrong). I was able to get one box to transmit its CamTwist feed to UStream, but not able to get the other box to transmit its feed while appearing on its own screen. Thus the right one works while the left one sends the feed to the right box before the left box sees it. This is obviously a problem since you need to be able to see what you’re doing.
Individual teleports. Without layering the left and right feeds I’m able to do the teleportaling, but of course not being able to see your own feed isn’t optimal…
In this video, I’m finally able to get both sides to have real time feeds of their own activity while layering the feed of another. Only the right box feeds to the left right now because I don’t have enough cameras. The right one works because there are actually 2 cameras in the box. It seems that I can’t have the same camera feed being sent to CamTwist AND UStream Producer at the same time. But at least this is a pretty good test of it almost working. I ordered more webcams, but those won’t be in until tomorrow, so for now I wait. Perhaps I’ll spend the rest of the day working on those kickstarter rewards.
Making a second light box. These foam core ones are just for now. Once I get the layout of the innards more planned these’ll probably be cut from acrylic I think.
See those tall thin pieces of wood behind the boxes? I was originally going to make a cube frame out of those and then get some material to make the sides. But when I went up to the wood shop on hill side the shop instructor convinced me that was a pretty bad way to make a cube cause it would be super hard to assemble straight. Instead he suggested I just build it out of MDF, which would be both easier and cheaper, at least for now. This is just supposed to be another “draft” of the boxes, not the final ones. But I really like how these turned out, way nicer than I had expected. I think for a more “final” version I would just use a nicer wood that I could stain to look nice. I’m really thankful for the shop guys’ help. It also helped me get more comfortable in the wood shop, cause even though I’ve had shop classes in like high school the wood shop is still a little intimidating to me.
Also put a hinged lid on the boxes to make it easier to get things in and out! Only real problem was that the opening in the front was about 1/8 of an inch too small for the screen, which is also sort of better than I had expected considering I didn’t have the exact measurements of the screen when I was in the shop.
Today Dustin helped me shave that 1/8 inch bit off with a jig saw, and I was able to ram the screens in there perfectly. Yay! I was initially hoping to have everything self-contained, with the laptop and power strip and everything inside the box. But I quickly realized it was getting too messy with all the cords and my computer was really unhappy in there, so there’s still going to be a fat stack of wires running out the side of the box.
And here’s the pair of Portals together! I still have to get into the technology side of things. But it’s really great to finally have a pair of Portals that are in a solid and consistent enough form to be moved around while still keeping everything in place on the inside, since before I was just disassembling my end-table-foam-core pile every time it moved. For the Kickstarter peeps, I think I’ll be making a little laser cut plaque with people’s names on it so it can travel along with the boxes as they get updated and improved.
The project sort of went on hold over winter break as equipment got returned, I caught up on the rest of my life, and waited for the Kickstarter funding to get deposited. I did manage to send out some Kickstarter backer reward thank you postcards to people. On the left side of the rainbow is me sticking my hand into a portal, while the right side is a drawing of the backer reaching into another portal (if I knew them in person or if they had a photo available online). The cloud in the middle is the internet, some with the backer’s favorite animal. I actually really liked making these just cause they were fun, but also I feel like it’s a pretty good diagram of the system, especially with the rainbow. So it may appear again in another form later.
Also re-worked the thesis paper, integrating the research and experiments together more and trying to cut out the less important bits. It ended up being about half as long as the original. It also doesn’t have pictures. Maybe I should have put some in, but I sort of feel like it lives in the context of the rest of my work too, so if you’re reading the paper you can see the other stuff online. Plus, a lot of it is video content, so you’d have to see it playing to make sense anyways. So here’s the “final” paper I guess, although it’s not really final if I’m just getting started on the project. But I do feel like it helped me organize my thoughts a bit and sort of make a game plan going forward.
At the end of the paper I basically pulled out a few principles to guide my work for the rest of the term:
By leveraging spatial and technical constraints, use play as a tool to disrupt and challenge existing magic circles in the real world. Through this act, instigate more improvisational interactions, both human-to-human and human-to-computer.
Embrace alternative “truths” as a tool for sharing ideas and inviting participation from others. By provoking questions about future technological developments, hopefully this technique ensures the project’s core ideas can continue to exist beyond its original “actual” form.
Combine the affordances of the virtual world, the physical world, and the human imagination in order to create an experience formerly impossible in the virtual or physical world separately, and therefore more authentic to the hybrid nature of the experience.
Using both the strength and strangeness of network culture, create an uncanny experience in order to disorient users and disrupt our habitual routines as active participants in the network.
So that is sort of the manifesto I guess. Today’s the first day back and it feels a little weird just diving right in. But I at least have that. And today I at least brought in the supplies I got to start building the things. I’ve got my 2 monitors, LED lights, webcams, stands and some other stuff to start building.
Tomorrow I’m going to start building out a second light box at least. And maybe some new boxes. Just in white foam core for now while I figure things out. I’m thinking the white interior will be better for the lighting than the black, though I don’t know what the outside should be. I was thinking a super shiny reflective mylar so the thing almost blends into the environment. But I think I’ll wait on worrying about that. First I just want to get the basic stuff up and running so I can start playing with the crazy stuff. I also want to finish making backer rewards very soon so I don’t have to worry about those.
Monday’s committee review went pretty well I think. Overall, lots of good feedback that basically encouraged me to keep going with the Portal and adding more features and pushing it to its extremes. They mentioned that my instincts had served me well so far so I should continue to keep experimenting and adding more. A few notes:
look at possibly physically locating the portal in the physical space.
think about the portal as a new modality of interacting with space.
maybe the illusions could be considered “gimmicky” but that’s ok
play up the “magical” and visual tricks
cinematic solutions being applied to a different problem of interaction design
add more features
tie in the cinematic qualities
not just a background image, but more of an interactive environment
more precise interaction and layering of more reality
connect to the navigation
And then last night at the open house / holiday party thing it was fun seeing different people interacting with the box in their own ways. The little kids seemed really into it, which makes sense since it’s pretty playful. I really liked when people started putting their own objects into the box and interacting with it on that personal of a level.
As a side note, I was on Lake the other day when a pano-shooting car drove by. Of course I ran after it to get a picture. I don’t think it was a Google car though because there weren’t any sort of markings and I’m fairly certain the Google cars are all branded. But it was interesting to see the rig set up and everything. But maybe it WAS a Google car and I actually will be there in a few months time! Hopefully…
Finally went and picked up some lights to try out. Found a nice set of LED strips and some ultra thin pucks. So I built a little foam core backlit stage to eliminate cast shadows. It’s basically just a light box folded in half, though it took a lot longer to make than I thought it would. You can see bits of the lights inside cause I didn’t have enough foam core to make proper sides, but it works for now. I think ultimately I would want to cut it in acrylic. The mylar from Kizu works really well to help diffuse the light.
Also made little lamps/stands for the front lights. Probably not the most efficient or sturdy way to make the things, but I sort of like how they look, they’re a little quirky and kind of remind me of my old stochasticity space.
Here’s what it looks like with everything inside. Looks like a real little stage. Except for the Christmas mug holding up the little webcam.
And here’s the result of all that… Look at that chroma key! So clean and lovely. It still breaks a little, but it’s not really the lighting’s fault at that point, I think it’s just cause the camera is trying to re-white balance or something when there’s a lot of change to the scene. But I sort of don’t even mind when it breaks because the grey tone seems to sort of fit. Though I could probably still work on the lighting positioning.
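The keying itself comes down to a simple per-pixel test: is this pixel green enough to be background? A toy stdlib-only Python sketch, assuming an evenly lit green backdrop like the one the backlit stage produces (the hard-coded RGB grids stand in for real frames, and the `margin` value is illustrative):

```python
# Toy chroma key: composite a foreground frame over a background frame,
# replacing any pixel whose green channel clearly dominates its red and
# blue channels. Real keyers work in better color spaces and soften the
# matte edges, but the core per-pixel test is the same.

def chroma_key(fg, bg, margin=60):
    """Composite fg over bg wherever an fg pixel is 'green enough'."""
    out = []
    for fg_row, bg_row in zip(fg, bg):
        row = []
        for (R, G, B), bg_px in zip(fg_row, bg_row):
            is_green = G - max(R, B) > margin  # green dominates by > margin
            row.append(bg_px if is_green else (R, G, B))
        out.append(row)
    return out

# One row of pixels: green screen, a light-colored object, green screen.
green, cow, sky = (30, 200, 40), (240, 240, 240), (90, 150, 230)
fg = [[green, cow, green]]
bg = [[sky, sky, sky]]
print(chroma_key(fg, bg))  # cow kept, green replaced by sky
```

This also shows why the auto white balance breaks things: if the camera shifts colors, the green background drifts out of the `margin` test and the key falls apart even though the lighting hasn’t changed.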
I got the new HD webcams which are really nice, except that they’re maybe too good for their own good… When I put the green screen in the back it tried to white balance for the green and totally shifted the colors. Also testing the light-sensor-based animation spot, but for some reason the color doesn’t translate right so you can see a big difference…
I switched over to just using white paper, which put the colors back to normal, but this means I can’t use white in any of the objects… I guess it might be better though cause then I can back light it too to get rid of shadows.
Since I had the white paper in there I thought I’d try some drawing. Or rather, I asked Dustin to draw for me since the hole is on the left and he’s left handed…