
Identifying and Tracking Experiments

Written on 4th February 2012 in Portals, Thoughts

One of the things I’m really still trying to figure out is how to track the objects that go in and out of the box so that other stuff can appear based on them. I have a couple of issues, though, getting this working the way I’d like. First is the fact that my cameras are constantly auto-focusing and auto-exposing, so it’s hard to keep a consistent image. I sort of realized this earlier but didn’t think it was going to be a huge issue… but it kind of is. I’ve ordered another camera to play with and try out. It’s lower resolution but gives more control, so we’ll see. Another issue is the size of my “interaction space.” Compared to “normal” computer vision setups my mini-set is pretty small, so things like trackers don’t do so well at that scale. It’s also hard to use my cameras for multiple purposes simultaneously (i.e. layering on the display, streaming a feed, performing computer vision calculations). And having depth makes things difficult, especially when I’m not using something like the Kinect.

In this first test I shot some stop motion of the cows milling about and tried using the RFID tag as the object identifier, activating the animation specific to that object. This sort of works, I guess, but the object can’t really relate to the animation live in any way, since it’s not tracked by the camera.
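The RFID half of this is at least conceptually simple: the reader spits a tag ID over serial, and the sketch looks up and plays the matching clip. A rough Processing sketch of the idea — the tag IDs, filenames, and serial settings here are all placeholders, not the actual hardware:

```processing
import processing.serial.*;
import processing.video.*;

// Tag IDs, filenames, and serial settings below are placeholders.
HashMap<String, String> clips = new HashMap<String, String>();
Serial rfid;
Movie current;

void setup() {
  size(640, 480);
  clips.put("4500B9A2C1", "cows.mov");      // hypothetical tag -> clip mapping
  clips.put("4500D17F03", "chickens.mov");
  rfid = new Serial(this, Serial.list()[0], 9600);
  rfid.bufferUntil('\n');  // most readers terminate each tag read with a newline
}

void serialEvent(Serial s) {
  String tag = trim(s.readString());
  if (clips.containsKey(tag)) {
    if (current != null) current.stop();
    current = new Movie(this, clips.get(tag));
    current.loop();
  }
}

void movieEvent(Movie m) {
  m.read();
}

void draw() {
  background(0);
  if (current != null) image(current, 0, 0);
}
```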

I then tried some actual computer vision stuff with OpenCV. I found a little blob-detection Processing demo and was able to sort of distinguish objects by their area, which is kind of cool. But of course this only works when things are very different in size. And when I pick up an object it stops being detected. And the area of an object varies depending on where it’s placed in the space. ALSO, a big issue: for some reason feeding the webcam directly into Processing makes it really slow, whereas if I use the CamTwist feed it runs in real time. But I need CamTwist to layer the various elements, and I can’t capture the Processing sketch to feed it back into the CamTwist collage because then it would just be stuck in this weird footage loop. I’m discovering these things are all quite easy and possible when done separately but become quite difficult when I try to do them simultaneously.
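For the record, the area-based sorting amounts to something like this. A rough sketch of the idea using the OpenCV for Processing library; the thresholds are made up and would need tuning per setup:

```processing
import processing.video.*;
import gab.opencv.*;

Capture cam;
OpenCV opencv;
float smallCow = 800;    // hand-tuned area thresholds -- made-up numbers
float bigCow   = 2500;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  opencv = new OpenCV(this, width, height);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0);
  opencv.loadImage(cam);
  opencv.gray();
  opencv.threshold(80);            // objects have to contrast with the light box
  for (Contour c : opencv.findContours()) {
    float a = (float) c.area();
    if (a < smallCow) continue;    // skip noise
    noFill();
    stroke(a > bigCow ? color(255, 0, 0) : color(0, 255, 0));  // crude size classes
    c.draw();
  }
}
```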

It’s also quite difficult because I really know very little about real-time computer vision.

So I’m kind of frustrated trying to figure out what to do about this stuff. I really don’t want to use any sort of AR-marker thing because I think they’re really ugly and totally break the experience. So if I track anything it would probably have to be color. But I can’t really do that until my camera stops going all auto mode on me. I’ve thought about trying to track markers on the bottoms of things, maybe with a camera inside the light box looking upward (like the Reactable). But I’m not totally into the idea of having to deal with even MORE cameras. Plus I’m not sure it could even see through the mylar, or whether the size of the marker would make a difference.
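If I ever do get the exposure locked, basic color tracking in Processing can be as dumb as scanning every frame for the pixel closest to a target color. A minimal sketch, assuming a locked exposure and one strongly colored object:

```processing
import processing.video.*;

Capture cam;
color target = color(200, 30, 30);  // the color to chase -- assumes exposure is locked

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0);
  cam.loadPixels();
  float closest = Float.MAX_VALUE;
  int bestX = 0, bestY = 0;
  for (int y = 0; y < cam.height; y++) {
    for (int x = 0; x < cam.width; x++) {
      color c = cam.pixels[y * cam.width + x];
      float d = dist(red(c), green(c), blue(c),
                     red(target), green(target), blue(target));
      if (d < closest) { closest = d; bestX = x; bestY = y; }
    }
  }
  noFill();
  stroke(255, 255, 0);
  ellipse(bestX, bestY, 24, 24);   // mark the best match
}
```

Which is exactly why the auto-exposure has to be dealt with first: the moment the camera re-exposes, every pixel value shifts and the “closest match” jumps somewhere else entirely.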
