Box by Bot & Dolly is the most impressive and immersive example of 3D projection (aka projection mapping) I have yet come across. Whereas nearly all previous examples of 3D projection have employed projection onto static objects, Box showcases what you can do with a combination of 3D projection and movable objects.


We’ve been working on similar projects for a couple of years, albeit on a smaller scale and mainly driven by research interests (see the above video for some of the tests we did four years ago in collaboration with Kollision and BIG). To me, this type of technology holds a lot of potential, so I’m very excited to see it become more mainstream as professional interaction designers start working with it. One of the tricky aspects of 3D projection onto moving objects is that it requires very precise calibration: if the calibration of the projectors is even slightly off, pixels spill over the edges of the object and the illusion breaks down. As long as you’re projecting onto a static object, you’re more or less home free once the projectors are calibrated, but when you project onto a moving object, you also need to track that object with a high degree of precision. Box sidesteps this problem by using pre-programmed robots to move the displays, so the pose of every projection surface is known at every moment.
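The core of such a setup can be reduced to a small geometric computation: if the projector's calibration (its intrinsics and pose) and the object's current pose are both known, every point on the object's surface maps to a specific projector pixel. Here is a minimal sketch of that pinhole-model mapping; the function, matrices, and numbers are illustrative assumptions, not the actual pipeline used in Box or in our demos.

```python
import numpy as np

def project_point(K, R, t, X_obj, R_obj, t_obj):
    """Map a 3D point in object coordinates to projector pixel coordinates.

    K            -- 3x3 projector intrinsics (from calibration)
    R, t         -- projector pose: rotation and translation, world frame
    X_obj        -- 3D point on the object's surface, object coordinates
    R_obj, t_obj -- the object's current pose, known in advance when a
                    pre-programmed robot moves the surface
    """
    X_world = R_obj @ X_obj + t_obj   # object frame -> world frame
    X_proj = R @ X_world + t          # world frame -> projector frame
    u = K @ X_proj                    # pinhole projection (homogeneous)
    return u[:2] / u[2]               # normalize to pixel coordinates

# Illustrative calibration: 800px focal length, principal point (640, 360)
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
I3, zero = np.eye(3), np.zeros(3)

# A surface point 1 m straight ahead lands on the principal point
print(project_point(K, I3, zero, np.array([0.0, 0.0, 1.0]), I3, zero))
```

Every frame, the renderer evaluates this mapping for the object's current pose and warps the content accordingly; a tiny error in `R_obj` or `t_obj` shifts the pixels off the surface, which is exactly why pre-programmed robot motion (where the pose is known exactly) makes the problem tractable.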

In our most recent work, we’ve developed so-called Tangible 3D Tabletops, which combine interactive tabletops with 3D projection onto tangible, movable objects. One of the demonstrators is the Tangible Urban Planning demo above. Although there are still a number of interesting research questions to address (for instance, how we can track and project onto objects that people pick up and carry around a room, how we can integrate it with other types of interfaces, and how we can project onto shape-changing objects), I’m fairly sure that projects such as Box can help pave the way for a wider uptake of 3D projection.