Sharing Silhouettes


April 2019 (Upcoming)

An interactive new media installation using motion capture to portray current and past participants as ambiguous ghosts haunting a graveyard. Created as part of a senior thesis capstone project for the Digital Media program.

Learn more about this project by visiting the Toronto Design Forum.

Creative Team:
Lex Moakler
Kemdi Ikijiani
Zhouyang Lu
Anas Chohan

Project Proposal

I. ABSTRACT

Sharing Silhouettes is a motion capture project in the form of an interactive audio-visual art installation. The interactivity is based on data from a Kinect camera sensor, processed and visualized to reveal a participant's presence. The audio-visual scene depicts a graveyard, with the participant appearing as a ghost who can interact with the tombstones. Touching a tombstone reveals past movements, and it is deliberately ambiguous whether those movements are the user's own or those of prior users.

KEYWORDS

Motion capture, silhouette, interactivity, Kinect, presence, ambiguity, graveyard, ghost, tombstone, past movements


II. INTRODUCTION

Motion capture is used across many industries, including live-action film, video games, and animation. Motion capture, or mo-cap, is "the use of sensors on real bodies (objects and beings) to track motion and record the movements digitally, typically for transferring to virtual 3D imaging" [1]. The definition has since broadened, and motion capture has gained popularity in live-action performances where devices such as the Kinect camera are used without the need for a green screen. Sharing Silhouettes takes the live-action* path: instead of relying on a green screen and on-body sensors, we utilize the capabilities of a Kinect camera. Although the definition varies by field, the core stages remain constant. These include capturing the data, cleaning up the data, mapping the data to a character, and editing the data [2], all of which we implemented in our project. There are two main parts of our project that we hope the participant is able to observe: the present and the past. The present is straightforward: the participant is shown a reflection of themselves. The second part, the past, enables the participant to view a previous action. The finished product will explore the aesthetics of ambiguity in these past actions: whether they come from the participant or from other people. For our prototype, however, the past is shown as a still image rather than a moving picture.

III. RELATED WORKS

In researching our work, we surveyed a variety of similar artworks. The two most relevant pieces are Chris Milk's The Treachery of Sanctuary [3] and Scott Snibbe's Make Like a Tree [4]. Milk's project is a good example of the artistic effects that Kinect motion capture algorithms can produce. The Treachery of Sanctuary explores the creative process in three looping parts: birth, death, and transfiguration. The loop itself shows how a project like ours can be iterative, with rapid testing of concepts and "reset" redesigns. Milk's work also combines human silhouettes and birds in a two-dimensional visual style. The second project, Make Like a Tree, relates directly to our work in its thematic scene and depiction of past silhouettes. Snibbe creates a scene that mixes mild horror with creative, playful movement: user motions are captured and shown as shadows behind the trees, receding farther and farther away.

In addition to our two primary sources, we continue to find inspiration from three-dimensional motion capture artworks found earlier in the research process. The residual-image effect shown in Kung Fu Motion Visualization [5] is artistically compelling. Hsin-Chien Huang's performance art piece The Inheritance [6] shows how motion capture can be used to control virtual objects. Myron Krueger's pioneering digital media artwork Videoplace [7] explored the interactivity inherent in motion capturing two people in real time, and it continues to guide our design process, as the finished product will entail users interacting with other silhouettes.

IV. DESCRIPTION OF THE DEVELOPMENT PROCESS

Sharing Silhouettes is an interactive audiovisual installation that explores how we relate to traces of motion from ourselves and others. Members of the general public are invited to participate one at a time, immediately seeing a representation of their motion as the silhouette of a ghost. This ghost stands in a graveyard, surrounded by tombstones, beneath the night sky. When the user's silhouette makes contact with a tombstone, a ghostly representation of past movement is revealed. The user is confronted with the ambiguity at the heart of the project: "Are these ghosts also from me, or are they from others?" Continued interaction demonstrates that both are true: some traces are yours, and some are from other people. In both cases the silhouette's motion is shared, as each successive user is given control over the entire assemblage of motion. The distinction between your movements and those of others is ambiguous, because eventually someone else will be given control over both. Fig. 1 shows a visualization of what a user will see when they begin interacting with the finished project. Fig. 2 shows the current state of the project, as demonstrated with the prototype. Fig. 3 shows the intermediate step in Processing where the user's silhouette is isolated from the Kinect depth data.
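To make the shared-trace mechanic concrete, the sketch below outlines one way the recording and replay could be structured in Processing, assuming the silhouette has already been isolated as a binary image (as in Fig. 3). The function names, buffer length, and opacity are illustrative assumptions, not the installation's actual code.

    // Illustrative sketch: record the live silhouette each frame, archive it
    // when a participant leaves, and replay an archived clip as a ghost.
    ArrayList<PImage[]> pastClips = new ArrayList<PImage[]>(); // one clip per past user
    PImage[] recording = new PImage[300];                      // ~10 s at 30 fps
    int recFrame = 0;

    void captureFrame(PImage silhouette) {
      recording[recFrame % recording.length] = silhouette.copy();
      recFrame++;
    }

    void archiveCurrentUser() {
      // Called when a participant steps away; their traces join the shared pool.
      pastClips.add(recording);
      recording = new PImage[300];
      recFrame = 0;
    }

    void drawGhost(int clipIndex, int playhead) {
      // Called while the live silhouette is touching a tombstone.
      PImage[] clip = pastClips.get(clipIndex);
      PImage f = clip[playhead % clip.length];
      if (f != null) {
        tint(255, 120);   // translucent, ghost-like overlay
        image(f, 0, 0);
        noTint();
      }
    }

Because every archived clip sits in the same pool, whichever clip a tombstone reveals may equally be the current user's or a stranger's, which is exactly the ambiguity described above.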

Fig. 1. Visualization depicting a user’s silhouette, a graveyard full of tombstones, a moonlit night sky, and small trails of glowing lights hinting at the tombstones’ interactivity.

The design of the project has changed over the course of its development. Our initial goals were ambitious: two users interacting through two Kinect sensors at the same time. Exploring the implications of these choices through research and diagramming showed that both the number of users and the number of sensors should be kept to a minimum. The presence of two users amidst past users' movements could obscure the tombstone-based ghosts, distracting from the ambiguity we were focused on. We also found that using multiple Kinect sensors would increase the technical difficulty for two reasons. First, interfacing with two sensors simultaneously posed enough of a technical challenge on its own that it seemed beyond our existing skill sets. Second, the infrared dots projected by two Kinect sensors would share the same pattern and wavelength, making them indistinguishable [8]. Reducing the number of users and sensors allowed us to focus more closely on the ambiguity between one user's motion and the motions of past others. Continuing with the two-user, two-sensor design would have increased the project's complexity without an associated benefit to the user experience or our artistic goals. The outcome of these decisions is shown in Fig. 4, a diagram of the project's technical components.

Fig. 2. The prototype of Sharing Silhouettes. The user's ghost is touching a tombstone, revealing another ghostly silhouette to the left.

Fig. 3. Debugging interface screenshot, showing on the right how the user's silhouette is processed.

Fig. 4. Second version of the project’s system diagram, showing its hardware and software components.

V. CONCLUSION

Regarding the title we chose for our prototype, Sharing Silhouettes, it should be noted that the term "sharing" is used metaphorically rather than literally. Representing participants with silhouettes ultimately worked in our favour thanks to the visual interpretation of the depth map provided by the Kinect. "Sharing" also captures the thought process of a participant when they first encounter this silhouette: their first thought will most likely be "who is that?" rather than "that's me." For that span of time, the participant appears to be "sharing" a silhouette with an unknown person. Using a Kinect together with the Processing and Max applications, we were able to show the participant another silhouette. Whether it depicted a past or a present action was left to their interpretation.

VI. TASK DISTRIBUTION

We divided the tasks for this project based on our strengths, weaknesses, and experiences.

i. Responsibilities for the Prototype

Lex was the project manager, as he had more experience making projects like this than anyone else in the group. He made sure that everyone was on task and took charge of planning the project. Lex retrieved the motion capture data from the Kinect, using Processing to read the data and send it to Max in two ways: depth imagery is streamed as a texture through Spout, while skeleton coordinates are sent as OSC messages using UDP libraries.
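A minimal sketch of this bridge is shown below, assuming the oscP5 and Spout libraries for Processing; the Kinect calls are stubbed out, and the sender name, port, and OSC address are placeholder values of ours rather than the project's actual configuration.

    import spout.*;   // texture sharing with Max
    import oscP5.*;   // OSC messaging
    import netP5.*;

    Spout spout;
    OscP5 osc;
    NetAddress maxApp;

    void setup() {
      size(640, 480, P3D);                        // Spout requires an OpenGL renderer
      spout = new Spout(this);
      spout.createSender("SharingSilhouettes");   // hypothetical sender name
      osc = new OscP5(this, 12000);
      maxApp = new NetAddress("127.0.0.1", 7400); // hypothetical Max listening port
    }

    void draw() {
      image(getDepthImage(), 0, 0);   // draw the depth imagery...
      spout.sendTexture();            // ...and stream the frame to Max via Spout

      PVector head = getHeadJoint();  // a single joint, for brevity
      OscMessage m = new OscMessage("/skeleton/head"); // hypothetical address
      m.add(head.x); m.add(head.y); m.add(head.z);
      osc.send(m, maxApp);            // skeleton coordinates out as OSC over UDP
    }

    // Stand-ins for the Kinect library calls, so the sketch is self-contained.
    PImage getDepthImage() { return createImage(width, height, RGB); }
    PVector getHeadJoint() { return new PVector(width / 2, height / 2, 0); }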

Kemdi was responsible for using Max to build a scene around the data coming from Processing. She made a 3D environment in Max to generate different types of interactive elements, and enabled collision reporting with the tombstones using jit.phys.world. She also modelled the tombstones in Maya and is responsible for the audio component. Currently, she is working on getting textures and bump mapping to work in Max.
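The collision reporting itself happens inside Max via jit.phys.world; purely to illustrate the underlying idea, a simplified two-dimensional version of the test can be written in Processing as follows (the brightness threshold and the assumption that the tombstone rectangle lies within the mask are ours).

    // Returns true if any bright (body) pixel of the binary silhouette mask
    // falls inside a tombstone's screen-space rectangle.
    boolean touchesTombstone(PImage mask, int tx, int ty, int tw, int th) {
      mask.loadPixels();
      for (int y = ty; y < ty + th; y++) {
        for (int x = tx; x < tx + tw; x++) {
          if (brightness(mask.pixels[y * mask.width + x]) > 200) {
            return true;   // the user's ghost is touching this tombstone
          }
        }
      }
      return false;
    }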

Anas worked with Blender. Initially, we planned to connect Blender directly to the Kinect to create real-time interactive visual elements, but we faced technical hurdles, such as being unable to combine Blender and Max output, so as a group we decided not to include Blender in the real-time component. For the prototype, however, Anas' pre-rendered components, such as the trees, were added to Max. He is mostly responsible for UV mapping (projecting a 2D image onto a 3D model's surface for texturing) and texture baking (rendering surface detail into reusable texture images in Blender).

Zhouyang was responsible for creating the 2D visual components of the project, including the night-sky video, which he made in Adobe After Effects. During the research stage of the project, Zhouyang showed interest in visual effects applied to recorded footage, such as a mirror effect and a video-lag effect. Zhouyang created some of these effects in Processing, since they are applicable to the 2D silhouettes we are creating. He also has experience in web development, so he made a website for us, which was useful for regular reporting of tasks and showcasing our creative process.
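As an example of the kind of effect described, a video-lag echo can be sketched in a few lines of Processing; the buffer length and overlay opacity below are arbitrary choices of ours.

    // Overlay the silhouette from one second ago on top of the live frame.
    PImage[] history = new PImage[30];   // 30 frames of delay at 30 fps
    int head = 0;

    void drawWithLag(PImage current) {
      image(current, 0, 0);              // live frame
      PImage delayed = history[head];    // oldest stored frame
      if (delayed != null) {
        tint(255, 100);                  // fainter echo of the past
        image(delayed, 0, 0);
        noTint();
      }
      history[head] = current.copy();    // recycle the slot we just played
      head = (head + 1) % history.length;
    }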

ii. Responsibilities for the Paper

Lex:
Development process
Abstract
Kemdi:
Introduction
Conclusion
Anas:
Task Allocation
Zhouyang:
Related Works
References

VII. REFERENCES

[1] Rouse, M. 2018. Motion Capture (Mo-Cap). Accessed December 2018. https://whatis.techtarget.com/definition/motion-capture-mo-cap
[2] Mocappy. n.d. What is Motion Capture? Accessed December 2018. http://mocappys.com/what-is-motion-capture/#.XAbpghNKjfY
[3] Milk, C. 2012. The Treachery of Sanctuary. Interactive Artwork. http://milk.co/treachery/
[4] Snibbe, S. 2006. Make Like a Tree. Interactive Artwork. www.snibbe.com/projects/interactive/makelikeatree/
[5] Gremmler, T. 2016. Kung Fu Motion Visualization. Vimeo. Retrieved from https://vimeo.com/163153865/
[6] Huang, H. 2014. The Inheritance. Taiwanese Dancers Project Memories with Xsens Motion Capture. Retrieved from http://www.storynest.com/pix/_4proj/per_interitance/p0.php?lang=en/
[7] Krueger, M. 1974. Videoplace. Media Art Net. Retrieved from http://www.medienkunstnetz.de/works/videoplace/
[8] Maimone, A. and Fuchs, H. 2011. Encumbrance-free telepresence system with real-time 3D capture and display using commodity depth cameras. In Proceedings of the 2011 10th IEEE International Symposium on Mixed and Augmented Reality (ISMAR '11). IEEE Computer Society, Washington, DC, 137-146.


* Refers to a live performance as opposed to making a film.
This is group three’s technical paper (EECS 4700 course) for the fall term.
© 2018 Copyright held by the owners/authors