Yes, it has been over a year since my last post here, so this is probably long overdue, but I realized that sharing a page is usually more algorithm-friendly than sharing a YouTube link, so perhaps it was time to dust off the ol’ WordPress blog.
It’s been a long, hard few years for many, and thanks to COVID-19 I’ve had far fewer exciting creative professional projects this last year or two than I normally would. I’ve gotten to help with fights for a few gigs here and there, some online conferences and workshops, and some classes; I’ve been doing more service work with the SAFD and other organizations; and I’ve made and modified some props I might talk about later. But the biggest continuing pet project of the past couple of years has been a constantly evolving, grant-funded project on motion capture as a pedagogical tool for movement training.
This actually started more specifically as a project about VR/AR and the potential of true first-person content as maybe a better way to learn some physical skills than the usual third-person video content. My first year at it was a Freedman Fellows project on that question, and I did some initial experiments in first-person VR video recording to see whether it was viable, too motion-sickness-inducing, or just not worth it.
But I pretty quickly realized that VR wasn’t ideal, because the viewer has no sense of where their own body is in space. That’s a problem for learning, because you can’t compare yourself to a model, and for safety, because you can’t really move freely around the space without running into things.
So then it was on to Augmented Reality, where you can see something 3D projected into your actual surroundings. I had high hopes for the Microsoft HoloLens, because I knew Case Western had already been working with them and had some in their possession, but when I was finally able to test a few out, I quickly realized they wouldn’t work: too narrow a field of view, too laggy, etc. The Oculus Quest 2 apparently has some AR tools coming soon, using its low-res black-and-white cameras to do a rough sort of AR, but the developer tools don’t seem to be available to the general public just yet. So that part of the project mostly got put in the theoretical-future bin.
But the other realization I quickly had with the 360 video was that true first-person perspective would require motion capture, since a physical camera can never really be put inside your head. You can do first-person video where you control the perspective, like Hardcore Henry did, but not in a way that lets the viewer decide where to look.
And this took me in another direction, based more around motion capture. The Freedman Fellowship really only funds hiring a student or other labor support, not equipment, so I started applying for other grants, and landed an ACES+ Opportunity Grant and then later a Glennan Fellowship. Between those and the tail end of my setup funds, I was able to buy a basic motion capture system.
I chose the Perception Neuron Studio system because I needed something relatively affordable, portable, and quick to set up, which the better optical systems just don’t offer. I also preferred straps over suits, since I wanted to be able to put different students in it without worrying about suit fit, having to launder things constantly, etc. (which ruled out Rokoko, the other main competitor to Noitom). I also wanted something I could purchase once and own, since that’s how grants work, and some of the systems out there are subscription-based for the software. I’d used an inertial/magnetic setup previously at the Paddy Crean workshops, so those were already on my radar.

[Embedded media: professionals from New Zealand, the UK, Norway, Canada and the US.]
I wanted to try two main things in terms of classroom usage: base mocap data for movement and postural analysis, and mapping movement onto other characters as a sort of modern descendant of traditional mask or puppetry work. The VR/AR thing is still on a possibly/maybe/eventually list, but classroom use came to the fore.
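To give a rough sense of what the postural-analysis half can look like once the data is off the straps, here’s a minimal sketch in Python. It assumes the joint positions have been exported to a CSV; the file name and column names ("Hips.x", "Neck.y", etc.) are made up for illustration and aren’t the actual Perception Neuron/Axis Studio export format, so treat this as a toy example rather than a recipe.

```python
# Hypothetical sketch: flag frames where the torso leans heavily forward,
# using joint positions exported to CSV (column names are illustrative only).

import csv
import math

def torso_inclination(row):
    """Angle (degrees) between the hips->neck vector and vertical (Y-up)."""
    hips = [float(row["Hips.x"]), float(row["Hips.y"]), float(row["Hips.z"])]
    neck = [float(row["Neck.x"]), float(row["Neck.y"]), float(row["Neck.z"])]
    spine = [n - h for n, h in zip(neck, hips)]
    length = math.sqrt(sum(c * c for c in spine))
    if length == 0:
        return 0.0
    # Cosine of the angle between the spine vector and the world up axis (0, 1, 0).
    cos_angle = spine[1] / length
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))

with open("capture_export.csv", newline="") as f:
    for frame, row in enumerate(csv.DictReader(f)):
        angle = torso_inclination(row)
        if angle > 30:  # arbitrary threshold for a noticeable forward lean
            print(f"frame {frame}: torso {angle:.1f} degrees from vertical")
```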
I originally leaned toward Unreal for the latter, as perhaps more of an industry standard in video games and film/TV production, but the student I was able to hire for the second year of Freedman Fellowship support was familiar with Unity instead, and with its purchase of the Weta tools and some of its new demos, Unity’s definitely still a player in the market.
Anyway, after working on this for over a year now, I’ve been able to test these ideas in the classroom:
…and yes, this is part of a longer series/playlist of videos I’ve made along the way, trying to figure out how to make this all work.
Up next, in a couple weeks:

…and I’ve just found out that our Cardinal Richelieu in the CPH Three Musketeers (which just went into rehearsals) is someone else I know, who’s worked on cartoon and video game mocap, so this may grow.
Looking forward to seeing where all this can lead!