Jody died of cancer. It was what they call a “childhood” cancer: a soft-tissue sarcoma. It started in his hip and was diagnosed in the autumn of 2002. He and his team of oncologists beat it back, and by the following summer he was in remission and had a bright new lease on life.
But then there came that day in October. We were talking online, both of us at work, over ICQ, when he started complaining of a pain in his side. A familiar kind of pain. He hoped it was something digestive. I was a little concerned, of course. What I remember is leaving at lunchtime to buy a 128 MB USB drive… my first. While I was driving, I listened to The Mighty Quinn by Manfred Mann. It’s an upbeat, empowering kind of tune, and I was hoping that by the time I got back, Jody’s pain would have blown over. But it hadn’t. Friends shooed him off to the doctor. You can guess the rest.
Anyway, the song has had a complicated emotional impact on me ever since. I still think it’s lovely and hopeful; a song about the coming of some strange, vague messianic figure who’ll solve all the problems just by his presence. I still want to believe in stuff like that. Who doesn’t? I still love the song, even though it’s come to be associated with some of the worst things a living being has to face. Since then, three of the cats I’ve cherished have died, and my friend Georgette’s health problems finally overcame her a year ago. I thought it was appropriate to remember them too. I guess I could have also mentioned Jenny, but she died just before Jody’s initial diagnosis, and I had to draw the line somewhere. Besides, I made a tribute to her when she died.
So, the piece itself. It runs about a minute and a quarter. The initial inspiration was something Seth, the cat starring in it, did spontaneously one day last summer. I was doing something on the main computer, listening to The Mighty Quinn, and I looked up to see Seth sitting there on the couch like a human or a bear or something. Big and furry, like an Eskimo. And Seth is a big cat: he’s 20 lbs., and may just be the largest housecat I’ve ever seen in person. A gentle giant. I quietly grabbed the S100 and videoed him for a little over a minute, before my arms started to get tired from the weird angle.
I’ve had it in my mind to do something with that footage for a long time, and kinetic text of the lyrics seemed like a cool idea. Recently I’ve started learning some of the ropes in Adobe After Effects, and, looking for a way to mark the occasion I mentioned at the outset of this entry, it just seemed like the time to see what I could accomplish.
The first thing I decided to do was alter the soundtrack. What you’re hearing is actually what the camera recorded that day, except that I wanted to double the chorus at the beginning. In the original version of the song, there’s a single chorus at that point, and it felt insufficient. So I opened the audio from the raw footage in Adobe Audition, carefully inserted a second chorus, and exported the new version as a WAV file. Then I opened the original footage in Adobe Premiere, replaced the original track with the new one, and exported the result. This would eventually serve as the background for the kinetic text.
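In principle, the splice is simple: copy the audio frames of the chorus and insert them again right after the original. A rough sketch of the idea using Python’s standard wave module (this isn’t what I actually did in Audition; the function name and the timestamps are just stand-ins for illustration):

```python
import wave

def double_segment(src_path, dst_path, start_s, end_s):
    """Insert a second copy of the segment [start_s, end_s) right after itself.

    start_s / end_s are the (hypothetical) chorus boundaries, in seconds.
    """
    with wave.open(src_path, "rb") as src:
        params = src.getparams()
        rate = src.getframerate()
        frames = src.readframes(src.getnframes())

    # Bytes per audio frame = sample width x channel count.
    width = params.sampwidth * params.nchannels
    a = int(start_s * rate) * width
    b = int(end_s * rate) * width

    # Everything up to the end of the chorus, the chorus again, then the rest.
    out = frames[:b] + frames[a:b] + frames[b:]

    with wave.open(dst_path, "wb") as dst:
        dst.setparams(params)
        dst.writeframes(out)
```

Audition just makes the same cut-and-paste visual, with waveforms to line the edit up against.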
I imported the footage into After Effects, made a still from one frame of the video, and used that as the background for building the kinetic text compositions. A still frame puts a lot less strain on the processor and speeds up the process. After that, it was a matter of creating new compositions for the lyrics. I had to learn a lot in order to do this, but luckily there are hundreds, literally hundreds, of tutorials on YouTube about how to do things in After Effects. How do you “scrub” the track so you can hear the music and time the animations to when words begin and end? There’s a video for that. How do you animate text along a path? There’s a video for that. How do you use a null object to control a camera view? There’s a video for that…
I started doing this on Wednesday and it took me till Friday night to complete all the kinetic text animations. There are 23 video exports showing the progress along the way, including this final cut. By last night, I had a version with the original footage in as the background. That version was fine, but the text appeared superimposed… which it was, of course… but it seemed divorced from the footage. The footage is handheld and it moves, because I couldn’t hold perfectly still. The kinetic text, though, stays static through those inadvertent camera motions, and I wanted to see if I could make a version where it almost looks like I filmed the text at the same time I was filming Seth.
After Effects can analyse footage to create a motion track, and then apply it to other layers, like the composite layer I made of all the text animations. It took a couple of attempts, but I got a version that tracked really well and, when applied to the composite layer, made the text follow the motions of the camera (if not the slight rotations; they’re inconsequential enough that I didn’t bother with them). The issue then, though, was that all the compositions were the same size as the background layer… 1280x720. So anything that moves them off-centre, like tracking camera motion in another layer, means that animations reaching the edge of their compositions suddenly seem to be cut off while still on screen, instead of naturally going “off stage” at the edges, and it’s highly noticeable. So I had to figure out what to do about that.
I duplicated the project so that if I mucked it up, at least I’d have the original version. Then I learned how to use an expression on the composite layer’s position (specifically, “position + [120, 120]”) to shift it 120 pixels down and 120 pixels to the right, so the animations sat much closer to their locations in the version that doesn’t track camera motion. But I still had the issue of animations seeming to be cut off on screen. The answer there was to increase the field size of each of the two dozen or so text animations from 1280x720 to something that would give the animations more elbow room… 1500x1000. I figured that no matter how far the tracking might “jerk” the composite layer in any given direction (and there are a couple of rather violent jolts in the main video), that would be enough room to keep the composite’s edge from ever appearing on screen. Since the centre point stayed constant, I hoped the composites wouldn’t appear to change position, and that turned out to be the case. After that, it was just a matter of tweaking a couple of video end points so elements that had left the screen didn’t seem to stop and hang around at the edges; making sure masks covered the things at the edges of the comps; and then easing a few videos (like the one of the “pigeons”) a bit so they appeared exactly where I wanted them. And it was done.
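The elbow-room arithmetic is easy to sanity-check. A little sketch, assuming the enlarged comp stays centred on the 1280x720 frame (the frame and comp sizes are the ones above; the 90-pixel jolt is a made-up example, not a measurement from my footage):

```python
# Frame and enlarged-composition sizes from the project.
FRAME_W, FRAME_H = 1280, 720
COMP_W, COMP_H = 1500, 1000

# With the comp centred on the frame, the margin on each side is
# half the difference in each dimension.
margin_x = (COMP_W - FRAME_W) // 2   # 110 px on the left and right
margin_y = (COMP_H - FRAME_H) // 2   # 140 px on the top and bottom

def edge_stays_hidden(track_dx, track_dy):
    """True if a tracking offset of (dx, dy) pixels keeps the comp's
    edges entirely outside the visible frame."""
    return abs(track_dx) <= margin_x and abs(track_dy) <= margin_y

# A hypothetical 90-px sideways jolt from the handheld camera:
print(edge_stays_hidden(90, 40))   # True: within the 110/140 px margins
```

So as long as no jolt shoves the layer more than about 110 pixels sideways or 140 pixels vertically, the comp edges never show; the jolts in my footage are well inside that.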
I’m kind of proud of this. It’s no big deal, but I learned a lot; it represents something I couldn’t do, but only admired in the work of others, when the week started; and it was done with the friends I’ve lost in mind throughout. If you’ve bothered to read this far, or just happened by, please take a couple of minutes and watch it once or twice. Then it all has meaning.