I came out to the back garden yesterday and found this lovable guy playing on the patio!
Another test for Blender using the camera tracking feature. I’m still practicing with this quite a bit and trying to check out its capabilities, since I still get quite excited by the results. Considering this was footage just shot on my iPhone 4S, it turned out pretty well. Blender currently only has camera presets for the top-end cameras, so finding sensor sizes and focal lengths for the iPhone on the net can be tricky. However, I found a post over on Blenderartists where Malcando has been collecting settings, which seems to have worked perfectly.
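If you can’t find your phone in a settings list, you can sanity-check the numbers yourself, since Blender’s tracker just wants a sensor width and focal length in millimetres. Here’s a minimal sketch of the arithmetic; the iPhone 4S figures are assumptions pulled from published hardware specs (roughly a 1/3.2″ sensor with a 4.28 mm lens), not values confirmed in the Blenderartists thread.

```python
# Sketch: derive the 35mm-equivalent focal length from a phone camera's
# real sensor width and lens focal length, to cross-check tracking settings.

def equivalent_focal_length(focal_mm: float, sensor_width_mm: float) -> float:
    """35mm-equivalent focal length: scale by the crop factor
    (full-frame sensor width is 36 mm)."""
    crop_factor = 36.0 / sensor_width_mm
    return focal_mm * crop_factor

# Assumed iPhone 4S values (from published specs, hedged):
IPHONE_4S_SENSOR_WIDTH_MM = 4.54  # ~1/3.2" sensor
IPHONE_4S_FOCAL_MM = 4.28         # actual lens focal length

if __name__ == "__main__":
    eq = equivalent_focal_length(IPHONE_4S_FOCAL_MM, IPHONE_4S_SENSOR_WIDTH_MM)
    print(f"35mm-equivalent focal length: {eq:.1f} mm")  # roughly 34 mm
```

If the equivalent focal length you compute matches what the manufacturer quotes, the sensor width and focal length pair you’re about to type into Blender’s camera settings is probably consistent.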
I took the opportunity to test out my iRover rig too. It was a productive experience, since I’ve discovered a few things that need sorting out before I use him again, particularly his feet. They need some tweaking, since they don’t seem to rotate around the Z axis without twisting. I had previously added IK/FK switches too, but I don’t think he’s going to need any FK control for his feet, so I may simplify the rig and remove the FK controls. I also need to add some custom control bones to make the rig easier to pose. And I still have to work on his textures, particularly his eyes, which I intend to have as a digital-style display.
Here’s another motion track using Blender 2.62. I thought I would try something a bit more static to give a better example of how well the object moves with the camera. Another little lesson learnt with this one: I managed to figure out how to pull out a reflection pass to add to the comp. These tracking experiments are as much an experiment in the use of Blender’s node system as they are in testing out the tracking. Of course, I couldn’t just have a completely static object, so I had to throw a little bit of character animation in there as well!
This is the first relatively successful track I’ve managed to create with the new motion tracking features in Blender 2.62. I wasn’t really concentrating on the animation too much; I just gave the little dragon guy a bit of life to see how well he would sit among some live footage. I’m still trying to get the hang of it, as I haven’t done much 3D motion tracking in the past (plenty of 2D though). I shot this footage on my iPhone at my desk, so the compression and motion blur on the clip aren’t overly conducive to tracking, but with a bit of fiddling and tweaking by hand I eventually got a decent track. I had downloaded various clips from the internet (if you do a search for ‘free tracking clips’ you’ll find a few places to download them from), but I was having trouble finding sensor sizes and focal lengths for those particular clips. I then decided to shoot some footage with my phone, since I already knew those details, and finally got this result. It’s far from perfect, but it’s a good start with a Blender feature which I can see fast becoming a powerful addition to my arsenal of VFX tools.
I’ve been experimenting with the Cloth Modifier and the Wind Force Field this week. A client wanted a spring-season feel to their logo, so I decided to play with the part of their logo which is in the shape of a flower, and try to blow it across the screen with a few others before it settles in place. I found the perfect tutorial on Vimeo by the ever-faithful CG Cookie, so thanks to them for helping out once again.
It took a while to play around with the force fields and get the results I wanted. The flower was set up by creating a plane and subdividing it a few times, then applying the flower as a PNG texture with an alpha channel. The flower was duplicated, repositioned and scaled a few times. One big thing I’ve learned is that both the cloth modifier and the force field seem to depend a lot on the global scale of the scene, in the same way that lights do, so I had to adjust the cloth settings on the smaller flowers differently from the larger ones. It’s something I’m going to have to play around with, and get used to building models at the correct size. A simple plane was set up as a floor for collision and shadow catching. Here are a few of the experiments, and the final result.
I fancied having a play around with Cycles, so I downloaded the Cycles trunk (2.60.5 r42078) from Graphicall.org. I really love how easy it is to work with, and I was getting some really nice results using the Nvidia Quadro with CUDA. You can get some gorgeous lighting setups quickly and easily once you have a play around. There are still a few things to iron out; for example, I found that there didn’t seem to be any Fresnel effect with glass materials (maybe I’m just missing something). Here are a few results:
Here’s a scene I set up with a few objects I had lying around from various jobs. I just threw them into a scene, adding a floor plane with a texture I downloaded from CGTextures.com. Again, it was pretty easy to set up, although I did get a couple of crashes along the way, but then this isn’t a stable build I’m using. The first image is a setup with just one sun lamp. For the second, I added some cubes to the right, off camera, to simulate some building shadows, and added some depth of field, again surprisingly easy to set up. The first two images are the Cycles renders from the 3D view. Where I hit a little trouble was with the final render in the third image: it seems a lot darker than the preview render. I’m not sure if something was set that I don’t know about; the only thing I thought it might be was the exposure setting, but this didn’t seem to fix the problem. I’ll have to investigate further.
This is Sally; she will be the other star of the iRover short, in other words iRover’s owner. I built her head over the past couple of weeks, and her body was adapted from the Pepper model I built a while back. I was never happy with Pepper’s head; the topology was very messy and didn’t animate very well. Here’s my initial sketch of her. I did play around with her proportions quite a bit.
So I did a bit of searching and found a few good professional walkthroughs on YouTube that helped me sort the head topology out. So a quick thank you to Angela Guenette of Ponder Studios for her ‘Sintel’ making-of videos:
And also a thank you to Glen Southern over on 3D World’s YouTube channel for his 14-episode topology walkthrough. Be warned, the whole tutorial is around 2 hours long.
After studying these videos for a bit, I started building Sally’s head and got a better feel for the flow of the curves. It looks and feels a lot better now, and I’m 100% confident that it will animate a lot better.
This is another work in progress of the star of the short film which I’m working on with Craig Smith, mentioned previously in the Apartment mockup post. Obviously he’s not textured yet, and the animation is pretty rough at the minute; I didn’t spend much time on the run cycle since this was just a test to make sure the rig was working okay. Although he’s a robot dog, I still think I need to use some artistic licence and loosen up his legs a bit, as his movement doesn’t feel quite right yet. I also think his feet may need tweaking; they’re too long as they are. The rigging is all pretty simple; it’s mostly ‘hard’ rigged, in other words there aren’t many soft areas in the vertex weights.
Here are some of the original sketches I did. I think he looked a lot more like K9 back then, when I really wanted him to look more like Gromit. I definitely prefer the floppy ears as they are now to these antennae types…
I’d started building him in C4D version 8.5 a few years ago. I’d managed to get some really nice renders there, but the rigging wasn’t so hot in that version, so this was as far as I’d got…
But now that I know Blender a lot better, I’ve a better feeling about getting further with the story. I’m still working on the animatic, so I’ll post that when I get the first draft done.
I haven’t had much time for posting recently, but I have been doing a few bits and pieces here and there. I’ve finally moved up to Blender 2.59, and I’m looking forward to some of the integration from the GSoC developments, particularly the tracking feature. We’re all gonna have a lot of fun with that, tracking robots into our backyards, etc!
In the meantime, I’ve been trying to plan out a set for a short animation of a script written by Craig Smith over at Motion Comics. It’s been in development for what seems like forever, simply because I can only work on it between jobs. More on all that later. So far I’ve been trying to build an interior to get an idea of how things will play out, and have roughed in some furniture. I’ve also put some figures in there to get an idea of scale. I had originally built it as an apartment, but I think it’s going to end up being a house, so I’ll be losing the bedroom. There’s still a lot of work to be done on it, but it’s a start, and it’s the first time I’ve ever built a proper interior set.
I recently bought a copy of ‘Stop Staring – Facial Modeling and Animation Done Right’ by Jason Osipa. For anyone wanting to perfect their character animation and acting, this book is a must. A couple of things though: firstly, the book is mostly geared towards using Maya. However, a lot of the basic principles can be translated to many other software packages with a bit of tweaking, which brings me to the second point: in my opinion, you would need a good intermediate knowledge of your chosen 3D program to be able to work with the book if you’re not using Maya. All that aside, there is an amazing amount to be gleaned from the book, from lip sync, the main principles of mouth shapes or ‘visemes’ and how to control them, all the way through to the correct topology when building a head. Well worth the money.
So I’ve been trying to translate the principles, using one bone in the armature to control shape keys with IPO drivers in Blender, to get some basic lip sync working on a rig. I haven’t gone into any of the eye controls or expressions yet, but that will just be an extension of what I’ve created here. I’ve only used three shape keys here: the X location of the bone controls the narrowness of the mouth, and the Z location controls the openness. It’s just a basic setup to make sure I know how it all works before I take it onto a proper rig.
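The driver logic above is simple enough to sketch outside Blender. This is a minimal stand-in for what the IPO drivers do, mapping a control bone’s local X and Z locations to shape key values clamped to [0, 1]; the assumption that the bone travels 0 to 1 Blender unit on each axis is mine, not taken from the actual rig.

```python
# Sketch of the mouth-control driver logic: one bone, two location
# channels, each driving one shape key value in the range [0, 1].

def clamp01(v: float) -> float:
    """Shape key values are limited to the 0..1 range."""
    return max(0.0, min(1.0, v))

def mouth_shape_values(bone_x: float, bone_z: float) -> dict:
    """Map the control bone's local location to shape key values.
    Assumed ranges: each axis runs 0..1 Blender units (hypothetical)."""
    return {
        "narrow": clamp01(bone_x),  # slide along X -> narrower mouth
        "open": clamp01(bone_z),    # slide along Z -> mouth opens
    }
```

In Blender 2.49 itself, each shape key’s IPO would get a driver pointing at the control bone’s LocX or LocZ channel with a similar one-to-one curve, so the clamping here just mirrors the 0..1 limits on the keys.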
I’ve spent a lot of time over the past couple of months going over rigging again and again, building a few different types of characters and rigging them to try and get it drilled into my head exactly how Blender deals with the subject, and I think I’m pretty much there with it now. As I’ve said before, I’m still working with Blender 2.49. Things may be a little easier in Blender 2.5, but from what I’ve seen, most of what’s learned in 2.49 will still apply to 2.5; things are just implemented more efficiently and are easier to find in the interface.
All these characters are four-limbed humanoid types, but on closer inspection they have quite different body types. It really boils down to the rigs being pretty much the same setup; the big difference is how the armatures are weighted to the mesh. The female character is a pretty usual setup, but the alien character has his body and head as all one shape with no neck, so it took a bit of work with the weight painting brush to get a nice smooth transition of movement down his body. He also has no shoulders or hips, so his upper arms are still a little tricky to animate without cutting into his body. I may be able to fix this with a segmented bone (curvy bone), but I’ll have to play around with that to see how well it works. I also learned how to do a switchable IK/FK setup with this guy too; the FK really helps with his arm swings. His face is fully rigged now too, so at some stage soon I will post up some lip sync with him.
The robot character is an old design I’ve had in my head for many years, and only now have I got around to bringing him to life. His weighting is pretty straightforward; I used vertex groups for him instead of weight painting, so that the solid shapes were absolutely weighted to each bone. I may try using segmented bones on his arms and legs as well, just to loosen his movements up a little; he feels a little TOO stiff at the minute. But again, I’ll have to experiment with that a bit. He still needs an FK setup too, but the biggest difference with him is how I dealt with his eyes. The eyelids and eyeballs are actually built as spheres within the main mesh and set as vertex groups, then each eye is distorted with a lattice object which is parented to the head. These eyeball and eyelid vertex groups are weighted to an eye bone and a blink bone, and these bones rotate the vertices before the lattice modifier is applied in the modifier stack. It sounds a little complicated and takes a bit of work to set up, but it works really well. I’ll try to create a little tutorial at some stage to explain how I did it.
If you’ve any questions about any of this, just leave a comment below and I’ll see if I can answer.