Handy shortcuts for quickly checking out detail in highlights/shadows, as well as negating the image via negative contrast values - all done on the graphics card without any extra processing. This is display-only and will not affect your clips.
> Works with all desktop modules, player and batch except for sparks editing
> First you must enable display colour management by clicking 'Bypass' on the bottom-left of any view - you will see it change to 'Active' with the other controls in there (there's also RGB/Matte and Linear/Video/Log switches)
Exposure - Hold Shift+E then Drag cursor L/R
Contrast - Hold Shift+C then Drag cursor L/R
You will see the values changing on the bottom-left, and there's a big [RESET] button there too that appears once you start changing values.
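The negative-contrast inversion mentioned above can be sketched in a few lines, assuming a typical contrast formula that pivots around mid-grey (the exact formula Flame's display adjustments use isn't documented here):

```python
# Toy contrast formula pivoting around mid-grey (an assumption, not
# Flame's documented math): contrast scales each value's distance from
# the pivot, so a contrast of -1.0 flips the image around mid-grey.

def apply_contrast(v, contrast, pivot=0.5):
    return pivot + contrast * (v - pivot)

pixels = [0.0, 0.25, 0.5, 1.0]
print([apply_contrast(p, -1.0) for p in pixels])  # inverted: [1.0, 0.75, 0.5, 0.0]
```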
Sunday, April 25, 2010
Saturday, April 3, 2010
Okay, so one of the biggest bottlenecks in doing what I consider to be high-level work is understanding camera tracking and--more importantly--what that camera tracking software gives you.
In brief, I'm going to show you how to do a simple camera track in Syntheyes and how to position shapes into that track correctly. I always felt this was relatively elementary stuff, but I've had to explain it to far too many bright people, so here we are.
We'll start with what should be a relatively easy camera track. It's easy because it has:
1. Lots of Parallax (which is to say, camera movement, not to be confused with tripod movement where the camera itself doesn't move through space.)
2. No moving objects.
3. No fast motion (twitches are generally fine, whip pans are not)
4. Lots of detail.
So I ran it through Syntheyes (hit the big green button) and got a good result. All the 3d points stick to their trackers (sliding points are a nice giveaway of a bad track) and I'm happy with it. The only issue is that the scene isn't oriented correctly:
As you can see in the ZY orthographic view, the floor is rather off angle. That's an easy fix in the coordinates pane in Syntheyes.
I've noted where I put the Origin, Left/Right Coordinate (the X axis) and the (Z) Plane coordinate. For a better explanation of how this works, check out my favorite CG tutorial of all time over at the Syntheyes website.
Now we export this to Maya/Nuke/Flame/AE/whatever. What you get on the other end is effectively the same in all of them - and maybe this is where people fail to understand it - but here is what you get:
- A static scene of points in space
- A moving camera
That's it. It's very important to remember that the scene is static. It's also very important to remember that all the annoying locators or axes now cluttering up your scene are there for a reason. I've seen people delete or ignore them countless times. Coincidentally, these same people often blow what I consider to be the crux of camera tracking, and this is the reason I'm spending a good piece of my afternoon writing about it. That crux being the correct spatial placement of objects in your 3d scene.
Here's the big secret: If a locator from the camera track sticks to a point on the backplate, placing an object in the same place as that locator will make it stick as well.
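That idea can be sketched in plain Python with a toy dict-based scene (hypothetical names, not any particular package's API):

```python
# Toy dict-based "scene" - hypothetical names, not a real package API.
# If a tracked locator sticks to a point on the backplate, an object
# given the locator's exact world position will stick there too.

def snap_to_locator(obj, locator):
    """Copy the locator's world position onto the object."""
    obj["translate"] = tuple(locator["translate"])

locator = {"name": "tracker12", "translate": (1.2, 0.0, -3.4)}
can = {"name": "soda_can", "translate": (5.0, 1.0, 0.0)}

snap_to_locator(can, locator)
print(can["translate"])  # now identical to the locator's position
```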
Basic basic stuff, I know. I also know I got into a few heated discussions with a guy who professed to be an expert at tracked cameras, and I had to waste valuable time explaining why close wasn't cutting it when all his objects were hovering a foot off the ground. Turns out he was dicking with the camera and locators (which is allowable, so long as you move them as a group). The shock.
See you in Part 2.
So I want to place that awesomely textured soda can into my scene. I can eyeball it, but that comes with a lot of trial and a lot more error, as I'll demonstrate in a moment. How do we go about setting this giant can on the ground plane or, even better (cos who doesn't like a challenge?), on the seat of the chair?
Went to frame 100 and placed it by eye. It looks great!
See? Let's preview the motion:
Oooh. No good! Where on earth did I go wrong??? Let's look at the front view:
Now we can assume, as I italicized earlier, that each of these points touches a physical object in the scene. The can isn't very close to any of them, so it now makes sense that the can appeared to fly away in that last video. There are two ways to alleviate this: easy, and less easy but still easy.
The easy way to get your object to sit in the scene is to make it the child of one of the locators with no translation of its own.
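The parenting trick boils down to simple scene-graph math: a child's world position is its parent's world position plus its local translate (scaled by the parent's scale), so a child with zero local translate sits exactly on its parent locator no matter the scale. A toy sketch, not any real scene-graph API:

```python
# Toy scene-graph math: child world position under a parent with a
# uniform scale. With zero local translate the child lands exactly on
# the parent locator, whatever the parent's scale.

def world_position(parent_world, parent_scale, local_translate):
    return tuple(p + parent_scale * t
                 for p, t in zip(parent_world, local_translate))

locator_world = (1.2, 0.0, -3.4)   # tracked locator from the solve
print(world_position(locator_world, 5000.0, (0.0, 0.0, 0.0)))
```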
The slightly less easy way is something I find helpful in flame a lot. The reason I find it so helpful, if we're being honest, is that I haven't gotten around to learning Syntheyes' coordinate system very well, so my cameras often come in super small. They work fine, but any image plane you add to the scene will be huge. So I group the camera and locators under an axis and scale them way up (generally 3000-10,000). I could still do the parenting trick, but it would upscale anything downstream and defeat the scale-up (the track would still work), so I have another way, and here it is.
Learn it well; it will save your ass one day.
Step 1: Find the locator you want your object to share space with and select it (rename it if you like) in the camera view.
Step 2: Go into an orthographic view (side, top, front - it doesn't matter), locate your desired point and place your object on it. Note which view you are in, because once you've resolved the position for those two axes you don't want to touch them. In my case I was in the side view, so I've resolved the Z and Y coordinates, meaning only X is left to position.
Step 3: Back in camera view, pick the slider for the one unresolved coordinate (in my case X) and slide it until the object rests in the correct place according to the shot.
Then you render it out, and voila!
So, that was a really long way of saying this:
- Resolve your coordinates one at a time, using orthographic views. (In flame, down in the bottom-left corner of Action you can change the view from "Result" to any number of other things, including "Front", "Top", and "Side". It's much easier and faster in 3d software, but the principle is the same. Sub-note: Alt+2, 3 and 4 will split your flame view into four assignable windows.)
- Once you've resolved two orthographic views, you can resolve the third one by eye in the camera view by sliding it around.
- Don't take shortcuts. Camera tracks speed up so much of your 3d and compositing life that you can afford to take your time getting them right.
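The one-axis-at-a-time idea can be sketched with a toy pinhole camera (camera at the origin looking down -Z; all numbers made up): the side view pins Y and Z, then you slide X until the projection lines up with the locator's.

```python
# Toy pinhole projection (camera at origin, looking down -Z). The side
# view resolves Y and Z; sliding X until the projected position matches
# the locator's stands in for dragging the slider in the camera view.

def project(p, focal=1.0):
    """Project a world point onto the image plane."""
    x, y, z = p
    return (focal * x / -z, focal * y / -z)

locator = (2.0, 1.0, -8.0)        # the point we want our object to share
target = project(locator)

y, z = locator[1], locator[2]     # already resolved in the side view
best_x = min((x * 0.01 for x in range(-1000, 1001)),
             key=lambda x: abs(project((x, y, z))[0] - target[0]))

print(best_x)  # should land on the locator's X (2.0 here)
```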
Thursday, April 1, 2010
Pretty dumb node for my first node highlight you say?
RGB Blur is a lifesaver!
Its first and primary duty is to smooth out gmask shapes. Gmasks always have a linear gradient, which leaves a hard line at both the black and white ends. It completely sucks and there's no way to turn it off (Nuke, by contrast, has multiple gradient settings for its masks), so what you do is pipe said mask into an RGB Blur and blur it up. Do it by eye. When the hard edge goes away, you win! Completely necessary.
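Here's a rough stand-in for what the blur does to the mask: a 1D linear ramp clamped at 0 and 1 has sharp slope breaks at both ends, and a box blur rounds those corners off (plain-Python sketch, not Flame's actual blur kernel).

```python
# 1D box blur as a stand-in for RGB Blur. The linear gradient mask has
# hard "corners" where it clamps at 0 and 1; averaging over a window
# rounds those slope breaks off, killing the visible hard line.

def box_blur(values, radius):
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - radius), min(len(values), i + radius + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

# flat black, linear ramp, flat white - hard corners at both ends
mask = [0.0] * 10 + [i / 9 for i in range(10)] + [1.0] * 10

blurred = box_blur(mask, 4)
# the corners (large second differences) are much gentler after blurring
```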
The only thing that sucks about this is when you have to blur past the edge of the frame, which, because the blur doesn't have an imagination, produces less-than-ideal results - vignettes, for example, tend to have this problem, but they still look better blurred than not.
Similarly, the "use matte" option isn't all that useful, as it will blur in areas outside of the matte (which is completely understandable, but means you might as well just logic-op blend your shit together versus blurring through a matte). If you want to blur inside a matte without pulling in the outside-of-matte areas, Sapphire's "Matte Blur" does a nice job of this.
In fact, the regular sapphire blur is probably generally nicer than the stock blur, but I think it's bad policy to use plugins when they aren't specifically necessary.