2012-08-16

Unified Sampling


Boy, is it hard to keep up with the ever-changing technology of 3D.  Even though this is old news (two years old...at least) and a quick Google search turns up several posts about unified sampling, I'll still bring it up.  In my last post I mentioned that I had used unified sampling for my mental ray renderings, and someone asked about it, so I figured I'd post my findings and impressions.




Unified sampling arrived with the version of mental ray that ships in 3ds Max 2012.  Although there is no user interface for it, there are MAXScripts that expose all of the unified sampling string options, thanks to folks like Artur Leao and Thorsten Hartmann.  The script I use is an older, simplified MAXScript from Artur, and you can get it here.  It's a smart script that actually saves the settings into the scene file so you can render through Backburner.  It still doesn't work with distributed rendering, though.
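For reference, the string options those scripts end up feeding to mental ray look something like this.  The option names are the mental ray 3.9-era ones as I remember them from the docs, and the values are just placeholders, not recommendations:

    "unified sampling"          on      # switch from the classic adaptive sampler to unified
    "samples min"               1.0     # minimum samples per pixel
    "samples max"               100.0   # maximum samples per pixel
    "samples error threshold"   0.05    # per-channel error cutoff; lower = cleaner but slower

As I understand it, unified's min/max are per-pixel sample counts and don't have to be the power-of-4 steps the classic sampler uses.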


So here are my comparisons of unified sampling versus standard sampling in mental ray.  For all of the examples below, standard sampling was set to a min of 1 and a max of 16 with a spatial contrast of 0.05 for RGB.  I did, however, tweak the unified settings for some of the tests.  These were all rendered on an 8-thread i7, so nothing fancy here.






Motion Blur


For this test, I put some serious moBlur on these teapots to really push the AA engines.  In general, unified was much faster.  I did have to raise unified's min to at least 4 samples to get comparable visuals, but even with the higher minimum it was still faster than standard.
standard: 2min 50sec
unified: 1min 45sec

Depth of Field


For DOF, I used the Bokeh camera shader with 16 samples and a bias of 10 for both tests.  For unified sampling I used a min/max of 1/50 and an error threshold of 0.1, and it still turned out decent even though there's a little noise.
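Spelled out (using the same string option names as above, and the Bokeh shader's samples/bias parameters as I recall them), the settings for this test were:

    # Bokeh lens shader on the camera (identical for both renders)
    samples = 16      # DOF rays per eye ray
    bias    = 10      # skews how the samples are distributed across the aperture

    # unified sampling string options
    "samples min"               1.0
    "samples max"               50.0
    "samples error threshold"   0.1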
standard: 4min 42sec

unified: 3min 37sec

Glossy Reflections


For this test I again really wanted to push the samplers, so the glossy reflections on both the teapots and the floor use 32 glossy samples.  I left the unified error threshold at 0.1, and you can see the noise on the edges of the teapots; my feeling is that if I had lowered the threshold, the render would look even better without taking a serious time hit.
standard: 2min 27sec
unified: 1min 45sec

All Together Now


So for the last test, I turned everything on: moBlur, DOF, and glossy reflections.  This is where unified really stands out over traditional sampling.  All of these bells and whistles brought standard sampling to its knees: nearly an hour and a half of render time for a 600 px image...yikes!  For this example I set unified to a min of 2 and a max of 50 with an error threshold of 0.1.  Even with the threshold that high, it still turned out fairly decent, especially for the time.
standard: 1hr 25min 10sec
unified: 24min 9sec
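For perspective, here's how the unified speedups from the timings above work out (my rounding):

    motion blur:         2min 50sec  vs  1min 45sec   ->  ~1.6x faster
    depth of field:      4min 42sec  vs  3min 37sec   ->  ~1.3x faster
    glossy reflections:  2min 27sec  vs  1min 45sec   ->  ~1.4x faster
    all together:        1hr 25min 10sec  vs  24min 9sec  ->  ~3.5x faster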

Unified handles things like scattered samples quite well, especially in combination.  Where I've found unified sampling especially helpful is with those pesky electric cables in my renderings that sit a hundred feet from the camera.  Unified is really good at catching grass, fur, and other really fine objects on every frame, which is critical for animations where you might otherwise get flickering at glancing angles.

I've heard some say unified sampling is mental ray's equivalent of V-Ray's DMC sampling.  I'm not sure about that, though, as I still find V-Ray's sampling superior to anything else out there right now.  I guess the real test is to try this same scene with V-Ray and see what I get, although it's not quite apples to apples...er, teapots to teapots.

EDIT:

So I said I would try this with V-Ray.  In the CPU battle between mental ray and V-Ray, there's a reason Chaos Group just celebrated a successful 10 years, and why everyone is using V-Ray.  Using Adaptive DMC at the default settings in V-Ray 2.0, I threw all of the same bells and whistles at it: motion blur, DOF, glossy reflections...and V-Ray ate it for lunch!  Not only did it render 2.5 times faster, but the quality of the sampling just looks better.  Judge for yourself, but in my opinion V-Ray is still king!

v-ray (Adaptive DMC): 9min 54sec


4 comments:

  1. Thanks for this. It may be old news but it's something I'm not so familiar with. Looking forward to the Vray comparison.

  2. But for the unified case, you should turn DOWN the samples for the individual features, i.e. you should test it not with 32 glossy samples, but something like 8 (or even 1). Same for DOF samples; should be low (maybe even 1). Doing that should be closer to what vRay is doing.

    /Z

    Replies
    1. Zap, I will try that again and see what it comes up with. For V-Ray I used 32 glossy samples, but to be fair the DOF & moBlur were at 6. What I'm more concerned with is the end result of the noise in the image. With unified, lower glossy/DOF samples were giving very noisy images.

  3. V-Ray king of time?
    Ramy, try an INTERIOR image in V-Ray and compare it with the mental ray time....
    And about unified, as Zap said, you need to decrease the glossy and shadow sample values...
