3D Software Help and Assistance. Ask Away.

5.00 star(s) 1 Vote

amster22

Well-Known Member
Nov 13, 2019
1,198
2,180
The biggest issue with using the in-built denoiser is that the denoised image is the only copy you get. If you render the 'noisy' version, you're able to use an external denoiser and blend/layer-mask both versions to get the best of both. Whereas, if you're using the in-built denoiser, you're getting rid of the noise, but losing the skin/hair/etc. detail with it. That's a pretty huge downside, imo.

There's no misconception to be had, as that's the inherent science of denoising. In the most basic terms, denoising is the blurring together of nearby pixels (typically with something like a Gaussian blur) to both remove noise and blend it in with the rest of the image. If you're blurring images, what's going to happen? A bit of an extreme example:

View attachment 3626430

What happens if you add small amounts of gaussian blur to a render? Gradual detail loss.

View attachment 3626438
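The gradual detail loss described above is easy to quantify: blurring averages neighboring pixels, which flattens high-frequency structure first. A minimal NumPy sketch (the checkerboard is a stand-in for pore-level detail, not a real render):

```python
import numpy as np

def gaussian_kernel(sigma: float) -> np.ndarray:
    """1-D Gaussian kernel, truncated at 3*sigma, normalized to sum to 1."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def gaussian_blur(img: np.ndarray, sigma: float) -> np.ndarray:
    """Separable Gaussian blur on a 2-D grayscale array (rows, then columns)."""
    k = gaussian_kernel(sigma)
    blurred = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, blurred, k, mode="same")

# A checkerboard stands in for fine, high-frequency detail (skin pores, hair).
detail = (np.indices((64, 64)).sum(axis=0) % 2).astype(float)
for sigma in (0.5, 1.0, 2.0):
    print(f"sigma={sigma}: contrast (std) = {gaussian_blur(detail, sigma).std():.3f}")
```

Even a small sigma noticeably drops the contrast of the pattern; by sigma=2 the interior of the checkerboard is essentially flat gray.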

That's exactly why it's repeated ad nauseam that you shouldn't use a denoiser (especially Daz's in-built one) if you can avoid it. Seeing as they have a 3090, there's zero actual point in natively rendering a fully denoised image. They can always use an external one later if they need to.

There's a reason many experienced devs will tell you that the only time you should be using the in-built denoiser is for animations.
MissFortune's post is very interesting, but may lead to confusion. Denoising is indeed a low-pass filter (like a Gaussian blur), but it is only applied to outlier pixels. An outlier pixel is one that is significantly different from all its surrounding pixels. The first denoisers used a more or less large neighborhood and an empirical threshold to decide whether a pixel is an outlier or not. Recent ones use an AI trained on a large region and are extremely accurate. For instance, they would not consider skin pores as outliers.
But they can still make errors and misclassify a pixel.
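That first-generation, threshold-based approach can be sketched in a few lines of NumPy. This is an illustration, not any real denoiser: the 3x3 window and the 0.3 threshold are arbitrary values chosen for the demo.

```python
import numpy as np

def denoise_outliers(img: np.ndarray, threshold: float = 0.3):
    """Threshold-based outlier denoiser sketch: a pixel counts as an outlier
    only if it differs from the median of its 3x3 neighborhood by more than
    `threshold`; only those pixels get smoothed, the rest are untouched."""
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    # Stack the 9 shifted views that make up each pixel's 3x3 neighborhood.
    neigh = np.stack([padded[dy:dy + h, dx:dx + w]
                      for dy in range(3) for dx in range(3)])
    median = np.median(neigh, axis=0)
    outliers = np.abs(img - median) > threshold
    out = img.copy()
    out[outliers] = median[outliers]  # replace only the flagged pixels
    return out, outliers

# A smooth gradient with two "fireflies" (hot pixels from undersampling).
img = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
img[5, 5] = img[20, 17] = 5.0
clean, mask = denoise_outliers(img)
print("outliers found:", mask.sum())  # only the two injected fireflies
```

Note how the gradient itself is never touched: every normal pixel sits close to its neighborhood median, so only the fireflies cross the threshold. The AI denoisers mentioned above replace the hand-tuned threshold with a learned classification, which is why they misfire far less often.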

To minimize these errors, here is how I proceed.
1/ I render the image at twice the desired resolution, but with a limited number of iterations (around 300-400 depending on the lighting). I save the image as a PNG, to avoid lossy compression artifacts (which are basically low-pass filters).
2/ I denoise in post-process using an external denoiser. Since the image is twice the final size, and only single-pixel outliers are suppressed by the denoiser, even if a pixel is wrongly denoised, its impact on the final image will be very weak. Again, I save the image as a PNG.
There are two main (free) denoisers: Intel's and Nvidia's. I prefer Intel's, because I mostly post-process on my laptop (without a GPU), but both denoisers are very good (and better than Daz's integrated denoiser).
3/ I do the downsizing and generate the final webp image.
All of this is done with command-line tools (the denoiser and ImageMagick), and to limit manual operations, I have a script that applies steps 2/ and 3/ to all images in a directory if the denoised images are missing or older than the rendered ones.
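A batch script like the one described could look roughly like this. It's a sketch, not the author's actual script: `denoise-cli` is a hypothetical stand-in for whatever external denoiser binary you use, while the ImageMagick `magick ... -resize ...` invocation is real syntax.

```python
import subprocess
from pathlib import Path

RENDER_DIR = Path("renders")  # the double-size PNGs straight out of Daz

def is_stale(src: Path, dst: Path) -> bool:
    """True if dst is missing or older than src (the script's skip condition)."""
    return not dst.exists() or dst.stat().st_mtime < src.stat().st_mtime

def process_all() -> None:
    for render in sorted(RENDER_DIR.glob("*.png")):
        denoised = render.with_name(render.stem + "_dn.png")
        final = render.with_suffix(".webp")
        if is_stale(render, denoised):
            # Step 2/: external denoise, still lossless PNG.
            # `denoise-cli` is a placeholder name, not a real binary.
            subprocess.run(["denoise-cli", str(render), str(denoised)],
                           check=True)
        if is_stale(denoised, final):
            # Step 3/: downsize back to target resolution and emit WebP.
            subprocess.run(["magick", str(denoised), "-resize", "50%",
                            str(final)], check=True)

# process_all()  # run over the whole render directory
```

The mtime comparison is what makes the script safe to re-run after adding or re-rendering a few images: untouched renders are skipped.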

Besides its interest for denoising, having a double-sized image is very useful. For instance, if you are not completely happy with the framing and need to do some cropping, or want to do a close-up on an image. And of course, if I need to do some post-processing (retoning, darkening, blurring, blending/adding images, etc.), I also do that before downsizing.

For animations, I proceed similarly, but I generally reduce the number of iterations (say 250), and I downsize more aggressively (generally 3x). If I want to add some video effects (zooming, panning, etc.), it is obviously better to do that on the full-scale image. Ditto if you want to retime your frames (for instance, for slow motion).
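For the retiming mentioned above, ffmpeg's `setpts` video filter is the usual tool: multiplying each frame's presentation timestamp by 2 gives 2x slow motion. A small sketch that just builds the command (filenames are placeholders):

```python
def slowmo_cmd(src: str, dst: str, factor: float = 2.0) -> list[str]:
    """ffmpeg command for slow motion: setpts scales each frame's timestamp.
    Audio is dropped (-an), since it would fall out of sync with the video."""
    return ["ffmpeg", "-i", src,
            "-filter:v", f"setpts={factor}*PTS",
            "-an", dst]

print(" ".join(slowmo_cmd("scene_fullscale.mp4", "scene_slowmo.mp4")))
```

As with stills, doing this on the full-scale frames and downsizing afterwards keeps interpolation artifacts smaller in the final output.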

The main drawback of this method is that you may have to keep several large PNG images on your disk.
 

MissFortune

I Was Once, Possibly, Maybe, Perhaps… A Harem King
Respected User
Game Developer
Aug 17, 2019
4,634
7,639
MissFortune's post is very interesting, but may lead to confusion. Denoising is indeed a low-pass filter (like a Gaussian blur), but it is only applied to outlier pixels. An outlier pixel is one that is significantly different from all its surrounding pixels. The first denoisers used a more or less large neighborhood and an empirical threshold to decide whether a pixel is an outlier or not. Recent ones use an AI trained on a large region and are extremely accurate. For instance, they would not consider skin pores as outliers.
But they can still make errors and misclassify a pixel.
It was very much in layman's terms, and was there more to present a general point than to be exactly accurate.

But the fact remains that they tend to be wrong more than they're right. Currently, of course. It's only going to get better as the technology/methods evolve. But as of now, it's fairly easy to get a denoiser to fuck up: certain color warmth/coolness, the type and strength of the lighting, etc. Denoisers are particularly bad with hair and some skin types, in my experience. They also tend to introduce artifacts in certain scenarios. I guess, at the end of the day, the point is to only use one if you need to. And to use an external one, if you do.

To minimize these errors, here is how I proceed.
Pretty much my process, as well. The only difference is I use the Taosoft GUI for the Intel/Nvidia denoisers, because I hate command-line software. And XnConvert for the converting + downsizing.
 

Pr0GamerJohnny

Forum Fanatic
Sep 7, 2022
4,916
6,956
The main drawback of this method is that you may have to keep several large PNG images on your disk.
Well, that, and it assumes one's hardware so greatly exceeds what's needed that generating at 4K vs. 1080p has a negligible time impact - which, I dunno about you guys, but certainly wouldn't be the case for me for many interior shots.

Though I appreciate you posting your workflow, always good to hear people's processes and get more information.
 