Is it just me, or does anyone else feel that all the examples of grain simulation (I even followed links in the comments here on HN to papers and plugins) look a bit "off" compared to real film pictures? Film enthusiasts would probably agree; otherwise they would not still be shooting on film today.
My best guess is that, besides grain, digital photos have a different dynamic range. The whole stack, from silicon sensor through raw processing to screen manufacturing, is trying to produce "real" images, in the sense of matching what the photographer's eye saw on site. Film tried to do the same, yet I suspect film has more characteristics than we can simulate with hand-written math.
I bet that if someone gathers enough data, an AI-based plugin could get closer to the actual film look than hand-coded, math-based plugins ever will.
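As a toy illustration of one such characteristic (a hypothetical sketch, not any real plugin's code): real film grain strength varies with exposure, roughly peaking in the mid-tones, while a naive simulation adds tone-independent noise everywhere. The numbers below (noise levels, the mid-tone weighting) are made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 1-D tonal ramp standing in for an image: 0 = deep shadow, 1 = highlight.
ramp = np.linspace(0.0, 1.0, 100_000)

# Naive grain: tone-independent Gaussian noise, the same in every tonal band.
naive = ramp + rng.normal(0.0, 0.05, ramp.shape)

# Hypothetical film-like model: grain variance tracks developed density,
# strongest in the mid-tones, weak in deep shadows and clear highlights.
sigma = 0.02 + 0.12 * ramp * (1.0 - ramp)
filmic = ramp + rng.normal(0.0, 1.0, ramp.shape) * sigma

def noise_std(signal, lo, hi):
    """Standard deviation of the added noise within the tonal band [lo, hi)."""
    mask = (ramp >= lo) & (ramp < hi)
    return float(np.std(signal[mask] - ramp[mask]))

# Naive grain is equally strong in shadows and mid-tones; the film-like
# model's shadow noise is much weaker than its mid-tone noise.
print("naive :", noise_std(naive, 0.0, 0.1), noise_std(naive, 0.45, 0.55))
print("filmic:", noise_std(filmic, 0.0, 0.1), noise_std(filmic, 0.45, 0.55))
```

Even this crude tone-dependence is only one of the effects a faithful model would need; grain size, clumping, and per-layer color response are others, which is why a learned model might fare better.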