A question about exposure

  • Thread starter Deleted member 120647
I should perhaps ask this on a proper camera forum but let's see what turns up.
A while ago I ran into an exposure problem whilst trying to photograph some gorse set against a clear, blue, sunny sky. I went home, read the manual, and discovered HDR and what the camera could do.
That got me to thinking: since it is possible to change the 'film speed' (sorry, I grew up in the Oly OM10 era) without seemingly physically changing the 'film'/sensor, is it inconceivable that the people who design sensor chips and cameras could come up with a way of controlling the sensitivity of the individual pixels on the sensor, in such a way that all areas of the photograph are 'correctly' exposed?
I suppose implicit in that question is the idea that, in order to change the 'film speed', current sensors change the sensitivity of the individual pixels en masse, which may not be the way things work; in which case my question is probably stupid, and if that's the case please kill me gently.
 
In theory, what you want to do makes sense. However, none of the big players (Nikon, Canon, Sony) has figured out how to do it. Their public focus has been on increasing the dynamic range of the overall sensor, not on controlling gain on individual pixels. The results, compared with film's dynamic range, are quite amazing. Kodachrome film has a dynamic range of 6 or 7 stops; current high-end sensors get 15 stops. Bearing in mind that each stop is a doubling of the previous one, that is a pretty broad range. I almost never need greater dynamic range now, whereas before I used multiple exposures blended in post to get the range.
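To put those figures in perspective, here is a quick back-of-the-envelope calculation (plain Python, nothing camera-specific) of the scene contrast each number implies, since every extra stop doubles the recordable brightness ratio:

```python
# Each stop of dynamic range doubles the brightest-to-darkest ratio that can be
# recorded, so the usable contrast ratio is simply 2 ** stops.
for name, stops in [("Kodachrome (approx.)", 7), ("modern high-end sensor", 15)]:
    print(f"{name}: {stops} stops -> about {2 ** stops:,}:1")

# Kodachrome (approx.): 7 stops -> about 128:1
# modern high-end sensor: 15 stops -> about 32,768:1
```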

The biggest challenge to what you are proposing is the physics of the sensor readout. Most, if not all, sensors scan the pixel matrix serially, meaning there is a finite delay between each pixel being read. With motion in the frame, this delay causes vertical lines in the image to be distorted, leaning in the direction of the scan. Putting more per-pixel processing around each pixel would slow the scan down and make that motion distortion worse.
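As a rough illustration of why a serial readout skews things, here is a toy simulation (the numbers are purely illustrative, not how any real sensor is built): each row is captured slightly later than the one above it, so a horizontally moving subject has drifted a bit further by the time the lower rows are read.

```python
import numpy as np

ROWS, COLS = 8, 16
ROW_READOUT_DELAY = 1.0   # arbitrary time units per row (made-up number)
SUBJECT_SPEED = 0.5       # pixels of horizontal motion per time unit (made-up number)

frame = np.full((ROWS, COLS), ".", dtype="<U1")
for row in range(ROWS):
    # The subject (a vertical line starting at column 2) keeps moving while the
    # sensor is still reading earlier rows, so later rows record it further right.
    time_of_readout = row * ROW_READOUT_DELAY
    col = int(2 + SUBJECT_SPEED * time_of_readout) % COLS
    frame[row, col] = "#"

for row in frame:
    print("".join(row))
# The '#' column drifts to the right on lower rows: the skew behind the "jello" effect.
```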

If you want more information about the prevalence of this problem, search for “jello effect” and you will get lots of info.
 
Thanks for the reply. I have seen some threads concerning jello and it's quite interesting stuff. There might even be a DJI video ad about it which opens with jelly falling onto something; that amused me, though I didn't watch the rest of the ad.
I liked Kodachrome!
 
About 15 years ago Fuji tried some different sensor layouts that were intended to help with dynamic range. They arranged the pixels hexagonally and added smaller, less sensitive pixels in between, so the sensor could capture the brighter highlight information without overloading.
These days sensors are better without the special configuration, and it's easy to get HDR shots taken and processed by dedicated onboard chips, à la the iPhone.
[Attached image: Fujifilm FinePix S3 Pro]
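If it helps to picture how those paired photosites extend the range, here is a rough toy model; the gain ratio and the blending rule are my own simplification, not Fuji's actual Super CCD SR processing:

```python
# Toy model of a paired photosite: a large, sensitive photodiode that clips early
# and a small, less sensitive one that keeps recording in very bright light.
CLIP = 1.0           # normalised clipping level
SMALL_GAIN = 1 / 16  # assumed sensitivity ratio of the small diode (made-up value)

def combined_reading(scene_brightness):
    large = min(scene_brightness, CLIP)               # blows out in strong highlights
    small = min(scene_brightness * SMALL_GAIN, CLIP)  # still unclipped when the large one is gone
    if large < CLIP:
        return large                  # normal case: use the clean, low-noise large pixel
    return small / SMALL_GAIN         # highlight case: reconstruct from the small pixel

for b in [0.2, 0.9, 4.0, 12.0]:
    print(f"scene brightness {b:4.1f} -> recorded {combined_reading(b):4.1f}")
# Values well above 1.0 are still recovered: that is the extra highlight headroom.
```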
 
Thanks for the info. A friend has a newish iPhone and it is impressive in such situations.
 
I'm a professional photographer. I assume you are talking about exposure for still imaging and not video. Photographing dark objects against a light background is challenging. You can try using the HDR feature on your camera. You can try bracketing and blending exposures in post processing, but not many are fluent at that. Geese flying against a sky is not that hard to do since geese are generally not real dark birds. So, try to expose for the birds and see where the sky ends up. Each situation is unique so no advice can be spot on for all conditions. Experiment and learn!
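If you do want to experiment with blending bracketed exposures in post, OpenCV's Mertens exposure fusion is one accessible route. A minimal sketch, assuming three bracketed JPEGs with the placeholder filenames below:

```python
import cv2

# Hypothetical filenames for an under-, normally-, and over-exposed bracket.
files = ["bracket_-2ev.jpg", "bracket_0ev.jpg", "bracket_+2ev.jpg"]
images = [cv2.imread(f) for f in files]

# Mertens exposure fusion blends the best-exposed parts of each frame directly,
# with no tone-mapping step and no need for exposure metadata.
merge = cv2.createMergeMertens()
fused = merge.process(images)   # floating-point result, roughly in the 0..1 range

cv2.imwrite("fused.jpg", (fused.clip(0, 1) * 255).astype("uint8"))
```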
 
In truth my question related to both stills and video, but video is new to me (with the interest stemming from the drone) and my past experience has been shooting photos.
 

Hi PhiliusFoggg,

I am also a pro photographer, so this is my opinion from that point of view. Although they are linked in the photography world, exposure works differently from HDR, and one doesn't depend on the other. Why do I say that? Well, without all the tech jargon: you control the exposure with the aperture, the shutter speed, and the ISO. If the pictures you take against a bright, blue, sunny sky come out too bright (overexposed), you have to decrease one or more of the settings I mentioned, in your case specifically the sensitivity of the sensor (that is the ISO!), but you may also have to adjust the others in order to get the right exposure.
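To put that trade-off in numbers, here is a tiny sketch of the standard exposure value formula (EV = log2 of the f-number squared over the shutter time, at a fixed ISO); the specific settings in it are just made-up examples:

```python
from math import log2

def exposure_value(f_number, shutter_seconds):
    # Standard exposure value at a fixed ISO: EV = log2(N^2 / t)
    return log2(f_number ** 2 / shutter_seconds)

# Two hypothetical settings one stop apart: the second lets in half the light.
print(round(exposure_value(8, 1 / 250), 1))   # ~14.0
print(round(exposure_value(8, 1 / 500), 1))   # ~15.0 (higher EV = less light reaches the sensor)
```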

HDR, though, is definitely another animal. It gives you a picture with the full tonal range of the scene you shot, much closer to what your eyes see. But if you don't set your settings correctly, the resulting picture is not going to be that good either, because HDR still registers the scene according to the settings you've set.

From what I've read, there's no way to change the exposure of only a few pixels, since the sensor is a single piece that registers the whole image (and to me it wouldn't make much sense in-camera, because what anyone wants is to capture the entire image, isn't it?). The only way "to change" the sensitivity of individual pixels is through editing, and from my point of view that would be very daunting...
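To show what that kind of per-pixel change in editing amounts to, here is a minimal NumPy sketch (my own toy example, with made-up threshold and gain values) that lifts only the darker pixels and leaves bright areas untouched, the same idea as a luminosity mask or a local adjustment brush:

```python
import numpy as np

def lift_shadows(image, threshold=0.35, gain=1.6):
    """Brighten only pixels darker than `threshold` (image values in 0..1), a crude
    stand-in for the selective/local adjustments an editor would paint in by hand."""
    img = image.astype("float32")
    luminance = img.mean(axis=-1, keepdims=True)       # simple per-pixel brightness
    mask = (luminance < threshold).astype("float32")   # 1 in the shadows, 0 elsewhere
    return np.clip(img * (1 + (gain - 1) * mask), 0.0, 1.0)

# Tiny 1x2 "image": one dark pixel, one bright pixel.
demo = np.array([[[0.1, 0.1, 0.1], [0.9, 0.9, 0.9]]])
print(lift_shadows(demo))   # the dark pixel is lifted, the bright one is untouched
```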

A piece of advice: let go of those old film-era terms, since they can get confusing alongside the newer literature and digital technology. For this example, just say ISO, which is the sensor sensitivity (the sensor replaces the old film roll), because "speed" can also apply to the aperture, to describe how fast a lens is.

Good luck!
 
The variable ISO of today's digital sensors simulates the ASA/ISO sensitivity of the film days, but even film could not always hold the entire range of a scene.
So into the darkroom you would go and dodge (lighten) or burn (darken) areas that need to be fine tuned.

The best bet is to shoot in a RAW or DNG format that retains all the information the sensor captures, and keep your exposure within the confines of the histogram, paying special attention to not going off the scale on the right and blowing out your highlights past recovery. Then, in an editing program such as Lightroom, you can bring down those bright highlights, open up the shadows, adjust the exposure, contrast, and white balance, and really transform a mediocre image into something very polished.
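If you like checking this numerically rather than by eye, here is a small sketch (the filename is just a placeholder) that counts how many pixel values in an 8-bit preview are sitting at or near the right edge of the histogram, i.e. blown highlights:

```python
import cv2
import numpy as np

img = cv2.imread("raw_preview.jpg")                 # placeholder filename
hist = np.bincount(img.reshape(-1), minlength=256)  # counts per 8-bit level

# Pixels sitting at (or very near) 255 are blown highlights; detail there is
# usually unrecoverable, which is why the advice is to watch the right edge.
clipped = hist[250:].sum() / hist.sum()
print(f"{clipped:.2%} of pixel values are at or near clipping")
```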

A lot of times, HDR looks over-processed but a carefully processed single image can be terrific.
 
