Settings when shooting stills, or taking the still from video?

Hi guys,

I have tried to search around other posts but I didn't really find my answer..

When you guys are taking stills, are you just grabbing them out of a video, or are you actually shooting stills?
(Got a P3A)
This summer I'm going to Turkey with my P3A to take some pictures of a house, and I would like to take some good stills at sunset, but I am not sure which settings I should go for. I don't want to edit the image (at the moment I have NO knowledge at all of editing pictures).

Should I go for as fast a shutter speed as possible?
What about sharpness, etc.?

I already ordered 3 ND filters, but are these mostly for video, or also useful for stills?

Hope you can help a newbie :)


Best regards, Nicolai from Denmark!
 
Video for video and stills for pics. It only takes seconds to switch modes.

Play with the settings to get the best results depending on the situation/light/location. Shoot RAW format so you can remedy mistakes.

Filters will be handy for both.
 
I like to shoot in RAW with log color settings, but I touch up virtually all of my shots in post. ND filters are a must-have for the camera on the Phantom on account of the fixed aperture, especially on really bright days. The camera makes up for low-light conditions by cranking up the ISO, which can introduce a bunch of noise; it's tricky to find the longest workable exposure that keeps the ISO down when shooting at night or dusk.

Watch some basic photography tutorials on YouTube, even if they're not geared towards aerial cameras, and try to get a grasp on the relationship between ISO, aperture, and shutter speed.
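
If it helps, that relationship can be sketched in a few lines. This is just the standard exposure-value formula, nothing Phantom-specific; f/2.8 is the P3's fixed aperture, and the example shutter/ISO combos are made up for illustration:

```python
import math

def exposure_value(aperture, shutter_s, iso=100):
    # EV = log2(N^2 / t), shifted by how far the ISO is from 100
    return math.log2(aperture ** 2 / shutter_s) - math.log2(iso / 100)

# The Phantom's lens is fixed at f/2.8, so shutter and ISO are your only levers.
print(round(exposure_value(2.8, 1/1000, iso=100), 1))  # bright daylight territory
print(round(exposure_value(2.8, 1/30, iso=400), 1))    # dusk territory

# "Equivalent exposures": one stop less light through the lens,
# one stop more time, same EV (to within f-number rounding).
print(abs(exposure_value(4.0, 1/125) - exposure_value(2.8, 1/250)) < 0.1)  # True
```

Once that clicks, the tutorials make a lot more sense: every setting change is just moving light between those three knobs.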

If you need to, grabbing a still from your video is *ok* quality, but dedicated stills come out far better.
 
Okay, hm, I read that if I shoot video at 60 fps my shutter should be 2x the fps = 1/120. Is there a similar rule for stills' shutter speed? If I set it manually, what should I "go for"?
Should I maybe try to reach 0.0 EV? Or maybe there is a rule of thumb :) ?
I have read so much that the histogram should look like a mountain in the middle of the graph :D
 
The controls and adjustability of our Phantom 3 camera are extremely basic, unlike my DSLR, which has multitudes of controls. This one video taught me more about my Nikon D800 than the thick manual it came bundled with.
Watch it once for a solid description of basic photography terms and their beneficial uses.

Try using HDR, or taking multiple images of the same scene with different exposures, and find the shots that appeal to your eye.

RedHotPoker
 
There are a number of factors against frame capture from video.
1. Unless you are shooting 4K video (which you aren't on a P3A), the frame grab from video (even at 2.7K) will be way smaller than the 4000x3000 (12 MP) still resolution from the P3; a 2704x1520 grab is only about 4 MP.

2. Video is, in effect, a series of JPEG-like compressed frames. JPEG is NOT a good source file type, as it's a lossy format: information is purposefully destroyed or altered to produce small file sizes.

In virtually every situation, shoot a still as a still rather than relying on frame grabbing from video. The only circumstance I can think of where this shouldn't be the case is where you were not expecting or prepared to snap a still and happened to be running video (for example, if you were flying over Vegas recording video and Elvis' UFO flew through the shot, well then a frame grab is OK).

Benefits of going with a still:
1. Higher resolution; uses the full 12 MP sensor.
2. DNG output; zero loss.
3. Full control over ISO/shutter speed.

The only downside of shooting a still is that you have to stop video, snap the still, and start video again, meaning an edit point in your post production. If you can handle that, do it.


The histogram is a layout of the pixel illumination in the scene. At the left is totally black, at the right is totally white, and everything in between is a shade of gray (pretend you are shooting black and white). What you don't want to do is set your exposure so it's slammed against either end; spread it out across the graph as much as possible without hitting either end to any significant degree.

It is, of course, scene dependent. If you are shooting something on a white background, of course you are going to get a histogram with large spikes at the right. Same with shooting something on a black background and getting spikes at the left. Not every image should be spread out along the whole graph. Snow scenes are going to have a right-loaded histogram; evening shots will have a left-loaded one. You have to use your brain, analyze the scene, and judge whether the exposure and histogram match reasonably. There is no one "right" histogram. Every one will be different, just as every photo will be different. There is a best or acceptable histogram for each image, though.

To learn what a "proper" histogram looks like, examine as many photos as you can find (off the internet or your own) that you judge by eye to be well exposed and pleasing to you (not the subject matter; the exposure, the colors, the overall luminance of the image). Then load those into any paint program that has a histogram feature and look at the histogram for each image. After a few you will start to see the patterns to look for that please YOU. After all, it's your eye you must please first.
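
If you like tinkering, the histogram itself is nothing magical. Here's a rough sketch (numpy, with standard Rec.601 luma weights assumed; the "camera" image is synthetic) of how one is built and how you'd spot pixels slammed against either end:

```python
import numpy as np

def luminance_histogram(rgb):
    # Rec.601 luma per pixel, binned 0-255 like the camera's histogram display
    luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    hist, _ = np.histogram(luma, bins=256, range=(0, 256))
    return hist

def clipped_fraction(hist, tail=2):
    # Share of pixels slammed against either end of the graph
    return (hist[:tail].sum() + hist[-tail:].sum()) / hist.sum()

# A synthetic mid-gray scene: nothing near the ends.
scene = np.full((100, 100, 3), 128, dtype=np.uint8)
print(clipped_fraction(luminance_histogram(scene)))  # 0.0

# Blow out a quarter of the frame and the right end spikes.
scene[:50, :50] = 255
print(clipped_fraction(luminance_histogram(scene)))  # 0.25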

As for settings, if you are going to post process (and you probably should), shoot DNG RAW+jpeg. You get immediate feedback with the jpeg, but the lossless quality of the DNG when going into post with Lightroom or Photoshop, for example.

My camera settings for video and still are:
-3 for sharpness
-2 for contrast
-2 for saturation

While these really don't do much to the RAW image, they do affect the jpeg and the video.

By setting the sharpness down to -3 you are telling the P3 NOT to sharpen. You are NOT removing sharpness; you are telling it not to ADD any. The sharpening algorithms in Photoshop, Lightroom, and Premiere are vastly superior to the sharpness code in the P3. So we choose not to add it in the P3, and we add it in post.

By setting the contrast down a bit, we prevent the sensor output from slamming anything totally black or totally white and clipping data. We are centering the data within the range the sensor is best at recording.

I feel the sensor output is oversaturated on the P3. That's the reason I set it to -2. It also helps with noise reduction in post, and we can adjust saturation to bring it back after doing noise reduction.

Here is the raw image right from the Phantom:

As you can see, it looks a bit flat and colorless. That's due to the settings above.

And here is the post processed version:

As you can see, the color, contrast, and overall density is there.

Again, before:


After:
 
Got my DJI Phantom 3 Pro last week, and while I have not used it much yet, I find taking photos in 4:3 format gives a clearer image than 16:9; using 16:9, it seems to zoom in and crop the image from 4:3 to 16:9.
 
Never noticed that myself.
 

Very nice explanation, thanks! And very nice photos! I need to learn that :) Are you using Lightroom?
Also, are those shot in 4:3? And is that better than 16:9, or is it more just a question of what you like?
Last thing: what is the main thing you have done to those images?
 
Yes, Lightroom, but Photoshop would do as well, as it has ACR too.

Main develop settings are:
1. Adjust exposure
2. White balance
3. Vibrance/Clarity

As I don't have ND filters yet, I have to underexpose to keep detail in the sky. For that I use the gradient tool and then adjust exposure/clarity/saturation within the sky's gradient.

A couple of other changes I forgot to mention...

Set the color mode to D-LOG. It will provide a flatter, more linear exposure.

Turn off automatic white balance. I set mine to custom, 5000K, no matter what I am shooting. With RAW, WB in camera doesn't do squat; in video and jpeg, it does. It's going to look odd, but I use the 3-way color corrector in Premiere or the white balance tool in LR or PS to correct the white balance. I know it seems like extra work, but the AWB in the Phantom will constantly try to correct (and not always well). This can easily be seen by doing a nice horizon shot and then aiming down with the gimbal: you can watch the colors change a second or two after you move the gimbal. It's easier to post process without the WB changing during the video. By setting it to manual, you always have the same starting point on which to base your post production conversions.

I also briefly record the wind/temp, and I use a ColorChecker Passport as seen here:
(Apparently the media selector here ignores the time offset; skip to 7:37)

I record those at the start of each session so I have a base for color correction and WB, and a record of the temp/wind.
 

I will definitely try to take some shots now and see if I can edit them in LR :)
Do you ever use the "auto" settings in LR, to let LR adjust the picture?
Also, are you always using custom WB in LR?

Wow, this is a whole new world for me; I see I need to start watching some YouTube videos :p
 
Jpeg is fine for OUTPUT, the last stage: uploading for the web, etc. It's NOT good as a source. Remember, though, LR is a non-destructive editor. Nothing you do to an image in LR (short of deleting it off the HD) will alter the original in any way. You can always return to the original with no loss in quality.

The problem with jpegs comes from their very nature. They discard data, or actively alter it, to reduce file size. It's NOT a ZIP. You will NEVER get back the original data from before you saved as a jpeg if all you have is the jpeg. If you then open the jpeg again, edit it, and re-save it, you lose MORE data. Every time you open and save again, you lose a generation. It's like photocopying an original, then photocopying the photocopy, and photocopying that photocopy. Compare the original to what you have as a copy and you will see the degradation in quality. The exact same thing happens with jpeg.
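
You can actually watch the photocopy effect happen. A quick sketch with Pillow and numpy (assuming you have both installed; the "image" here is just random noise, which is a worst case for jpeg):

```python
import io

import numpy as np
from PIL import Image

def resave(img, times, quality=75):
    # Round-trip the image through jpeg encoding `times` times
    for _ in range(times):
        buf = io.BytesIO()
        img.save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        img = Image.open(buf).convert("RGB")
    return img

def rms_error(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

rng = np.random.default_rng(0)
original = Image.fromarray(rng.integers(0, 256, (64, 64, 3), dtype=np.uint8))

# The very first save already throws data away, and you never get it back.
print(rms_error(original, resave(original, 1)) > 0)  # True
print(rms_error(original, resave(original, 10)))     # error after ten generations
```

This is exactly why LR's non-destructive workflow matters: the DNG never gets re-encoded, so there are no generations to lose.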

As for automatic settings, no. It hardly ever gets it right, so I quit trying. I use the white balance dropper on the patches of the ColorChecker. You can get pretty close by taking a white (true white) piece of paper and sticking a black dot on it, or coloring in a Sharpie spot on it, then shooting that at the beginning of your session. That will give you a white and a black point. The hard part is getting the neutral gray, which is what the ColorChecker has on it.

If you can afford it, head down to a photo supply place and get a gray card. They may have a combo card that has black, midtone, and white; if so, that's your ticket to nirvana. Simply shoot it in the same light as the rest of the photos. If you are shooting at sunset, shoot it several times as the light changes. Then you can use the WB eyedropper on the gray patch and get perfect white balance for a whole set of images at once (in Lightroom: select many images in the list with one being the card, and LR will apply the changes to all at the same time).
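
For the curious, what that eyedropper is doing under the hood is roughly this (a toy numpy sketch, not Lightroom's actual code; the warm-cast values are made up):

```python
import numpy as np

def wb_gains_from_gray(patch):
    # Per-channel gains that pull a known-neutral patch back to gray
    means = patch.reshape(-1, 3).mean(axis=0)
    return means.mean() / means

def apply_wb(image, gains):
    # Scale each channel, round, and clip back into 8-bit range
    scaled = np.rint(image.astype(float) * gains)
    return np.clip(scaled, 0, 255).astype(np.uint8)

# A warm evening cast: the gray card photographs as (140, 120, 100)
card = np.full((10, 10, 3), (140, 120, 100), dtype=np.uint8)
gains = wb_gains_from_gray(card)
print(apply_wb(card, gains)[0, 0])  # all three channels land on ~120
```

That's also why the card trick works on a whole set at once: the same light means the same gains, so one click corrects everything shot under it.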

For video, the 3-color card (white, gray, black) will give you reference points for the 3-way color corrector's whites, midtones, and blacks.

Once you have a true WB adjusted, you can warm or cool the image(s) or video to suit your taste or style. I like mine just a smidge on the warm side of neutral. There are special warming and cooling WB patches on the ColorChecker Passport which make that easy to do, but you can accomplish the same tweak using the WB sliders in ACR.
 

Thanks very much for your explanation. I will now start looking at some videos on how to use the ColorChecker :D
 
I've been playing with aerial mapping and topos. First attempts were just screen grabs from 2.7K video. The next attempt was with 5-second interval photos. The photos produced a more detailed map, but the video grab was surprisingly good (perhaps good enough) for the topo mapping. I used VLC player to extract every xth frame for the grab. So in summary: photos are better than video grabs, but video is pretty darn good as well. The nice thing about video is that you can get takeoff to landing completely covered.
 

It depends, of course, on your end purpose. Is a 1920x1080 image enough? If so, a frame grab from 1080p video will work. However, if you want better quality or larger sizes (for print, for example), then a frame grab isn't going to cut it. Again, the OP is running a P3A, which maxes out video at 2.7K (2704x1520). That, too, can be good enough. It just depends on what's needed for the end result.
 
The problem I see with video is that 24 or 30 fps screen grabs can be a bit blurry from VLC if I am moving fast. The photos were all clear. Might be the way VLC screen grabs, might be compression on top of compression from the video.
 
As I said, frame grabs are not ideal. They will never rival, or even come close to, stills taken for that purpose.

One problem is how video works. Many people think video is simply a series of images. It's not; or I should say, most of the time it's not. It's actually a series of changes from frame to frame. Most codecs don't store every frame in its entirety; that method would make video files freaking huge. What they do is store what's called a keyframe. This is, as the name implies, a key to the next sequence: a full frame with all its pixels that doesn't rely on any other frame for its contents. The next frame is then only the pixels that changed from that keyframe, the frame after that only the pixels that changed from the previous frame, and so on until the next keyframe, where the process starts all over again.

Understanding that process is important so you can choose wisely when placing keyframes during a video render. Too few and the video can get out of sync; too many and the video gets unnecessarily huge. The best balance, generally, is keyframes far enough apart that they don't make the video too large, but close enough to give it anchors to stream well if a frame is dropped (I am sure we have all seen it: video that suddenly shows weird blocks in the scene for a short time, and then they suddenly go away; that's when it hit a keyframe).

When you do a frame grab from an app like VLC, it has to go back to the last keyframe and walk forward through every frame since, calculating the pixel changes for each frame, until it reaches the one you want to grab. Then it saves it as a jpeg (remember my explanation of jpeg? Well, here it is in full force).
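
Here's a toy version of the keyframe/delta idea, grossly simplified (real codecs do motion estimation and lossy transforms on top of this, and the names here are made up), but the rewind-and-replay cost of a frame grab is the same:

```python
import numpy as np

def encode(frames, keyframe_interval=5):
    # Every Nth frame is stored whole; the rest are per-pixel changes only
    stream, prev = [], None
    for i, frame in enumerate(frames):
        if i % keyframe_interval == 0:
            stream.append(("key", frame.copy()))
        else:
            stream.append(("delta", frame - prev))
        prev = frame
    return stream

def grab_frame(stream, n):
    # Rewind to the last keyframe, then replay every delta up to frame n --
    # this walk is the hidden cost of grabbing a frame deep in a sequence
    k = max(i for i in range(n + 1) if stream[i][0] == "key")
    frame = stream[k][1].copy()
    for _, delta in stream[k + 1:n + 1]:
        frame = frame + delta
    return frame

frames = [np.full((4, 4), i, dtype=np.int16) for i in range(12)]
stream = encode(frames)
print(np.array_equal(grab_frame(stream, 7), frames[7]))  # True
```

Note that grabbing frame 7 forced a rebuild from the keyframe at frame 5; grabbing frame 4 would have meant replaying four deltas from frame 0.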
 

Part of that blurriness is that the frame grab is 2-3 generations of jpeg-style encoding as well.

Generation 1: Sensor data is encoded to H.264 by the Phantom.
Generation 2: VLC frame grab saved as jpeg.
Generation 3: Photo edit software, then saved as jpeg again.

or

Generation 1: Sensor data is encoded to H.264 by the Phantom.
Generation 2: Processed through a video editor and saved as MPEG.
Generation 3: VLC frame grab saved as jpeg.
Generation 4: Photo edit software, then saved as jpeg again.

or

Generation 0: RAW image taken by the Phantom.
Generation 0: Non-destructive edit in Lightroom.
Generation 0: Edit in Photoshop and save as TIFF.
Generation 1: Export from Lightroom as jpeg.
 
