Which picture ratio?

Interesting...I did not know that.
 
They're still far superior to the jpegs, but they are absolutely not real RAWs.
 
So, the conclusion is that 16:9 does, in fact, produce larger final photos?
I still can't see this.
If you ignore the what's-DJI-doing-with-raw issue, and just look at what you get from the camera (not trying to go back to pre-correction images), you get the same image +/- the additional top and bottom.
Both are the same width.
The idea that you can get more ground coverage width in 16:9 does not stack up.
In these two images, the only difference I can detect is a very small position difference, presumably due to drift while I changed the settings.
I get the same result whether I look at the DNG file or the JPG.
16:9 does not give you a wider coverage than 3:2
[Image: i-QjbScsr-XL.jpg]

[Image: i-MTdhhGr-XL.jpg]
 
I'm not sure what you are missing either but take a look at my version of your two pics - I haven't edited them, simply placed them on top of each other on photoshop and drawn two red lines

on the left hand side, I have drawn a line which goes from the edge of the tree down through the bottom picture where it intersects that same tree
on the right hand side, I have done the same, frame edges aligned but the line from the edge of the top tree intersects the tree in the bottom frame.

You've taken this looking straight down (your position was dead accurate, so it's a reasonable comparison), which will eliminate a lot of the distortion we've been talking about and reduce the effect. But in the 3:2 image the same trees are closer to the edge of the frame than they are in the 16:9 image, which means the 16:9 has recorded a section of extra detail that is missing from the 3:2, even though the images are still the same width - therefore it's giving more ground coverage.

[Image: trees.jpg]


We aren't talking huge amounts here but it will be exaggerated the further from the lens that the subject is and (probably more importantly) the angle of the camera.
 
I have flown the same ginormous construction property twice a month since August. I started with the P3P, which has a considerably wider FOV. When I started this project, I wanted to pre-program some Litchi missions, which I was positive I would have to adjust after the first flights. I asked here on PP how to do a FOV calculation. A guy on here sent me an Excel file. I wish I could remember his name, because it is simple but genius work.

I measured the property on Google Earth. I calculated the number of pics to get a good stitch and programmed Litchi according to the file's results. It was PERFECT the first time.

Fast forward a bit. I bought the P4P. I took his file, looked up the FOV of the P4P, and copied his math over a few columns, changing it to the P4P FOV.

@#$%, I was not going to be able to use my new bird to fly this property. The math said it would not fit. But wait... there are different FOVs on the P4P. I plugged them in. At 400 ft it said the 16:9 would capture the widest part of the property with only about 2 ft to spare, and the rest of the property with about 15 ft to spare. 3:2 and 4:3 would not capture it.

So... that is the math of some guy I don't know. And my adaptation of the file for the P4P. Does it work?

I reset the Litchi mission height to capture the width of the property in 16:9 and changed the number and spacing of waypoints to account for the different height of the 16:9 frame, with 1/3 overlaps. It was also perfect the first time.

The second time I flew the property after buying the P4P, I had the site manager's national boss yakking in my ear the whole time I was trying to fly. Distracted, I did not switch to 16:9 for the mission (normally I fly 3:2). The property DID NOT fit in the frame flying 3:2. The pics would not even stitch.

So I promise, in reality, 16:9 shoots a wider image than 4:3 or 3:2. There is nearly a 30ft width difference at 400ft between 16:9 and 3:2 and nearly double that between 16:9 and 4:3.

I attached the zipped file for review. Save it, it is awesome. Using Google Earth measurements of ground captured, this file is SPOT ON ACCURATE for both the P3P and the P4P.
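For anyone who can't open the attachment, the heart of that kind of spreadsheet is a single bit of trigonometry. A minimal Python sketch, assuming a pinhole camera and nadir (straight-down) shots; the FOV values below are illustrative, not the spreadsheet's actual figures:

```python
import math

def ground_footprint_ft(alt_ft: float, hfov_deg: float, vfov_deg: float):
    """Width and height on the ground covered by one nadir photo."""
    w = 2 * alt_ft * math.tan(math.radians(hfov_deg / 2))
    h = 2 * alt_ft * math.tan(math.radians(vfov_deg / 2))
    return w, h

# Illustrative FOV values only -- use your own camera's published specs.
w, h = ground_footprint_ft(400, 73.7, 53.1)
print(f"one frame covers roughly {w:.0f} ft x {h:.0f} ft at 400 ft")
```

From there, dividing the property's measured width and length by the footprint (less the overlap you want) gives the waypoint grid.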

Thanks for sharing the Excel spreadsheet, very useful!!
 
I shot horizontal as well as vertical images and when I compared them they were all similar.
As far as I can see, the only difference is the very slight rotational shift I mentioned, and I believe this accounts for the minor differences you picked up with the red lines.
I've shot from a fixed position at a wall and from the air and compared jpg and raw images and still can't see any evidence to support the idea that you somehow get extra width with 16:9.
The evidence just isn't there.
If there is a pixel or two different (and I'm not convinced there is), it's not going to make any difference and you will capture the same width whether you shoot 16:9 or 3:2.
 
Now I'm beginning to understand why I'm not seeing significant differences in the P4's DNG vs. JPG when I try to get the most out of my images. In fact, I think I'm just fooling myself and have been taking it on faith that the DNG would yield better detail. I suspect those who have been touting DNG as advantageous are just toeing the party line with no real verifiable results.

My Nikon NEF, on the other hand, yields tremendous results compared to its JPG when it comes to revealing detail and latitude.

I now plan on using UFRAW to see what I can see.
 
Red reference line or not, these are close enough in horizontal coverage to demonstrate there is no difference between 16:9 and 3:2 that might matter for any practical purpose. The vignetting, barrel distortion, and any in-camera corrections that might be applied are a different story.

We don't need to argue about the physics here, perhaps only to understand them. A 1" (8.8 by 13.2 mm) sensor behind an 8.8 mm focal length lens gives an 84.1 deg FOV (the largest possible diagonal across the sensor) at 3:2 aspect ratio (full sensor area) and a 73.7 deg FOV left to right. The diagonal FOV must be less at 16:9; however, the FOV across the longest horizontal dimension remains 73.7 deg. We know this because the horizontal pixel dimension (as given in DJI's specs) is identical for both formats, suggesting that 16:9 is derived simply by cropping the vertical dimension on-sensor.
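For what it's worth, those two figures drop straight out of pinhole geometry; a quick Python check using the sensor dimensions and focal length quoted above:

```python
import math

def fov_deg(dimension_mm: float, focal_mm: float) -> float:
    """Angle of view across one sensor dimension (pinhole model)."""
    return math.degrees(2 * math.atan(dimension_mm / (2 * focal_mm)))

SENSOR_W, SENSOR_H = 13.2, 8.8   # 1" sensor dimensions in mm
FOCAL = 8.8                      # focal length in mm

diagonal = math.hypot(SENSOR_W, SENSOR_H)
print(f"diagonal FOV:   {fov_deg(diagonal, FOCAL):.1f} deg")   # ~84.1
print(f"horizontal FOV: {fov_deg(SENSOR_W, FOCAL):.1f} deg")   # ~73.7
```

Cropping rows off the top and bottom for 16:9 changes only the vertical term, so the horizontal angle is unaffected.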
 

Yep, the difference in what can be achieved with NEF files compared to the DNGs the Phantom turns out is quite amazing. RawTherapee is also worth a look; the interface is daunting, but once you get to grips with it, superb results can be achieved.
 

What a few of us are trying to get across (and clearly failing) is that the specs DJI have published and the specs we actually have are two very different things. There may well be a 20mp sensor installed in the camera, but the images we are getting out of it are nowhere near a 20mp raw file. They are what appears to be a 20mp image file, but that could just as easily be achieved by scaling up a 2mp picture.

To some people it might not matter but to those of us that are used to 'developing' raw images the difference is immediately apparent and image quality degrades much quicker than we expect.

The red reference lines I added (along with screen grabs of my own raw files) show that there is a distinct difference which is clearly visible - as I said in my post, a few centimeters at 3 metres very quickly becomes a lot more at 500 metres.

Red lines or not, there is clearly more space either side of the trees in the 16:9 shot posted above - this can be achieved in several ways

1, by zooming out - not possible on a Phantom
2, by changing altitude - as this was supposed to be a 'test' there would be no point in that so it's reasonable to discount it.
3, by editing/cropping the image slightly differently in the software

So to say there is no difference when there clearly is a difference is an odd way of proving your point :)

The 16:9 image above (and the ones I posted earlier) appear to have some 'extra' pixels at the edge of the frame - of course, they aren't 'extra' at all, it just suggests that both images have been cropped slightly differently.

It's not unreasonable to assume this, as we already know that the software is applying several corrections to the raw data and cropping the image before it writes to the SD card. Having slightly different algorithms/calculations for the 16:9 and 3:2 output/processing is not unrealistic, as most of the distortion appears in the areas (top and bottom) which are excluded from the 16:9 output.

It doesn't mean the camera isn't any good, it just means that what we were sold is not exactly what we are getting.
 
I didn't say there was no difference, my point, based on Meta's sample images, is that it is insignificant, at least with respect to the FOV depicted by the widest portion of the image.

Do us all a favour - you have the images in Photoshop; let us know how many extra pixels are depicted in the 16:9 image and what that amounts to as a % of the total width of the 3:2 shot. I will be surprised if it's as much as 2%.

My point is that, looking at the samples posted, the amount (if any) of additional horizontal FOV is an irrelevant consideration when acquiring images for stitching. Tracking errors while flying will almost certainly introduce greater error. In fact, shooting in 3:2 will require fewer images, as you aren't throwing away the top and bottom. Subject distance is also irrelevant; any FOV difference will be a fixed percentage.
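To put numbers on the "how many pixels is that" question, here is a hedged sketch: the 5472 px figure is DJI's published horizontal stills resolution (identical in both modes), the 1-degree FOV delta is purely hypothetical, and the geometry assumes a straight-down shot:

```python
import math

H_PIXELS = 5472   # published P4P horizontal stills resolution (both modes)
ALT_FT = 400

def ground_width_ft(hfov_deg: float, alt_ft: float = ALT_FT) -> float:
    """Ground width covered by a nadir shot at the given altitude."""
    return 2 * alt_ft * math.tan(math.radians(hfov_deg / 2))

# Hypothetical comparison: nominal 73.7 deg vs. a 1-deg-wider view.
w_nominal = ground_width_ft(73.7)
w_wider = ground_width_ft(74.7)
extra_ft = w_wider - w_nominal
extra_px = H_PIXELS * extra_ft / w_wider
print(f"1 deg extra HFOV ~ {extra_ft:.0f} ft, ~{extra_px:.0f} px, "
      f"{100 * extra_ft / w_nominal:.1f}% of frame width")
```

Even a full extra degree of horizontal FOV works out to under 2% of the frame width, which supports the point that flight-path error will dominate.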
 
OK, here's another look at the image sizes - using Meta4's images from last night, I've overlayed the 16:9 on the 3:2
[Image: sizes2.jpg]


The 16:9 image is B/W and semi-transparent so you can see the detail corresponds in both shots.

I corrected for rotation (as mentioned above) and then scaled the 16:9 image so that it pretty much matches the 3:2 below it.

As you can see, the 16:9 image has marginally more coverage.
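For anyone wanting to repeat this comparison without Photoshop, here is a rough equivalent using Pillow. The `overlay_diff` helper and the filenames are hypothetical, and the two frames are assumed to be the same nominal size:

```python
from PIL import Image, ImageChops

def overlay_diff(path_a: str, path_b: str, out_path: str) -> None:
    """Blend two frames 50/50 and also save an absolute per-pixel
    difference image, which makes any offset or scale change obvious."""
    a = Image.open(path_a).convert("RGB")
    b = Image.open(path_b).convert("RGB").resize(a.size)
    Image.blend(a, b, 0.5).save(out_path)                 # 50/50 overlay
    ImageChops.difference(a, b).save("diff_" + out_path)  # black = identical

# e.g. overlay_diff("shot_3x2.jpg", "shot_16x9.jpg", "overlay.png")
```

The difference image is black wherever the frames agree, so any genuine extra coverage shows up as bright bands at the edges rather than relying on eyeballing red lines.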
 

You posted that whilst I was messing about in Photoshop :)

I get the point you are making, but it's also missing the point that the images we are getting are already edited - they've already been 'warped' to correct for distortion. As these edits inevitably alter the shape and details of the images (more noticeably at the edges), it has an effect on any further edits/stitching.

Yes, the field of view should remain constant if we were viewing the whole image - but we are not, and as it seems to vary from shot to shot, it does matter. As for viewing angle, subject distance is vital: if the field of view varies by as little as 1 degree, it can make a huge difference at the horizon.
 
A 73 deg field of view is 73 deg regardless of how far the camera is from the horizon. You will see more of everything at every distance in the frame if you move the camera further away from it. The FOV may decrease slightly if you focus on a closer subject; this is quite common with a lot of lenses, as the effective focal length will often reduce as you focus closer than the infinity setting. A one-degree field of view variance should, for the purpose of this discussion, be expressed as a number of pixels (sensor photosites). The success of any stitching operation is largely dependent on the number of overlapping pixels available. Whether it's a distant mountain range or a tree you're hovering 10 m behind makes no difference; you will see the same proportionate amount more or less of each with any incremental change of FOV.
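The constant-percentage point can be made concrete with a small sketch. The 1-degree difference in horizontal FOV below is hypothetical; the distances are the ones mentioned in the thread:

```python
import math

def coverage_m(dist_m: float, fov_deg: float) -> float:
    """Linear coverage across the frame at a given subject distance."""
    return 2 * dist_m * math.tan(math.radians(fov_deg / 2))

# Hypothetical 1-deg difference in horizontal FOV: the absolute gap
# grows with distance, but the percentage is identical at every range.
for dist in (3, 100, 500):
    a, b = coverage_m(dist, 73.7), coverage_m(dist, 74.7)
    print(f"{dist:>4} m: +{b - a:.2f} m ({100 * (b - a) / a:.1f}%)")
```

At 3 m the hypothetical gap is a few centimetres and at 500 m it is several metres, yet both are the same fraction of the frame width - which is why both sides of this argument can be looking at the same pictures.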
 
So to say there is no difference when there clearly is a difference is an odd way of proving your point
Looking at shots taken 30 seconds apart from a Phantom hovering on a windy day doesn't convince you.
So here are two new images from a Phantom sitting on a table.
[Image: i-2Jb49tT-XL.jpg]

[Image: i-WH4bVn9-XL.jpg]

They appear to be the same to my eyes but if there are any extra pixels, I don't care.
There certainly aren't enough to make any practical difference to anything you will ever do with an aerial camera.
The discussion has ended up in pointless pixel peeping that won't do anything to make anyone's images any better to look at.
But to your other point, forget the file. From ACTUAL bird-in-the-air results:
I CAN NOT fit the width of the property in at 400ft, with the P4P at 3:2 or 4:3
I CAN fit the width of the property in at 400ft with, the P4P at 16:9
There's more than enough evidence here to show that shooting 16:9 will not help fit a larger area into your images.
 
Yep, you are right, it's a pointless discussion; no matter what anybody else says, you discount it even if they offer evidence to back up their argument :)
 
The success of any stitching operation is largely dependent on the number of overlapping pixels available.

Yep, and if every frame coming out of the P4P camera has been 'processed/distorted', getting those pixels to line up accurately becomes more and more problematic - which is what some of us are noticing.

Panoramas of large urban areas seem to suffer the most because they require accurate data. I've got a few panoramas that just wouldn't stitch using the DNG/JPEG files but when I did the initial corrections in Raw Therapee or UFRaw then they worked.

I would far rather the raw file we are being served up in the DNG was a genuine, unedited raw - it would save me a lot of grief.
 
Yep, you are right, it's a pointless discussion, no matter what anybody else says you discount it even if they offer evidence to back up there argument :)
Ok - let's consider the OP's original question. The 3:2 image will, to use his words, give the fullest 20mp image. In fact, it is the only available aspect mode that can produce a 20mp image. I don't think anyone is disputing that the 16:9 image seems to be a tiny bit wider across the frame. To speak for myself (although I would expect most who have even a casual interest in photography would agree), I am not discounting it; I am saying it's of no consequence. To fly the additional altitude or distance required to make up for it is a trivial exercise.
 
Shoot a few extra frames and increase your overlap. Any lens corrections you might apply in pre-processing should be a simple click once you have the profile. Stitching is a delight compared to not so long ago, when you needed to set your nodal point and fiddle for hours manually assigning control points. Allow 40% overlap and shoot all frames at the same exposure settings, and it's hard to find a batch of images that won't stitch.
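The 40% overlap figure converts directly into waypoint spacing; a trivial sketch, with an illustrative footprint value rather than a measured one:

```python
def waypoint_spacing(footprint_ft: float, overlap: float = 0.4) -> float:
    """Distance to move between shots so that consecutive frames
    share `overlap` fraction of their ground footprint."""
    return footprint_ft * (1 - overlap)

# e.g. an illustrative ~400 ft frame height at 40% overlap -> ~240 ft/shot
print(waypoint_spacing(400, 0.4))
```

The same spacing rule applies along both the flight line and between adjacent lines, so the photo count for a mission is just (area dimensions / spacing), rounded up.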
 
