DronePan

All very good questions -- please email me: pw [at] unmannedairlines [dot] com and we'll get it all answered!
With all due respect, I don't think it is advisable to take this offline. These are all valid questions that everyone using your app needs to have answered. Can you please answer them here, after obtaining the answers? If you only need a copy of the questions emailed to you, for investigation before answering, please feel free to copy my questions and paste them into an email that you can send to yourself. :cool: If you wish to contact me directly, please send me a PM.
 
Thanks for the feedback!
1) The newer DJI SDK actually detects the type of bird, so this type of issue will be resolved soon.
Thanks again for the thorough feedback!
Great! It was my boneheaded mistake, but if there's one bonehead, there are likely others lurking in the bushes, who could easily make the same mistake! :eek:
 
Lots of questions for you, some of which were precipitated by a near crash this evening!

1. Are your two shots at the bottom in the 20 shot sequence shot at 90° to each other, or at 180°, as shown in the Inspire demo video? If shot at 180°, the second image is just an upside down version of the first. What would that gain?
2. Are you currently shooting the 20 shots in JPG only?
3. Can you copy the current camera mode settings from DJI GO?
4. If so, then the user could specify JPG+DNG in DJI GO, and get raw and jpg images from each shot.
5. If so, the user could also specify a 5 frame AE bracketed JPG+DNG mode and your 20 shot sequence could automatically create a total of 200 shots (5 bracketed shots of each of the 20 views, in both JPG and DNG)
This would take a full 10 minutes, but your app could rotate the P3P and gimbal to the next shot position immediately; the 30-second processing time of each shot would always give the gimbal plenty of time to stabilize before the next shot.
6. Can DJI GO and Dronepan both run at the same time in a task-switching mode?
7. If so, when you complete a 20 shot pano, can you seamlessly reconnect to the P3P with DJI GO, or what needs to take place to accomplish this in a less than seamless method?
8. Does DJI GO have to be terminated before starting up Dronepan?
9. Does your app require first installing the developer SDK onto the P3P, the RC, and the batteries to work correctly?
10. If the signal connection is lost at high altitude directly overhead, while under Dronepan control, how do you cancel the return-to-home descent?
11. Some have reported that the only way to get DJI GO to reconnect to the P3P, after completing a Dronepan pano, is to unplug the control cable and replug it. Is this true?
12. How far out is your public release of the P3P version?
1) we're actually investigating removing the last two shots completely as we've had a power user demonstrate that you can achieve a complete stitch without those two.
2,3,4) the user sets camera settings in GO, DronePan does nothing in the way of controlling or altering camera settings.
5) due to a) limitations of the SDK and b) development resources on our end, bracketing gets much more complicated. if we had endless access to lots of developers, we could do all sorts of cool things like this but playing with the timing in that manner requires more resources than we have to dedicate at this time.
6,7) In the latest update of GO, yes, DJI has addressed the occasional video feed disconnection that users experience when switching between the two. This is huge, as it has been one of the worst bugs and we couldn't address it -- the problem had to be sorted out on DJI's end. While testing this morning I seamlessly switched between the two numerous times. As such, the next build that we send out within the next week or so will have this ironed out.
8) No
9) No
10) I will have to confirm, but I do not think we have a method for doing this, which is why we recommend users keep it close to home and well within LOS.
11) I've heard Android users plug/unplug due to the Android requirement that only one app can be associated with the plug at any given time. That said, I have not tested Android and can't speak in detail on this. I have never needed to do this with iOS.
12) hard to say, but I think we'll be there in another week or two.

For your general understanding related to the logic, what happens is we tell I1 and P3 to yaw to a position and then take a photo. Right now the photo command is done with a timer, independent of the yaw command. Unfortunately sometimes the aircraft doesn't yaw when we expect or it's delayed. The timer doesn't care about this and triggers the photo regardless of whether or not the aircraft is in a new position. The right thing for us to do is wait for the aircraft to tell us "Hey, I'm in a new position and you can take a photo now!" We are hoping that DJI will build out the SDK so that we have better control of what happens when.
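
To make the difference between the two approaches concrete, here is a minimal Swift sketch. The helper functions and timing values are hypothetical placeholders for illustration only, not actual DronePan code or DJI SDK calls.

```swift
import Foundation

// yawAircraft(to:completion:) and shootPhoto() are hypothetical placeholders,
// not actual DJI SDK calls; the interval value is arbitrary.

func yawAircraft(to heading: Double, completion: @escaping () -> Void) {
    // ...send the yaw command and call completion once the aircraft
    // reports that it has reached the new heading...
    completion()
}

func shootPhoto() {
    // ...send the shutter command...
}

// Current behaviour: the yaw command and the photo command run on
// independent timers, so the shutter can fire before the yaw finishes.
func timerBasedCapture(headings: [Double], interval: TimeInterval) {
    for (index, heading) in headings.enumerated() {
        let offset = interval * Double(index)
        DispatchQueue.main.asyncAfter(deadline: .now() + offset) {
            yawAircraft(to: heading) { }     // completion ignored
        }
        DispatchQueue.main.asyncAfter(deadline: .now() + offset + interval / 2) {
            shootPhoto()   // fires whether or not the yaw has completed
        }
    }
}

// Desired behaviour: only shoot once the aircraft confirms the new position.
func eventDrivenCapture(headings: [Double]) {
    guard let heading = headings.first else { return }
    yawAircraft(to: heading) {
        shootPhoto()
        eventDrivenCapture(headings: Array(headings.dropFirst()))
    }
}
```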
 
Thank you for your detailed answers. However, I am still a bit puzzled as to the difficulty with the AE bracketing problem. You state that "the user sets camera settings in GO, DronePan does nothing in the way of controlling or altering camera settings."

1. I specify in DJI GO that my still captures are to be JPG+DNG, set the camera to record stills using auto exposure instead of manual, set WB to Auto LOG, and then press and hold the on-screen shutter release button to bring up the 5 different shutter release modes. Instead of the default Single exposure, I select the AE Bracketing mode and select a 5 shot bracket under the sub menu.
2. I then switch to Dronepan and shoot a panorama.
===>What will happen???

If your first statement is true, a 20 shot sequence of 5 bracketed exposures should occur in both JPG and DNG, producing 200 images, 100 JPG and 100 DNG.

It takes about 30 seconds to complete a 5 shot bracketed JPG+DNG AE exposure. If your camera timer is set to 35 seconds and your yaw command is set to initiate immediately after the timer has completed the shutter release, the yaw command should be completed by the time the 5 shot AE bracket has completed processing because it has 30 seconds to complete the yaw.
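
For reference, the arithmetic behind this proposal written out as a short Swift sketch; the 30-second processing figure is the estimate given above, not a measured value.

```swift
// Figures from the post above; nothing here is measured.
let positions = 20        // yaw/gimbal positions in the pano
let bracketSize = 5       // AE-bracketed frames per position
let formats = 2           // JPG + DNG
let totalImages = positions * bracketSize * formats          // 200 images

let shutterInterval = 35.0    // seconds between shutter triggers
let processingTime = 30.0     // approx. time to finish one 5-shot JPG+DNG bracket
// The yaw would start right after each trigger and run while the camera is
// still processing, so it has roughly the whole processing window to finish
// and settle before the next trigger fires.
let yawWindow = processingTime                                   // ~30 s
let totalDurationMinutes = Double(positions) * shutterInterval / 60.0   // ~11.7 min
print(totalImages, yawWindow, totalDurationMinutes)
```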

What am I missing here, and why won't this method work?

You never did answer my original question about the bottom two shots.
Why does your video state they are shot at 180° from each other, when that is actually the same original photo upside down? Are they actually taken at 90° to each other, which would make far more sense?

If you eliminate the straight down shot, you will lose the best part of the panorama, and introduce distortion into that portion. You will also likely cause some stitching programs to fail, even if others can handle the missing downward shot(s).

Also, if your 15° of overlap at 60° intervals of the 360 (based upon the 90° FOV of the lens) is causing stitching problems, then shoot 8 shots at 45° intervals instead, and have 22.5° of overlap on each side.
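
The overlap arithmetic behind those numbers, as a quick Swift sketch assuming the 90° horizontal FOV mentioned above:

```swift
// Overlap per side between adjacent frames in a row, for a given horizontal
// FOV and rotation interval (degrees).
func overlapPerSide(fovDegrees: Double, intervalDegrees: Double) -> Double {
    (fovDegrees - intervalDegrees) / 2.0
}

print(overlapPerSide(fovDegrees: 90, intervalDegrees: 60))   // 15.0° per side, 6 shots per row
print(overlapPerSide(fovDegrees: 90, intervalDegrees: 45))   // 22.5° per side, 8 shots per row
```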

I have been shooting all 8 directions facing straight down (using 45° rotations to cover the full 360°), but I now realize that only 4 are needed, for the same reason your two 180°-rotated downfacing images are the same: my last 4 are just upside-down versions of the first 4!

Looking forward to the public release in a week or two.:cool:
 
GG - you keep saying that the last 2 downward photos are inverted duplicates. How do you come to this statement? Do you always shoot above a symmetrical landscape? My latest panorama was taken above our local church, which would sprout two extra transepts at the west end if you are right. It seemed to me that my last 2 photos were at 90 deg when I had expected them at 180.

What I would like to see is a bit more sky. If they go for your suggestion of 8 shots per rotation it may be worth looking up by 30 deg between the blade arms for an extra row.
 
Good to know. That explains it. Thank you! Sell the sizzle on YouTube and Facebook and offer the steak elsewhere! I also found these guys, who have an incredible compilation of what is possible, including full spherical panorama videos! :cool: Don't mind the Russian text that appears on much of their website, even when the English version is selected, as they are a Russian team.
" www.airpano.ru/files/Istanbul-Turkey/m-1 "

What plugins are they using?
 
You are confirming my strong suspicions. The Dronepan YouTube video for the Inspire1 stated and demonstrated a 180° rotation of the Inspire1 camera for the bottom 2 images, which made absolutely no sense. I have asked them this same question no less than 6 times, and they have never answered it. They keep dodging the question! If there are only 2 shots at the bottom, they have to be at 90° to each other, and not 180°! The Inspire1 demo video is simply wrong, and they have apparently changed the shots on the P3P beta to be at 90° from each other.

Another row at 120° would help for those unable to composite in another sky at the top using Photoshop, but will require cloning over the props in the frame anyway. If you use PhotoStudio Pro, you can simply crop the top row in the final stitched panorama just below where the props appear in the frame, and it won't show a black ceiling, like PTGui does, without compositing in a sky.:cool:
 
What plugins are they using?
They are using their own, from what I can discern. They are also shooting spherical panorama videos with overlapping GoPros, as well as a DSLR video rig with up to 4 cameras, each with a 14 mm lens, which, when combined, cover every direction! If we attached one of the smaller $300 spherical video cameras to the P3P, we could do the same thing on a smaller scale in the air, too. Because of the weight of the multi-camera DSLR video rig, these guys are flying real helicopters! Really cool video imagery though! :cool:
 
No FB here as well. I'm finding DronePan easy on iOS. No drop outs. I hope the new release is for iOS 9.02. Haven't updated yet, waiting on the DronePan update. Keep up the good work, pdw.
 
I think what you're missing is that the shutter timer and yaw timer operate independently. Sure, conceptually, the method that you're describing makes complete sense. However, with the SDK we do not have the ability to easily introduce additional timing scenarios to that level.

I think the video is wrong, and those last two are actually vertical shots at 60 degrees from each other, which was simply a result of using the same method as the previous loops rather than building out specific code for the bottom. Of course we have a lot of optimizing to go still :)

You're right though, it would be awful to have holes in the bottom... that's the last thing we want!
 
You never answered my first question:
1. What would currently happen after that detailed setup I just described, when I switched to Dronepan and started the spherical panorama?

Your promotional YouTube video also shows the Inspire1 camera rotating 180° after taking the first downward shot, to take the second, under the control of Dronepan. The audio description accurately describes what is taking place with the camera facing the ground. However, the methodology is flawed. 180° rotation just copies the first image upside down.

You still have yet to answer my second question:
2. What orientation to each other are the two downward photos currently taken at?
90°? 60°? 180°?
I have only asked this specific question EIGHT times now, still without an answer...

It's not an issue of holes on the bottom, but of the lens distortion that would otherwise be present in the imagery directly below the P3P if you eliminate the bottom shots and replace them with 30° shots.

3. Since you have no idea what the current camera settings and camera mode settings are when Dronepan is initiated, and also have no idea what speed of microSD card is in the P3P, how can you possibly come up with an algorithm that times the necessary yaw and gimbal elevation changes between shots? You have no idea how long the images will take to process, since you didn't set them. You said the shutter timer and the yaw timer operate independently (and I assume the gimbal elevation timer does as well). What intervals do you currently have each of those three independent timers set at?

4. The only way you can have consistent, accurate results is to specify to the user, before Dronepan is initiated, what the necessary camera settings must be and which of the 5 camera modes the camera must be set to, including all submenus. Since you don't do that, and have no way of knowing what those settings are (according to your statements), you will never achieve consistent results until you do! You are completely dependent upon those settings for your final results. If the previous photo processing sequence is incomplete (and some, like AE bracketing, can take 30+ seconds to complete while the shutter circle wheel spins), the next shutter release sequence will be ignored! Try it manually, if you don't believe me!
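
A rough Swift sketch of the kind of check being described here; cameraIsBusy() and triggerShutter() are hypothetical placeholders, not real DJI SDK calls, since the actual SDK capabilities are exactly what is in question.

```swift
import Foundation

// Defer the next shutter command while the camera is still writing the
// previous bracket, instead of firing it blindly.

func cameraIsBusy() -> Bool {
    // ...query whether the camera is still processing/writing images...
    return false
}

func triggerShutter() {
    // ...send the shutter command...
}

func shootWhenCameraIsReady(retryEvery interval: TimeInterval = 1.0) {
    if cameraIsBusy() {
        // Previous sequence still processing; a trigger sent now would be ignored.
        DispatchQueue.main.asyncAfter(deadline: .now() + interval) {
            shootWhenCameraIsReady(retryEvery: interval)
        }
    } else {
        triggerShutter()
    }
}
```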

Still seeking clarity, as your answers so far (or lack thereof) demonstrate a lack of understanding of the very manual process you are seeking to automate.
 
1) nothing. it won't work.

2) As I stated above, "60 degrees from each other"

3) we can't. we are testing / adjusting different intervals on a near daily basis.

4) sounds great, thanks!
 
pdw, lots of... strongly worded comments on here. Let me take a moment to say thanks for the FREE app that none of us have paid for. I have been testing as time allows on both the I1 and P3P. Not perfect; however, where else are people going to find a better app at a similar price point?

Have a great afternoon! Looking forward to future iterations, even if you decide to start charging.

GT
 
Have a look at PTGui and, I think, Microsoft ICE.
 
