I think people underestimate the difficulty of implementing waypoint navigation with user-controlled camera positioning.
Simple waypoint navigation, like it was on the P2V+, is easy to implement: the Phantom just uses its orientation and current GPS position to calculate the direction of flight to the next waypoint, and uses GPS feedback to know when it has arrived. The problem starts when you need to give the user camera control in the horizontal plane. If the camera had two degrees of freedom, as it has on the Inspire, it would be relatively easy to let you do almost anything you want. Almost, because even on the Inspire the camera cannot rotate continuously on its vertical axis. Because of this, you have to yaw the aircraft to get your camera shots. Here lies the problem: yawing affects the direction of flight, so to maintain a given path while yawing, the orientation calculations have to compensate for the yaw. That is not trivial to do while keeping the motion smooth.
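The "easy" part above can be sketched in a few lines. This is a minimal illustration, not DJI's actual firmware code: compute the great-circle bearing from the current GPS fix to the waypoint, and use a haversine distance as the "am I there yet" check (names and the arrival threshold are my own choices).

```python
import math

def bearing_to_waypoint(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing in degrees (0 = north, 90 = east)
    from the current GPS fix (lat1, lon1) to the waypoint (lat2, lon2)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2)
         - math.sin(phi1) * math.cos(phi2) * math.cos(dlon))
    return math.degrees(math.atan2(y, x)) % 360.0

def distance_m(lat1, lon1, lat2, lon2):
    """Haversine distance in metres between two GPS fixes."""
    R = 6371000.0  # mean Earth radius
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def reached(lat, lon, wp_lat, wp_lon, threshold_m=2.0):
    """Arrival check: within a (hypothetical) 2 m radius of the waypoint."""
    return distance_m(lat, lon, wp_lat, wp_lon) < threshold_m
```

With a fixed camera, the flight controller just yaws to `bearing_to_waypoint(...)` and flies forward until `reached(...)` is true; nothing couples yaw and track yet.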
Think about a scenario like this: you are on a straight flight path from A to B, with the camera pointed ahead and down 45 degrees, and you start repeatedly yawing right and left to perform a seesaw pan. While doing that, you get gusts of wind. In a perfect implementation, the P3 would be able to do all of that while maintaining a straight line from A to B at constant speed.
Doable? Yes, but only with fairly heavy control-system algorithms. This is why I always thought it would be simpler to have a camera whose motion is fully independent of the flight path. From my perspective, the Inspire still does not provide that solution, since its camera cannot rotate continuously on its vertical axis. I guess this is why there will be a P4, a P5, or a P17: each iteration will hint at what we can do, and the next one will do it better, just slightly better.