Why does the camera need a “Revolutionary Dynamic-stabilised Active Mount” for photogrammetric mapping?

Any drone (or UAV) flies in accordance with the natural physics of the air, which means the airframe tips, tilts and yaws during flight as thermal and wind turbulence forces act on it. Unless compensated for, a camera body rigidly fixed within any drone airframe will therefore produce photos of the ground with the effects of tip, tilt and yaw impressed on the raw images, which means distortion and hence a significant metric error source for mapping. DroneMetrex has developed its own DMX-FMS (Photogrammetric Flight and sensor control Management System) integrated into the firmware of the autopilot. Each photo is captured at nadir (pointing vertically downwards regardless of the tip and tilt of the airframe) because the DMX-FMS drives our Dynamic-stabilised Active Mount to counter-react and compensate for the airframe tips and tilts, as well as the yaw (the rotation known as “crab”). This is exactly why large photogrammetric cameras are flown on stabilised mounts of the same kind in large manned aircraft.
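As a rough illustration of how this kind of compensation works, the minimal sketch below commands a stabilised mount to the negative of the airframe attitude at the moment of exposure. The data structure, function names and travel limit are illustrative assumptions only, not DroneMetrex's actual DMX-FMS firmware interface.

```python
# Minimal sketch of nadir-holding compensation: the mount is commanded to the
# negative of the airframe attitude so the camera axis stays vertical and the
# image stays aligned with the flight line (no "crab").
# Names and limits are illustrative only, not DroneMetrex firmware code.

from dataclasses import dataclass

@dataclass
class Attitude:
    pitch_deg: float  # nose up/down ("tip")
    roll_deg: float   # wing up/down ("tilt")
    yaw_deg: float    # heading offset from the planned flight line ("crab")

def clamp(value: float, limit: float) -> float:
    return max(-limit, min(limit, value))

def gimbal_command(airframe: Attitude, travel_limit_deg: float = 25.0) -> Attitude:
    """Counter-rotate the mount so the camera points at nadir, crab-free."""
    return Attitude(
        pitch_deg=clamp(-airframe.pitch_deg, travel_limit_deg),
        roll_deg=clamp(-airframe.roll_deg, travel_limit_deg),
        yaw_deg=clamp(-airframe.yaw_deg, travel_limit_deg),
    )

# Example: a gust pitches the airframe 4 deg nose-up, rolls it 2 deg and crabs
# it 6 deg; the mount is driven to -4, -2 and -6 deg at the moment of exposure.
print(gimbal_command(Attitude(pitch_deg=4.0, roll_deg=2.0, yaw_deg=6.0)))
```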

How is the TopoDrone different from all other systems that claim accurate mapping from UAVs?

We started from fundamental principles of accurate photogrammetric mapping, and designed, integrated and built a Drone Mapping System (TopoDrone-100) specifically for accurate, repeatable and reliable mapping. We rigorously account for all of the known photogrammetric errors, including:



• Our own “Dynamic-stabilised Active Mount” to ensure near-nadir, crab-free imagery

• Forward Motion Compensation (FMC) and Lateral Motion Compensation (LMC) (a back-of-the-envelope sketch follows this list)

• Lens and camera distortion correction, including due consideration of the in-flight temperature effects on the camera calibration

• Elimination of the “rolling shutter effect” and pixel distortions that are inherent in the sensors of COTS digital cameras
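The following back-of-the-envelope sketch shows why FMC and LMC matter: it estimates how far the image moves across the sensor during a single exposure. All figures (pixel pitch, focal length, flying height, speed and shutter time) are illustrative assumptions, not TopoDrone-100 specifications.

```python
# Why forward motion compensation (FMC) matters: during the exposure the ground
# image moves across the sensor, smearing and displacing pixels.
# The figures below are illustrative assumptions, not TopoDrone-100 specifications.

def ground_sample_distance(pixel_pitch_m: float, focal_length_m: float, altitude_m: float) -> float:
    """Ground footprint of one pixel (metres) for a nadir photo."""
    return pixel_pitch_m * altitude_m / focal_length_m

def motion_blur_pixels(ground_speed_ms: float, exposure_s: float, gsd_m: float) -> float:
    """Image motion during the exposure, expressed in pixels."""
    return ground_speed_ms * exposure_s / gsd_m

gsd = ground_sample_distance(pixel_pitch_m=4.8e-6, focal_length_m=0.016, altitude_m=100.0)  # ~3 cm
blur = motion_blur_pixels(ground_speed_ms=15.0, exposure_s=1 / 500, gsd_m=gsd)              # ~1 px

print(f"GSD ~ {gsd * 100:.1f} cm, forward image motion ~ {blur:.1f} px per exposure")
# FMC shifts the image (or the sensor readout) by this amount during the exposure;
# LMC does the same for the cross-track drift caused by crosswind.
```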


The TopoDrones are built, serviced, maintained and upgraded in Australia on our premises. We integrate all the hardware components and have developed our own software, embedded into the autopilot firmware, to control our “Dynamic-stabilised Active Mount”, GPS synchronisation and even temperature recording for accurate correction of sensor pixel distortions.
All the errors that other systems treat as random movements are measured and accounted for by our unique hardware and software photogrammetric solution. Hence, large random errors become systematic errors in the TopoDrone, which can be modelled correctly and eliminated.

Can I use other “Cloud Processing” UAV software solutions for accurate mapping with the TopoDrone data?

Yes, you can, provided you use our Pre-Processing Software (DMX-PPS) to prepare distortion-free images first. Let me explain: most of the common Commercial-Off-The-Shelf (COTS) cameras used in drones record the exposed imagery via a rolling shutter mechanism. As the drone with the camera flies, the forward and side motions therefore affect the image geometry, because the CMOS imaging chip is read out line by line during the exposure. In effect, every pixel of every frame is written at a slightly different time. This means that the full-frame digital image carries random geometric pixel errors which severely affect the image geometry and the final product accuracies. The amount of this random distortion varies from one photo to the next, so a unique and robust camera calibration model cannot be established for interior orientation and image processing in photogrammetry, not even by self-calibration.
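To give a feel for the scale of this effect, the illustrative sketch below estimates how far each sensor row is displaced along track when the frame is read out over a typical rolling-shutter readout time. The readout time, image speed and ground sample distance are assumptions for illustration only.

```python
# Illustrative sketch of the rolling-shutter geometry error described above:
# each sensor row is read out slightly later than the previous one, so rows
# image the ground from slightly different camera positions.
# All numbers here are assumptions for illustration.

def rolling_shutter_shift_px(row: int, total_rows: int, readout_time_s: float,
                             ground_speed_ms: float, gsd_m: float) -> float:
    """Ground shift (in pixels) of a given row relative to the first row read out."""
    row_delay_s = readout_time_s * row / total_rows
    return ground_speed_ms * row_delay_s / gsd_m

rows = 4000
for r in (0, rows // 2, rows - 1):
    shift = rolling_shutter_shift_px(r, rows, readout_time_s=0.030,
                                     ground_speed_ms=15.0, gsd_m=0.03)
    print(f"row {r:5d}: shifted ~ {shift:4.1f} px along track")

# Here the last row lags the first by ~15 px; with airframe rotations added,
# the distortion becomes scene- and frame-dependent, which is why it cannot be
# absorbed by a single camera calibration.
```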

Random geometric distortions on each individual photo will produce unreliable, unpredictable and inaccurate results, irrespective of the kind and make of software used for the photogrammetric computations. To achieve the highest possible accuracy, our DMX-PPS rigorously accounts for these errors, because in the TopoDrone System they are systematic, measurable and known: they are no longer random. We also account for the temperature effects on the camera calibration parameters.

Why don’t you offer a parachute landing option?

On flying days with no wind, it may be possible to control the landing target area for a drone with a deployable parachute. However, once winds are introduced (and this is the normal situation for most project work), the landing target area cannot be guaranteed with a parachute: once it is deployed, the wind can readily blow the now powerless drone wherever it will, a very dangerous situation for, say, open mine site mapping. A second consideration is the ground impact of a dead weight hitting the ground even with the parachute deployed. Typically, parachutes are designed to “land” the drone at a velocity of 2-3 metres per second. Now think of the abrupt dead stop as the drone, with the camera inside, hits the ground, having travelled the last 2 metres (roughly the height of a tall person) in just one second. It is not just the drone; think of the calibrated camera and lens stopping abruptly at this velocity. We prefer the gentle skid landing or net landing of our aircraft!
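As a rough illustration of that impact, the short calculation below converts the quoted 2-3 metre per second descent rate into an average deceleration for a dead stop over a few centimetres of give. The stopping distances are assumptions for illustration, not measured values.

```python
# Rough illustration of the landing-impact argument above, with assumed figures
# (descent rates from the text, stopping distances assumed for illustration).

def impact_deceleration_g(descent_ms: float, stop_distance_m: float) -> float:
    """Average deceleration (in g) for a dead stop over the given crush distance."""
    g = 9.81
    return descent_ms ** 2 / (2.0 * stop_distance_m) / g

for v in (2.0, 3.0):            # parachute descent rates quoted above, m/s
    for d in (0.02, 0.05):      # 2-5 cm of give in the airframe and ground
        print(f"{v} m/s descent, {d * 100:.0f} cm stop: ~{impact_deceleration_g(v, d):.0f} g")

# Even these optimistic cases reach several g, and the harder ones exceed 10 g,
# loads that a calibrated camera and lens should not be asked to absorb repeatedly.
```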

What is the difference between a calibrated photogrammetric mapping camera and a high-quality COTS (Commercial Off The Shelf) camera?

COTS cameras by themselves cannot fulfil the accurate photogrammetric mapping requirements due to the above-mentioned systematic and random errors. Currently we are using specially modified COTS cameras and compensating for all the error sources to achieve metric-equivalent camera accuracies. At the same time, we are developing our own metric camera system for UAVs which can be installed on any other UAV system, i.e. a distortion-free camera with forward and side motion compensation enabled!

So any drone system that uses a high quality COTS camera will basically get the same mapping results as the TopoDrone-100, right?

No! Nobody can fool the natural physics and mathematics of the world by using just pretty photographs and attempting to distribute all the inherent random errors via the so-called “self-calibration bundle adjustment”. There are a number of inherent random errors introduced by using digital COTS cameras on a UAV, and these MUST be accounted for by a rigorous mathematical method for accurate photogrammetric mapping.