This document provides a basic workflow for georeferencing Solo/Pixhawk flight data as well as an optimized workflow for deriving high-quality point clouds, 3D models, and orthoimages. In detail, it covers the following topics:
The basis for good micro-remote-sensing products is the images taken by the drone. They pass through a post-processing workflow in which the original image quality determines the quality of every further step. The process is simple and successful if one keeps a few very basic constraints in mind during data acquisition.
In practice, you should take care of two requirements:
The following workflow requires some tools. You need the mapillary_tools, a very useful collection of Python scripts dealing with image placement on web maps. To run them you also need to install gpxpy, Pillow, exifread, and pyexiv2. On Windows, please first install a pip installer derivative; on Linux you can install pip with `sudo apt-get install python-pip`. Then you may install the dependencies as follows:
pip install git+https://github.com/mapillary/mapillary_tools.git
pip install gpxpy
pip install exifread
pip install Pillow
sudo apt-get install python-pyexiv2 (Linux)
For Windows, you will find pyexiv2 at Launchpad.
Next, if not already installed, you need to install the gpsbabel binaries:
sudo apt-get install gpsbabel
For the visualization of the GPX data you may use QGIS or [mapview](https://cran.r-project.org/package=mapview) in R.
You will also find some utility Python scripts in the uavRst package.
Almost no standalone camera has a GPS or an A-GPS on board. As a result, no image will have a spatial reference at all. Georeferenced images can be aligned better and faster, and, as a side effect, you will get a fairly good georeference (~1-5 m). So even if this task is a bit cumbersome, it helps a lot.
The most prominent way to apply a spatial position is to add geotags to the images, which requires some semi-automated post-processing. The best software for doing so is GeoSetter, which unfortunately does not run very stably under Linux and macOS (besides some other shortcomings).
While adding geotags you have to deal with three effects:
For our needs we want to align the images along the flight track with an accuracy of 2-5 seconds (depending on the lapse rate). It makes little sense to fully automate this process, because you always have to start and stop the cameras manually, and you will always take pictures during ascent and descent as well as on the way to and from the core task.
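One way to check this alignment is to compare the camera clock with the GPS time directly. The following is only a sketch (the file name IMG_0001.JPG is purely illustrative; exifread and gpxpy were installed above): it reads the EXIF timestamp of the first image and the first track point of the GPX log and prints the offset between the two clocks.

```python
import exifread
import gpxpy
from datetime import datetime

# read the EXIF capture time of the first image of the partial flight
with open('IMG_0001.JPG', 'rb') as f:
    tags = exifread.process_file(f, details=False)
img_time = datetime.strptime(str(tags['EXIF DateTimeOriginal']),
                             '%Y:%m:%d %H:%M:%S')

# read the first track point of the Solo/Pixhawk GPX log
with open('current_solo.gpx') as f:
    gpx = gpxpy.parse(f)
gpx_time = gpx.tracks[0].segments[0].points[0].time

# the difference is the offset between camera clock and GPS time
print('camera clock - GPS time:', img_time - gpx_time.replace(tzinfo=None))
```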
If not already done, you need to organize some basic things. It is more than helpful to keep each partial flight and the corresponding GPX file in a separate subdirectory. A straightforward layout may look like the example below.
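For example, a layout like the following (directory and file names are purely illustrative) keeps every partial flight together with its GPX file:

```
flights_2017-05-16/
├── task_1/
│   ├── current_solo.gpx
│   ├── IMG_0001.JPG
│   └── ...
└── task_2/
    ├── current_solo.gpx
    └── ...
```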
You have to find the start and stop image of the partial task. Load the track in QGIS, open the Bing aerial map, and look at the first and last image. In most cases you can easily identify the rough position of the camera. It is by no means necessary to get an extremely exact result; you just need to generate a correct sequence according to the GPX track points, as described below.
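Working with timestamps is often easier than working with positions. A small helper like the one below (a sketch using the exifread package installed above; the *.JPG pattern is an assumption) lists the capture time of every image in the current folder, which makes it easy to pick the start and stop times for clipping the track:

```python
import glob
import exifread

# print the EXIF capture time of every image in the current folder
for name in sorted(glob.glob('*.JPG')):
    with open(name, 'rb') as f:
        tags = exifread.process_file(f, details=False)
    print(name, tags.get('EXIF DateTimeOriginal'))
```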
Open a terminal and navigate to the image/GPX folder.
python ~/dev/R/uavRst/inst/python/add_fix_dates.py . '2017-05-16 16:44:12' 2
gpsbabel -t -i gpx -f current_solo.gpx -x track,start=20170516144408,stop=20170516144930 -o gpx -F clean_task.gpx
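To verify that the clipping worked as intended, you can inspect the resulting file with gpxpy (one of the dependencies installed above), for instance:

```python
import gpxpy

# parse the clipped track and report its temporal and spatial extent
with open('clean_task.gpx') as f:
    gpx = gpxpy.parse(f)

points = gpx.tracks[0].segments[0].points
print(len(points), 'track points')
print('first:', points[0].time, points[0].latitude, points[0].longitude)
print('last: ', points[-1].time, points[-1].latitude, points[-1].longitude)
```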
Now you can finish the task using the utility script from the uavRst package:
python ~/dev/R/uavRst/inst/python/geotag_from_gpx.py . clean_task.gpx
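To confirm that the coordinates actually ended up in the EXIF headers, you can run the same kind of exifread loop as above in the image folder, now printing the GPS tags (again, the *.JPG pattern is an assumption):

```python
import glob
import exifread

# print the GPS tags written into each image
for name in sorted(glob.glob('*.JPG')):
    with open(name, 'rb') as f:
        tags = exifread.process_file(f, details=False)
    print(name,
          tags.get('GPS GPSLatitude'),
          tags.get('GPS GPSLongitude'))
```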
You are done with the preparation now. Time to start Metashape.