Aerial robotics platforms can capture 3D data from many unique vantage points. UAVs, or "drones," are a natural tool for acquiring data in this format alongside traditional video and stills. There are two major ways to convert the real world into a digital one without manually 3D modeling the scene: 3D point-cloud acquisition and photogrammetry. Acquiring raw 3D point clouds and reconstructing a 3D scene from a sequence of photos are drastically different technologies. Raw 3D point-cloud measurement is generally done with a LIDAR sensor, which uses a laser, or a group of lasers, to measure distances from the sensor to objects in the physical world.
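At its core, LIDAR ranging times a laser pulse's round trip and converts that time to distance. A minimal sketch of the conversion, assuming nothing about any particular sensor (the 667 ns pulse time is an illustrative value, not a spec):

```python
# Illustrative sketch of LIDAR time-of-flight ranging (not any specific
# sensor's firmware): the sensor times a laser pulse's round trip, and
# range is half the distance light travels in that interval.

SPEED_OF_LIGHT_M_S = 299_792_458.0

def range_from_time_of_flight(round_trip_s: float) -> float:
    """One-way distance in meters for a measured round-trip time."""
    return SPEED_OF_LIGHT_M_S * round_trip_s / 2.0

# A pulse returning after ~667 nanoseconds corresponds to roughly 100 m.
print(round(range_from_time_of_flight(667e-9), 1))
```

A real multi-beam unit repeats this measurement hundreds of thousands of times per second across many lasers, which is what builds up the point cloud.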
Photogrammetry has recently come to the forefront because of the simplicity of the process and the clarity of the models it can produce. In traditional photogrammetry, a digital camera collects a series of pictures that can be tied to position data (GPS, or more advanced non-GPS positioning as provided by our partners 5D Robotics) for geolocation. The level of detail in the resulting 3D model or stitched orthomosaic is directly related to the camera's sensor sensitivity, sensor size, and resolution.
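The link between camera specs, flying height, and detail is commonly summarized as ground sample distance (GSD): the real-world width covered by one pixel. A sketch of the standard formula; the example numbers (a roughly 6.17 mm wide, 4000-pixel sensor behind a 3.6 mm lens, typical of small-drone 12 MP cameras) are illustrative assumptions, not the specs of any aircraft mentioned here:

```python
def ground_sample_distance_cm(sensor_width_mm: float, image_width_px: int,
                              focal_length_mm: float, altitude_m: float) -> float:
    """Ground width of one pixel in centimeters.

    GSD = (sensor width * flying height) / (focal length * image width).
    """
    return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

# Assumed example: a ~6.17 mm wide, 4000 px sensor behind a 3.6 mm lens
# flown at 60 m gives a GSD of about 2.57 cm per pixel.
print(round(ground_sample_distance_cm(6.17, 4000, 3.6, 60.0), 2))
```

Halving the altitude or doubling the resolution halves the GSD, which is why sensor choice and flight planning drive model detail.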
Photogrammetry:
- Texture map included in 3D model
- Less expensive acquisition (smaller drones)
- High value
- Less precise than LIDAR
- Time-consuming processing
- Change detection
- Environmental monitoring
- Video gaming and 3D environment generation where accuracy is not the first priority
LIDAR:
- Precise measurement of positions within the point cloud (sub-5 mm accuracy)
- Highly accurate individual positions of items within the point cloud
- Engineering-ready data
- Geo-located, with a high degree of confidence in measurements
- More expensive acquisition (heavier sensors)
- Generally does not include a texture and terrain map
- GIS surveys
- Highly accurate measurements of a 3D environment
Drones make acquisition easier and more versatile thanks to their ability to launch and recover in urban environments, their small form factor, and their precision flying characteristics. The size of drone used for both applications varies from a small, sub-4 lb commercial aircraft all the way up to a ~40 lb industrial aircraft. All have vertical takeoff and landing (VTOL) capability for a small ground and operations footprint, making deployment simple. Aerial MOB ensures proper FAA and other regulatory compliance is met when flying these types of sensors on our aircraft.
Capturing LIDAR data of any substantial quality requires a drone to carry a payload of at least eight (8) to ten (10) pounds, with an all-up system weight close to 40 lbs. The drones required for photogrammetry range from 4 lbs up to 40 lbs, depending on the level of detail required for the 3D model or orthomosaic. The samples below were taken with a 4 lb drone carrying a gimbal with a 12 MP camera.
This sample of LIDAR data was obtained using a Velodyne HDL-32 integrated onto an Aerial MOB heavy lifter with a Routescene Lidarpod, with terrain including utility power lines and solar panels as the test subjects. The aircraft was flown by an Aerial MOB pilot in a semi-autonomous (GPS-aided) mode while the Lidarpod collected integer RTK and IMU data to reference and geo-locate the laser data. Our recently announced data partner, Quantum Spatial (QSI), provides the processing prowess and unique data-packaging capabilities for any type of analysis or measurement tools.
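Conceptually, geo-locating each laser return means combining the RTK position and IMU attitude into a rigid transform from the sensor frame into world coordinates. A simplified sketch that applies heading (yaw) only; a real pipeline uses the full roll/pitch/yaw rotation plus a boresight calibration between laser and IMU, and every name and number below is an illustrative assumption:

```python
import math

def georeference_point(sensor_xyz, yaw_rad, platform_xyz):
    """Transform a sensor-frame point into world coordinates.

    Simplified to a yaw-only rotation plus the RTK-derived translation;
    a real pipeline applies the full IMU roll/pitch/yaw attitude and a
    boresight calibration between the laser head and the IMU.
    """
    x, y, z = sensor_xyz
    px, py, pz = platform_xyz
    c, s = math.cos(yaw_rad), math.sin(yaw_rad)
    return (px + c * x - s * y, py + s * x + c * y, pz + z)

# A return 10 m ahead of a platform at (100, 200, 50) that is heading
# 90 degrees left of world +X lands 10 m along world +Y.
print(georeference_point((10.0, 0.0, 0.0), math.pi / 2, (100.0, 200.0, 50.0)))
```

The accuracy of the final cloud hinges on how precisely that position and attitude are known, which is why integer-fixed RTK and a quality IMU matter.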
Here is a different type of mapping data to review. We used two different types of stitching software to model the coastline. The aircraft used for this mission was much smaller than the one used for the LIDAR data acquisition, primarily because photogrammetry only requires a traditional camera; the method of acquisition can be more cost effective if accuracy and detail are not the highest priority.
This effort was approximately a 15-minute flight with 299 images, covering 37 acres, with a processing time of about 8 hours. The flight was flown autonomously, with the pilot defining the area to be mapped while retaining control of the aircraft in case intervention was needed. Photos were geo-located using the on-board GPS.
This effort was approximately 10 minutes of flying spread over three different flights across two days, with over 800 images and about 6 hours of processing. The flights were piloted manually while the drone autonomously took pictures at a preset distance interval to maintain consistent overlap for stitching. Photos were geo-located using the on-board GPS, which helps with map accuracy. During this flight, the pilot chose to fly along the coastline rather than in a grid pattern up the coastline. This allowed for shorter flight times and a more direct focus on mapping the specific bluff line.
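Triggering photos "at a preset distance interval to maintain a consistent overlap" reduces to a simple relationship between the image footprint on the ground and the desired forward overlap. A minimal sketch; the 60 m footprint and 75% overlap are illustrative assumptions, not the parameters flown on this mission:

```python
def trigger_interval_m(footprint_along_track_m: float, forward_overlap: float) -> float:
    """Distance between exposures that preserves the requested forward overlap."""
    return footprint_along_track_m * (1.0 - forward_overlap)

# Assumed example: a 60 m along-track image footprint with 75% forward
# overlap means triggering a photo every 15 m of ground track.
print(trigger_interval_m(60.0, 0.75))
```

More overlap means more images and longer processing, which is the trade-off behind the image counts and processing times quoted above.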
As you can see, there are multiple tools for capturing and processing this data, but it takes expertise to turn them into a customer-oriented solution.