Falcon Robotics

Machine Vision

Machine Vision is a technology that combines hardware and software to perceive objects, individuals, and the surrounding environment. It has wide applications across many industries.
The hardware can consist of discrete elements or be integrated into a single smart camera, installed on a device such as a drone or robot to capture images. The cameras integrated with this technology serve as lightweight sensors that can sense a wider range of wavelengths than the human eye.
The software processes the images captured by the cameras, extracting the information they contain using the latest Artificial Intelligence techniques. The data acquired can be used for analysis, to feed a database, to control a process, or to have a device act on predefined commands.

Because this technology can rely entirely on processed visual information to control a vehicle fully autonomously, it is especially useful in GPS-denied environments. Using this technology, vehicles can be deployed safely for aerial inspection, search and rescue assistance, insurance inspections, roof and building inspections, material inspection, object recognition, visual stock control and management, barcode reading and counting, as well as inspection of sensitive, large, remote, or dangerous equipment. This technology can also be used in cinematography, allowing operators to use high-resolution drones to capture challenging subjects and terrain.

This technology plays a major role in process automation and autonomous operations, increasing safety and reducing wasted time, cost, and labor. It can be deployed on drones, robots, or standalone cameras to optimize the efficiency of autonomous missions.

Machine Vision Use Cases

Vision-Guided Flight: Incorporating visual information gathered by a drone into its control loop. This feature first identifies objects through inspection and recognition, then uses the information on each detected object to adjust the drone’s speed and trajectory and to intelligently track nearby objects by adjusting the onboard sensors’ tilt and pan.
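
The re-centering step can be sketched as follows. This is a minimal illustration, not Falcon Robotics’ implementation: the function name and field-of-view values are hypothetical, and it shows only how a detected object’s pixel offset from the image center could be converted into pan and tilt corrections.

```python
# Illustrative sketch (assumed names and values): convert a detected
# object's pixel offset from the image center into gimbal corrections.

def gimbal_correction(bbox_center, frame_size, fov_deg=(84.0, 62.0)):
    """Return (pan, tilt) adjustments in degrees that would re-center
    the detected object. bbox_center and frame_size are (x, y) pixels;
    fov_deg is an assumed horizontal/vertical field of view."""
    cx, cy = frame_size[0] / 2.0, frame_size[1] / 2.0
    # Normalized offset in [-0.5, 0.5] along each axis.
    dx = (bbox_center[0] - cx) / frame_size[0]
    dy = (bbox_center[1] - cy) / frame_size[1]
    # Small-angle approximation: scale the offset by the field of view.
    pan = dx * fov_deg[0]
    tilt = -dy * fov_deg[1]  # image y grows downward; tilt up is positive
    return pan, tilt

# An object detected right of center calls for a positive pan.
pan, tilt = gimbal_correction((1280, 360), (1920, 720))
```

In a real control loop, the resulting angles would be smoothed and rate-limited before being sent to the gimbal rather than applied directly.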

Object Tracking: In this option, the system intelligently analyzes the position, trajectory, and speed of the objects encountered by a UAV and predicts their future course.
This AI-driven functionality allows for far more efficient operations. The object-tracking platform has been fully tested in both aerial and terrestrial unmanned vehicles with positive results.
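
The simplest form of such a prediction can be sketched as constant-velocity extrapolation. This is an illustrative stand-in for the idea, with hypothetical names, not the tested platform itself:

```python
# Sketch (assumed, simplified): given two timestamped position fixes of
# a tracked object, assume constant velocity and extrapolate its
# future position.

def predict_position(p0, t0, p1, t1, t_future):
    """Linearly extrapolate a 2D (x, y) position from two fixes."""
    dt = t1 - t0
    vx = (p1[0] - p0[0]) / dt  # estimated velocity components
    vy = (p1[1] - p0[1]) / dt
    lead = t_future - t1
    return (p1[0] + vx * lead, p1[1] + vy * lead)

# An object moved from (0, 0) to (4, 2) in 2 s; where is it 1 s later?
pred = predict_position((0.0, 0.0), 0.0, (4.0, 2.0), 2.0, 3.0)
# pred == (6.0, 3.0)
```

Production trackers typically replace this with a Kalman filter or a learned motion model, which handle noisy detections and maneuvering targets.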

Visual Inspection: This option allows unmanned vehicles to identify specific objects chosen by the user. This powerful feature has limitless civilian applications. Using this option, routine inspections can be performed automatically and with high accuracy. Using this option would result in an increase in efficiency and safety by eliminating the need for frequent manual inspection while allowing human inspectors and repair personnel to focus their attention only on issues requiring immediate intervention.

Video Stabilization: The system automatically compensates for the degradation of video quality caused by aircraft vibration when flying in turbulent air. These anomalies can render video unusable even for recreational viewing, let alone for critical visual analysis. The video pre-processing technology compares each frame to its predecessor, allowing frames unaffected by turbulence to serve as labels for affected frames, which are then adjusted to maintain continuity, clarity, and visual stability.
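
The frame-comparison step can be sketched in a few lines. This is only an illustration of the principle (the actual pre-processing pipeline is not described in detail here): estimate the translation between a stable reference frame and a vibration-affected frame by searching for the integer shift that minimizes the pixel difference.

```python
import numpy as np

# Sketch (assumed, simplified): brute-force search for the (dy, dx)
# translation between a reference frame and a jittered frame.

def estimate_shift(ref, frame, max_shift=3):
    """Return the (dy, dx) shift of `frame` relative to `ref` that
    minimizes the sum of absolute differences."""
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Undo the candidate shift and measure the residual error.
            shifted = np.roll(np.roll(frame, -dy, axis=0), -dx, axis=1)
            err = np.abs(shifted - ref).sum()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

rng = np.random.default_rng(0)
ref = rng.random((32, 32))
jittered = np.roll(np.roll(ref, 2, axis=0), 1, axis=1)  # simulated vibration
dy, dx = estimate_shift(ref, jittered)
# dy, dx == 2, 1: rolling the frame back by (-2, -1) restores it
```

Real stabilizers estimate sub-pixel and rotational motion (e.g. via phase correlation or feature tracking) and smooth the recovered camera path over many frames, but the per-frame comparison idea is the same.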

Orthophotography: Using this feature, the system stitches together discrete photographs taken by a drone to create one large composite image. This requires precise indexing of each image to a common coordinate system, which in turn relies on accurately matching each image’s perspective. The process is usually iterative, involving multiple passes over the same subject and comparing the results to resolve any accumulated errors.
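
Once each image has been indexed to the common coordinate system, the compositing step reduces to pasting tiles at their offsets. The sketch below shows only that final step, with assumed names; real orthophotography also rectifies each image’s perspective and blends the overlaps, which is omitted here.

```python
import numpy as np

# Sketch (assumed, simplified): paste 2D image tiles into one mosaic
# at known (row, col) offsets in a shared pixel coordinate system.

def stitch(images, offsets):
    """Composite image tiles into a single mosaic array."""
    h = max(off[0] + img.shape[0] for img, off in zip(images, offsets))
    w = max(off[1] + img.shape[1] for img, off in zip(images, offsets))
    mosaic = np.zeros((h, w))
    for img, (r, c) in zip(images, offsets):
        # Later tiles overwrite the overlap; real pipelines blend instead.
        mosaic[r:r + img.shape[0], c:c + img.shape[1]] = img
    return mosaic

a = np.ones((4, 6))        # left tile
b = np.full((4, 6), 2.0)   # right tile, overlapping by two columns
mosaic = stitch([a, b], [(0, 0), (0, 4)])
# mosaic.shape == (4, 10)
```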

Autonomous Aerial Inspections: This feature delivers critical insights into large areas quickly, efficiently, and safely. Advanced computer-vision techniques allow it to identify key features of the environment, sites, and infrastructure and turn them into actionable information.

Localization and Mapping: This feature extracts information from each item in an image sequence and intelligently interpolates that information to create detailed and accurate maps. For complex surface features, Speeded Up Robust Features (SURF) and Scale-Invariant Feature Transform (SIFT) descriptors are employed. The information provided by this analysis feeds a Simultaneous Localization and Mapping (SLAM) algorithm to produce localization and mapping resources from still images.
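
One ingredient of this pipeline, matching SIFT/SURF-style descriptors between two images, can be sketched as nearest-neighbor matching with Lowe’s ratio test. The descriptors below are random stand-ins; a real system would extract them with a feature detector before feeding the matches to SLAM.

```python
import numpy as np

# Sketch (assumed, simplified): match feature descriptors between two
# images by nearest-neighbor distance with Lowe's ratio test.

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Return (i, j) pairs where desc_a[i] clearly matches desc_b[j]."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        nearest, second = dists[order[0]], dists[order[1]]
        # Keep the match only if the best candidate is clearly better
        # than the runner-up (Lowe's ratio test rejects ambiguous ones).
        if nearest < ratio * second:
            matches.append((i, int(order[0])))
    return matches

rng = np.random.default_rng(1)
desc_b = rng.random((10, 64))                          # image B descriptors
desc_a = desc_b[:3] + rng.normal(0, 0.01, (3, 64))     # noisy copies in image A
matches = match_descriptors(desc_a, desc_b)
# each noisy descriptor matches its original: [(0, 0), (1, 1), (2, 2)]
```

The resulting correspondences are what a SLAM back end consumes to estimate camera poses and landmark positions simultaneously.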

Have Questions? Let's Talk!