Autonomous operation is often said to be the future of personal vehicles, and to form the basis of a growing industry that may bring the markets for image sensors, processors and other electronics along for the ride. However, for these projections to come to fruition, self-driving cars may have to get much better at "seeing" the road first.
These cars may rely on arrays of 3D cameras in order to move through their typical environments safely and effectively. However, these sensors are still subject to issues such as the need to compute vanishing points (the points at which parallel lines appear to converge in the distance) in order to navigate. The cameras (and the AI behind them) may take too long to calculate these points from the scene's perspective, increasing the risk that the vehicle veers across a lane line even on a straight road.
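In geometric terms, a vanishing point is where the images of two road-parallel lines (such as lane markings) intersect under perspective projection. As a rough illustration of the arithmetic involved (not the paper's code; the coordinates and function names below are invented for the example), the intersection can be found with homogeneous coordinates and cross products:

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (cross product)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def vanishing_point(line_a, line_b):
    """Intersection of two homogeneous lines: the vanishing point
    of a pair of road-parallel lines under perspective projection."""
    vp = np.cross(line_a, line_b)
    if abs(vp[2]) < 1e-9:   # lines also parallel in the image: point at infinity
        return None
    return vp[:2] / vp[2]

# Two lane markings that are parallel on the road but converge in the image.
left = line_through((100, 480), (300, 200))
right = line_through((540, 480), (340, 200))
print(vanishing_point(left, right))  # -> both lanes meet at (320, 172)
```

Per-frame, a real system must repeat this over many detected line segments, which is where the computational cost the article mentions comes from.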
In addition, these cameras are subject to physical displacement, whether from mirror adjustments by the driver or from external factors. Both problems may be addressed through improved 3D calibration methods.
Pre-existing techniques that may deliver this include Gaussian-sphere construction, voting algorithms or random sample consensus (RANSAC) algorithms. However, a new paper published in the journal Optics Express by Dr. Joonki Paik and his team from Chung-Ang University (CAU) in South Korea describes what is claimed to be the most effective form of calibration yet.
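RANSAC, the baseline the paper compares against, works by repeatedly hypothesizing a vanishing point from a random pair of detected line segments and keeping the hypothesis most segments agree with. A minimal sketch of that idea (not the authors' implementation; thresholds and function names are assumptions for illustration):

```python
import numpy as np

def point_line_distance(vp, seg):
    """Distance from a candidate vanishing point to the infinite line
    supporting a segment ((x1, y1), (x2, y2))."""
    (x1, y1), (x2, y2) = seg
    line = np.cross([x1, y1, 1.0], [x2, y2, 1.0])
    return abs(line @ [vp[0], vp[1], 1.0]) / np.hypot(line[0], line[1])

def ransac_vanishing_point(segments, iters=500, thresh=5.0, seed=0):
    """Hypothesize a vanishing point from two random segments; keep the
    hypothesis that the most segments pass close to (the inliers)."""
    rng = np.random.default_rng(seed)
    best_vp, best_inliers = None, 0
    for _ in range(iters):
        i, j = rng.choice(len(segments), size=2, replace=False)
        la = np.cross([*segments[i][0], 1.0], [*segments[i][1], 1.0])
        lb = np.cross([*segments[j][0], 1.0], [*segments[j][1], 1.0])
        vp = np.cross(la, lb)
        if abs(vp[2]) < 1e-9:       # degenerate pair, skip
            continue
        vp = vp[:2] / vp[2]
        inliers = sum(point_line_distance(vp, s) < thresh for s in segments)
        if inliers > best_inliers:
            best_vp, best_inliers = vp, inliers
    return best_vp, best_inliers
```

The random sampling makes RANSAC robust to outlier segments (e.g. edges that are not road-parallel), but its run time grows with the iteration budget, which is one motivation for seeking faster calibration schemes.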
This new method combines three steps: constructing a Gaussian sphere, converting it into x-, y- and z-axes using a Hough transform, and then voting on the candidates using a circular histogram. The approach was applied to a road-scape simulation using video data from relevant sensors (e.g. a front-facing CMOS sensor with a fish-eye lens, recording at HD resolution and 60 fps) run on a PC with 16 GB of RAM and an Intel Core i7-7700 CPU.
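The general idea behind such Gaussian-sphere methods (the paper's exact pipeline is not reproduced here; intrinsics, bin counts and function names below are assumptions for illustration) is that each image line segment, together with the camera centre, defines an "interpretation plane" whose unit normal is a point on the Gaussian sphere. Candidate 3D directions come from intersecting pairs of those planes, and a circular histogram vote over their azimuth angles picks out a dominant axis:

```python
import numpy as np

def interpretation_plane_normal(p, q, f=800.0, cx=320.0, cy=240.0):
    """Unit normal of the plane through the camera centre and an image
    segment p-q: a point on the Gaussian sphere. f, cx, cy are assumed
    intrinsics for illustration only."""
    a = np.array([p[0] - cx, p[1] - cy, f])
    b = np.array([q[0] - cx, q[1] - cy, f])
    n = np.cross(a, b)
    return n / np.linalg.norm(n)

def dominant_direction(normals, bins=180):
    """Candidate directions from pairs of plane normals, then a
    circular-histogram vote over their azimuth angles (Hough-style)."""
    votes = np.zeros(bins)
    candidates = []
    for i in range(len(normals)):
        for j in range(i + 1, len(normals)):
            d = np.cross(normals[i], normals[j])
            norm = np.linalg.norm(d)
            if norm < 1e-9:         # near-parallel planes, skip
                continue
            d = d / norm
            if d[2] < 0:            # fold antipodal directions together
                d = -d
            az = np.arctan2(d[1], d[0]) % np.pi   # circular: theta ~ theta + pi
            b = int(az / np.pi * bins) % bins
            votes[b] += 1
            candidates.append((b, d))
    peak = int(np.argmax(votes))
    members = [d for b, d in candidates if b == peak]
    m = np.mean(members, axis=0)    # average the winning bin's directions
    return m / np.linalg.norm(m)
```

Because the voting collapses many pairwise candidates into a one-dimensional histogram, a peak can be read off cheaply, which is consistent with the authors' claim of improved speed over iterative sampling.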
The simulated road was navigated using both the new three-step method and a RANSAC algorithm for comparison. The researchers asserted that their method was at least as accurate as the established technique in terms of vanishing-point calculations. They also claimed that their calibration was faster, but did not supply any timing results. All in all, it is indeed possible that this new method may inform the autonomous-vehicle cameras of the future.