
A Korean research group claims to have improved camera calibration for self-driving vehicles

Self-driving cars could become more acceptable through improved camera tech. (Source: GSA)
Cars capable of autonomous driving are often touted as the future. However, their ability to perceive the real world in 3D with sufficient accuracy is still a work in progress. A new research paper from a group at a South Korean university purports to describe the most sophisticated method yet for delivering these camera functions.

Autonomous operation is often said to be the future of personal vehicles, and to form the basis of a growing industry that may bring the markets for image sensors, processors and other electronics along for the ride. However, for these projections to come to fruition, self-driving cars may have to get much better at "seeing" the road first.

These cars may rely on arrays of 3D cameras in order to move through their typical environments safely and effectively. However, these sensors are still subject to issues such as the need to compute vanishing points (the image points at which parallel lines appear to converge in the distance) in order to navigate. The cameras (and the AI behind them) may take too long to calculate these points for perspective, increasing the risk that the vehicle drifts out of its lane even on a straight road.
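As a loose illustration of the geometry involved (not the researchers' code), a vanishing point can be estimated as the image intersection of two lines that are parallel in the world, such as lane markings. The sketch below uses homogeneous coordinates and made-up pixel values:

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (cross product)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersection(l1, l2):
    """Intersection of two homogeneous lines, returned as (x, y)."""
    x = np.cross(l1, l2)
    return x[:2] / x[2]

# Two lane markings that are parallel on the road but converge in the image
# (example pixel coordinates, not real sensor data):
left_lane = line_through((100, 480), (300, 240))
right_lane = line_through((540, 480), (340, 240))
vp = intersection(left_lane, right_lane)  # estimated vanishing point
```

In a real pipeline, many such line pairs would be detected per frame and their intersections aggregated, which is where the voting schemes discussed below come in.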

In addition, these cameras may be subject to displacement, which may occur in response to mirror adjustments by the user or external factors. These potential problems may be solved through improved 3D calibration methods.

Pre-existing techniques that may deliver this include Gaussian-sphere construction, voting algorithms or random sample consensus (RANSAC) algorithms. However, a new paper published in the journal Optics Express by Dr. Joonki Paik and his team from Chung-Ang University (CAU) in South Korea describes what is claimed to be the most effective form of calibration yet.
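For context, RANSAC, the baseline the CAU team compared against, fits a model by repeatedly sampling a minimal set of points and keeping the hypothesis with the most inliers. A minimal 2D line-fitting sketch (illustrative only; the data, threshold and iteration count are assumptions):

```python
import numpy as np

def ransac_line(points, n_iters=200, threshold=1.0, rng=None):
    """Minimal RANSAC: fit a 2D line ax + by + c = 0 to noisy points by
    repeatedly sampling two points and counting inliers within threshold."""
    if rng is None:
        rng = np.random.default_rng(0)
    best_line, best_inliers = None, 0
    for _ in range(n_iters):
        p, q = points[rng.choice(len(points), 2, replace=False)]
        line = np.cross([*p, 1.0], [*q, 1.0])  # homogeneous line through p, q
        norm = np.hypot(line[0], line[1])
        if norm == 0:
            continue
        # Perpendicular distance of every point to the candidate line:
        d = np.abs(points @ line[:2] + line[2]) / norm
        inliers = int((d < threshold).sum())
        if inliers > best_inliers:
            best_line, best_inliers = line / norm, inliers
    return best_line, best_inliers

# Points on y = 2x + 1 plus two gross outliers (synthetic example data):
xs = np.arange(0, 10, dtype=float)
inlier_pts = np.stack([xs, 2 * xs + 1], axis=1)
outliers = np.array([[3.0, 40.0], [7.0, -20.0]])
points = np.vstack([inlier_pts, outliers])
line, n_in = ransac_line(points)
```

The random sampling is what makes RANSAC robust to outliers, but also what can make it slow: many iterations may be needed before a clean minimal sample is drawn, which is consistent with the speed concerns raised in the article.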

The new method combines three steps: constructing a Gaussian sphere, converting its contents onto the x-, y- and z-axes using a Hough transform, and then voting on the resulting candidates using a circular histogram. It was applied to a road-scene simulation using video data from relevant sensors (e.g. a "front" CMOS sensor with a fish-eye lens and an HD resolution at 60fps) run on a PC with 16GB of RAM and an Intel Core i7-7700 CPU.
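The circular-histogram voting step can be sketched roughly as follows. This is a simplified toy version that bins line orientations and picks the dominant bin; the bin count and angle data are made up and this is not the paper's actual pipeline:

```python
import numpy as np

def circular_histogram_vote(angles, n_bins=36):
    """Vote line orientations into a circular histogram and return the
    dominant direction (the center of the winning bin). Angles are in
    radians and wrapped to [0, pi) since a line's direction is unoriented."""
    wrapped = np.mod(angles, np.pi)
    bin_width = np.pi / n_bins
    bins = np.floor(wrapped / bin_width).astype(int)
    counts = np.bincount(bins, minlength=n_bins)
    best = np.argmax(counts)
    return (best + 0.5) * bin_width  # center of the dominant bin

# Synthetic edge orientations: a cluster near 32 degrees (the dominant
# road direction) plus uniform clutter, standing in for detected lines:
rng = np.random.default_rng(0)
angles = np.concatenate([
    np.deg2rad(32) + rng.normal(0, 0.02, 50),
    rng.uniform(0, np.pi, 10),
])
dominant = circular_histogram_vote(angles)
```

Voting into a fixed number of bins is a constant-time aggregation per line, which hints at why a histogram-based scheme could be faster than repeatedly re-sampling hypotheses as RANSAC does.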

The simulation was navigated using the new three-step method, with a RANSAC algorithm run for comparison. The researchers asserted that their method was at least as accurate as the established technique in terms of vanishing-point calculations. They also claimed that their form of calibration was faster, although they did not supply any timing-related results. All in all, this new method may well inform the autonomous-vehicle cameras of the future.

A poster for the new Optics Express study. (Source: CAU)
Deirdre O Donnell, 2020-03-01 (Update: 2020-03-01)