CES 2020 was also a reckoning for LiDAR companies. Many are withering because the self-driving car industry they were counting on does not (yet) exist, and only a handful have built up enough expertise to set themselves apart.
In 2021, moreover, we have to look beyond LiDAR: new sensing and imaging methods are arriving to challenge and complement laser-based technology.
LiDAR made its mark by doing things traditional cameras couldn't. Now some companies are trying to advance the field with technologies that are, in a sense, less novel.
A good example of tackling the perception problem another way is Eye Net's V2X (vehicle-to-everything) tracking platform. It's one of those technologies pitched in connection with 5G (itself still a fairly new technology), which, however overhyped it may be, really could be a savior for short-range, low-latency applications.
Eye Net warns of impending collisions between vehicles equipped with its technology, whether or not those vehicles carry cameras or any other sensing gear.
Picture driving through a parking lot: an electric scooter is about to shoot across your path from the side, but you can't see it at all behind the parked cars, and a collision seems inevitable.
Eye Net's sensors track the positions of the devices in both vehicles and warn one or both users in time to brake.
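Conceptually, the warning needs nothing more than the shared positions and velocities of the two devices. Here is a minimal sketch of how such a check might work, with a closest-point-of-approach calculation standing in for whatever Eye Net actually runs; the company hasn't published its algorithm, so all names and thresholds below are illustrative:

```python
import math
from dataclasses import dataclass

# Hypothetical V2X collision-warning sketch: each device broadcasts its
# position and velocity; receivers estimate the closest point of approach
# (CPA) and alert if a near-miss is imminent. Illustrative only -- not
# Eye Net's published method.

@dataclass
class Track:
    x: float   # position east, meters
    y: float   # position north, meters
    vx: float  # velocity east, m/s
    vy: float  # velocity north, m/s

def time_and_distance_at_cpa(a: Track, b: Track) -> tuple[float, float]:
    """Time (s) and separation (m) at the closest point of approach."""
    rx, ry = b.x - a.x, b.y - a.y        # relative position
    vx, vy = b.vx - a.vx, b.vy - a.vy    # relative velocity
    v2 = vx * vx + vy * vy
    t = 0.0 if v2 == 0 else max(0.0, -(rx * vx + ry * vy) / v2)
    return t, math.hypot(rx + vx * t, ry + vy * t)

def should_warn(a: Track, b: Track, horizon_s=3.0, radius_m=2.0) -> bool:
    """Warn if the two tracks pass dangerously close within the horizon."""
    t, d = time_and_distance_at_cpa(a, b)
    return t <= horizon_s and d <= radius_m

# Car rolling forward; scooter crossing its path from behind parked cars.
car = Track(x=0, y=0, vx=0, vy=5)
scooter = Track(x=10, y=14, vx=-4, vy=0)
print(should_warn(car, scooter))  # True -> warn both in time to brake
```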
Other companies are working on similar systems, but Eye Net hopes that by offering its platform as a white-label solution, it can build out a large network quickly, putting the technology into Volkswagens, Fords, electric motorbikes and more.
Still, the broader picture of how cars will be driven and operated remains unsettled, and development is pushing ahead on many fronts.
Brightway Vision, for example, addresses the poor visibility of ordinary RGB cameras in many real-world conditions by going multispectral.
In addition to capturing ordinary visible-light images, the company's cameras use a near-infrared illuminator that sweeps the road ahead many times per second, one distance band at a time.
So even if the main camera can't see 30 meters ahead through fog, the NIR system keeps sweeping the space in front of the car in "slices," revealing obstacles and the state of the road surface.
The result combines the advantages of traditional and IR cameras while avoiding the weaknesses of each. The pitch is that there's no reason to settle for a regular camera when this one does the same job better, and may even eliminate the need for another sensor entirely.
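The "slice" technique described above is generally known as gated imaging: the sensor's shutter opens only during the brief window when a light pulse would be returning from a chosen band of distances, so fog outside that band contributes little. A rough sketch of the timing math, with made-up parameters rather than Brightway Vision's specs:

```python
# Gated ("slice") imaging sketch: fire an NIR pulse, then open the shutter
# only while light would be returning from one band of distances. The
# timing is standard time-of-flight; the parameters are illustrative.

C = 299_792_458.0  # speed of light, m/s

def gate_window_ns(near_m: float, far_m: float) -> tuple[float, float]:
    """Shutter open/close delays (ns after the pulse) for one distance slice."""
    t_open = 2 * near_m / C * 1e9   # round trip to the near edge of the slice
    t_close = 2 * far_m / C * 1e9   # round trip to the far edge
    return t_open, t_close

# Sweep the road ahead in 10 m slices out to 150 m; backscatter from fog
# between slices arrives outside each gate and is mostly rejected.
for near in range(0, 150, 10):
    t0, t1 = gate_window_ns(near, near + 10)
    print(f"slice {near:3d}-{near + 10:3d} m: open {t0:6.1f} ns, close {t1:6.1f} ns")
```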
Foresight Automotive also builds multispectral imaging into its cameras (within a few years there may be no visible-spectrum-only automotive cameras left), including thermal imaging through a partnership with FLIR. But what the company is really selling is something else.
Covering 360 degrees (or close to it) typically takes multiple cameras. But even when those cameras come from the same manufacturer, the mounting positions differ between a compact sedan and an SUV, let alone a self-driving freight truck.
Because these cameras have to work together, they require perfect calibration: each must know precisely where the others are, so the system understands that they are all looking at the same tree or bike, not two identical ones.
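That "knowing where the other cameras are" boils down to an extrinsic transform, a rotation and translation between camera frames. A toy illustration of the idea, with made-up values (not Foresight's method):

```python
import numpy as np

# Extrinsic calibration sketch: with rotation R and translation t between
# two camera frames, a point seen by camera A maps into camera B's frame,
# letting the system confirm both see the same bike rather than two bikes.
# All values here are invented for illustration.

R_ab = np.array([[0.0, 0.0, 1.0],    # camera B yawed 90 degrees from camera A
                 [0.0, 1.0, 0.0],
                 [-1.0, 0.0, 0.0]])
t_ab = np.array([1.5, 0.0, 0.0])     # camera B mounted 1.5 m to the side

def a_to_b(p_a: np.ndarray) -> np.ndarray:
    """Map a 3D point from camera A's frame into camera B's frame."""
    return R_ab @ p_a + t_ab

bike_in_a = np.array([2.0, 0.0, 10.0])  # 10 m ahead of camera A
print(a_to_b(bike_in_a))                # where camera B should see it

# If a camera shifts even half an inch, R_ab/t_ab are wrong and the views
# stop agreeing -- hence the tedious recalibration described next.
```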
Foresight's advance is in simplifying the calibration process, so that manufacturers, designers, and test platforms don't have to run tedious retests and certifications every time a camera needs to move half an inch in one direction or another. In a demonstration, the company shows cameras being attached to the roof of a car just seconds before it drives off.
Working in a similar space is another startup, Nodar. It too relies on stereo cameras, but its approach is different. As the company points out, deriving depth by triangulating between two viewpoints has been around for decades in machine vision, and far longer than that if you count the human eye, which works on the same principle.
What has held the approach back isn't any inherent inability of optical cameras to provide the depth information self-driving cars need, but a lack of confidence that the calibration will stay correct.
According to Nodar, its paired stereo cameras don't have to be rigidly mounted to the vehicle's body, because the system compensates for the jitter and slight misalignments that creep in between the two camera views.
The company's "Hammerhead" camera setup takes its name from the shark whose eyes sit far apart; mounted with a similarly wide spacing at the rear-view mirrors, the cameras achieve high accuracy.
Because distance is computed directly from the difference between the two images, there is no need for object recognition or the kind of elaborate machine learning single-camera solutions depend on ("this shape is probably a car, cars are about this big, so it's probably about this far away").
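The underlying triangulation is simple enough to write down: with two rectified cameras separated by a baseline B, a point whose images land d pixels apart (the disparity) lies at depth Z = fB/d, where f is the focal length in pixels. A small sketch with illustrative numbers (not Nodar's specs) also shows why the wide spacing helps:

```python
# Stereo triangulation: depth Z = f * B / d for a rectified camera pair
# with focal length f (pixels), baseline B (meters), disparity d (pixels).
# All numbers below are illustrative.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth in meters from stereo disparity (rectified camera pair)."""
    if disparity_px <= 0:
        return float("inf")  # zero disparity = point at infinity
    return focal_px * baseline_m / disparity_px

f = 1400.0      # focal length in pixels
narrow = 0.12   # typical windshield-module baseline, meters
wide = 1.2      # Hammerhead-style wide baseline, meters

# At the same true distance, a wider baseline produces a larger disparity,
# so a 1-pixel measurement error costs far less depth accuracy.
z_true = 50.0
for B in (narrow, wide):
    d = f * B / z_true                            # disparity produced at 50 m
    z_err = depth_from_disparity(f, B, d - 1.0)   # same point, 1 px of error
    print(f"B={B:4.2f} m: disparity {d:5.1f} px, 1 px error -> {z_err:5.1f} m")
```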
"We know that side-by-side cameras, working just like the human eye, can cut through terrible weather," said Nodar COO and co-founder Brad Rosen.
"Engineers at Daimler, for example, have published results showing that modern stereo approaches estimate depth far more stably in bad weather than a single viewpoint does, even one supplemented by LiDAR. The advantage of our approach is that the hardware we use is available today, at automotive grade, from a wide choice of manufacturers and distributors."
Cost has indeed been LiDAR's great handicap. Even units considered "cheap" run many times the price of an ordinary camera, and the bill climbs quickly as you add more. But the LiDAR camp isn't standing still.
Sense Photonics entered the field with a new approach that draws on the strengths of both LiDAR and cameras.
By pairing a relatively cheap, simple flash LiDAR (as opposed to the spinning and scanning varieties, which tend to be complex) with a conventional camera, and aligning the two so they see the same scene, the system lets the camera identify an object while the LiDAR measures its distance.
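One generic way to implement that pairing, not necessarily Sense Photonics' own pipeline, is to project the LiDAR's 3D returns into the camera image through a pinhole model and read off a range for whatever region the camera has identified. A minimal sketch, assuming the two sensors are co-located and calibrated:

```python
import numpy as np

# Generic camera/LiDAR fusion sketch: project 3D LiDAR returns into the
# image with a pinhole model, then take the median range of the points
# that fall inside a camera-detected bounding box. Illustrative values;
# not Sense Photonics' published method.

K = np.array([[1000.0, 0.0, 640.0],   # camera intrinsics: focal lengths
              [0.0, 1000.0, 360.0],   # and principal point, in pixels
              [0.0, 0.0, 1.0]])

def project(points_cam: np.ndarray) -> np.ndarray:
    """Project Nx3 points (camera frame, meters) to Nx2 pixel coordinates."""
    uvw = (K @ points_cam.T).T
    return uvw[:, :2] / uvw[:, 2:3]

def range_for_box(points_cam: np.ndarray, box) -> float | None:
    """Median LiDAR range inside a camera bounding box (u0, v0, u1, v1)."""
    u0, v0, u1, v1 = box
    px = project(points_cam)
    inside = (px[:, 0] >= u0) & (px[:, 0] <= u1) & \
             (px[:, 1] >= v0) & (px[:, 1] <= v1)
    depths = points_cam[inside, 2]
    return float(np.median(depths)) if depths.size else None

lidar_pts = np.array([[0.1, 0.0, 42.0], [0.2, 0.1, 41.5], [5.0, 1.0, 80.0]])
print(range_for_box(lidar_pts, (620, 340, 680, 380)))  # ~42 m for this box
```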
Since its debut in 2019, the company has been refining its technology for manufacturing and beyond. Its latest achievement is custom hardware that can image objects out to 200 meters, generally considered the limit for both LiDAR and traditional cameras.
"We used to pair our laser source, the Sense Illuminator, with off-the-shelf detectors. But our two-year effort to develop a detector in-house was a great success, and it allows us to build both short- and long-range automotive products," said CEO Shauna McIntyre.
"We designed our LiDAR in the same 'building block' format as a camera, meaning it can be combined with different optics to support various fields of view, ranges, resolutions and so on. And the design is simple enough that it can genuinely be mass-produced. Think of it as an architecture just like a DSLR: put a macro lens, a zoom lens, or a fisheye lens on the same camera body and you can take very different pictures."
If there is one point of consensus among all these companies, it's that no single sensing scheme will dominate the self-driving industry.
Setting aside that fully autonomous (Level 4-5) vehicles have quite different needs from driver-assistance systems, the field of autonomous driving is changing so fast that no single approach stays ahead for long.
"AV companies can't succeed unless they can trust the safety of their platform, and that safety margin can only be achieved with multiple sensing modalities at different wavelengths," McIntyre said.
Whether it's visible light, near-infrared, thermal imaging, radar, LiDAR, or some combination of these methods, the market is clearly running hot, and it is just as clearly going to keep changing.
But the wave of booms and busts the LiDAR industry went through a few years ago may also be a warning that consolidation isn't far down the road.