Automakers have invested heavily in developing advanced driver-assistance technologies to make driving safer and more comfortable. The most advanced of these systems are already offered as vehicle features that satisfy Level 2 automated driving as defined by SAE International in SAE J3016-2018, Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles, and incorporate capabilities such as Lane Keep Assist (LKA), Adaptive Cruise Control (ACC), and Automatic Emergency Braking (AEB). These features can intervene in certain driving scenarios to control the vehicle’s movement; yet, to ensure safe operation, the driver must remain attentive and focused on the driving environment.
To date, these L2 systems have largely been designed around camera and radar technology. However, automakers can greatly improve the effectiveness and efficiency of driver-assist features by employing a system in which lidar is a key perception component. Lidar technology is inherently superior to cameras and radar in certain performance aspects that are crucial for avoiding forward collisions, which supports a move within the industry toward adopting lidar as a key sensor for ADAS applications.[1]

Lidar performs free-space detection more efficiently and precisely than cameras by providing real-time measurements of how far surrounding objects are from the vehicle, with no additional computational processes or sensors required. As a result, data from a single lidar sensor directly provides the fundamental building block of a successful driver assistance system: accurate free-space detection. That is, lidar utilizes precise distance measurements of surrounding objects to map areas where it is safe for the vehicle to drive.
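By way of illustration, the sketch below (hypothetical values and function names, not any vendor's API) shows how per-beam lidar range readings map directly to a polar free-space estimate, with no intermediate inference step:

```python
import math

def free_space_map(ranges, angles_deg, max_range=100.0):
    """Convert per-beam lidar range readings (meters) into a simple polar
    free-space map: for each beam, everything closer than the first return
    is treated as drivable free space."""
    result = []
    for r, a in zip(ranges, angles_deg):
        limit = r if r is not None else max_range  # no return -> clear to max range
        result.append({
            "angle_deg": a,
            "free_up_to_m": limit,
            "obstacle_xy": (limit * math.cos(math.radians(a)),
                            limit * math.sin(math.radians(a))) if r is not None else None,
        })
    return result

# Example: three beams ahead of the vehicle; the center beam returns at 42 m.
for beam in free_space_map([None, 42.0, None], [-5.0, 0.0, 5.0]):
    print(beam)
```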
Radar can detect some surrounding objects; however, its relatively “fuzzy” image does not provide accurate free-space detection and makes radar dependent on other sensors for object-classification tasks. Furthermore, radar struggles to detect stationary objects. “Millimeter-wave radar has high range accuracy, and is little influenced by environmental conditions. But its angle resolution is poor, and the millimeter-wave radar is prone to false alarm.”[2] These combined weaknesses leave radar of little help in free-space detection.
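A rough back-of-the-envelope calculation, using purely illustrative numbers rather than any specific radar's specification, shows the scale of the angular-resolution problem:

```python
import math

# Rule of thumb: azimuth beamwidth ~ wavelength / antenna aperture (radians).
wavelength_m = 3e8 / 77e9   # 77 GHz automotive radar -> ~3.9 mm wavelength
aperture_m = 0.075          # assumed ~7.5 cm antenna width (illustrative only)
beamwidth_rad = wavelength_m / aperture_m

range_m = 100.0
cross_range_blur_m = beamwidth_rad * range_m  # lateral uncertainty at that range

print(f"beamwidth ~{math.degrees(beamwidth_rad):.1f} deg, "
      f"cross-range blur ~{cross_range_blur_m:.1f} m at {range_m:.0f} m")
```

At highway distances, several meters of cross-range blur is too coarse to cleanly separate, say, a stopped vehicle from adjacent roadside clutter, which is part of why static objects are troublesome for radar-led systems.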
In contrast with lidar, camera-centric approaches require multiple sensors and complex computational processing to infer the distance of surrounding objects and thereby determine safe driving paths. For example, in a “stereo vision” approach requiring at least two cameras, “a depth estimation algorithm uses triangulation between the left and right images to determine the depth of objects in the field of view.”[3] Alternatively, if a system utilizes only one camera, the vehicle’s computer must compare multiple frames to simulate a stereo image. However, compared to lidar, this “structure from motion” approach also requires additional computational complexity to derive distance estimates.
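The triangulation itself reduces to the familiar pinhole-stereo relation, depth = focal length × baseline / disparity; the computational burden lies in producing a dense, reliable disparity map in the first place. A minimal sketch with illustrative numbers:

```python
def stereo_depth_m(focal_px, baseline_m, disparity_px):
    """Classic pinhole-stereo relation: depth = f * B / d, where d is the
    horizontal pixel shift of the same point between the left and right images."""
    if disparity_px <= 0:
        return float("inf")  # zero disparity -> point effectively at infinity
    return focal_px * baseline_m / disparity_px

# Illustrative values: 1000 px focal length, 30 cm camera baseline.
for d_px in (30, 10, 3):
    print(f"{d_px:2d} px disparity -> {stereo_depth_m(1000, 0.30, d_px):6.1f} m")
# Depth accuracy collapses at small disparities (distant objects): at 3 px,
# a single-pixel matching error shifts the estimate by tens of meters.
```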
The complexity and cost of camera-based approaches are compounded by the fact that cameras suffer from what might be called “tunnel vision”. That is, as cameras focus on objects at greater distances, they sacrifice field of view. Any photographer who has utilized a camera’s zoom feature will recognize this phenomenon: Focusing on a distant object results in less of the scene being captured in the image. As a result, to achieve the constant high-resolution image needed to detect vehicles, objects, and pedestrians at every necessary range (near, mid, and far), advanced driving systems that are designed around cameras require multiple focal lengths and, therefore, multiple cameras.
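This tradeoff follows directly from pinhole geometry: the horizontal field of view is roughly 2·atan(sensor width / (2 × focal length)). A quick sketch with an assumed sensor size makes the effect concrete:

```python
import math

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Pinhole-camera horizontal field of view."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Illustrative 6.4 mm-wide imager at wide, mid, and narrow focal lengths:
for f_mm in (2.0, 6.0, 18.0):
    print(f"{f_mm:4.1f} mm focal length -> {horizontal_fov_deg(6.4, f_mm):5.1f} deg FOV")
# A longer focal length resolves farther targets at the same pixel count but
# covers a narrower slice of the road, hence the need for multiple cameras.
```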
Overdependence on cameras for driver assistance brings further drawbacks. Camera-based systems typically rely on algorithms that analyze captured images to identify detected objects, yet “algorithms performing feature extraction from images rely heavily on the presence of ‘contrast’ (either color-wise or intensity-wise).” This dependency on contrast can make camera-centric systems prone to optical illusions; for example, when the side of a tractor-trailer blends in with the sky.[4] Camera-based systems can suffer not only from these false-negative readings, but also from false positives. A recent IIHS study revealed that such flawed readings can cause systems to react inappropriately in real road-driving scenarios. “In 180 miles,” the report explains, “the car unexpectedly slowed down 12 times, seven of which coincided with tree shadows on the road.”[5] This poor level of performance led IIHS to fear that drivers would simply turn off their vehicles’ safety systems altogether.
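To make the contrast dependency concrete, the following sketch (synthetic pixel values, not a production vision pipeline) shows a simple gradient-based edge check failing when an object's brightness matches the background:

```python
import numpy as np

def has_edge(row, threshold=20):
    """Treat any large intensity step along an image row as an object boundary."""
    return bool(np.abs(np.diff(row.astype(int))).max() >= threshold)

sky = 200                                        # bright overcast sky
dark_trailer  = np.array([sky] * 5 + [60] * 5)   # dark trailer: strong intensity step
white_trailer = np.array([sky] * 5 + [195] * 5)  # white trailer against bright sky

print(has_edge(dark_trailer))   # True  -> boundary detected
print(has_edge(white_trailer))  # False -> trailer blends into the sky (false negative)
```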
Exacerbating each of these characteristic challenges in camera-centric approaches is their relatively weak performance in low-light conditions. Cameras, like our eyes, depend on ambient light to function. Some companies are exploring engineering workarounds for this deficiency, for example by incorporating infrared cameras to improve low-light performance. Such efforts to enhance existing camera and radar modalities demonstrate not only that automakers recognize these technologies alone cannot solve the problem, but also that they are exploring infrared as one possible answer. Therefore, rather than designing patchwork solutions to bolster the performance of any single sensor modality, what is truly needed to cover the gaps in existing approaches is a new sensor technology that provides a different kind of data. To achieve safe operation in a broad range of conditions and contexts, the complexity of advanced driving safety requires automakers to combine the relative strengths of every available and appropriate sensor technology on the market.
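Purely as an illustration of what combining relative strengths can mean in practice (a hypothetical structure, not any automaker's implementation), a fusion layer might require agreement between modalities whose weaknesses do not overlap before triggering a hard intervention:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # "camera", "radar", or "lidar"
    distance_m: float
    confidence: float  # 0.0-1.0, as reported by that sensor's own pipeline

def should_emergency_brake(detections, critical_distance_m=30.0):
    """Toy fusion rule: brake hard only when a direct range measurement (lidar
    or radar) confirms a camera detection inside the critical distance."""
    close = {d.sensor for d in detections
             if d.distance_m < critical_distance_m and d.confidence > 0.5}
    return "camera" in close and bool({"lidar", "radar"} & close)

obstacle = [Detection("camera", 22.0, 0.8), Detection("lidar", 21.5, 0.95)]
shadow   = [Detection("camera", 25.0, 0.7)]  # tree shadow seen only by the camera

print(should_emergency_brake(obstacle))  # True
print(should_emergency_brake(shadow))    # False -> single-sensor false positive suppressed
```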
About the authors
References
[1] See also “A Safety-First Approach to Developing and Marketing Driver Assistance Technology.”
[2] Wu, X., Ren, J., Wu, Y., and Shao, J., "Study on Target Tracking Based on Vision and Radar Sensor Fusion," SAE Technical Paper 2018-01-0613, 2018, https://doi.org/10.4271/2018-01-0613.
[3] SAE, “J3088: Active Safety System Sensors,” https://www.sae.org/standards/content/j3088_201711/.
[4] Iain Thomson, “Man killed in gruesome Tesla autopilot crash was saved by his car's software weeks earlier,” The Register. June 30, 2016. https://www.theregister.co.uk/2016/06/30/tesla_autopilot_crash_leaves_motorist_dead/.
[5] Insurance Institute for Highway Safety, “Evaluating Autonomy: IIHS examines driver assistance features in road, track tests,” Status Report, 53, No. 4. August 7, 2018. https://www.iihs.org/iihs/news/desktopnews/evaluating-autonomy-iihs-examines-driver-assistance-features-in-road-track-tests.