When it comes to autonomous driving, Tesla's "vision-only" approach usually dominates the headlines. This vision-centric route uses deep learning to analyze footage from on-board cameras in order to perceive and understand the environment around the vehicle. Wayve's research team, however, has taken a different path by bringing language interaction into the autonomous driving system. Their latest result, LINGO-2, lets a car not only follow a driver's spoken instructions but also explain its own decisions in natural language. This advance opens up new possibilities for autonomous driving.
Is LINGO-2 Redefining Autonomous Driving?
LINGO-2 is not intended to replace visual perception but to complement it. By using language to shape the decision-making process, LINGO-2 can improve an autonomous driving system's handling of complex scenarios. For example, in special situations such as severe weather or roadworks, the driver can prompt the vehicle by voice, and LINGO-2 will respond appropriately based on the instruction and its own perception, explaining its response as it does so. This human-machine collaboration not only strengthens safety but also makes the decision process more transparent.
In this sense, LINGO-2 represents an "alternative" revolution in autonomous driving technology: it explores new frontiers for AI in cognition and interaction and broadens the paths toward driverless vehicles. Going forward, visual perception and language interaction are likely to become deeply integrated.
17 April 2024 | Research
LINGO-2: Driving with Natural Language
This blog introduces LINGO-2, a driving model that links vision, language, and action to explain and determine driving behavior, opening up a new dimension of control and customization for an autonomous driving experience. LINGO-2 is the first closed-loop vision-language-action driving model (VLAM) tested on public roads.
Driving with Natural Language
In September 2023, we introduced natural language for autonomous driving in our blog on LINGO-1, an open-loop driving commentator that was a first step towards trustworthy autonomous driving technology. In November 2023, we further improved the accuracy and trustworthiness of LINGO-1’s responses by adding a “show and tell” capability through referential segmentation. Today, we are excited to present the next step in Wayve’s pioneering work incorporating natural language to enhance our driving models: introducing LINGO-2, a closed-loop vision-language-action driving model (VLAM) that is the first driving model trained on language to be tested on public roads. In this blog post, we share the technical details of our approach and examples of LINGO-2’s capability to combine language and action to accelerate the safe development of Wayve’s AI driving models.
Introducing LINGO-2, a closed-loop Vision-Language-Action Model (VLAM)
Our previous model, LINGO-1, was an open-loop driving commentator that leveraged vision-language inputs to perform visual question answering (VQA) and driving commentary on tasks such as describing scene understanding, reasoning, and attention, providing only language as an output. This research model was an important first step in using language to understand what the model comprehends about the driving scene. LINGO-2 takes that one step further, providing visibility into the decision-making process of a driving model.
LINGO-2 takes both vision and language as inputs and outputs both driving action and language, providing a continuous driving commentary on its motion planning decisions. LINGO-2 adapts its actions and explanations in accordance with various scene elements and is a strong first indication of the alignment between explanations and decision-making. By linking language and action directly, LINGO-2 sheds light on how AI systems make decisions and opens up a new level of control and customization for driving.
While LINGO-1 could retrospectively generate commentary on driving scenarios, its commentary was not integrated with the driving model; therefore, its observations were not informed by actual driving decisions. LINGO-2, however, can both generate real-time driving commentary and control a car. The linking of these fundamental modalities underscores the model’s understanding of the contextual semantics of the situation, for example, explaining that it’s slowing down for pedestrians on the road or executing an overtaking maneuver. It’s a crucial step towards enhancing trust in our assisted and autonomous driving systems.
It opens up new possibilities for accelerating learning with natural language by incorporating a description of driving actions and causal reasoning into the model’s training. In the future, natural language interfaces could even allow users to engage in conversations with the driving model, making it easier for people to understand these systems and build trust.
LINGO-2 Architecture: Multi-modal Transformer for Driving
LINGO-2 architecture
LINGO-2 consists of two modules: the Wayve vision model and the auto-regressive language model. The vision model processes camera images of consecutive timestamps into a sequence of tokens. These tokens and additional conditioning variables, such as route, current speed, and speed limit, are fed into the language model. Equipped with these inputs, the language model is trained to predict a driving trajectory and commentary text. Then, the car’s controller executes the driving trajectory.
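To make that data flow concrete, here is a minimal sketch of such a two-module pipeline. All names (`VisionEncoder`, `DrivingLM`), the shapes, and the token layout are our illustrative assumptions, not Wayve's actual architecture or API; the sketch only shows how image tokens and conditioning variables could be combined into one stream from which a trajectory and commentary logits are predicted.

```python
# Minimal sketch of a LINGO-2-style two-module pipeline (assumed, not Wayve's code).
import torch
import torch.nn as nn

class VisionEncoder(nn.Module):
    """Turns T consecutive camera frames into a sequence of visual tokens."""
    def __init__(self, d_model=256):
        super().__init__()
        self.proj = nn.Conv2d(3, d_model, kernel_size=32, stride=32)  # patchify

    def forward(self, frames):                       # frames: (B, T, 3, H, W)
        b = frames.shape[0]
        x = self.proj(frames.flatten(0, 1))          # (B*T, d, h', w')
        x = x.flatten(2).transpose(1, 2)             # (B*T, h'*w', d)
        return x.reshape(b, -1, x.shape[-1])         # (B, T*h'*w', d)

class DrivingLM(nn.Module):
    """Predicts a trajectory and commentary logits from tokens plus conditioning.

    For brevity this runs a single non-causal pass; the model described in
    the post decodes its text and action tokens auto-regressively.
    """
    def __init__(self, d_model=256, vocab=32000, n_waypoints=10):
        super().__init__()
        self.cond_proj = nn.Linear(3, d_model)       # route, speed, speed limit
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        self.traj_head = nn.Linear(d_model, n_waypoints * 2)  # (x, y) per step
        self.text_head = nn.Linear(d_model, vocab)            # commentary logits

    def forward(self, vision_tokens, conditioning):  # conditioning: (B, 3)
        cond = self.cond_proj(conditioning).unsqueeze(1)      # (B, 1, d)
        h = self.backbone(torch.cat([cond, vision_tokens], dim=1))
        return self.traj_head(h.mean(dim=1)), self.text_head(h)

# One forward pass: two 256x256 frames plus scalar conditioning inputs.
vision, lm = VisionEncoder(), DrivingLM()
frames = torch.randn(1, 2, 3, 256, 256)
conditioning = torch.tensor([[1.0, 12.5, 13.4]])   # route id, speed, limit
trajectory, commentary_logits = lm(vision(frames), conditioning)
print(trajectory.shape, commentary_logits.shape)   # (1, 20), (1, 129, 32000)
```

In a real system the predicted waypoints would then be handed to the car's controller, matching the "language model predicts, controller executes" split described above.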
LINGO-2’s New Capabilities
The integration of language and driving opens up new capabilities for autonomous driving and human-vehicle interaction, including:
1. Adapting driving behavior through language prompts: We can prompt LINGO-2 with constrained navigation commands (e.g., “pull over,” “turn right,” etc.) and adapt the vehicle’s behavior. This has the potential to aid model training or, in some cases, enhance human-vehicle interaction.
2. Interrogating the AI model in real-time: LINGO-2 can predict and respond to questions about the scene and its decisions while driving.
3. Capturing real-time driving commentary: By linking vision, language, and action, LINGO-2 can leverage language to explain what it’s doing and why, shedding light on the AI’s decision-making process.
We’ll explore these use cases in the sections below, showing examples of how we’ve tested LINGO-2 in our neural simulator Ghost Gym. Ghost Gym creates photorealistic 4D worlds for training, testing, and debugging our end-to-end AI driving models. Given the speed and complexity of real-world driving, we leverage offline simulation tools like Ghost Gym to evaluate the robustness of LINGO-2’s features first.
In this setup, LINGO-2 can freely navigate through an ever-changing synthetic environment, where we can run our model against the same scenarios with different language instructions and observe how it adapts its behavior. We can gain deep insights and rigorously test how the model behaves in complex driving scenarios, communicates its actions, and responds to linguistic instructions.
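Conceptually, this testing protocol amounts to replaying one fixed scenario once per instruction. The sketch below shows what such a replay harness could look like; `load_scenario` and `model.drive` are hypothetical stand-ins rather than a real Wayve interface, and the instruction strings are the ones used in Example 1 below.

```python
# Hypothetical Ghost Gym replay harness; `load_scenario` and `model.drive`
# are illustrative stand-ins, not a real Wayve API.
INSTRUCTIONS = [
    "turning left, clear road",
    "turning right, clear road",
    "stopping at the give way line",
]

def replay_with_instructions(model, load_scenario, scenario_id):
    """Run the same scenario once per instruction and collect the outcomes."""
    results = {}
    for text in INSTRUCTIONS:
        scenario = load_scenario(scenario_id)      # identical initial state each run
        trajectory, commentary = model.drive(scenario, prompt=text)
        results[text] = (trajectory, commentary)   # any difference comes from the prompt
    return results
```

Resetting to an identical initial state each run is what makes the comparison meaningful: any divergence in behavior is attributable to the language prompt alone.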
Adapting Driving Behavior through Linguistic Instructions
LINGO-2 uniquely allows driving instruction through natural language. To do this, we swap the order of text tokens and driving action, which means language becomes a prompt for the driving behavior. This section demonstrates the model’s ability to change its behavior in our neural simulator in response to language prompts for training purposes. This new capability opens up a new dimension of control and customization. The user can give commands or suggest alternative actions to the model. This is of particular value for training our AI and offers promise to enhance human-vehicle interaction for applications related to advanced driver assistance systems. In the examples below, we observe the same scenes repeated, with LINGO-2 adapting its behavior to follow linguistic instructions.
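The "swap the order" idea can be made concrete with a toy illustration. The actual tokenization and special tokens are not public, so every token name below is made up; the point is only that placing text before the action tokens turns the same language from after-the-fact commentary into a prompt that conditions the predicted trajectory.

```python
# Toy token layouts only; the real vocabulary and ordering are not public.
vision_tokens = ["<img_0>", "<img_1>", "<img_2>"]      # from the vision model
text_tokens   = ["turning", "left", ",", "clear", "road"]
action_tokens = ["<wp_0>", "<wp_1>", "<wp_2>"]         # trajectory waypoints

# Commentary mode: actions are decoded first and the language follows,
# so the text can only describe a decision that has already been made.
commentary_sequence = vision_tokens + action_tokens + text_tokens

# Prompting mode: the language precedes the action tokens, so the
# auto-regressive model predicts a trajectory conditioned on the instruction.
prompt_sequence = vision_tokens + text_tokens + action_tokens
```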
Example 1: Navigating a junction
In the three videos below, LINGO-2 navigates the same junction but is given different instructions: “turning left, clear road,” “turning right, clear road,” and “stopping at the give way line.” We observe that LINGO-2 can follow the instructions, reflected by different driving behaviors at the intersection.
Example of LINGO-2 driving in Ghost Gym and being prompted to turn left on a clear road.
Example of LINGO-2 driving in Ghost Gym and being prompted to turn right on a clear road.
Example of LINGO-2 driving in Ghost Gym and being prompted to stop at the give-way line.
Example 2: Navigating around a bus
In the two videos below, LINGO-2 navigates around a bus. We can observe that LINGO-2 can follow the instructions to either hold back and “stop behind the bus” or “accelerate and overtake the bus.”
Example of LINGO-2 in Wayve’s Ghost Gym stopping behind the bus when instructed.
Example of LINGO-2 in Wayve’s Ghost Gym overtaking a bus when instructed by text.
Example 3: Driving in a residential area
In the two videos below, LINGO-2 responds to linguistic instruction when driving in a residential area. It can correctly respond to the prompts “continue straight to follow the route” or “slow down for an upcoming turn.”
Example of LINGO-2 in Wayve’s Ghost Gym driving straight when instructed by text.
Example of LINGO-2 in Wayve’s Ghost Gym turning right when instructed by text.
Interrogating an AI model in real-time: Video Question Answering (VQA)
Another possibility for language is to develop a layer of interaction between the robot car and the user that can give confidence in the decision-making capability of the driving model. Unlike our previous LINGO-1 research model, which could only answer questions retrospectively and was not directly connected to decision-making, LINGO-2 allows us to interrogate and prompt the actual model that is driving.
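As a rough sketch of what such an interaction layer could look like, consider the function below. The interface, including `model.vision_encoder`, `model.decode`, and the `camera`/`tokenizer` objects, is our assumption rather than a published Wayve API; the key point it illustrates is that the question is answered by the same model instance that is producing the trajectory.

```python
# Hypothetical real-time VQA loop; every object here is an illustrative
# stand-in, not a real Wayve interface.
def ask_while_driving(model, camera, tokenizer, question: str) -> str:
    frames = camera.latest_frames()            # the same feed used for driving
    vision_tokens = model.vision_encoder(frames)
    prompt = tokenizer.encode(question)
    # The answer is decoded by the *driving* model itself, so the reply
    # reflects the state that is actually producing the trajectory.
    answer_tokens = model.decode(vision_tokens, prompt)
    return tokenizer.decode(answer_tokens)

# e.g. ask_while_driving(model, camera, tok, "What is the color of the traffic lights?")
# -> "The traffic lights are green."
```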
Example 4: Traffic Lights
In this example, we show LINGO-2 driving through an intersection. When we ask the model, “What is the color of the traffic lights?” it correctly responds, “The traffic lights are green.”
Example of LINGO-2 VQA in Ghost Gym
Example 5: Hazard Identification
In this example, LINGO-2 is prompted with the question, “Are there any hazards ahead of you?” It correctly responds, “Yes, there is a cyclist ahead of me, which is why I am decelerating.”
Example of LINGO-2 VQA in Ghost Gym
Example 6: Weather
In the following three examples, we ask LINGO-2 to describe “What is the weather like?” It can correctly identify that the weather ranges from “very cloudy, there is no sign of the sun” to “sunny” to “the weather is clear with a blue sky and scattered clouds.”
Example of LINGO-2 VQA in Ghost Gym
Limitations
LINGO-2 marks a step-change in our progress to leverage natural language to enhance our AI driving models. While we are excited about the progress we are making, we also want to describe the current limitations of the model.
Language explanations from the driving model give us a strong idea of what the model might be thinking. However, more work is needed to quantify the alignment between explanations and decision-making. Future work will quantify and strengthen the connection between language, vision, and driving to reliably debug and explain model decisions. We expect to show in the real world that adding intermediate language reasoning in “chain-of-thought” driving helps solve edge cases and counterfactuals.
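One simple way to begin quantifying that alignment, shown below as our own illustrative toy metric rather than anything the post specifies, is to check a stated speed claim in the commentary against the speed profile of the executed trajectory:

```python
# Toy alignment check between commentary and the executed trajectory.
# The claim extraction and thresholds are assumptions for demonstration.
def speeds(trajectory, dt=0.1):
    """Per-step speeds (m/s) from (x, y) waypoints spaced dt seconds apart."""
    return [
        ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 / dt
        for (x0, y0), (x1, y1) in zip(trajectory, trajectory[1:])
    ]

def commentary_matches_action(commentary: str, trajectory, tol=0.05) -> bool:
    """True when a stated 'decelerating'/'accelerating' claim matches the plan."""
    v = speeds(trajectory)
    dv = v[-1] - v[0]
    if "decelerat" in commentary.lower():
        return dv < -tol
    if "accelerat" in commentary.lower():
        return dv > tol
    return True  # no speed claim to verify in this toy checker

# Toy check: the model says it is slowing down, and the waypoints agree.
plan = [(0.0, 0.0), (1.0, 0.0), (1.8, 0.0), (2.4, 0.0)]
print(commentary_matches_action("I am decelerating for a cyclist.", plan))  # True
```

Aggregating such checks over many scenarios would give one crude alignment score; richer claims (about objects, lanes, or maneuvers) would need correspondingly richer verification.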
Additionally, we plan to investigate whether controlling the car’s behavior with language in real-world settings can be done reliably and safely. Ghost Gym provides a safe off-road environment for testing, but more work needs to be done to ensure the model is robust to noise and misinterpretation of commands. It should understand the context of human instructions while never violating the appropriate limits of safe and responsible driving behavior. This functionality will be better suited to aiding model testing and training for fully automated driving systems.
Conclusion
In this post, we have introduced LINGO-2, the first driving model trained on language that has driven on public roads. We are excited to showcase how LINGO-2 can respond to language instructions and explain its driving actions in real time. This is a first step towards building embodied AI that can perform multiple tasks, starting with language and driving.
Wayve is a UK-based autonomous driving startup founded in 2017. Unlike many other autonomous driving companies, Wayve's core approach is end-to-end deep learning: letting an AI system learn to drive the way a human does.
Some of Wayve's key characteristics:
1. End-to-end learning: Wayve's system maps perception inputs (such as camera images) directly to vehicle control commands, with no hand-engineered intermediate stages, allowing it to discover driving strategies on its own.
2. Data-efficient learning: Compared with traditional approaches that require massive training datasets, Wayve's AI aims to learn quickly from less data, making it more flexible and adaptable.
3. Sim-to-real: Wayve first trains AI models in simulated environments and then transfers them to real-world vehicles for fine-tuning, which significantly speeds up development.
4. Multimodal fusion: Beyond vision, Wayve is also integrating natural-language instruction into driving decisions; the LINGO project explores language interaction in autonomous driving.
5. Safety and ethics: Wayve places strong emphasis on the safety and ethical questions around autonomous driving, aiming to build reliable, transparent systems that meet society's expectations.
Overall, Wayve represents an innovative line of thinking in autonomous driving. Its research could accelerate the field's development and reshape future mobility. Although still at an exploratory stage, Wayve's work has injected fresh energy into the driverless-vehicle space.