About Apollo "cloud + end" of the actual content, Apollo 2.0 combat technical foundation

On March 31 we held the third session of the autonomous driving open class, "Apollo 2.0 Autonomous Driving Platform: Technology Analysis and Application," in Beijing, attracting developers from more than 300 companies, including automakers, parts suppliers, software companies, and autonomous driving startups.

Senior engineers Yang Fan and Wan Guowei from Baidu, together with Apollo ecosystem partners Huang Yingjun and Li Xiaofei, gave in-depth explanations and demonstrations, covering the Apollo "cloud + vehicle" development model and a walkthrough of the iterated code, Apollo 2.0 positioning technology and the use of its sub-modules, a distributed computing platform solution, and a low-speed autonomous driving deployment solution. Developers asked plenty of questions and the atmosphere was lively.

Today we are sharing the edited videos and materials from the open class. Developers who could not attend in person can go through the course content via the slides.

Apollo 2.0 in Practice: Technical Foundations

Yang Fan, Senior Architect, Baidu Autonomous Driving Division

I am very pleased to have the opportunity to share Baidu Apollo's "cloud + vehicle" R&D model with you. I will first help you understand Apollo 2.0's offline demonstration mode and its real-vehicle, waypoint-following driving capability. Then comes the core part of this open class: the practical basics of obstacle perception and path planning. On that foundation, I will introduce the cloud training platform, using traffic-light perception training as an example of how cloud-side algorithms enhance autonomous driving capability. Finally, the cloud simulation capability is used to complete verification.

Today's AI-based autonomous driving capability comes from the cloud: large amounts of data are annotated and used for training there. Take the traffic light as an example of the AI training process. A car on the road is subject to strict safety requirements, far beyond the general requirements placed on traditional cars. Completing the safety testing of an autonomous driving system with 100 cars running around the clock would take roughly 100 years. Real vehicles alone therefore cannot complete autonomous driving testing: about 90% of the test work needs to happen in the cloud, where simulation techniques verify the vehicle's capabilities at large scale to ensure safety.

Precisely because ensuring the safety of autonomous driving is a long-term, difficult, and complicated systems engineering effort, Baidu has adopted an open strategy for autonomous driving. Baidu has explored autonomous driving for a long time, and the longer we work on it, the more respect we have for autonomous driving and for the automotive industry. A car has tens of thousands of parts; autonomous driving is not a single industrial link but a complete industrial chain, including OEMs, parts, communications, perception, decision-making, and control systems. We therefore see it as an ecosystem: everyone finds their position within a complete ecosystem and cooperates to develop autonomous driving systems.

So Baidu proposed the Apollo open strategy. Baidu has opened its years of accumulated autonomous driving research results to the ecosystem, allowing everyone to share technology and data and, on that basis, share resources. The more you use and the more you share, the more you get and the more you win within the ecosystem.

The Apollo technology framework consists of four layers:

• Reference Vehicle Platform (a reference vehicle platform: a car that can be controlled by electronic signals, i.e. a drive-by-wire vehicle)

• Reference Hardware Platform (a reference hardware platform, including the computing unit, GPS/IMU, camera, LiDAR, millimeter-wave radar, human-machine interface devices, Black Box, etc.)

• Open Software Platform (an open software platform: includes the real-time operating system, the framework layer that hosts all modules, high-definition maps and the positioning module, the perception module, the decision and planning module, and the control module)

• Cloud Service Platform (a cloud service platform: includes high-definition maps, the driving simulation service, the data platform, security and OTA services, etc.)

The newly opened modules in Apollo 2.0 include Security, Camera, Radar, and Black Box, which means the Apollo platform has now opened all four layers: the cloud service platform, the open software platform, the reference hardware platform, and the reference vehicle platform. The security and OTA upgrade services newly opened in Apollo 2.0 allow only correct and protected data to enter the vehicle, and further strengthen capabilities such as self-localization, perception, planning and decision-making, and cloud simulation. The Black Box module includes software and hardware systems that enable secure storage and high-capacity data transmission, helping us detect abnormal situations in time and improving the security and reliability of the entire platform.

In terms of hardware, two forward-facing cameras (one telephoto, one short-focus) have been added, mainly for traffic-light recognition, and a new millimeter-wave radar has been installed above the front bumper. With the Camera and Radar modules opened up in Apollo 2.0, the platform gained initial sensor-fusion capability, which improves its adaptability to simple urban road conditions, day and night.

During this year's Spring Festival Gala, Baidu Apollo sent more than a hundred vehicles onto the Hong Kong-Zhuhai-Macao Bridge and completed a figure-eight crossing formation in autonomous driving mode.

So how was the bridge's autonomous driving demo completed in such a short time? Take a look at Apollo's directory structure, which contains Docs, Modules, Scripts, and other subdirectories. Modules is where all of Apollo's modules live; Scripts contains commonly used tooling scripts; Third-party holds external libraries; and Tools contains various utilities.

In Modules, you can see the main modules of Apollo.

Apollo adopts a base-class plus class-factory architecture, which makes it straightforward to develop new modules and new functions. Every developer or ecosystem partner can easily plug their own components into Apollo's framework to obtain their own autonomous driving capabilities.
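To make the pattern concrete, here is a minimal, self-contained sketch of the base-class plus class-factory idea. The class and function names below are illustrative, not Apollo's actual interfaces:

```cpp
// Illustrative sketch of a base class + class factory (hypothetical names).
#include <functional>
#include <iostream>
#include <map>
#include <memory>
#include <string>

// Abstract base class that every pluggable module implements.
class ControllerBase {
 public:
  virtual ~ControllerBase() = default;
  virtual void Run() = 0;
};

// Simple class factory: maps a registered name to a creator function.
class ControllerFactory {
 public:
  using Creator = std::function<std::unique_ptr<ControllerBase>()>;

  static ControllerFactory& Instance() {
    static ControllerFactory factory;
    return factory;
  }

  void Register(const std::string& name, Creator creator) {
    creators_[name] = std::move(creator);
  }

  std::unique_ptr<ControllerBase> Create(const std::string& name) {
    auto it = creators_.find(name);
    return it == creators_.end() ? nullptr : it->second();
  }

 private:
  std::map<std::string, Creator> creators_;
};

// A partner-specific module only needs to subclass the base and register itself.
class MyLidarController : public ControllerBase {
 public:
  void Run() override { std::cout << "running custom controller\n"; }
};

int main() {
  ControllerFactory::Instance().Register(
      "my_lidar", [] { return std::make_unique<MyLidarController>(); });
  auto controller = ControllerFactory::Instance().Create("my_lidar");
  if (controller) controller->Run();
  return 0;
}
```

Because modules are created by name through the factory, a new implementation can be selected purely through configuration, without touching the framework code.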

The relationship between the main modules is shown in the figure below:

The Localization module, supported by the HD Map, produces the map and positioning information on which the entire system is based. On top of this, the Perception module senses the surrounding environment and detects obstacles; combined with the map, it can also perceive traffic lights. The Prediction module predicts obstacle behavior from the perception results; the Planning module produces the trajectory decision (path and speed) based on the obstacle predictions and the information from the Routing module; and the Control module drives the vehicle through the CANBUS module according to the Planning results.

The Apollo system is built on the ROS platform, which is very familiar to everyone in the robotics field; Baidu has entered a deep cooperation with ROS to lower the barrier for developers. ROS offers a complete development toolkit, a complete computation and scheduling model, and numerous debugging tools and existing software packages. To strengthen ROS's capabilities for autonomous driving, Apollo has made a number of custom optimizations. If you test autonomous driving on a real vehicle, you are advised to use the ROS version provided by the Apollo platform.

ROS communication is based on ROS topics, and Apollo's main ROS topics are shown in the figure. Developers can view and debug Apollo using native ROS tools.

With these foundations in place, how do we build up autonomous driving step by step?

Let's first look at how Apollo can be demonstrated and verified offline, without a real vehicle.

At this point, you can see Apollo's DreamView demo on your browser.

Vehicle and waypoint-following driving capability

Next, we introduce the vehicle and waypoint-following (trajectory replay) driving capability. This step verifies the basic capability of the drive-by-wire vehicle and the integration of hardware and software. As shown in the dashed box above, waypoint following mainly relies on the positioning and control capabilities.

As described in the Apollo architecture, the vehicle platform needs to be retrofitted for drive-by-wire by the OEM. The hardware devices then need to be installed on the vehicle and the IPC configured; the specific installation steps can be found in Apollo's installation guide. On top of the vehicle and hardware platforms, Apollo provides software capabilities such as braking, power, steering control, and the related information exchange.


Developers can add their own vehicle through the Vehicle interface; the steps are listed below (an illustrative sketch follows the list). See:

[https://github.com/ApolloAuto/apollo/blob/master/docs/howto/how_to_add_a_new_vehicle.md]

• Implement new car controller

• Implement new message manager

• Register a new car in the factory class

• Update the configuration file: canbus/conf/canbus_conf.pb.txt
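The following is a hedged, self-contained skeleton of what the first and third steps might look like. The class names, method signatures, and the "MY_CAR" identifier are illustrative stand-ins, not Apollo's exact interfaces:

```cpp
// Hypothetical skeleton for adding a new vehicle (illustrative names only).
#include <iostream>
#include <memory>
#include <string>

// Abstract interface the canbus module would call for any vehicle.
class VehicleControllerBase {
 public:
  virtual ~VehicleControllerBase() = default;
  virtual void Brake(double pedal_percent) = 0;
  virtual void Throttle(double pedal_percent) = 0;
  virtual void Steer(double angle_percent) = 0;
};

// Step 1: implement the controller for the new drive-by-wire vehicle.
class MyCarController : public VehicleControllerBase {
 public:
  void Brake(double pedal_percent) override {
    std::cout << "brake " << pedal_percent << "%\n";      // would write CAN frames
  }
  void Throttle(double pedal_percent) override {
    std::cout << "throttle " << pedal_percent << "%\n";
  }
  void Steer(double angle_percent) override {
    std::cout << "steer " << angle_percent << "%\n";
  }
};

// Step 3: register the new vehicle so it can be created by the name
// selected in canbus/conf/canbus_conf.pb.txt.
std::unique_ptr<VehicleControllerBase> CreateController(const std::string& brand) {
  if (brand == "MY_CAR") return std::make_unique<MyCarController>();
  return nullptr;
}

int main() {
  auto controller = CreateController("MY_CAR");
  controller->Throttle(10.0);
  controller->Brake(20.0);
  return 0;
}
```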

CAN cards also come in many varieties, and different cars are controlled differently, requiring different handling. The whole module is pluggable and flexible to configure and extend. To add a new CAN card:

• Implement the new CAN card's CanClient

• Register the new CAN card in the factory class CanClientFactory

• Update configuration file: canbus/proto/can_card_parameter.proto

Next, create a controller: add the configuration for the new controller in the control_config file, and register the new controller.

On top of the vehicle, Apollo provides high-accuracy positioning, mainly through multi-sensor fusion. We will cover positioning technology in a separate session, so I will not say much about it here.

After that, a new GPS receiver can be added through the GPS Receiver interface (a hedged sketch follows the list):

• Inherit from the Parse class to implement data parsing for the new GPS receiver

• Add a new interface for the new GPS receiver in the Parse class

• Add the new GPS receiver's data format in the configuration file config.proto

• Add the new receiver's implementation logic to the create_parser method in data_parser.cpp
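Here is a simplified, self-contained sketch of those steps. The Parse interface, the format enum, and create_parser below are stand-ins for the real classes in the Apollo GNSS driver, so treat the exact shapes as assumptions:

```cpp
// Hedged sketch of adding a new GPS receiver parser (illustrative types).
#include <cstdint>
#include <memory>
#include <vector>

// Base class: turns a raw byte stream from the receiver into parsed messages.
class Parse {
 public:
  virtual ~Parse() = default;
  virtual bool ParseBuffer(const std::vector<uint8_t>& raw) = 0;
};

// Steps 1-2: data parsing for the new GPS receiver.
class MyReceiverParse : public Parse {
 public:
  bool ParseBuffer(const std::vector<uint8_t>& raw) override {
    // Decode the vendor-specific frame format here and emit
    // position / velocity / attitude messages.
    return !raw.empty();
  }
};

// Step 4: extend the parser factory (create_parser in data_parser.cpp)
// so the new format declared in config.proto maps to the new parser.
enum class GnssFormat { NOVATEL_BINARY, MY_RECEIVER_FORMAT };

std::unique_ptr<Parse> create_parser(GnssFormat format) {
  switch (format) {
    case GnssFormat::MY_RECEIVER_FORMAT:
      return std::make_unique<MyReceiverParse>();
    default:
      return nullptr;  // other formats handled by the existing parsers
  }
}

int main() {
  auto parser = create_parser(GnssFormat::MY_RECEIVER_FORMAT);
  std::vector<uint8_t> frame = {0xAA, 0x44, 0x12};
  return parser && parser->ParseBuffer(frame) ? 0 : 1;
}
```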

Once all of this is ready, we can start waypoint-following autonomous driving. The first step is recording: under Quick Record in DreamView, click Setup to start all modules and perform a hardware health check. If the check passes, click the Start button to begin recording the driven trajectory, drive the route manually, and click the Stop button to stop recording after reaching the destination.

The second step is execution: under Quick Play, click Setup to start all modules and perform the hardware health check. Make sure the driver is ready! Click the Start button to begin autonomous driving, and click the Stop button to stop replaying the recorded trajectory after reaching the destination.

(Apollo 1.0 demo)

Once this series of work was complete, Apollo 1.0's autonomous driving capability was in place. With high-accuracy positioning, trajectory-replay autonomous driving can execute very precise maneuvers: two vehicles can pass each other in a tightly choreographed, precise motion.

Obstacle perception and path planning capabilities

What autonomous driving capability does Apollo 2.0 provide? As shown in the dashed box above, perception and decision/planning are added on top of the positioning and control, which closes the loop of autonomous driving.

The mainstream sensors include cameras, radars, and LiDARs. Each type of sensor has strengths and weaknesses. For example, the camera performs well at classifying obstacles, but it is hard for a camera to make an accurate judgment of an obstacle's speed.

Radar has an advantage in measuring distance and speed, and its penetrating power is very good, but its ability to classify obstacles is relatively weak.

LiDAR actively emits energy and relies on the echoes for detection, so it is good at judging the distance to obstacles and can detect obstacles even in the dark. However, LiDAR is still very expensive.

Therefore, we must fuse these sensors together and use their respective strengths.

Using multiple sensors raises the calibration problem. The sensors themselves are high-precision, but their physical installation can never be made perfectly precise. To use multiple sensors effectively, we need to calibrate them on the vehicle: since the mounting cannot be exact, calibration lets us measure the actual installation error and then compensate for it in computation.
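To make the compensation idea concrete, one common way to express an extrinsic calibration (illustrative, not Apollo's exact formulation) is a rigid transform from the sensor frame to the vehicle frame:

$$p_{\text{vehicle}} = R\,p_{\text{lidar}} + t$$

where the rotation $R$ and translation $t$ are the extrinsic parameters recovered by calibration. Once they are known, every LiDAR point can be mapped into the vehicle (or IMU) frame, compensating for the installation error.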

LiDAR GNSS Calibration Reference:

• Start the 64-beam LiDAR and the integrated inertial navigation system. The NovAtel integrated navigation unit requires initialization the first time it is powered on; at this point the car should drive straight, turn left and right, and so on in an open area until the inertial navigation initialization completes.

• Verify that the sensor data topics are being published.

• The calibration site should be an open area without tall buildings, with flat ground and flat surrounding structures, where a figure-eight route can be driven.

• bash lidar_calibration.sh start_record / stop_record

• The program checks whether the recorded bag contains all the required topics. Once the check passes, the bag is packaged into a lidar_calib_data.tar.gz file, containing the recorded rosbag and the corresponding MD5 checksum file.

[ https://console.bce.baidu.com/apollo/calibrator/index/list]

• mkdir -p modules/calibration/data/[CAR_ID]/

• For other calibration procedures, please refer to:

[https://github.com/ApolloAuto/apollo/blob/master/docs/quickstart/apollo_2_0_sensor_calibration_guide_en.md]

Once the sensors are installed, we need a perception system. Perception covers many tasks, such as obstacle detection, obstacle classification, semantic segmentation, and target tracking. How do we do it?

In Apollo, we use 3D obstacle detection as an example to introduce the process of perception.

The advantage of LiDAR-based 3D obstacle perception is that it works day and night and detects continuously. We use deep learning in the framework, which solves, with good precision, many problems that are difficult for traditional rule-based methods; and we use NVIDIA GPUs, so the huge computational load is placed on the GPU for efficient perception processing.

To detect an obstacle, the main steps are: first apply an ROI filter based on the high-definition map, keeping only the data we consider relevant; then compute feature values and complete the segmentation of each region through a CNN, so that objects can be identified effectively; next, build the obstacle bounding box with MinBox; and finally, through HM object tracking, perceive the obstacle's trajectory and compute its speed.

To do 3D detection, we need to process the LiDAR point cloud. Guided by the high-definition map, we keep only the parts we are interested in. After the point cloud is converted into features, it is fed into the CNN, which groups points into objects through edge recognition, so that each object can finally be expressed in the coordinate system. By associating objects across different frames, object tracking is accomplished: with an object's trajectory across frames, you know its position and how fast it is moving.
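The pipeline can be summarized in code form. The sketch below is only an illustration of the stage ordering (HD-map ROI filter, CNN segmentation, MinBox, HM tracking); the types, thresholds, and function bodies are placeholders, not Apollo's real implementation:

```cpp
// Illustrative sketch of the LiDAR 3D-obstacle pipeline described above.
#include <vector>

struct PointXYZ { double x, y, z; };
struct Obstacle { std::vector<PointXYZ> points; double length, width, height; };
struct TrackedObstacle { Obstacle obstacle; int track_id; double speed; };

// 1. Keep only points inside the region of interest taken from the HD map.
std::vector<PointXYZ> RoiFilter(const std::vector<PointXYZ>& cloud) {
  std::vector<PointXYZ> roi;
  for (const auto& p : cloud) {
    if (p.x > -60.0 && p.x < 60.0 && p.y > -30.0 && p.y < 30.0) roi.push_back(p);
  }
  return roi;
}

// 2. CNN-based segmentation groups foreground points into obstacle candidates
//    (here just a stub returning a single cluster).
std::vector<Obstacle> CnnSegmentation(const std::vector<PointXYZ>& roi) {
  if (roi.empty()) return {};
  return {Obstacle{roi, 0.0, 0.0, 0.0}};
}

// 3. MinBox fits an oriented bounding box around each cluster.
void MinBox(std::vector<Obstacle>* obstacles) {
  for (auto& obs : *obstacles) { obs.length = 4.5; obs.width = 1.8; obs.height = 1.5; }
}

// 4. The HM (Hungarian-matching) tracker associates obstacles across frames
//    and estimates speed from each track.
std::vector<TrackedObstacle> HmTrack(const std::vector<Obstacle>& obstacles) {
  std::vector<TrackedObstacle> tracks;
  for (const auto& obs : obstacles) tracks.push_back({obs, 0, 0.0});
  return tracks;
}

int main() {
  std::vector<PointXYZ> cloud = {{10.0, 1.0, 0.5}, {10.2, 1.1, 0.6}};
  auto obstacles = CnnSegmentation(RoiFilter(cloud));
  MinBox(&obstacles);
  auto tracks = HmTrack(obstacles);
  return tracks.empty() ? 1 : 0;
}
```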

Apollo has LiDAR-based detection, millimeter-wave radar detection, and image-based traffic light recognition. How do these fit together? Multi-sensor fusion relies on the Perception fusion DAG framework. As shown in the figure above, by building algorithm sub-nodes and connecting them through a DAG description, developers can complete customized multi-sensor perception and fusion.

How do we plan an effective path for autonomous driving? Take the EM planner as an example. As shown in the figure below, the planning structure consists of the ReferenceLine, the HD Map, and the EM Planner.

Through the DP (dynamic programming) path algorithm, the lowest-cost feasible path is obtained.

We discretize the solution process. The benefits are: low dependence on the road centerline; the cost function is not restricted to a single form, so it adapts to complex road conditions; and it is naturally suited to parallelization. This fixes the pain points of rule-based decision optimization. On the other hand, the result of this DP step is not perfect: it is not an optimal solution, the form at the knot points is fixed, and even a smooth-looking route can still be stiff, so complex situations are not handled smoothly enough.

As shown in the figure above, we can complete DP planning in the s-t coordinate system with the same DP logic. On this basis, further QP (quadratic programming) optimization and iterative adjustment produce an effective planning result.
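To make the two-step idea concrete, here is an illustrative (not Apollo's exact) form of the costs involved. The DP stage minimizes a discretized cost over candidate lateral offsets $\ell_i$ along the reference line:

$$C_{\mathrm{DP}} = \sum_i \Big( w_{\mathrm{obs}}\, c_{\mathrm{obs}}(\ell_i) + w_{\mathrm{ref}}\, (\ell_i - \ell_i^{\mathrm{ref}})^2 + w_{\mathrm{smooth}}\, (\ell_{i+1} - \ell_i)^2 \Big)$$

and the QP stage then smooths a continuous path around the DP seed:

$$\min_{\ell(s)} \int \big( w_1\, \ell'(s)^2 + w_2\, \ell''(s)^2 + w_3\, \ell'''(s)^2 \big)\, \mathrm{d}s \quad \text{subject to collision and vehicle-dynamics constraints.}$$

The specific terms and weights here are only for illustration; the point is that DP provides a rough, feasible decision and QP refines it into a smooth trajectory.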

Finally, to create a Planner: add the new planner's configuration to the modules/planning/conf/planning_config.pb.txt file, and register the new planner in modules/planning/planning.cc. A hedged sketch follows.
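The skeleton below is a simplified stand-in for these two steps; the Planner interface and the "MY_PLANNER" name are illustrative, not Apollo's exact signatures:

```cpp
// Hedged sketch of adding and registering a new planner (illustrative types).
#include <iostream>
#include <memory>
#include <string>
#include <vector>

struct TrajectoryPoint { double x, y, v, t; };

class Planner {
 public:
  virtual ~Planner() = default;
  virtual bool Init() = 0;
  virtual bool Plan(const TrajectoryPoint& start,
                    std::vector<TrajectoryPoint>* trajectory) = 0;
};

// A new planning strategy only has to implement Init() and Plan().
class MyPlanner : public Planner {
 public:
  bool Init() override { return true; }
  bool Plan(const TrajectoryPoint& start,
            std::vector<TrajectoryPoint>* trajectory) override {
    trajectory->push_back(start);  // produce a (trivial) trajectory
    return true;
  }
};

// Registration step: planning.cc would pick the planner named in
// modules/planning/conf/planning_config.pb.txt.
std::unique_ptr<Planner> CreatePlanner(const std::string& name) {
  if (name == "MY_PLANNER") return std::make_unique<MyPlanner>();
  return nullptr;  // existing planners (e.g. EM) handled elsewhere
}

int main() {
  auto planner = CreatePlanner("MY_PLANNER");
  std::vector<TrajectoryPoint> traj;
  planner->Init();
  planner->Plan({0.0, 0.0, 5.0, 0.0}, &traj);
  std::cout << traj.size() << " trajectory point(s)\n";
  return 0;
}
```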

"Cloud + End" R&D Iteration Cloud + Vehicles

From the development process above, you can see that Baidu Apollo has adopted a new "cloud + vehicle" iterative R&D model to accelerate the development of self-driving cars. Baidu has accumulated a huge amount of data in its autonomous driving R&D, and uses cloud server clusters on that data to efficiently produce AI models, that is, the vehicle "brain". When the brain is updated into the vehicle, the vehicle gains the ability to drive autonomously.

Autonomous driving data can be divided into four categories:

The data generated by self-driving vehicles is first of all raw data: mainly sensor data, the vehicle's own data, driving behavior data, and so on. This data is characterized by large volume and many types, and it is mostly unstructured or semi-structured, which makes storage, transmission, and processing all very challenging.

The data platform is the core platform supporting Baidu's "cloud + vehicle" iterative R&D model for intelligent vehicles. It consists of three parts: data collection and transmission, the autonomous driving data warehouse, and the autonomous driving computing platform. First is data collection and transmission: the Data-Recorder produces complete, accurately recorded data packages according to the Apollo data specification, which can be used both to reproduce problems and to accumulate data. Through the transmission interface, data can be efficiently transmitted to operation sites and cloud clusters.

Next is the autonomous driving data warehouse, which organizes all the massive data so that it is systematic, quickly searchable, and flexibly usable, providing data support for the data pipeline and the various business applications.

The autonomous driving computing platform provides powerful computing power based on heterogeneous cloud computing hardware, and offers multiple computing models through fine-grained container scheduling to support the various business applications, such as the training platform, the simulation platform, and the vehicle calibration platform.

To use the data for deep learning, a large amount of it needs to be labeled. Baidu's labeled datasets mainly include the traffic light dataset, obstacle datasets (2D and 3D), the semantic segmentation dataset, the free-space dataset, and the behavior prediction dataset.

To characterize autonomous driving behavior, the data also needs to be abstracted into logical data: primarily refined perception data, environmental abstraction data, vehicle dynamics models, and so on.

Finally, we also build simulation data for simulations, mainly parameter-fuzzed data, 3D reconstruction data, and interactive behavioral data.

Apollo has opened six annotated datasets:

• Laser point cloud obstacle detection and classification

Provides 3D point cloud annotation data labeling four types of obstacles: pedestrians, motor vehicles, non-motor vehicles, and other. It can be used for the development and evaluation of obstacle detection and classification algorithms.

• Traffic light detection

Provides image data of common vertical traffic lights. Collection took place in the daytime, in weather covering sunny, cloudy, and foggy days, at 1080p resolution.

• Road Hackers

This dataset has two main types of data: street-view images and vehicle motion status. The street-view images show the view ahead of the vehicle, and the motion status data includes the vehicle's current speed and trajectory curvature.

• Image-based obstacle detection classification

Data collection covers urban roads and highway scenes. Four categories of obstacles are labeled manually: motor vehicles, non-motor vehicles, pedestrians, and static obstacles. It can be used for the development and evaluation of visual obstacle detection and recognition algorithms.

• Obstacle trajectory prediction

The samples are derived from comprehensive abstract features of multi-source sensors. Each set of data provides 62 dimensions of vehicle- and road-related information and can be used for the development and evaluation of obstacle behavior prediction algorithms.

• Scene parsing

The data includes tens of thousands of frames of high-resolution RGB video with corresponding pixel-level semantic annotations. It also provides dense point clouds with measurement-grade semantic segmentation, stereo video, and stereo panoramic images.

In addition, we have also opened the ApolloScape dataset, currently planned to reach the 200,000-frame level. On March 8 we released the first batch of 80,000 frames, collected with cameras and LiDAR scanners.

ApolloScape will be of great help to the whole academic community; both the data itself and its quality have distinctive characteristics, and we will release more of it in the near future. We hope everyone will take part in the algorithm work as well. [Dataset download: please visit http://apolloscape.auto]

For each dataset we also provide the supporting computing power through the Apollo training platform. Its features: through Docker plus a GPU cluster, it provides hardware computing capabilities consistent with the vehicle side; it integrates multiple frameworks to provide a complete deep learning solution; and interactive, visual result analysis makes algorithm debugging and optimization convenient.

In developing autonomous driving algorithms, one of the biggest pain points is the need for massive datasets and repeated experiments. Moving the deep learning R&D flow (development, training, verification, and debugging) into the cloud makes full use of the cloud's abundant computing resources while keeping the data flowing within the cloud's servers, greatly improving the efficiency of algorithm development. Specifically, developers first develop their algorithm in a local Docker-based development environment and set up the dependencies, then push the prepared environment to the private Docker repository in the cloud.

Next, they select datasets on the platform and launch training tasks. The Apollo training platform's cloud scheduler dispatches the tasks to run on the compute cluster. Within the cloud cluster, the developer's program uses the data access interface to obtain the datasets from the autonomous driving data warehouse. Finally, the business management framework returns the execution process, the evaluation results, and the model to the visualization platform to complete visual debugging.

The algorithm is trained in the cloud and can then be deployed to the car. This is equivalent to having countless cars running, verifying, and optimizing the algorithms: because the cloud has hundreds of thousands of servers, it can support verification for countless vehicles.

The Apollo open platform also includes a simulator. Here is a demo algorithm for traffic light detection, based on SSD (Single Shot MultiBox Detector):

[https://arxiv.org/abs/1512.02325]

[https://github.com/weiliu89/caffe/tree/ssd]

Everyone can register an account and experience the whole process: in a variety of scenarios you can see your own models and code running in the simulator. As you can see, the traffic light recognition works well even in the rain.
