– Added a new “deep lane guidance” module to the Vector Lanes neural network, which fuses features extracted from the video streams with coarse map data, i.e. lane counts and lane connectivity (a fusion sketch follows these notes). This architecture achieves a 44% lower error rate on lane topology compared to the previous model, enabling smoother control before lanes and their connections become visually apparent. This provides a way for every vehicle on Autopilot to drive as well as someone driving their own commute, yet in a general enough way to adapt to road changes.
– Improved overall driving smoothness, without sacrificing latency, through better modeling of system and actuation latency in trajectory planning. The trajectory planner now independently accounts for latency from steering commands to actual steering actuation, as well as from acceleration and brake commands to actuation (see the latency rollout sketch below). This results in a trajectory that is a more accurate model of how the vehicle will drive, which improves downstream controller tracking and smoothness while also enabling a more accurate response during harsh maneuvers.
– Improved unprotected left turns with a more appropriate speed profile when approaching and exiting median crossover regions, in the presence of high-speed crossing traffic (“Chuck Cook style” unprotected left turns). This was done by allowing an optimizable initial jerk, mimicking the hard pedal press a human applies when needed to move in front of high-speed objects. Also improved the lateral profile approaching such safety regions, to allow for a pose that is well aligned before exiting the region. Finally, improved interaction with objects entering or waiting in the median crossover region through better modeling of their future intent.
– Added control for arbitrary low-speed moving volumes from the Occupancy Network (a voxel-velocity sketch appears below). This also enables finer control for more precise object shapes that cannot be easily represented by a cuboid primitive. This required predicting velocity at every 3D voxel. We can now slow down for slow-moving UFOs.
– Upgraded the Occupancy Network to use video instead of single-timestep images. This temporal context makes the network robust to temporary occlusions and enables prediction of occupancy flow (temporal fusion is sketched below). Also improved ground truth with semantics-driven outlier rejection, hard example mining, and a 2.4x increase in dataset size.
– Upgraded to a new two-stage architecture to produce object kinematics (e.g. velocity, acceleration, yaw rate), where network compute is allocated O(objects) instead of O(space); see the two-stage sketch below. This improved velocity estimates for far-away crossing vehicles by 20%, while using one tenth of the compute.
– Increased smoothness for protected right turns by improving the association of traffic lights with slip lanes versus yield signs. This reduces false slowdowns when no relevant objects are present and also improves the yielding position when they are present.
– Reduced false slowdowns near crosswalks through a better understanding of pedestrian and cyclist intent based on their motion.
– Improved the geometry error of ego-relevant lanes by 34% and of crossing lanes by 21% with a full Vector Lanes neural network update. Information bottlenecks in the network architecture were eliminated by increasing the size of the per-camera feature extractors, the video modules, and the internals of the autoregressive decoder, and by adding a hard attention mechanism that significantly improved the fine positioning of lanes.
– Made the speed profile more comfortable when creeping for visibility, to allow smoother stops when protecting for potentially occluded objects.
– Improved animal recall by 34% by doubling the size of the automatically labeled training set.
– Enabled creeping for visibility at any intersection where objects may cross ego’s path, regardless of the presence of traffic controls.
– Improved stopping-position accuracy in critical scenarios with crossing objects, by allowing dynamic resolution in trajectory optimization to focus more on areas where finer control is essential (a non-uniform discretization sketch follows).
– Increased the recall of forking lanes by 36% by allowing topological tokens to participate in the attention operations of the autoregressive decoder and by increasing the loss applied to fork tokens during training (loss weighting sketched below).
– Improved speed error for pedestrians and cyclists by 17%, especially when ego is cornering, by improving the onboard trajectory estimation used as input to the neural network.
– Improved object detection recall, eliminating 26% of missing detections for far-away crossing vehicles, by tuning the loss function used during training and improving label quality.
– Improved prediction of the future path of objects in high yaw rate scenarios by incorporating yaw rate and lateral motion into the likelihood estimation (see the turn-rate rollout below). This helps with objects turning into or away from ego’s lane, especially at intersections or in cut-in scenarios.
– Improved highway entry speed through better handling of upcoming map speed changes, increasing confidence in highway merging.
– Reduced latency when starting from a stop by taking into account lead vehicle jerk (jerk estimation sketched below).
– Enabled faster identification of red light runners by evaluating their current kinematic state against their expected braking profile (see the braking-profile check below).
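The “deep lane guidance” fusion in the first note can be pictured as: embed the coarse map attributes (lane count, lane connectivity) and concatenate them with pooled video features before the lane-topology decoder. This is a minimal sketch; the module names, feature sizes, encodings, and the plain concatenation are assumptions for illustration, not the production architecture.

```python
# Minimal sketch (assumed names/sizes): fuse pooled video features with a
# coarse map prior (lane count + connectivity) before decoding lane topology.
import torch
import torch.nn as nn

class DeepLaneGuidance(nn.Module):
    def __init__(self, video_dim=256, map_dim=32, hidden=256, num_topology_classes=16):
        super().__init__()
        # Embed coarse map attributes: lane count (scalar) and a flattened
        # 8x8 lane-connectivity adjacency (both hypothetical encodings).
        self.map_encoder = nn.Sequential(nn.Linear(1 + 8 * 8, map_dim), nn.ReLU())
        self.fuse = nn.Sequential(nn.Linear(video_dim + map_dim, hidden), nn.ReLU())
        self.topology_head = nn.Linear(hidden, num_topology_classes)

    def forward(self, video_features, lane_count, connectivity):
        # video_features: (B, video_dim) pooled multi-camera video features
        # lane_count:     (B, 1) coarse number of lanes from the map
        # connectivity:   (B, 8, 8) coarse lane-connection adjacency matrix
        map_feat = self.map_encoder(torch.cat([lane_count, connectivity.flatten(1)], dim=1))
        fused = self.fuse(torch.cat([video_features, map_feat], dim=1))
        return self.topology_head(fused)

if __name__ == "__main__":
    model = DeepLaneGuidance()
    logits = model(torch.randn(2, 256), torch.tensor([[3.0], [2.0]]), torch.zeros(2, 8, 8))
    print(logits.shape)  # torch.Size([2, 16])
```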
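The actuation-latency note amounts to forward-simulating the vehicle with delayed commands, using separate delays for the steering and the acceleration/brake channels. The sketch below uses a kinematic bicycle model with made-up delay values; it illustrates the idea under those assumptions rather than reproducing the planner.

```python
# Sketch: roll out a kinematic bicycle model while applying each command
# channel with its own actuation delay (delay values are assumed).
import math

STEER_DELAY_S = 0.10   # assumed steering command -> actuation delay
ACCEL_DELAY_S = 0.30   # assumed accel/brake command -> actuation delay
WHEELBASE_M = 2.9
DT = 0.05

def delayed(commands, delay_s, t):
    """Return the command that was issued `delay_s` ago (zero-order hold)."""
    idx = max(0, int((t - delay_s) / DT))
    return commands[min(idx, len(commands) - 1)]

def rollout(steer_cmds, accel_cmds, x=0.0, y=0.0, yaw=0.0, v=10.0):
    states = []
    for i in range(len(steer_cmds)):
        t = i * DT
        steer = delayed(steer_cmds, STEER_DELAY_S, t)
        accel = delayed(accel_cmds, ACCEL_DELAY_S, t)
        x += v * math.cos(yaw) * DT
        y += v * math.sin(yaw) * DT
        yaw += v / WHEELBASE_M * math.tan(steer) * DT
        v = max(0.0, v + accel * DT)
        states.append((x, y, yaw, v))
    return states

if __name__ == "__main__":
    n = 60
    print(rollout([0.05] * n, [-1.0] * n)[-1])  # final state with delays applied
```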
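The per-voxel velocity output mentioned for occupancy-based control could be used roughly as follows: flag occupied voxels inside ego’s planned corridor whose velocity magnitude exceeds a small threshold, however oddly shaped the volume. Array shapes and thresholds are assumptions.

```python
# Sketch (assumed shapes/thresholds): find occupied voxels in ego's corridor
# that are moving, even slowly, using a per-voxel velocity field.
import numpy as np

def moving_volumes_in_corridor(occupancy, velocity, corridor_mask,
                               occ_thresh=0.5, speed_thresh=0.2):
    """
    occupancy:     (X, Y, Z) occupancy probabilities
    velocity:      (X, Y, Z, 3) predicted velocity per voxel, m/s
    corridor_mask: (X, Y, Z) bool, voxels swept by ego's planned path
    Returns indices of occupied, moving voxels inside the corridor.
    """
    speed = np.linalg.norm(velocity, axis=-1)
    mask = (occupancy > occ_thresh) & (speed > speed_thresh) & corridor_mask
    return np.argwhere(mask)

if __name__ == "__main__":
    occ = np.zeros((10, 10, 4)); occ[5, 5, 1] = 0.9
    vel = np.zeros((10, 10, 4, 3)); vel[5, 5, 1] = [0.3, 0.0, 0.0]  # slow mover
    corridor = np.zeros((10, 10, 4), dtype=bool); corridor[4:7, :, :] = True
    print(moving_volumes_in_corridor(occ, vel, corridor))  # [[5 5 1]]
```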
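One common way to give an occupancy network the temporal context the video upgrade describes is to recurrently fuse per-frame bird’s-eye-view features, so briefly occluded cells retain their previous state. The gated update rule and layer sizes below are assumptions kept deliberately small.

```python
# Sketch: gated recurrent fusion of per-frame BEV features so the occupancy
# and flow heads see temporal context (update rule and sizes are assumed).
import torch
import torch.nn as nn

class TemporalOccupancy(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        # Learned gate deciding, per cell, how much of the new frame to blend in.
        self.gate = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)
        self.occupancy_head = nn.Conv2d(channels, 1, kernel_size=1)
        self.flow_head = nn.Conv2d(channels, 2, kernel_size=1)  # 2D occupancy flow

    def forward(self, bev_frames):
        # bev_frames: (T, B, C, H, W) per-timestep bird's-eye-view features
        state = torch.zeros_like(bev_frames[0])
        for frame in bev_frames:
            g = torch.sigmoid(self.gate(torch.cat([state, frame], dim=1)))
            state = g * frame + (1 - g) * state  # occluded cells keep old state
        return torch.sigmoid(self.occupancy_head(state)), self.flow_head(state)

if __name__ == "__main__":
    model = TemporalOccupancy()
    occ, flow = model(torch.randn(4, 1, 32, 64, 64))
    print(occ.shape, flow.shape)  # (1, 1, 64, 64) (1, 2, 64, 64)
```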
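The two-stage kinematics note can be read as: a first stage proposes objects from dense spatial features, and a second stage runs a small network only on features gathered at each proposal, so compute scales with the number of objects rather than with the size of the grid. The gathering scheme and head sizes are assumptions.

```python
# Sketch: spend compute O(objects), not O(space) -- gather features at object
# centers and regress kinematics with a small per-object MLP (sizes assumed).
import torch
import torch.nn as nn

class KinematicsSecondStage(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(channels, 64), nn.ReLU(),
            nn.Linear(64, 4),  # vx, vy, accel, yaw_rate per object
        )

    def forward(self, bev_features, object_centers):
        # bev_features:   (B, C, H, W) dense spatial features from stage one
        # object_centers: (B, N, 2) integer grid coordinates of detected objects
        b, c, h, w = bev_features.shape
        flat = bev_features.flatten(2)                              # (B, C, H*W)
        idx = object_centers[..., 0] * w + object_centers[..., 1]  # (B, N)
        gathered = torch.gather(
            flat, 2, idx.unsqueeze(1).expand(b, c, idx.shape[1]))  # (B, C, N)
        return self.head(gathered.transpose(1, 2))                 # (B, N, 4)

if __name__ == "__main__":
    stage2 = KinematicsSecondStage()
    feats = torch.randn(1, 64, 100, 100)
    centers = torch.tensor([[[10, 20], [55, 70], [90, 5]]])  # 3 detected objects
    print(stage2(feats, centers).shape)  # torch.Size([1, 3, 4])
```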
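The dynamic resolution used for stopping-position accuracy can be pictured as a non-uniform discretization that places more optimization knots where fine control matters, e.g. around the stop line. The coarse/fine split below is an assumed illustration.

```python
# Sketch: non-uniform trajectory discretization -- coarse knots over the whole
# horizon plus fine knots around the stop point (split and window are assumed).
import numpy as np

def dynamic_resolution_knots(horizon_s, n_knots, focus_s, fine_window_s=1.0):
    """Return sorted knot times in [0, horizon_s], densest around focus_s."""
    n_fine = n_knots // 2
    coarse = np.linspace(0.0, horizon_s, n_knots - n_fine)
    fine = np.linspace(max(0.0, focus_s - fine_window_s),
                       min(horizon_s, focus_s + fine_window_s), n_fine)
    return np.unique(np.concatenate([coarse, fine]))

if __name__ == "__main__":
    # 8 s horizon, stop line reached around t = 4.5 s.
    print(np.round(dynamic_resolution_knots(8.0, 16, 4.5), 2))
```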
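The increased loss on fork tokens can be expressed as a per-token weight in the decoder’s token cross-entropy, as sketched below; the weight value and the fork-token mask are assumptions.

```python
# Sketch: upweight fork tokens in the autoregressive decoder's token loss
# (the multiplier and the fork-token mask are assumptions).
import torch
import torch.nn.functional as F

FORK_LOSS_WEIGHT = 4.0  # assumed multiplier

def lane_token_loss(logits, targets, is_fork_token):
    # logits: (N, vocab), targets: (N,), is_fork_token: (N,) bool
    per_token = F.cross_entropy(logits, targets, reduction="none")
    weights = torch.where(is_fork_token,
                          torch.full_like(per_token, FORK_LOSS_WEIGHT),
                          torch.ones_like(per_token))
    return (weights * per_token).mean()

if __name__ == "__main__":
    logits = torch.randn(5, 32)
    targets = torch.randint(0, 32, (5,))
    is_fork = torch.tensor([False, False, True, False, True])
    print(lane_token_loss(logits, targets, is_fork).item())
```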
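Incorporating yaw rate into future-path prediction can be as simple as rolling an object forward with a constant turn-rate model instead of a straight-line one. The sketch below is a standard constant turn-rate-and-velocity (CTRV) rollout, shown as an assumed illustration of the idea.

```python
# Sketch: CTRV rollout -- including yaw rate makes the predicted path curve
# with the turning object instead of continuing straight ahead.
import math

def predict_path(x, y, yaw, speed, yaw_rate, horizon_s=3.0, dt=0.1):
    path = []
    for _ in range(int(horizon_s / dt)):
        x += speed * math.cos(yaw) * dt
        y += speed * math.sin(yaw) * dt
        yaw += yaw_rate * dt
        path.append((x, y))
    return path

if __name__ == "__main__":
    turning = predict_path(0, 0, 0, speed=8.0, yaw_rate=0.5)   # turning object
    straight = predict_path(0, 0, 0, speed=8.0, yaw_rate=0.0)  # straight-line model
    print(turning[-1], straight[-1])  # endpoints diverge noticeably
```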
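Using lead-vehicle jerk to reduce launch latency can be pictured as starting to release the brakes as soon as the lead’s acceleration is clearly building, estimated by finite-differencing recent speed samples. The thresholds below are assumptions.

```python
# Sketch: estimate lead-vehicle acceleration and jerk from recent speed
# samples and start moving sooner when the lead is clearly launching.
JERK_LAUNCH_THRESH = 0.5   # m/s^3, assumed
ACCEL_LAUNCH_THRESH = 0.3  # m/s^2, assumed

def lead_is_launching(speed_samples, dt=0.1):
    """speed_samples: last few lead-vehicle speeds (m/s), oldest first."""
    if len(speed_samples) < 3:
        return False
    a1 = (speed_samples[-2] - speed_samples[-3]) / dt
    a2 = (speed_samples[-1] - speed_samples[-2]) / dt
    jerk = (a2 - a1) / dt
    return a2 > ACCEL_LAUNCH_THRESH and jerk > JERK_LAUNCH_THRESH

if __name__ == "__main__":
    print(lead_is_launching([0.00, 0.00, 0.02, 0.10]))  # True: accel building up
    print(lead_is_launching([0.00, 0.00, 0.00, 0.00]))  # False: still stopped
```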
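The red-light-runner check compares a vehicle’s current kinematic state against what a braking vehicle would look like: if the deceleration required to stop before the line already exceeds a plausible braking profile, or the vehicle is not braking anywhere near hard enough, treat it as a likely runner. The constant-deceleration physics and thresholds below are assumptions.

```python
# Sketch: flag a likely red-light runner when the deceleration required to
# stop before the stop line exceeds a plausible braking profile (values assumed).
MAX_PLAUSIBLE_BRAKING = 6.0  # m/s^2, assumed hard-braking limit

def is_likely_red_light_runner(speed, accel, dist_to_stop_line):
    """speed (m/s), accel (m/s^2, negative = braking), distance to line (m)."""
    if dist_to_stop_line <= 0.0:
        return speed > 0.5            # already past the line while still moving
    required_decel = speed ** 2 / (2.0 * dist_to_stop_line)
    # Runner if stopping is no longer physically plausible, or if it would take
    # hard braking and the vehicle is not actually braking anywhere near that.
    return (required_decel > MAX_PLAUSIBLE_BRAKING or
            (required_decel > 3.0 and -accel < 0.5 * required_decel))

if __name__ == "__main__":
    print(is_likely_red_light_runner(15.0, -0.2, 20.0))  # True: not slowing enough
    print(is_likely_red_light_runner(10.0, -3.0, 30.0))  # False: braking normally
```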
Tap the “Video Record” button on the top bar UI to share your feedback. When pressed, your vehicle’s external cameras will share a short VIN-associated Autopilot Snapshot with the Tesla engineering team to help make improvements to FSD. You will not be able to view the clip.