Autonomous driving has gained popularity due to its potential for higher reliability than human drivers. Autonomous vehicles combine a variety of sensors to perceive their surroundings and use deep learning (DL) to extract complex information from the sensing data. However, several challenges remain: many DL models are very large, making them both time-consuming and power-consuming when deployed on in-vehicle embedded systems, which further shortens battery life. Moreover, current on-board AI treats lane detection and vehicle detection as separate tasks. In this paper, we propose an end-to-end multi-task environment detection framework. We fuse a 3D point-cloud object detection model with a lane detection model and apply a model compression technique. As on-board sensors forward information to the multi-task network, it not only runs the two detection tasks in parallel to extract combined information, but also reduces the overall running time of the DL model. Experiments show that adding the model compression technique improves the running speed of the multi-task model by more than a factor of two. In addition, the running time of the lane detection model on an Nvidia Jetson TX2 is almost six times lower than on a CPU, which demonstrates the suitability of embedded AI computing devices for autonomous vehicles.
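
To make the multi-task structure concrete, the sketch below shows a minimal shared-backbone network with two parallel detection heads in PyTorch. This is an illustrative assumption, not the paper's actual architecture: all module names and layer sizes are hypothetical stand-ins for the fused object detection and lane detection models, and the dynamic-quantization step at the end is only one possible compression technique, since the abstract does not specify which one is used.

```python
# Minimal multi-task sketch: one shared backbone feeding two parallel
# detection heads, so the expensive feature extraction is computed once
# for both tasks. All names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class MultiTaskDetector(nn.Module):
    def __init__(self, num_object_classes: int = 3, num_lane_points: int = 56):
        super().__init__()
        # Shared feature extractor (runs once per input frame)
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        # Task head 1: object detection (reduced here to class scores)
        self.object_head = nn.Linear(64 * 8 * 8, num_object_classes)
        # Task head 2: lane detection (reduced here to lane-point offsets)
        self.lane_head = nn.Linear(64 * 8 * 8, num_lane_points)

    def forward(self, x: torch.Tensor):
        feats = self.backbone(x).flatten(1)
        # Both heads consume the same shared features, so the backbone
        # cost is paid only once for the two detection tasks.
        return self.object_head(feats), self.lane_head(feats)

if __name__ == "__main__":
    model = MultiTaskDetector()
    frame = torch.randn(1, 3, 256, 256)   # dummy camera frame
    obj_out, lane_out = model(frame)
    print(obj_out.shape, lane_out.shape)  # (1, 3) and (1, 56)

    # One illustrative compression step (hypothetical choice): dynamic
    # quantization of the linear heads to int8 to cut compute and memory.
    compressed = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )
    obj_q, lane_q = compressed(frame)
```

Sharing a single backbone is what allows the two tasks to run "in parallel" in the sense used above: rather than executing two full networks back to back, only the lightweight task-specific heads are duplicated.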