Self-driving car technologies have been in development for decades, but most of the breakthroughs have come in recent years. More often than not, self-driving cars have proven to be safer than human drivers, and automotive and tech companies are investing billions to bring the technology into the real world.

Broadly speaking, self-driving car engineers today use two different approaches to develop an autonomous system: the robotics approach and the deep learning approach. The robotics approach fuses the output of a suite of sensors to directly analyse the vehicle's surroundings and then navigate accordingly. Engineers have been working on and refining robotics approaches for many years. More recently, engineering teams have begun to use deep learning approaches for developing autonomous vehicles: deep neural networks allow self-driving cars to learn how to drive by mimicking human driving behaviour. Both robotics and deep learning methods are actively being pursued in the development of self-driving cars today.

Lane line finding is a good starter project for getting into self-driving cars, so in this post we will walk through an advanced lane line finding project.

The goals / steps of this project are the following:

1. Compute the camera matrix and distortion coefficients

The code for this step is contained in code cells [2] and [3] of the IPython notebook “Final_Advance_Lane_Lines_Finding.ipynb”.

We run our chessboard-finding algorithm over multiple chessboard images taken from different angles to identify image points and object points for calibrating the camera. The former refers to coordinates in our 2D mapping, while the latter represents the real-world coordinates of those image points in 3D space (with the z-axis, or depth, equal to 0 for our chessboard images). We then calibrate the camera with the given object points and image points, and pickle the resulting calibration matrices for later use.

The OpenCV functions findChessboardCorners and calibrateCamera are the backbone of the camera calibration. A number of images of a chessboard, taken from different angles with the same camera, comprise the input. Arrays of object points, corresponding to the locations (essentially indices) of the internal corners of the chessboard, and image points, the pixel locations of the internal chessboard corners determined by findChessboardCorners, are fed to calibrateCamera, which returns the camera matrix and distortion coefficients. These can then be used by the OpenCV undistort function to undo the effects of distortion on any image produced by the same camera. Generally, these coefficients will not change for a given camera (and lens). A sketch of the procedure follows, and the output image is shared below:
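For reference, here is a minimal sketch of that calibration loop. The 9×6 internal-corner grid and the camera_cal/calibration*.jpg path are assumptions based on a typical project layout, not necessarily the notebook's exact values:

```python
import glob
import cv2
import numpy as np

# Prepare object points for a 9x6 internal-corner chessboard (assumed size):
# (0,0,0), (1,0,0), ..., (8,5,0) -- z stays 0 because the board is flat.
objp = np.zeros((9 * 6, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

objpoints = []  # 3D points in real-world space
imgpoints = []  # 2D points in the image plane

# Illustrative path; adjust to wherever the calibration images live
for fname in glob.glob('camera_cal/calibration*.jpg'):
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    ret, corners = cv2.findChessboardCorners(gray, (9, 6), None)
    if ret:
        objpoints.append(objp)
        imgpoints.append(corners)

# Calibrate using the collected correspondences
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
```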

2. Apply a distortion correction to raw images

To apply distortion correction to the test images, I pickled the camera calibration matrix mtx and distortion coefficients dist (code cell [3]) and used them in cv2.undistort (code cell [4]) to remove distortion from the test images. See the example below of a distortion-corrected image.
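A minimal sketch of this pickle-and-undistort flow is shown below; the calibration.p file name and the test image path are illustrative:

```python
import pickle
import cv2

# Persist the calibration so it can be reused without re-running
# the chessboard detection (file name is illustrative)
with open('calibration.p', 'wb') as f:
    pickle.dump({'mtx': mtx, 'dist': dist}, f)

# Later: load the coefficients and undistort a test image
with open('calibration.p', 'rb') as f:
    cal = pickle.load(f)

img = cv2.imread('test_images/test1.jpg')
undistorted = cv2.undistort(img, cal['mtx'], cal['dist'], None, cal['mtx'])
```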

3. Use color transforms, gradients, or other methods to create a thresholded binary image

In code cells [6] and [7] of the notebook we apply color and edge thresholding to better detect the lane lines; this makes it easier to find the polynomial that best describes our left and right lanes later. We combine three thresholds: the Sobel x gradient, the R channel, and the S channel. A sketch of the combination follows, and results can be seen below:
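The sketch below shows one way to combine the three thresholds. The threshold ranges are illustrative defaults, not the notebook's tuned values:

```python
import cv2
import numpy as np

def combined_threshold(img, sx_thresh=(20, 100), r_thresh=(200, 255),
                       s_thresh=(170, 255)):
    """Combine Sobel-x gradient, R-channel, and S-channel thresholds.
    Threshold ranges here are illustrative, not the notebook's numbers."""
    # Sobel x gradient on the grayscale image, scaled to 0-255
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    abs_sobelx = np.absolute(cv2.Sobel(gray, cv2.CV_64F, 1, 0))
    scaled = np.uint8(255 * abs_sobelx / np.max(abs_sobelx))
    sx_binary = (scaled >= sx_thresh[0]) & (scaled <= sx_thresh[1])

    # R channel from BGR, S channel from HLS
    r_channel = img[:, :, 2]
    s_channel = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)[:, :, 2]
    r_binary = (r_channel >= r_thresh[0]) & (r_channel <= r_thresh[1])
    s_binary = (s_channel >= s_thresh[0]) & (s_channel <= s_thresh[1])

    # A pixel survives if any of the three thresholds fires
    return (sx_binary | r_binary | s_binary).astype(np.uint8)
```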

4. Apply a perspective transform to rectify the binary image (“bird’s-eye view”)

We now need to define a trapezoidal region in the 2D image that will go through a perspective transform to a bird’s-eye view. For the perspective transform we use two OpenCV functions: cv2.getPerspectiveTransform and cv2.warpPerspective.
The code and results can be checked in code cells [8], [9], [10], [11], and [12] of “Final_Advance_Lane_Lines_Finding.ipynb”. A sketch follows, and results are shown below:
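A minimal sketch of the transform, assuming illustrative source and destination points for a 1280×720 frame (the notebook's exact coordinates may differ):

```python
import cv2
import numpy as np

# Trapezoid in the original image and the rectangle it maps to.
# These coordinates are illustrative for a 1280x720 frame; the
# notebook's exact points may differ.
src = np.float32([[580, 460], [700, 460], [1040, 680], [260, 680]])
dst = np.float32([[260, 0], [1040, 0], [1040, 720], [260, 720]])

M = cv2.getPerspectiveTransform(src, dst)     # forward transform
Minv = cv2.getPerspectiveTransform(dst, src)  # inverse, used to unwarp later

def warp(binary_img):
    """Warp a thresholded binary image to a bird's-eye view."""
    h, w = binary_img.shape[:2]
    return cv2.warpPerspective(binary_img, M, (w, h), flags=cv2.INTER_LINEAR)
```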

The source and destination points used for the transform are shown below:

The perspective transform is verified as shown below:

5. Identify lane-line pixels and fit their positions with a polynomial

The code can be found in code cells [13] and [14]. Here we plot a histogram of the binary image to get the values of leftx_base and rightx_base, and use those base positions to identify the lane-line pixels and fit their positions with a second-order polynomial, as sketched below.
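Below is a simplified sketch of this step. It keeps all nonzero pixels within a fixed horizontal margin of each base, which approximates (but does not replicate) the notebook's full sliding-window search:

```python
import numpy as np

def fit_lane_polynomials(binary_warped, margin=100):
    """Locate lane bases from a histogram and fit x = a*y^2 + b*y + c
    for each line. Simplified: the notebook's sliding-window search is
    replaced here by a fixed horizontal margin around each base."""
    # Histogram of the bottom half of the warped binary image;
    # the two peaks give leftx_base and rightx_base
    histogram = np.sum(binary_warped[binary_warped.shape[0] // 2:, :], axis=0)
    midpoint = histogram.shape[0] // 2
    leftx_base = np.argmax(histogram[:midpoint])
    rightx_base = np.argmax(histogram[midpoint:]) + midpoint

    # Coordinates of all nonzero (candidate lane) pixels
    nonzeroy, nonzerox = binary_warped.nonzero()

    # Keep pixels within the margin of each base
    left_idx = np.abs(nonzerox - leftx_base) < margin
    right_idx = np.abs(nonzerox - rightx_base) < margin

    # Fit second-order polynomials x = f(y) to each pixel set
    left_fit = np.polyfit(nonzeroy[left_idx], nonzerox[left_idx], 2)
    right_fit = np.polyfit(nonzeroy[right_idx], nonzerox[right_idx], 2)
    return left_fit, right_fit
```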
Screenshots can be found below:

6. Calculate the radius of curvature of the lane and the position of the vehicle with respect to center

Code cell [16] uses the curvature function to calculate the radius of curvature of the lane and the position of the vehicle with respect to the center. The radius of curvature is given by the following formula:

Radius of curvature = (1 + (dy/dx)^2)^(3/2) / |d^2y/dx^2|

We will calculate the radius of curvature for the left and right lanes at the bottom of the image. Each lane line is fit with a second-order polynomial:

x = a*y^2 + b*y + c

Taking derivatives, the formula becomes: radius = (1 + (2*a*y_eval + b)^2)^(3/2) / |2*a|

We also need to convert the radius of curvature from pixels to meters. This is done using a pixel-to-meter conversion factor for the x and y directions. A sanity check suggested in the lectures was to see whether the calculated radius of curvature is roughly 1 km for the track in the project video.
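A sketch of the conversion and curvature calculation, assuming the per-pixel scale factors commonly used with this project (30 m per 720 pixels in y, 3.7 m per 700 pixels in x):

```python
import numpy as np

# Typical pixel-to-meter factors (assumed, per the lecture suggestion):
# the warped lane is about 30 m long over 720 px and 3.7 m wide over 700 px
ym_per_pix = 30 / 720   # meters per pixel in the y direction
xm_per_pix = 3.7 / 700  # meters per pixel in the x direction

def curvature_in_meters(ploty, fitx):
    """Radius of curvature at the bottom of the image, in meters."""
    # Refit the polynomial in world-space (meter) coordinates
    fit_m = np.polyfit(ploty * ym_per_pix, fitx * xm_per_pix, 2)
    a, b = fit_m[0], fit_m[1]
    y_eval = np.max(ploty) * ym_per_pix  # bottom of the image
    return (1 + (2 * a * y_eval + b) ** 2) ** 1.5 / abs(2 * a)
```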

Calculation of the offset from the center of the lane.

We assume the camera is mounted exactly at the center of the car. We first calculate the positions of the left and right lane lines at the bottom of the image, and from those the center of the lane. The difference between the center of the image (1280 / 2 = 640) and the center of the lane is the offset in pixels, which is then converted to meters, as sketched below.
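A minimal sketch of the offset calculation (the function and parameter names are illustrative):

```python
import numpy as np

def vehicle_offset(left_fit, right_fit, img_width=1280, img_height=720,
                   xm_per_pix=3.7 / 700):
    """Signed offset of the car from the lane center, in meters.
    Assumes the camera is mounted exactly at the center of the car;
    the function and parameter names here are illustrative."""
    y = img_height - 1  # evaluate both fits at the bottom of the image
    left_x = np.polyval(left_fit, y)
    right_x = np.polyval(right_fit, y)
    lane_center = (left_x + right_x) / 2.0
    image_center = img_width / 2.0  # 1280 / 2 = 640
    return (image_center - lane_center) * xm_per_pix
```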

7. Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position

Now we put everything together. Here is an example of my result on a test image. I used cv2.putText to display the radius of curvature and the offset from center on this image. The code for adding text to the image is in code cells [17] and [18]; a sketch of the final drawing step follows:
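Here is a sketch of that final step, assuming the inverse perspective matrix Minv and the fitted line coordinates produced in the earlier steps:

```python
import cv2
import numpy as np

def draw_overlay(undist, binary_warped, left_fitx, right_fitx, ploty,
                 Minv, curvature_m, offset_m):
    """Paint the lane area, unwarp it onto the undistorted frame,
    and annotate curvature and offset with cv2.putText."""
    # Blank 3-channel canvas in the bird's-eye view
    color_warp = np.zeros((*binary_warped.shape[:2], 3), dtype=np.uint8)

    # Polygon covering the area between the two fitted lines
    pts_left = np.transpose(np.vstack([left_fitx, ploty]))
    pts_right = np.flipud(np.transpose(np.vstack([right_fitx, ploty])))
    pts = np.vstack((pts_left, pts_right)).astype(np.int32)
    cv2.fillPoly(color_warp, [pts], (0, 255, 0))

    # Warp back to the original perspective and blend with the frame
    h, w = undist.shape[:2]
    newwarp = cv2.warpPerspective(color_warp, Minv, (w, h))
    result = cv2.addWeighted(undist, 1, newwarp, 0.3, 0)

    # Annotate the radius of curvature and the offset from center
    cv2.putText(result, 'Radius of curvature: %.0f m' % curvature_m,
                (50, 50), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)
    cv2.putText(result, 'Offset from center: %.2f m' % offset_m,
                (50, 100), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)
    return result
```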

Conclusion:

  • The problems I faced during the project were due to lighting conditions, shadows, discoloration, etc. It was difficult to get the threshold parameters right when the lighting conditions were poor; thresholding was one of the more challenging parts of the project. I am getting good results with the current thresholds on the Project_video, but they are not suitable for all real-world videos.
  • The pipeline will fail on snowy or discolored roads. It works really well for yellow and white lane lines, but it will fail where the road is discolored.
  • We can make the model more robust by introducing dynamic thresholding and finding thresholds that work in all weather conditions. I hope to think of more strategies in the future.

 

GitHub Link: https://github.com/ranjan-sumit/Self-Driving-Car-Advance-Lane-Line-Finding

Name: Sumit Ranjan

Designation: Data Scientist