Hee Seok Lee and Kang Kim have recently been working with convolutional neural networks (CNNs) to detect traffic signs. To most drivers, a traffic sign is usually clear and easily identifiable; to a computer, however, figuring out where a sign sits in a flat image is not so simple.
The system isn’t expected to handle occluded signs, such as those with trees growing over them, so if a sign isn’t fully visible it’s time to get out the pruning shears. Where the CNN really proves itself, though, is in recognising the signs’ shapes at speed.
The team experimented with different parameters to balance accuracy against speed. A deeper base network gives better results, but the extra processing power it demands outweighs the rewards, so a lighter base network was used. Lowering the resolution allowed the same signs to be recognised at a higher frame rate, and cropping the image sped up recognition further: distant traffic signs in the centre of a vehicle camera’s image aren’t recognisable anyway, and signs normally appear at the side of the road, so detection can be targeted at just those regions.
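The cropping and downscaling trick can be sketched in a few lines of NumPy. This is a hypothetical helper, not the authors’ code: `preprocess_frame`, its strip width, and the strided downscale are all illustrative assumptions standing in for a real resize and region-of-interest selection.

```python
import numpy as np

def preprocess_frame(frame: np.ndarray, scale: float = 0.5, side_frac: float = 0.35):
    """Crop a dashcam frame to the left/right road-side strips and downscale.

    Illustrative sketch: signs usually sit at the sides of the road, so the
    detector only needs those strips, and a lower resolution raises the
    frame rate.
    """
    h, w, _ = frame.shape
    side = int(w * side_frac)
    left = frame[:, :side]           # left road-side strip
    right = frame[:, w - side:]      # right road-side strip
    # Nearest-neighbour downscale by striding (stand-in for a proper resize)
    step = max(1, int(round(1 / scale)))
    return left[::step, ::step], right[::step, ::step]

# A 720x1280 RGB frame shrinks to two 360x224 crops, so the detector
# processes far fewer pixels per frame.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)
left_crop, right_crop = preprocess_frame(frame)
```

Feeding the detector two small crops instead of one full-resolution frame is what lets the same network keep up with a moving vehicle.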
In summary, the team found that by combining the latest object detection architectures, such as feature pyramid networks and multi-scale training, they achieved 7 FPS on a low-power mobile platform: ideal for your self-driving car, or for alerting drivers to signs they may have missed!