CURE-TSR: Challenging Unreal and Real Environments for Traffic Sign Recognition
Traffic sign images in the CURE-TSR dataset were cropped from the CURE-TSD dataset, which includes around 1.7 million real-world and simulator images with more than 2 million traffic sign instances. Sign types include speed limit, goods vehicles, no overtaking, no stopping, no parking, stop, bicycle, hump, no left, no right, priority to, no entry, yield, and parking. Unreal and real sequences were processed with the visual effects software Adobe® After Effects to simulate challenging conditions, as in the CURE-TSD dataset.
Challenge Types and Levels
Please cite these papers if you use the CURE-TSR dataset in your research.
- D. Temel, G. Kwon*, M. Prabhushankar*, and G. AlRegib, “CURE-TSR: Challenging unreal and real environments for traffic sign recognition,” in Neural Information Processing Systems (NIPS) Workshop on Machine Learning for Intelligent Transportation Systems, Long Beach, U.S., December 2017 (*: equal contribution), arXiv
- D. Temel and G. AlRegib, “Traffic Signs in the Wild: Highlights from the IEEE Video and Image Processing Cup 2017 Student Competition [SP Competitions],” in IEEE Signal Processing Magazine, vol. 35, no. 2, pp. 154-161, March 2018.
- D. Temel, M.-H. Chen, T. Alshawi, and G. AlRegib, “Challenging Environments for Traffic Sign Detection: Reliability Assessment under Inclement Conditions,” arXiv:1902.06857, February 2019.
In order to receive the download link, please fill out this FORM to submit your information and agree to the conditions of use. This information will be kept confidential and will not be released to anybody outside the OLIVE administration team.
If you utilize or refer to the CURE-TSR dataset, please email firstname.lastname@example.org so that your publication can be listed here.
The following papers used the CURE-TSR dataset in their research studies.
- G. Kwon, M. Prabhushankar, D. Temel, and G. AlRegib, “Distorted Representation Space Characterization through Backpropagated Gradients,” accepted to the IEEE International Conference on Image Processing, Taipei, Taiwan, September 2019.
- M. Prabhushankar*, G. Kwon*, D. Temel, and G. AlRegib, “Semantically Interpretable and Controllable Filter Sets,” IEEE International Conference on Image Processing (ICIP), Athens, Greece, Oct. 7-10, 2018.
- S. Vandenhende, B. De Brabandere, D. Neven, and L. Van Gool, “A Three-Player GAN: Generating Hard Samples To Improve Classification Networks,” arXiv:1903.03496, 2019.
Image File Name Format
The name format of the provided images is as follows:
Sequence type:
01 – Real data
02 – Unreal data
Sign type:
01 – speed_limit
02 – goods_vehicles
03 – no_overtaking
04 – no_stopping
05 – no_parking
06 – stop
07 – bicycle
08 – hump
09 – no_left
10 – no_right
11 – priority_to
12 – no_entry
13 – yield
14 – parking
Challenge type:
00 – No challenge
01 – Decolorization
02 – Lens blur
03 – Codec error
04 – Darkening
05 – Dirty lens
06 – Exposure
07 – Gaussian blur
08 – Noise
09 – Rain
10 – Shadow
11 – Snow
12 – Haze
Challenge level: a number between 01 and 05, where 01 is the least severe and 05 is the most severe level of the challenge.
Index: a number that distinguishes different instances of traffic signs under the same conditions.
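Given the fields above, a file name can be decoded programmatically. Below is a minimal sketch; the underscore-delimited template and the `.bmp` extension are assumptions, so verify them against the file names in your copy of the dataset:

```python
# Decode a CURE-TSR image file name into labeled fields.
# Assumed template: sequenceType_signType_challengeType_challengeLevel_index.bmp
# (zero-padded codes, underscore-delimited) -- check against the actual dataset.

SIGN_TYPES = {
    "01": "speed_limit", "02": "goods_vehicles", "03": "no_overtaking",
    "04": "no_stopping", "05": "no_parking", "06": "stop", "07": "bicycle",
    "08": "hump", "09": "no_left", "10": "no_right", "11": "priority_to",
    "12": "no_entry", "13": "yield", "14": "parking",
}

CHALLENGE_TYPES = {
    "00": "no_challenge", "01": "decolorization", "02": "lens_blur",
    "03": "codec_error", "04": "darkening", "05": "dirty_lens",
    "06": "exposure", "07": "gaussian_blur", "08": "noise",
    "09": "rain", "10": "shadow", "11": "snow", "12": "haze",
}

def parse_filename(name: str) -> dict:
    """Split a CURE-TSR file name into its labeled fields."""
    stem = name.rsplit(".", 1)[0]                 # drop the extension
    seq, sign, chal, level, index = stem.split("_")
    return {
        "sequence": "real" if seq == "01" else "unreal",
        "sign": SIGN_TYPES[sign],
        "challenge": CHALLENGE_TYPES[chal],
        "level": int(level),                      # 0 for no challenge, else 1-5
        "index": int(index),
    }

print(parse_filename("01_03_07_02_0042.bmp"))
# -> {'sequence': 'real', 'sign': 'no_overtaking',
#     'challenge': 'gaussian_blur', 'level': 2, 'index': 42}
```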
Performance of Baseline Traffic Sign Recognition Algorithms Under Challenging Conditions
We investigate the robustness of traffic sign recognition algorithms under challenging conditions. Existing datasets are limited in size and in their coverage of challenging conditions, which motivated us to generate the Challenging Unreal and Real Environments for Traffic Sign Recognition (CURE-TSR) dataset. It includes more than two million traffic sign images based on real-world and simulator data. We benchmark the performance of existing solutions in real-world scenarios and analyze how performance varies with respect to challenging conditions. The figure above shows the accuracy of baseline methods with respect to challenge levels for each challenge type. We show that challenging conditions can decrease the performance of baseline methods significantly, especially when they result in the loss or misplacement of spatial information.
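A per-challenge accuracy breakdown like the one in the figure can be produced by grouping each prediction under the challenge type and level encoded in its file name. A hypothetical sketch (the `results` triples stand in for whatever classifier output you collect):

```python
# Group recognition results by (challenge type, challenge level) and
# compute accuracy per group, as in the baseline figure.
from collections import defaultdict

def accuracy_by_challenge(results):
    """results: iterable of (challenge_type, challenge_level, correct_bool).
    Returns {(challenge_type, level): accuracy}."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for chal, level, correct in results:
        totals[(chal, level)] += 1
        hits[(chal, level)] += int(correct)
    return {key: hits[key] / totals[key] for key in totals}

# Dummy results illustrating accuracy dropping as the challenge level rises.
dummy = [("gaussian_blur", 1, True), ("gaussian_blur", 1, True),
         ("gaussian_blur", 5, True), ("gaussian_blur", 5, False)]
print(accuracy_by_challenge(dummy))
# -> {('gaussian_blur', 1): 1.0, ('gaussian_blur', 5): 0.5}
```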
Challenging Conditions Generation
Adobe® After Effects was utilized to emulate challenging conditions with the following configurations:
- Decolorization: Black & White Color Correction filter version 1.0. The filter settings were: Reds = 40, Yellows = 60, Greens = 40, Cyans = 60, Blues = 20, and Magentas = 80. We utilized multiple adjustment layers to compound the effect of the color correction filter and created multiple distinct levels of this challenge.
- Lens Blur: Camera Lens Blur filter version 1.0. The filter settings were: Blur Radius was set to 2, 4, 6, 8, and 10 for levels 1–5 and Iris Shape was Hexagon; everything else was left at default.
- Codec Error: Time Displacement filter version 1.6. The filter settings were: Max Displacement Time was set to 0.1, 0.2, 0.3, 0.4, and 0.5 for levels 1–5; everything else was left at default.
- Darkening: Exposure filter version 1.0. The filter was set to modify the master channel Exposure parameter to -1, -3, -5, -7, and -9 for levels 1–5; everything else was left at default.
- Dirty Lens: a set of dusty and smudged lens images was superimposed on the video.
- Exposure: Exposure filter version 1.0. The filter was set to modify the master channel Exposure parameter to 1, 3, 5, 7, and 9 for levels 1–5; everything else was left at default.
- Gaussian Blur: Gaussian Blur filter version 3.0. The filter settings were: Blurriness was 5, 10, 15, 20, and 25 for levels 1–5; everything else was left at default. Unlike Lens Blur, Gaussian Blur is distributed equally in all directions, which leads to less structured blurred objects.
- Sensor Noise: Noise filter version 2.6. The filter settings were: the Amount of Noise parameter was set to 20, 40, 60, 70, and 71, using 5 adjustment layers that were compounded to generate levels 1–5; everything else was left at default.
- Rain: implemented using the Gradient Ramp generator version 3.2 with colors #0F1E2D and #5A7492 to create a bluish hue over the video, and the CC Rainfall generator from Cycore Effects HD 1.8.2, version 1.1. The settings of the CC Rainfall generator were: Drops was set to 10000, 20000, 50000, and 100000 and Opacity was 25%, using 5 adjustment layers that were compounded to generate levels 1–5; everything else was left at default.
- Shadow: Venetian Blinds filter version 2.3. The filter settings were: Transition Completion was 47%, Direction was 0x + 0.0°, Width was 142, and Opacity was 15%, 30%, 45%, 60%, and 75% for levels 1–5; everything else was left at default.
- Snow: Glow filter version 2.6 with color #FFFFFF to create a white hue over the video, and the CC Snowfall generator from Cycore Effects HD 1.8.2, version 1.1. The settings of the Glow filter were: Glow Threshold was 55%, Glow Intensity was 1.4, Glow Operation was Screen, and Glow Dimensions was Horizontal. The settings of the CC Snowfall generator were: Drops was 10000, 50000, 100000, and 140000, using 9 adjustment layers that were compounded to generate levels 1–5; everything else was left at default.
- Haze: Ellipse shape layer version 1.0 with a radial gradient fill using color #D6D6D6 at 100% opacity in the center and color #000000 at 0% opacity on the edges; Smart Blur version 1.0; Exposure version 1.0; and Brightness & Contrast version 1.0. The shape and focal point location of the ellipse were manually controlled to closely follow the furthest point in the video, which created a sense of depth in the scene and emulated the behavior of haze in realistic settings. The settings of the Smart Blur filter were: Radius was 3 and Threshold was 25. The settings of the Exposure filter for the master channel were: Exposure was -1 and Gamma Correction was 1. The settings of the Brightness & Contrast filter were: Brightness was -34 and Contrast was -13. Additionally, we utilized a Solid Layer created from color #CECECE with opacity 10%, 20%, 30%, 40%, and 50% to add difficulty for levels 1–5.
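The Darkening and Exposure challenges above adjust exposure in photographic stops. As a rough software analogue (not the After Effects implementation), a shift of E stops scales linear pixel intensities by 2^E, clipped to the valid range; negative E darkens and positive E brightens:

```python
# Approximate an exposure shift in stops on an 8-bit image.
# This is a generic stops-to-gain mapping (gain = 2**stops), offered only as
# an illustration of the principle, not the exact After Effects filter.
import numpy as np

def apply_exposure(img: np.ndarray, stops: float) -> np.ndarray:
    """Scale linear intensities by 2**stops and clip back to uint8.
    Negative stops darken (cf. levels -1..-9); positive stops brighten (1..9)."""
    out = img.astype(np.float32) / 255.0 * (2.0 ** stops)
    return (np.clip(out, 0.0, 1.0) * 255.0).astype(np.uint8)

gray = np.full((8, 8), 128, dtype=np.uint8)   # mid-gray test patch
print(apply_exposure(gray, -1)[0, 0])          # one stop darker: 128 -> 64
print(apply_exposure(gray, 1)[0, 0])           # one stop brighter: clips to 255
```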