
Perception-Capable Learning and Robustness Analysis

Given a high-dimensional video input stream, the task is to extract a compact yet perceptually salient representation that is readily usable by subsequent processing stages. It is advantageous to build perception directly into backpropagation. In its current form, the backpropagation algorithm learns a powerful cascade of filters that capture edge characteristics, but perception is typically bolted on as an additional layer on top of existing frameworks. Moreover, fidelity-based backpropagation overlooks characteristics of the human visual system such as color perception processes, just-noticeable differences, and center-surround mechanisms. This gap can be partially filled by existing algorithms built on handcrafted, signal-processing-based features, which capture some visual system characteristics beyond edges alone. Hence, modifying how the backpropagation algorithm works is the first step toward a 'cognitive' learning module.
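
As an illustration of what "perception within backpropagation" could look like, the sketch below defines a training loss whose gradients flow through center-surround (difference-of-Gaussians) filtering and a just-noticeable-difference mask rather than through raw pixel fidelity. This is only a minimal sketch: the kernel sizes, sigmas, and JND threshold are assumed values for illustration, not part of the project described here.

```python
# Minimal sketch (not the project's actual method): a perceptually weighted loss
# that compares center-surround (difference-of-Gaussians) responses of prediction
# and target, and ignores errors below an assumed just-noticeable-difference (JND).
import torch
import torch.nn.functional as F

def gaussian_kernel(sigma, size=7):
    """Return a normalized 2-D Gaussian kernel of shape (1, 1, size, size)."""
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return (k / k.sum()).view(1, 1, size, size)

def center_surround(x, sigma_c=1.0, sigma_s=3.0):
    """Difference-of-Gaussians response, applied per channel (depthwise)."""
    c = x.shape[1]
    kc = gaussian_kernel(sigma_c).repeat(c, 1, 1, 1).to(x.device)
    ks = gaussian_kernel(sigma_s).repeat(c, 1, 1, 1).to(x.device)
    center = F.conv2d(x, kc, padding=3, groups=c)
    surround = F.conv2d(x, ks, padding=3, groups=c)
    return center - surround

def perceptual_loss(pred, target, jnd=0.02):
    """L1 loss on center-surround responses, masking sub-threshold differences."""
    diff = (center_surround(pred) - center_surround(target)).abs()
    visible = (diff > jnd).float()   # errors below the JND contribute nothing
    return (diff * visible).mean()

if __name__ == "__main__":
    # Any differentiable model could be trained against this loss; here a random
    # tensor stands in for the model output.
    pred = torch.rand(2, 3, 64, 64, requires_grad=True)
    target = torch.rand(2, 3, 64, 64)
    loss = perceptual_loss(pred, target)
    loss.backward()                  # gradients flow through the DoG filtering
    print(float(loss), pred.grad.shape)
```

The point of the sketch is that the visual-system structure sits inside the objective itself, so every backpropagation step is shaped by it rather than by pixel fidelity alone.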

One of the major challenges facing current technology is that it cannot cope with the full range of variations that occur in the real world. The human visual system, by contrast, is remarkably robust to changing conditions. This gap between human and machine vision can be attributed to 'perception', and a perception-capable learning algorithm can address these robustness issues intrinsically. We have already built a system that generates driving scenario videos under different variations such as weather, lighting, and viewing angles. The system is currently designed for traffic sign detection, and it can be extended to build a scene understanding database covering different challenges. With this database, we can validate our perception-capable learning algorithm and test its robustness.
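
To give a rough picture of how such a variation sweep might be organized, the sketch below approximates weather and lighting changes with simple photometric transforms applied to a single frame. The actual system renders full driving videos; the transforms, parameter grids, and function names here are illustrative assumptions only.

```python
# Minimal sketch of sweeping environment variations to build a robustness test set.
# Each "variation" is approximated by a photometric transform on an existing frame.
import itertools
import numpy as np

def apply_lighting(frame, gain):
    """Scale brightness to mimic different times of day."""
    return np.clip(frame.astype(np.float32) * gain, 0, 255).astype(np.uint8)

def apply_fog(frame, density):
    """Blend toward a flat gray to mimic fog/haze."""
    gray = np.full_like(frame, 200)
    return (frame * (1 - density) + gray * density).astype(np.uint8)

def generate_variations(frame):
    """Yield (tag, frame) pairs covering a grid of lighting/weather conditions."""
    lighting = {"noon": 1.0, "dusk": 0.6, "night": 0.3}       # assumed gains
    fog = {"clear": 0.0, "light_fog": 0.3, "dense_fog": 0.6}  # assumed densities
    for (lname, gain), (fname, density) in itertools.product(lighting.items(), fog.items()):
        yield f"{lname}_{fname}", apply_fog(apply_lighting(frame, gain), density)

if __name__ == "__main__":
    frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in frame
    for tag, variant in generate_variations(frame):
        print(tag, variant.shape)  # a detector would be evaluated on each variant
```

A detector's accuracy across every tag in the grid then serves as a simple robustness profile against the generated database.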

In addition to static scene understanding, dynamic information is an essential part of human perception. The movement of objects and the interactions between them affect how humans understand a scene, which means our perceptual system processes spatial and temporal information simultaneously. Understanding the actions taking place around us allows appropriate decisions to be made in an informed manner. By adapting action recognition to intelligent mobility applications, such as pedestrian action recognition and emergency detection, we can embed perception-capable learning algorithms into autonomous vehicles.
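
To make the idea of joint spatial-temporal processing concrete, the sketch below shows a small 3-D convolutional classifier that labels a short video clip with a pedestrian action. The architecture, clip length, and class names are assumptions chosen for illustration, not the model used in this project.

```python
# Minimal sketch of joint space-time processing for pedestrian action recognition:
# a tiny 3-D convolutional network classifying a short clip (e.g. crossing vs. waiting).
import torch
import torch.nn as nn

class ClipActionNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),  # space and time filtered jointly
            nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),                     # pool only spatially at first
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                     # collapse time and space
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, clip):                             # clip: (N, 3, T, H, W)
        x = self.features(clip).flatten(1)
        return self.classifier(x)

if __name__ == "__main__":
    model = ClipActionNet(num_classes=2)                 # e.g. "crossing" / "waiting"
    clip = torch.rand(1, 3, 16, 112, 112)                # 16-frame RGB clip
    logits = model(clip)
    print(logits.shape)                                  # torch.Size([1, 2])
```

Because the 3-D convolutions mix neighboring frames as well as neighboring pixels, motion cues and appearance cues are learned together rather than in separate stages.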
