Transport Triggered Array Processor for Vision Applications
Low-level sensory data processing in many Internet-of-Things (IoT) devices pursues energy efficiency by utilizing sleep modes or by slowing the clock to a minimum. To curb the share of stand-by power dissipation in such designs, ultra-low-leakage fabrication processes are employed. These processes significantly limit the achievable clock rates, reducing the computing throughput of individual cores. In this contribution we explore compensating for the substantial computing power requirements of a vision application with massive parallelism. The Processing Elements (PEs) of the design are based on the Transport Triggered Architecture (TTA). The fine-grained programmable parallel solution allows for fast and efficient computation of learnable low-level features (e.g., local binary descriptors and convolutions). Other operations, including max-pooling, have also been implemented. The programmable design achieves excellent energy efficiency for Local Binary Pattern (LBP) computations.
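As a point of reference for the descriptor mentioned above, the sketch below shows a minimal scalar C implementation of the 8-neighbour LBP code for a single interior pixel. The function name, image layout, and neighbour ordering are illustrative assumptions only and do not reflect the paper's TTA kernels or their parallel mapping.

```c
#include <stdint.h>

/* Minimal sketch (assumed layout): 8-bit grayscale image stored
 * row-major with the given width; (x, y) must be an interior pixel. */
static uint8_t lbp8(const uint8_t *img, int width, int x, int y)
{
    /* Neighbour offsets, clockwise from the top-left pixel. */
    static const int dx[8] = { -1,  0,  1, 1, 1, 0, -1, -1 };
    static const int dy[8] = { -1, -1, -1, 0, 1, 1,  1,  0 };

    uint8_t center = img[y * width + x];
    uint8_t code = 0;

    for (int i = 0; i < 8; ++i) {
        uint8_t neighbour = img[(y + dy[i]) * width + (x + dx[i])];
        /* Set bit i when the neighbour is at least as bright as the center. */
        code |= (uint8_t)((neighbour >= center) << i);
    }
    return code;
}
```

Each output pixel depends only on its local 3x3 neighbourhood, which is what makes the descriptor amenable to the fine-grained parallel evaluation across PEs described in the abstract.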