    Presenter(s)
    Hayden Kwok-Hay So
    Affiliation
    University of Hong Kong
    Abstract

    A case study in applying modern FPGAs as a platform to accelerate intelligent vision-guided crop detection in agricultural field robots is presented. A state-of-the-art YOLOv3 object detection neural network was adapted to detect broccoli and cauliflower in an image dataset obtained from autonomous agricultural robots. A baseline floating-point implementation achieved 96% mAP, while an efficient quantized implementation suitable for FPGA deployment achieved 92% mAP. The proposed FPGA solution achieves 136.86 ms inference latency while consuming 12.43 W in a low-latency configuration, and 28.48 frames per second while consuming 17.78 W in a high-throughput one. Compared to an embedded GPU implementation of the same task, the FPGA solution is 4.12 times more power-efficient and offers 6.85 times higher throughput, translating to faster and longer operation of a battery-powered field robot.
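    As a back-of-the-envelope check, the latency, throughput, and power figures quoted above imply the following energy cost per frame in each configuration (a sketch using simple power-times-time arithmetic; the paper's own efficiency metric may be defined differently):

```python
# Energy-per-frame estimates derived from the figures quoted in the abstract.

# Low-latency configuration: 136.86 ms per inference at 12.43 W
latency_s = 136.86e-3
power_low_w = 12.43
energy_low_j = power_low_w * latency_s  # joules per inference

# High-throughput configuration: 28.48 frames/s at 17.78 W
fps = 28.48
power_high_w = 17.78
energy_high_j = power_high_w / fps  # joules per frame

print(f"low-latency:     {energy_low_j:.2f} J/inference")   # ~1.70 J
print(f"high-throughput: {energy_high_j:.2f} J/frame")      # ~0.62 J
```

    At well under 2 J per frame in either mode, the energy budget is consistent with the abstract's claim of longer operation on a battery-powered robot.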
