    Details
    Authors
    Tommaso Apicella (Università di Genova)
    Andrea Cavallaro (Queen Mary University of London)
    Riccardo Berta (Università di Genova)
    Paolo Gastaldo (Università di Genova)
    Francesco Bellotti (Università di Genova)
    Edoardo Ragusa (Università di Genova)
    Abstract

    Affordance detection consists of predicting the possibility of a specific action on an object. While this problem is generally defined for fully autonomous robotic platforms, we are interested in affordance detection for a semi-autonomous scenario with a human in the loop. In this scenario, a human first moves their robotic prosthesis (e.g. lower arm and hand) towards an object, and then the prosthesis selects the part of the object to grasp. The main challenges are the indirectly controlled camera position, which affects the quality of the view, and the limited computational resources available. This paper proposes an affordance detection pipeline that leverages object detectors to overcome framing issues and whose reduced computational load allows it to run on resource-constrained platforms. Experimental results on two state-of-the-art datasets show improvements in affordance detection with respect to the baseline solution, which uses only an affordance detection model. We argue that the combination of the selected models achieves a trade-off between performance and computational cost suitable for embedded, resource-constrained systems.
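
    The abstract describes a two-stage design: an object detector first localizes the object to compensate for the poorly framed view from the indirectly controlled camera, and the affordance model then runs only on the cropped region, which reduces the computational load. The sketch below illustrates this idea under stated assumptions; `detector`, `affordance_model`, and the padding parameter are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np


class TwoStageAffordancePipeline:
    """Sketch of a detector-then-affordance pipeline (hypothetical API)."""

    def __init__(self, detector, affordance_model, margin=0.1):
        self.detector = detector                    # lightweight object detector
        self.affordance_model = affordance_model    # affordance segmentation model
        self.margin = margin                        # relative padding around the box

    def __call__(self, frame: np.ndarray) -> np.ndarray:
        h, w = frame.shape[:2]
        # Stage 1: localize the object to compensate for framing issues
        # caused by the indirectly controlled camera position.
        x1, y1, x2, y2 = self.detector(frame)       # assumed to return (x1, y1, x2, y2)
        # Pad the box so object parts near its border are kept in the crop.
        dx, dy = self.margin * (x2 - x1), self.margin * (y2 - y1)
        x1, y1 = max(0, int(x1 - dx)), max(0, int(y1 - dy))
        x2, y2 = min(w, int(x2 + dx)), min(h, int(y2 + dy))
        # Stage 2: run affordance detection on the crop only, so the
        # affordance model processes a smaller, better-framed input.
        crop = frame[y1:y2, x1:x2]
        crop_mask = self.affordance_model(crop)      # per-pixel affordance labels
        # Paste the crop-level prediction back into full-frame coordinates.
        mask = np.zeros((h, w), dtype=crop_mask.dtype)
        mask[y1:y2, x1:x2] = crop_mask
        return mask
```

    Running the affordance model on a padded crop rather than the full frame is one plausible way to realize the performance/computational-cost trade-off the abstract claims, since the segmentation input shrinks to the region the detector deems relevant.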