Details
Presenter(s)
![Jing Liu Headshot](https://confcats-catavault.s3.amazonaws.com/CATAVault/ieeecass/master/files/styles/cc_user_photo/s3/user-pictures/11992_0.jpg?h=b85e41a0&itok=HgPihN_i)
Display Name
Jing Liu
Affiliation
Fudan University
Country
China
Abstract
We propose a novel hybrid neural network model based on multi-level attention fusion for multimodal DMR. The proposed model uses convolutional neural networks and gated recurrent unit networks to extract temporal-spatial features from multimodal sensing signals, and applies multi-level attention fusion to capture significant patterns over local and global periods. In addition, we design three levels of fusion (early, late, and full) to explore the effects of different attention fusions on the model. Extensive experiments show that the proposed model achieves superior performance to the baseline methods, and multi-level attention fusion brings a 6.17% gain in F1-score.
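The abstract does not include code, but the attention-fusion idea (weighting per-modality feature sequences and combining them into one representation) can be illustrated with a toy NumPy sketch. All names, shapes, and the scoring function below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(features):
    """Fuse per-modality feature sequences with learned-style attention weights.

    features: list of (T, d) arrays, one per modality (toy stand-in for
    CNN+GRU outputs). Returns the fused (T, d) sequence and the (M,) weights.
    """
    stacked = np.stack(features, axis=0)            # (M, T, d)
    scores = stacked.mean(axis=(1, 2))              # toy scalar score per modality
    weights = softmax(scores)                       # attention weights, sum to 1
    fused = np.tensordot(weights, stacked, axes=1)  # weighted sum -> (T, d)
    return fused, weights

rng = np.random.default_rng(0)
acc = rng.normal(size=(8, 4))   # hypothetical accelerometer features
gyr = rng.normal(size=(8, 4))   # hypothetical gyroscope features
fused, w = attention_fuse([acc, gyr])
```

In the paper's early/late/full variants, a step like this would sit at different depths of the network (on raw feature maps, on final sequence embeddings, or both); here it is shown once, at a single level, for clarity.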