Show simple item record

dc.contributor.author	Smolic, Aljosa
dc.date.accessioned	2021-03-14T16:38:37Z
dc.date.available	2021-03-14T16:38:37Z
dc.date.issued	2020
dc.date.submitted	2020	en
dc.identifier.citation	Wang, Z., She, Q., Chalasani, T., Smolic, A., "CatNet: Class Incremental 3D ConvNets for Lifelong Egocentric Gesture Recognition," 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 2020, pp. 935-944	en
dc.identifier.other	Y
dc.identifier.uri	http://hdl.handle.net/2262/95670
dc.description.abstract	Egocentric gestures are the most natural form of communication for humans to interact with wearable devices such as VR/AR helmets and glasses. A major issue in such scenarios for real-world applications is that it may easily become necessary to add new gestures to the system, e.g., a proper VR system should allow users to customize gestures incrementally. Traditional deep learning methods require storing all previous class samples in the system and retraining the model from scratch on both the previous and new samples, which consumes enormous memory and significantly increases computation over time. In this work, we demonstrate a lifelong 3D convolutional framework, Class incremental networks (CatNet), which considers temporal information in videos and enables lifelong learning for egocentric gesture video recognition by learning the feature representation of an exemplar set selected from previous class samples. Importantly, we propose a two-stream CatNet, which deploys RGB and depth modalities to train two separate networks. We evaluate CatNets on a publicly available dataset, EgoGesture, and show that CatNets can learn many classes incrementally over a long period of time. Results also demonstrate that the two-stream architecture achieves the best performance on both joint training and class incremental training compared to three other one-stream architectures. The code and pre-trained models used in this work are provided at https://github.com/villawang/CatNet.	en
dc.language.iso	en	en
dc.relation.uri	https://v-sense.scss.tcd.ie/wp-content/uploads/2020/11/Wang_CatNet_Class_Incremental_3D_ConvNets_for_Lifelong_Egocentric_Gesture_Recognition_CVPRW_2020_paper.pdf	en
dc.rights	Y	en
dc.subject	Videos	en
dc.subject	Task analysis	en
dc.subject	Three-dimensional displays	en
dc.subject	Computer architecture	en
dc.subject	Training	en
dc.subject	Spatiotemporal phenomena	en
dc.subject	Gesture recognition	en
dc.title	CatNet: Class Incremental 3D ConvNets for Lifelong Egocentric Gesture Recognition	en
dc.title.alternative	Conference on Computer Vision and Pattern Recognition 2020 (CVPR 2020), 2020.	en
dc.type	Conference Paper	en
dc.type.supercollection	scholarly_publications	en
dc.type.supercollection	refereed_publications	en
dc.identifier.peoplefinderurl	http://people.tcd.ie/smolica
dc.identifier.rssinternalid	225562
dc.identifier.doi	10.1109/CVPRW50498.2020.00123
dc.rights.ecaccessrights	openAccess
dc.relation.doi	10.1109/CVPRW50498.2020.00123	en
dc.relation.cites	Cites	en
dc.subject.TCDTheme	Creative Technologies	en
dc.subject.TCDTheme	Digital Engagement	en
dc.subject.TCDTag	Data Analysis	en
dc.subject.TCDTag	Information technology in education	en
dc.subject.TCDTag	Multimedia & Creativity	en
dc.identifier.rssuri	https://v-sense.scss.tcd.ie/wp-content/uploads/2020/11/Wang_CatNet_Class_Incremental_3D_ConvNets_for_Lifelong_Egocentric_Gesture_Recognition_CVPRW_2020_paper.pdf
dc.subject.darat_impairment	Other	en
dc.status.accessible	N	en
dc.contributor.sponsor	Science Foundation Ireland (SFI)	en
dc.contributor.sponsorGrantNumber	15/RP/2776	en


Files in this item

Thumbnail

This item appears in the following Collection(s)
