Show simple item record

dc.contributor.author: Castro, F.M.
dc.contributor.author: Marín-Jiménez, M.J.
dc.contributor.author: Guil, N.
dc.contributor.author: Pérez de la Blanca, N.
dc.date.accessioned: 2017-12-05T08:32:16Z
dc.date.available: 2017-12-05T08:32:16Z
dc.date.issued: 2017-12-05
dc.identifier.uri: http://hdl.handle.net/10396/15639
dc.description.abstract: This work targets people identification in video based on the way they walk (i.e. gait). While classical methods typically derive gait signatures from sequences of binary silhouettes, in this work we explore the use of convolutional neural networks (CNN) for learning high-level descriptors from low-level motion features (i.e. optical flow components). We carry out a thorough experimental evaluation of the proposed CNN architecture on the challenging TUM-GAID dataset. The experimental results indicate that using spatio-temporal cuboids of optical flow as input data for a CNN makes it possible to obtain state-of-the-art results on the gait task at an image resolution eight times lower than in previously reported results (i.e. 80×60 pixels).
dc.format.mimetype: application/pdf
dc.language.iso: eng
dc.rights: https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.source: arXiv:1603.01006
dc.subject: Gait recognition
dc.subject: People identification
dc.subject: Convolutional neural networks
dc.title: Automatic learning of gait signatures for people identification
dc.type: info:eu-repo/semantics/conferenceObject
dc.relation.publisherversion: http://arxiv.org/abs/1603.01006
dc.relation.projectID: Junta de Andalucía. TIC-1692
dc.rights.accessRights: info:eu-repo/semantics/openAccess
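The abstract describes feeding the CNN spatio-temporal cuboids built by stacking low-level optical-flow components over consecutive frames at 80×60 resolution. As a rough illustration of that input representation only (not the authors' code; the frame count `L_FRAMES` and the stacking order are assumptions), such a cuboid could be assembled like this:

```python
import numpy as np

# Hypothetical sketch of the input representation from the abstract:
# stack the horizontal (u) and vertical (v) optical-flow components of
# L consecutive frames into one spatio-temporal cuboid for the CNN.
L_FRAMES = 25          # number of stacked frames (assumed value)
H, W = 60, 80          # 80x60-pixel resolution reported in the abstract

def build_flow_cuboid(flows):
    """flows: list of L arrays of shape (2, H, W), the u and v flow
    components of each frame. Returns a (2*L, H, W) cuboid."""
    return np.concatenate(flows, axis=0)

# Dummy flow fields standing in for real optical-flow estimates.
flows = [np.random.randn(2, H, W).astype(np.float32) for _ in range(L_FRAMES)]
cuboid = build_flow_cuboid(flows)
print(cuboid.shape)  # (50, 60, 80)
```

The resulting (2·L, H, W) tensor can be consumed by a standard 2D CNN, with the flow components of all frames treated as input channels.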

