Show simple item record

dc.contributor.author: KOKARAM, ANIL [en]
dc.contributor.author: PITIE, FRANCOIS [en]
dc.date.accessioned: 2015-02-13T15:05:53Z
dc.date.available: 2015-02-13T15:05:53Z
dc.date.created: 6-7 Nov [en]
dc.date.issued: 2013 [en]
dc.date.submitted: 2013 [en]
dc.identifier.citation: Raimbault, F., Pitié, F., Kokaram, A., User-assisted sparse stereo-video segmentation, ACM International Conference Proceeding Series, ACM International Conference, 6-7 Nov, 2013, a3- [en]
dc.identifier.other: Y [en]
dc.identifier.uri: http://hdl.handle.net/2262/73199
dc.description: PUBLISHED [en]
dc.description.abstract: Motion-based video segmentation has been studied for many years and remains challenging. Ill-posed problems must be solved when seeking a fully automated solution, so it is increasingly popular to maintain users in the processing loop by letting them set parameters or draw mattes to guide the segmentation process. When processing multiple-view videos, however, the amount of user interaction should not be proportional to the number of views. In this paper we present a novel sparse segmentation algorithm for two-view stereoscopic videos that maintains temporal coherence and view consistency throughout. We track feature points on both views with a generic tracker and analyse the pairwise affinity of both temporally overlapping and disjoint tracks, whereas existing similar techniques only exploit the information available when tracks overlap. The use of stereo-disparity also allows our technique to process jointly feature tracks on both views, exhibiting a good view consistency in the segmentation output. To make up for the lack of high-level understanding inherent to segmentation techniques, we allow the user to refine the output with a split-and-merge approach so as to obtain a desired view-consistent segmentation output over many frames in a few clicks. We present several real video examples to illustrate the versatility of our technique. [en]
dc.description.sponsorship: We would like to thank the anonymous reviewers for their helpful comments. This work has been funded by Science Foundation of Ireland (SFI) as part of Project 08/IN.1/I2112, Content Aware Media Processing (CAMP). [en]
dc.format.extent: a3 [en]
dc.language.iso: en [en]
dc.rights: Y [en]
dc.subject: Motion-based video segmentation [en]
dc.subject.lcsh: Motion-based video segmentation [en]
dc.title: User-assisted sparse stereo-video segmentation [en]
dc.title.alternative: ACM International Conference Proceeding Series [en]
dc.title.alternative: ACM International Conference [en]
dc.type: Conference Paper [en]
dc.type.supercollection: scholarly_publications [en]
dc.type.supercollection: refereed_publications [en]
dc.identifier.peoplefinderurl: http://people.tcd.ie/akokaram [en]
dc.identifier.peoplefinderurl: http://people.tcd.ie/pitief [en]
dc.identifier.rssinternalid: 100646 [en]
dc.identifier.doi: http://dx.doi.org/10.1145/2534008.2534027 [en]
dc.rights.ecaccessrights: openAccess

