dc.contributor.author | KOKARAM, ANIL | en |
dc.contributor.author | PITIE, FRANCOIS | en |
dc.date.accessioned | 2015-02-13T15:05:53Z | |
dc.date.available | 2015-02-13T15:05:53Z | |
dc.date.created | 6-7 Nov | en |
dc.date.issued | 2013 | en |
dc.date.submitted | 2013 | en |
dc.identifier.citation | Raimbault, F., Pitié, F., Kokaram, A., User-assisted sparse stereo-video segmentation, ACM International Conference Proceeding Series, ACM International Conference, 6-7 Nov, 2013, a3- | en |
dc.identifier.other | Y | en |
dc.identifier.uri | http://hdl.handle.net/2262/73199 | |
dc.description | PUBLISHED | en |
dc.description.abstract | User-assisted Sparse Stereo-video Segmentation
Félix Raimbault, François Pitié and Anil Kokaram
Sigmedia Group
Dept. of Electronic and Electrical Engineering
Trinity College Dublin, Ireland
{raimbauf, fpitie, anil.kokaram}@tcd.ie
ABSTRACT
Motion-based video segmentation has been studied for many years and remains challenging. Ill-posed problems must be solved when seeking a fully automated solution, so it is increasingly popular to keep users in the processing loop by letting them set parameters or draw mattes to guide the segmentation process. When processing multiple-view videos, however, the amount of user interaction should not be proportional to the number of views. In this paper we present a novel sparse segmentation algorithm for two-view stereoscopic videos that maintains temporal coherence and view consistency throughout. We track feature points on both views with a generic tracker and analyse the pairwise affinity of both temporally overlapping and disjoint tracks, whereas existing similar techniques only exploit the information available when tracks overlap. The use of stereo disparity also allows our technique to process feature tracks on both views jointly, exhibiting good view consistency in the segmentation output. To make up for the lack of high-level understanding inherent to segmentation techniques, we allow the user to refine the output with a split-and-merge approach so as to obtain a desired view-consistent segmentation output over many frames in a few clicks. We present several real video examples to illustrate the versatility of our technique. | en |
dc.description.sponsorship | We would like to thank the anonymous reviewers for their helpful comments. This work has been funded by Science Foundation Ireland (SFI) as part of Project 08/IN.1/I2112, Content Aware Media Processing (CAMP). | en |
dc.format.extent | a3 | en |
dc.language.iso | en | en |
dc.rights | Y | en |
dc.subject | Motion-based video segmentation | en |
dc.subject.lcsh | Motion-based video segmentation | en |
dc.title | User-assisted sparse stereo-video segmentation | en |
dc.title.alternative | ACM International Conference Proceeding Series | en |
dc.title.alternative | ACM International Conference | en |
dc.type | Conference Paper | en |
dc.type.supercollection | scholarly_publications | en |
dc.type.supercollection | refereed_publications | en |
dc.identifier.peoplefinderurl | http://people.tcd.ie/akokaram | en |
dc.identifier.peoplefinderurl | http://people.tcd.ie/pitief | en |
dc.identifier.rssinternalid | 100646 | en |
dc.identifier.doi | http://dx.doi.org/10.1145/2534008.2534027 | en |
dc.rights.ecaccessrights | openAccess | |