Learning Dense Convolutional Embeddings for Semantic Segmentation

Adam W. Harley, Konstantinos G. Derpanis, and Iasonas Kokkinos.

Abstract

This paper proposes a new deep convolutional neural network (DCNN) architecture that learns pixel embeddings, such that pairwise distances between the embeddings can be used to infer semantic similarity of the underlying regions. That is, for any two pixels on the same object, the embeddings are trained to be similar; for any pair that straddles an object boundary, the embeddings are trained to be dissimilar. Experimental results show that when this embedding network is used in conjunction with a DCNN trained on semantic segmentation, there is a systematic improvement in per-pixel classification accuracy. This strategy is implemented efficiently as a set of layers in the popular Caffe deep learning framework, making its integration with existing systems very straightforward.
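For intuition, the pairwise objective described in the abstract can be sketched as a contrastive, hinge-style loss over sampled pixel pairs: same-object pairs are pulled within a near margin, pairs straddling an object boundary are pushed beyond a far margin. The snippet below is a minimal NumPy illustration under these assumptions, not the paper's Caffe layers; the margin values ALPHA and BETA, the Euclidean distance, and the pair-sampling scheme are placeholders for the example.

```python
# Illustrative sketch (not the authors' Caffe implementation): a hinge-style
# pairwise loss on pixel embeddings, with hypothetical margins ALPHA / BETA.
import numpy as np

ALPHA = 0.5   # same-object pairs are pushed within this distance (assumed value)
BETA = 2.0    # different-object pairs are pushed beyond this distance (assumed value)

def pairwise_embedding_loss(embeddings, labels, pairs):
    """Average hinge loss over sampled pixel pairs.

    embeddings: (H, W, D) array of per-pixel embedding vectors.
    labels:     (H, W) array of ground-truth semantic labels.
    pairs:      list of ((y1, x1), (y2, x2)) pixel-coordinate pairs.
    """
    losses = []
    for (y1, x1), (y2, x2) in pairs:
        dist = np.linalg.norm(embeddings[y1, x1] - embeddings[y2, x2])
        if labels[y1, x1] == labels[y2, x2]:
            # Same object/class: penalize distances above the near margin.
            losses.append(max(dist - ALPHA, 0.0))
        else:
            # Pair straddles an object boundary: penalize distances below the far margin.
            losses.append(max(BETA - dist, 0.0))
    return float(np.mean(losses)) if losses else 0.0

# Toy usage: random embeddings and labels, with a few hand-picked pixel pairs.
rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 8, 16))
lab = rng.integers(0, 3, size=(8, 8))
pairs = [((1, 1), (1, 2)), ((3, 3), (6, 6)), ((0, 0), (7, 7))]
print(pairwise_embedding_loss(emb, lab, pairs))
```

In practice such a loss would be computed densely over local neighborhoods and backpropagated through the embedding network; the toy pair list above is only for demonstration.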

Paper

Learning Dense Convolutional Embeddings for Semantic Segmentation

Citation

Harley, A. W., Derpanis, K. G., & Kokkinos, I. (2016). Learning Dense Convolutional Embeddings for Semantic Segmentation. International Conference on Learning Representations (Workshop track).

BibTeX:

@inproceedings{harley2016iclr,
    title = {Learning Dense Convolutional Embeddings for Semantic Segmentation},
    author = {Adam W. Harley and Konstantinos G. Derpanis and Iasonas Kokkinos},
    booktitle = {{International Conference on Learning Representations (ICLR)}},
    year = {2016}
}

Code

For code, see our follow-up work, Segmentation-Aware Convolutional Networks with Local Attention Masks!