Deep Learning-based Image Synthesis using Sketching and Example-based Techniques

Portenier, Tiziano (2019). Deep Learning-based Image Synthesis using Sketching and Example-based Techniques (Unpublished). (Dissertation, University of Bern, Philosophisch-naturwissenschaftliche Fakultät)

diss_portenier.pdf - Published Version
Restricted to registered users only
Available under license: Publisher holds Copyright.

The large amount of digital visual data produced by humanity every day creates demand for efficient computational tools to manage this data. In this thesis we develop and study sketch- and example-based image synthesis and image-retrieval techniques that support users in creating a photorealistic image from a visual concept they have in mind.

The sketch-based image retrieval system that we introduce in this thesis is designed to answer arbitrary queries that may go beyond searching for predefined object or scene categories. Our key idea is to combine sketch-based queries with interactive, semantic re-ranking of query results by leveraging deep feature representations learned for image classification. This allows us to cluster semantically similar images, re-rank based on the clusters, and present more meaningful results to the user. We report on two large-scale benchmarks and demonstrate that our re-ranking approach leads to significant improvements over the state of the art. A user study designed to evaluate a practical use case confirms the benefits of our approach.
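The re-ranking idea above can be sketched in a few lines. The snippet below is a minimal illustration, not the system described in the thesis: it clusters deep feature vectors of retrieved images (here with a tiny NumPy k-means standing in for whatever clustering the real system uses) and promotes results that fall in the same cluster as one the user marked as relevant. The function names and the interactive signal (`selected_idx`) are assumptions for the example.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means: returns a cluster label per row of X."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Distance of every point to every centroid, then nearest assignment.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # guard against empty clusters
                centroids[j] = X[labels == j].mean(axis=0)
    return labels

def rerank_by_cluster(features, ranking, selected_idx, k=2):
    """Promote results in the same semantic cluster as a user-selected hit.

    features:     (n, d) deep feature vectors of the n retrieved images
    ranking:      result indices sorted by the sketch-based query score
    selected_idx: index of the result the user marked as relevant
    """
    labels = kmeans(features, k)
    target = labels[selected_idx]
    # Stable partition: same-cluster results first, original order kept.
    same = [i for i in ranking if labels[i] == target]
    rest = [i for i in ranking if labels[i] != target]
    return same + rest
```

Keeping the original order within each partition preserves the sketch-based score as a tie-breaker, so the semantic signal refines rather than replaces the query ranking.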

Next, we develop a representation for fine-grained, example-based image retrieval. Given a query, we want to retrieve data items of the same class and, in addition, rank these items according to intra-class similarity. In our training data we assume partial knowledge: class labels are available, but the intra-class attributes are not. To compensate for this knowledge gap we propose using autoencoders, which can be trained to produce features both with and without labels. Our main hypothesis is that network architectures incorporating an autoencoder can learn features that meaningfully cluster data based on the intra-class variability. We propose and compare three different architectures to construct our features. We perform experiments on four datasets and find that these architectures indeed improve fine-grained retrieval. In particular, we obtain state-of-the-art performance on fine-grained sketch retrieval.
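The core intuition, that a reconstruction objective forces features to retain intra-class detail that a pure classifier would discard, can be illustrated with the simplest possible autoencoder. The sketch below is not one of the three architectures from the thesis; it is a minimal linear autoencoder trained by gradient descent, whose bottleneck activations serve as retrieval features. All names and hyperparameters are illustrative assumptions.

```python
import numpy as np

def train_linear_autoencoder(X, dim, lr=0.01, epochs=500, seed=0):
    """Fit encoder W_e (d -> dim) and decoder W_d (dim -> d) by
    minimizing the reconstruction error ||X W_e W_d - X||^2.

    Because the loss rewards reconstruction rather than class
    discrimination, the bottleneck X @ W_e keeps intra-class variation.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W_e = rng.normal(scale=0.1, size=(d, dim))
    W_d = rng.normal(scale=0.1, size=(dim, d))
    n = len(X)
    for _ in range(epochs):
        Z = X @ W_e          # encode: bottleneck features
        R = Z @ W_d          # decode: reconstruction
        E = R - X            # residual
        # Gradients of the mean squared reconstruction loss.
        g_d = Z.T @ E / n
        g_e = X.T @ (E @ W_d.T) / n
        W_e -= lr * g_e
        W_d -= lr * g_d
    return W_e, W_d
```

In a label-aware variant one would add a classification head on the bottleneck and sum both losses; with labels absent, the same network still trains on reconstruction alone, which is what makes the autoencoder attractive under partial knowledge.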

In the second part of this thesis we develop systems for interactive image editing applications. First, we present a novel system for sketch-based face image editing, enabling users to edit images intuitively by sketching a few strokes on a region of interest. Our interface features tools to express a desired image manipulation by providing both geometry and color constraints as user-drawn strokes. The proposed interface runs in real-time and facilitates an interactive and iterative workflow to quickly express the intended edits.
Our system is based on a novel sketch domain and a convolutional neural network trained end-to-end to automatically learn to render image regions corresponding to the input strokes. To achieve high-quality and semantically consistent results, we train our neural network on two simultaneous tasks, namely image completion and image translation. Our results show that the proposed sketch domain, network architecture, and training procedure generalize well to real user input and enable high-quality synthesis results without additional post-processing.
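One way to picture the joint completion-and-translation setup is through the shape of a single training example: the network sees the image with the edited region removed (completion) plus the sketch strokes inside that region (translation), and must reproduce the original image. The composition below is a hypothetical sketch of such an input, not the thesis's actual data pipeline; channel layout and function names are assumptions.

```python
import numpy as np

def make_training_pair(image, mask, sketch):
    """Compose a network input for joint completion + translation.

    image:  (H, W, 3) ground-truth photo in [0, 1]
    mask:   (H, W, 1) binary mask, 1 inside the region to synthesize
    sketch: (H, W, 1) rendered strokes (geometry/color constraints)
    """
    hole = image * (1 - mask)    # completion task: region to fill is zeroed
    cond = sketch * mask         # translation task: strokes inside the hole
    # Stack photo context, stroke conditioning, and the mask itself.
    net_input = np.concatenate([hole, cond, mask], axis=-1)
    target = image               # supervise against the unedited original
    return net_input, target
```

Because the target is simply the original image, no manual annotation is needed: masks and sketches can be generated automatically from each training photo.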

Finally, we propose novel systems for smart copy-paste, enabling the synthesis of high-quality results given masked source-image content and a target image context as input. Our systems naturally resolve both shading and geometric inconsistencies between the source and target images, resulting in a merged output image that features the source content seamlessly blended into the target context. We introduce a novel training image transformation procedure that allows us to train a deep convolutional neural network end-to-end to automatically learn a representation that is suitable for copy-pasting. Our training procedure works with any image dataset without additional information such as labels, and we demonstrate the effectiveness of our systems on multiple datasets.
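A label-free transformation procedure of this kind can be sketched as self-supervised pair generation: perturb the appearance of a masked region of a training image so it no longer matches its surroundings, and ask the network to recover the original, consistent image. The snippet below is an illustrative guess at such a scheme (a simple gain-and-shift color perturbation), not the thesis's actual transformation; all names and ranges are assumptions.

```python
import numpy as np

def make_copy_paste_example(image, mask, rng):
    """Build a (composite, ground truth) pair from one unlabeled image.

    The masked region gets an appearance perturbation that mimics the
    shading/color mismatch of pasting content from another photo; the
    unmodified image serves as the harmonization target.
    """
    # Per-channel color shift and a global gain, applied only in the mask.
    shift = rng.uniform(-0.2, 0.2, size=(1, 1, image.shape[2]))
    gain = rng.uniform(0.8, 1.2)
    perturbed = np.clip(image * gain + shift, 0.0, 1.0)
    composite = perturbed * mask + image * (1 - mask)
    return composite, image
```

Since both halves of the pair come from the same photo, the procedure needs nothing beyond raw images, matching the claim that it works on any dataset without labels.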

Item Type:

Thesis (Dissertation)

Division/Institute:

08 Faculty of Science > Institute of Computer Science (INF) > Computer Graphics Group (CGG)
08 Faculty of Science > Institute of Computer Science (INF)

UniBE Contributor:

Portenier, Tiziano, Zwicker, Matthias

Subjects:

000 Computer science, knowledge & systems
500 Science > 510 Mathematics

Language:

English

Submitter:

Tiziano Portenier

Date Deposited:

07 Jun 2019 15:01

Last Modified:

05 Dec 2022 15:28

Uncontrolled Keywords:

deep learning, autoencoder, convolutional neural network, clustering, sketch-based image retrieval, example-based image retrieval, fine-grained image retrieval, sketch-based interface, image processing, image editing, image harmonization

BORIS DOI:

10.7892/boris.130482

URI:

https://boris.unibe.ch/id/eprint/130482
