FrankenGAN: Guided Detail Synthesis for Building Mass Models using Style-Synchronized GANs

SIGGRAPH Asia 2018

Tom Kelly, Paul Guerrero, Anthony Steed, Peter Wonka & Niloy J. Mitra


Coarse building mass models are now routinely generated at scales ranging from individual buildings through to whole cities. For example, they can be abstracted from raw measurements, generated procedurally, or created manually. However, these models typically lack any meaningful geometric or texture details, making them unsuitable for direct display. We introduce the problem of automatically and realistically decorating such models by adding semantically consistent geometric details and textures. Building on the recent success of generative adversarial networks (GANs), we propose FrankenGAN, a cascade of GANs to create plausible details across multiple scales over large neighborhoods. The various GANs are synchronized to produce consistent style distributions over buildings and neighborhoods. We provide the user with direct control over the variability of the output: they can interactively specify style via images and manipulate style-adapted sliders to control style variability. We demonstrate our system on several large-scale examples. The generated outputs are qualitatively evaluated via a set of user studies and are found to be realistic, semantically plausible, and style-consistent.
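To make the style-synchronization idea concrete, the following minimal PyTorch sketch illustrates one way a cascade of conditional generators could share style: each building draws a single style vector from a neighborhood-level distribution (whose spread plays the role of a variability slider), and that same vector conditions every stage of the cascade. This is an illustrative toy, not the authors' implementation; all module and function names here are hypothetical.

import torch
import torch.nn as nn

class ConditionalStage(nn.Module):
    """One stage of the detail cascade: a toy image-to-image generator
    conditioned on a per-building style vector (stand-in for a
    pix2pix/BicycleGAN-style network)."""
    def __init__(self, in_ch, out_ch, style_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch + style_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x, style):
        # Broadcast the style vector over the spatial grid and concatenate as channels.
        b, _, h, w = x.shape
        s = style.view(b, -1, 1, 1).expand(b, style.shape[1], h, w)
        return self.net(torch.cat([x, s], dim=1))

def sample_building_styles(neighbourhood_style, n_buildings, variability=0.3):
    """Per-building styles drawn around a neighbourhood-level style.
    `variability` acts like a style-variability slider."""
    noise = torch.randn(n_buildings, neighbourhood_style.shape[-1])
    return neighbourhood_style.unsqueeze(0) + variability * noise

# Hypothetical three-stage cascade: mass render -> facade layout -> facade texture -> window detail.
stages = [ConditionalStage(1, 3), ConditionalStage(3, 3), ConditionalStage(3, 3)]

def decorate_building(mass_render, style):
    x = mass_render
    for stage in stages:
        x = stage(x, style)  # the same style vector at every scale keeps the building consistent
    return x

if __name__ == "__main__":
    neighbourhood_style = torch.randn(8)      # e.g. encoded from a user-supplied reference image
    styles = sample_building_styles(neighbourhood_style, n_buildings=4, variability=0.3)
    mass_renders = torch.rand(4, 1, 64, 64)   # coarse mass-model renderings (toy data)
    details = [decorate_building(mass_renders[i:i+1], styles[i:i+1]) for i in range(4)]
    print(details[0].shape)                   # torch.Size([1, 3, 64, 64])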

Acknowledgements

This project was supported by an ERC Starting Grant (SmartGeometry StG-2013-335373), a KAUST-UCL grant (OSR-2015-CCF-2533), an ERC PoC Grant (SemanticCity), the KAUST Office of Sponsored Research (OSR-CRG2017-3426), the Open3D project (EPSRC Grant EP/M013685/1), and a Google Faculty Award (UrbanPlan).

Papers

T. Kelly, P. Guerrero, A. Steed, P. Wonka, and N. J. Mitra, FrankenGAN: guided detail synthesis for building mass models using style-synchronized GANs, ACM Transactions on Graphics, vol. 37, iss. 6, 2018.
Abstract | Bibtex | DOI | PDF
Coarse building mass models are now routinely generated at scales ranging from individual buildings to whole cities. Such models can be abstracted from raw measurements, generated procedurally, or created manually. However, these models typically lack any meaningful geometric or texture details, making them unsuitable for direct display. We introduce the problem of automatically and realistically decorating such models by adding semantically consistent geometric details and textures. Building on the recent success of generative adversarial networks (GANs), we propose FrankenGAN, a cascade of GANs that creates plausible details across multiple scales over large neighborhoods. The various GANs are synchronized to produce consistent style distributions over buildings and neighborhoods. We provide the user with direct control over the variability of the output. We allow them to interactively specify the style via images and manipulate style-adapted sliders to control style variability. We test our system on several large-scale examples. The generated outputs are qualitatively evaluated via a set of perceptual studies and are found to be realistic, semantically plausible, and consistent in style.
@article{wrro138256,
volume = {37},
number = {6},
month = {December},
author = {T Kelly and P Guerrero and A Steed and P Wonka and NJ Mitra},
note = {{\copyright} 2018 Copyright held by the owner/author(s). Publication rights licensed to ACM. This is an author produced version of a paper published in ACM Transactions on Graphics. Uploaded in accordance with the publisher's self-archiving policy.},
title = {FrankenGAN: guided detail synthesis for building mass models using style-synchronized GANs},
publisher = {Association for Computing Machinery},
doi = {10.1145/3272127.3275065},
year = {2018},
journal = {ACM Transactions on Graphics},
url = {http://eprints.whiterose.ac.uk/138256/},
abstract = {Coarse building mass models are now routinely generated at scales ranging from individual buildings to whole cities. Such models can be abstracted from raw measurements, generated procedurally, or created manually. However, these models typically lack any meaningful geometric or texture details, making them unsuitable for direct display. We introduce the problem of automatically and realistically decorating such models by adding semantically consistent geometric details and textures. Building on the recent success of generative adversarial networks (GANs), we propose FrankenGAN, a cascade of GANs that creates plausible details across multiple scales over large neighborhoods. The various GANs are synchronized to produce consistent style distributions over buildings and neighborhoods. We provide the user with direct control over the variability of the output. We allow him/her to interactively specify the style via images and manipulate style-adapted sliders to control style variability. We test our system on several large-scale examples. The generated outputs are qualitatively evaluated via a set of perceptual studies and are found to be realistic, semantically plausible, and consistent in style.}
}

Authors from VCG

Tom Kelly

Partners

"ERC"
"KAUST"
"University College London"