Nvidia researchers recently developed a new generative adversarial network (GAN) method for generating realistic landscape photos from segmentation maps or rough sketches.
And even though it is not yet perfect, it goes a long way toward helping users create their own synthetic landscapes.
Initially, the GauGAN model was positioned as a tool to help architects, game designers, and urban planners rapidly produce synthetic images.
The model was trained on more than one million images, including 41,000 images from Flickr, with researchers saying that it serves as a “smart paintbrush” because it fills in the details on rough sketches.
“It’s like a coloring book picture that describes where a tree is, where the sun is, where the sky is,” Nvidia vice president of applied deep learning research Bryan Catanzaro said.
“And then the neural network is able to fill in all of the detail and texture, and the reflections, shadows, and colors, based on what it has learned about real images.”
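The mechanism Catanzaro describes, where a normalized feature map is modulated by the segmentation layout so that “sky” pixels and “tree” pixels receive different statistics, is the core idea of the spatially-adaptive normalization layer described in the GauGAN paper. The sketch below is purely illustrative, not Nvidia's implementation: the shapes, the function name, and the per-class weight matrices standing in for the small convnet of the real layer are all assumptions.

```python
import numpy as np

def spade_sketch(features, seg_onehot, gamma_w, beta_w, eps=1e-5):
    """Illustrative SPADE-style layer (a sketch, not Nvidia's code).

    features:   (C, H, W) activations from the generator.
    seg_onehot: (K, H, W) one-hot segmentation map with K classes.
    gamma_w, beta_w: (C, K) per-class modulation weights standing in
    for the small convnet the real layer uses.
    """
    # 1) Normalize each channel (instance-norm style).
    mean = features.mean(axis=(1, 2), keepdims=True)
    std = features.std(axis=(1, 2), keepdims=True)
    normalized = (features - mean) / (std + eps)

    # 2) Predict a per-pixel scale and shift from the segmentation
    #    map, so the layout information survives normalization.
    gamma = np.einsum("ck,khw->chw", gamma_w, seg_onehot)
    beta = np.einsum("ck,khw->chw", beta_w, seg_onehot)

    # 3) Modulate: pixels labeled with different classes end up
    #    with different feature statistics.
    return normalized * (1 + gamma) + beta

# Toy example: 2 channels, 2 classes, a 4x4 map split top/bottom.
rng = np.random.default_rng(0)
feats = rng.normal(size=(2, 4, 4))
seg = np.zeros((2, 4, 4))
seg[0, :2] = 1.0   # class 0 occupies the top half
seg[1, 2:] = 1.0   # class 1 occupies the bottom half
out = spade_sketch(feats, seg,
                   rng.normal(size=(2, 2)), rng.normal(size=(2, 2)))
print(out.shape)  # (2, 4, 4)
```

In the full model this layer is stacked throughout the generator, which is one reason the segmentation layout keeps steering the output even after many layers of processing.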
In a recent demonstration at the company’s GTC conference, the researchers showed how GauGAN works, rendering images in real time, switching the styling between several seasons, and showing how water interacts with and reflects the surrounding landscape.
Even though the machine used included a recently unveiled Titan RTX, Catanzaro said that running a similar application on a CPU was possible, although the images might not be rendered on demand.
“This technology is not just stitching together pieces of other images, or cutting and pasting textures,” Catanzaro said. “It’s actually synthesizing new images, very similar to how an artist would draw something.”
In a research paper to be presented at the CVPR conference later in June, the researchers reported that human evaluation through Mechanical Turk indicated that its images were more convincing than those produced by the SIMS, pix2pixHD, and CRN algorithms.
According to Catanzaro, GauGAN, in comparison to other algorithms, featured a larger vocabulary and required fewer parameters.
At the close of 2018, a research team that included Catanzaro presented a paper on forecasting future video frames, particularly for synthesized city scenes.
Nvidia also utilized generative adversarial networks to develop artificial brain MRI images, to help overcome a shortage of brain images for training networks.
“Diversity is important for the success of training neural networks, but medical imaging data is normally imbalanced,” Hoo Chang Shin, a senior research scientist at Nvidia, said.
“There are so many more normal cases than abnormal cases, when abnormal cases are what we care about, to try to detect and diagnose.”
Nvidia is a Santa Clara, California-based technology company that designs graphics processing units (GPUs), primarily for the professional and gaming markets.
The company also designs systems on a chip (SoCs), particularly for the automotive and mobile computing markets.
Aside from making GPUs, Nvidia provides scientists and researchers with parallel processing capabilities that enable them to run high-performance applications efficiently.