
From action movies to urban planning, new method for creating large 3D models of urban areas is faster and cheaper

Ground truth, generated images, and visualization of Gaussian means of our Waterloo scene at different altitudes and orientations. (Left) Waterloo scene ground truth. (Middle) Waterloo scene 3DGS generated image. (Right) Waterloo scene visualization of the location of each 3DGS Gaussian, i.e., 3D positional mean of each Gaussian. These points were then extracted as point clouds. Credit: Kyle Gao et al, Enhanced 3-D Urban Scene Reconstruction and Point Cloud Densification Using Gaussian Splatting and Google Earth Imagery (2025)

A research team led by Waterloo Engineering has developed a faster, cheaper way to create large-scale, three-dimensional (3D) computer models of urban areas, technology that could impact fields including urban planning, architectural design and filmmaking.

A paper on the research, titled “Enhanced 3-D Urban Scene Reconstruction and Point Cloud Densification Using Gaussian Splatting and Google Earth Imagery,” appears in IEEE Transactions on Geoscience and Remote Sensing.

The system can generate 3D models of entire cities using only 2D aerial photographs, automating a time-consuming manual process that previously required specially trained 3D artists and computer graphics programs.

“Think about all the time and labor involved in manually creating a digital 3D model of New York City for a new Spiderman movie,” said Kyle Gao, a Ph.D. student in systems design engineering.

“With our system, it can be done using a few hundred aerial images—satellite images from Google Earth, for example—to train the model for a couple of hours in an automated process.”

The technology is built upon a method known as Gaussian Splatting, which uses millions of tiny ellipsoids, each with its own color and lighting detail, to automatically create 3D digital assets from 2D aerial photographs.

Credit: University of Waterloo

“In the same way the human body is made up of tiny atoms, large-scale 3D objects are built from small 3D geometric ellipsoids,” said Gao. “Or you can imagine blobs of ink getting ‘splatted’ onto a 2D image.”
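To make the idea concrete, here is a minimal sketch, in Python with NumPy, of how a splat primitive of this kind is commonly represented (a 3D mean, ellipsoid scale and orientation, color and opacity) and how the 3D means can be gathered into a point cloud, as described in the figure caption above. The names GaussianSplat and extract_point_cloud are illustrative assumptions, not the research team's actual code.

    # Illustrative sketch only: each "splat" is a 3D ellipsoid with its own
    # position, shape, color and opacity; the means of all splats can be
    # exported as a point cloud.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class GaussianSplat:
        mean: np.ndarray      # (3,) 3D position of the ellipsoid center
        scale: np.ndarray     # (3,) semi-axis lengths of the ellipsoid
        rotation: np.ndarray  # (3, 3) rotation matrix orienting the ellipsoid
        color: np.ndarray     # (3,) RGB color in [0, 1]
        opacity: float        # scalar opacity in [0, 1]

        def covariance(self) -> np.ndarray:
            # 3D covariance Sigma = R S S^T R^T built from scale and rotation
            S = np.diag(self.scale)
            return self.rotation @ S @ S.T @ self.rotation.T

    def extract_point_cloud(splats: list[GaussianSplat]) -> np.ndarray:
        # Collect the 3D means of all Gaussians into an (N, 3) point cloud,
        # mirroring the extraction step mentioned in the figure caption.
        return np.stack([g.mean for g in splats])

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # A toy "scene" of random splats standing in for a trained city model.
        splats = [
            GaussianSplat(
                mean=rng.uniform(-50, 50, size=3),
                scale=rng.uniform(0.1, 2.0, size=3),
                rotation=np.eye(3),
                color=rng.uniform(0, 1, size=3),
                opacity=float(rng.uniform(0.5, 1.0)),
            )
            for _ in range(1000)
        ]
        points = extract_point_cloud(splats)
        print(points.shape)  # (1000, 3)

In an actual Gaussian Splatting pipeline, the parameters of every splat are optimized so that, when the ellipsoids are projected ("splatted") onto the camera views, the rendered images match the training photographs.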

The technology is particularly well-suited to producing computer-generated imagery and graphics, including fast, photographic-quality renderings of urban environments.

Gao said an urban planner could use it to create 3D digital models of a neighborhood to help study a development proposal, or to generate an impressive fly-through video that gives residents at a public meeting an immersive look at the plan. Architects could use the technology to visualize and measure buildings near a new project without leaving their desks, or to create a 3D model of an existing building as the starting point for design work.

The multidisciplinary research team, which included members from the engineering and environment faculties at the University of Waterloo as well as Jimei University in China, is now considering commercialization possibilities and exploring the addition of data analysis capabilities to the system using geospatial artificial intelligence (AI).

“We are examining areas including traffic analysis, solar potential and electricity cost analysis, air quality analysis and weather forecasting,” Gao said. “We’re eager to find out what this can and can’t do.”

Gao is supervised by Dr. Jonathan Li, a professor cross-appointed to systems design engineering and to geography and environmental management, and director of the Geospatial Intelligence and Mapping (GIM) Lab at Waterloo.

More information:
Kyle Gao et al, Enhanced 3-D Urban Scene Reconstruction and Point Cloud Densification Using Gaussian Splatting and Google Earth Imagery, IEEE Transactions on Geoscience and Remote Sensing (2025). DOI: 10.1109/TGRS.2025.3536169

Provided by
University of Waterloo


