Demajh, Inc.

Few-Shot Learning by Explicit Physics Integration: An Application to Groundwater Heat Transport: what it means for business leaders

By fusing lightweight numerical solvers with neural networks, this method yields city-scale subsurface temperature forecasts from only a handful of simulations—empowering utilities and planners to iterate designs in near real time.

1. What the method is

The study presents the Local-Global Convolutional Neural Network (LGCNN): a three-step surrogate that first predicts groundwater velocities with a shallow UNet, then traces those velocities through a fast streamline solver, and finally refines diffusion effects with a second UNet. Together, these components transform sparse hydrogeological inputs and dozens of heat-pump sources into high-resolution temperature fields across entire metropolitan aquifers, executing in milliseconds instead of hours.
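
A minimal sketch of this three-stage pipeline is shown below, assuming a PyTorch-style implementation. The module names (`velocity_net`, `tracer`, `diffusion_net`), tensor shapes, and the detached tracer call are illustrative assumptions, not the paper's actual code.

```python
# Hedged sketch of the LGCNN three-step pipeline described above.
# Module names, channel layouts, and the tracer interface are assumptions.
import torch
import torch.nn as nn

class LGCNN(nn.Module):
    def __init__(self, velocity_net: nn.Module, tracer, diffusion_net: nn.Module):
        super().__init__()
        self.velocity_net = velocity_net    # UNet 1: hydrogeology -> velocity
        self.tracer = tracer                # fast, non-learned streamline solver
        self.diffusion_net = diffusion_net  # UNet 2: adds diffusion effects

    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
        # Step 1: local velocity field from permeability and pressure gradients.
        velocity = self.velocity_net(inputs)               # (B, 2, H, W)
        # Step 2: long-range advection context from cheap streamline tracing;
        # the solver is numerical, so no gradients flow through it here.
        streamlines = self.tracer(velocity.detach())       # (B, 1, H, W)
        # Step 3: diffusion-aware refinement of the concatenated context.
        context = torch.cat([inputs, velocity, streamlines], dim=1)
        return self.diffusion_net(context)                 # temperature field
```

Each call is a single forward pass per scenario, which is what makes the millisecond-scale inference noted above possible.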

2. Why the method was developed

Full-physics groundwater simulators are too slow for optimisation loops, while data-only surrogates demand prohibitive training sets. Embedding physics directly into the network architecture slashes data requirements to just three labelled domains and preserves accuracy when scaling to regions four times larger—unlocking interactive scenario planning for energy regulators and climate-resilient infrastructure teams.

3. Who should care

• Urban-energy planners designing geothermal heat-pump networks.
• Digital-twin software vendors seeking faster environmental models.
• Sustainability investors evaluating scalable, regulator-friendly climate tech.
• Utility operators and engineering consultants running large-scale what-if studies.

4. How the method works

Step 1 converts permeability and pressure-gradient maps into local velocity vectors via a compact UNet. Step 2 feeds these velocities into a streamline-based particle tracer that cheaply captures long-range advection. Step 3 concatenates the original inputs, the velocity field, and the streamline context into a diffusion-aware UNet that outputs steady-state temperature fields. Training minimises a blend of MAE, MSE, and structural-similarity losses across roughly 1.2 million synthetic patches, yet inference needs only a single forward pass per scenario.
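
The blended objective could be assembled as in the sketch below. Only the three loss families (MAE, MSE, structural similarity) come from the text; the relative weights `w_mae`, `w_mse`, and `w_ssim` are illustrative assumptions.

```python
# Hedged sketch of the blended training objective (MAE + MSE + SSIM).
# Weights are placeholders; the paper does not state them in this summary.
import torch
import torch.nn.functional as F
from torchmetrics.image import StructuralSimilarityIndexMeasure

ssim = StructuralSimilarityIndexMeasure(data_range=1.0)  # expects (B, C, H, W)

def blended_loss(pred: torch.Tensor, target: torch.Tensor,
                 w_mae: float = 1.0, w_mse: float = 1.0,
                 w_ssim: float = 0.5) -> torch.Tensor:
    """Pixel-wise plus structural terms on normalised temperature patches."""
    mae = F.l1_loss(pred, target)
    mse = F.mse_loss(pred, target)
    # SSIM of 1 means structurally identical, so we minimise (1 - SSIM).
    structural = 1.0 - ssim(pred, target)
    return w_mae * mae + w_mse * mse + w_ssim * structural
```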

5. How it was evaluated

LGCNN was benchmarked against a vanilla UNet and a domain-decomposed UNet on synthetic landscapes and real Munich aquifer data. Key metrics (MAE, worst-case error, SSIM, and the percentage of cells with temperature error above 0.1 °C) were recorded, alongside inference latency and cross-domain generalisation tests that quadrupled the spatial extent without retraining.
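
These per-cell metrics are straightforward to compute. The sketch below, assuming 2D NumPy temperature fields in °C, shows one way to do so; the 0.1 °C threshold comes from the text, while the function name and dictionary keys are hypothetical.

```python
# Illustrative computation of the reported metrics on a predicted vs.
# reference temperature field; array shapes and names are assumptions.
import numpy as np
from skimage.metrics import structural_similarity

def evaluate_field(pred: np.ndarray, ref: np.ndarray,
                   threshold: float = 0.1) -> dict:
    err = np.abs(pred - ref)
    return {
        "mae": float(err.mean()),            # mean absolute error, deg C
        "worst_case": float(err.max()),      # largest single-cell error
        "pct_over_0p1C": float((err > threshold).mean() * 100.0),
        "ssim": float(structural_similarity(
            pred, ref, data_range=float(ref.max() - ref.min()))),
    }
```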

6. How it performed

With only three training cut-outs, LGCNN cut mean absolute temperature error to 0.05 °C (up to 75 % better than the baselines), maintained over 99 % structural similarity on a 51 km² mesh, and ran roughly 1 000× faster than conventional solvers, enabling interactive optimisation of pump layouts and compliance checks. (Source: arXiv 2507.06062, 2025)
