When gold doesn’t follow the rules, machine learning finds the clues, and a folded dome reveals why your domains were lying to you all along

Geologist Jordan McDivitt presents at MREC 2025 on using machine learning and structural modelling to define complex domains at the Oberon gold project in the NT.

At the 2025 AusIMM Mineral Resource Estimation Conference in Perth, geologist Jordan McDivitt delivered a sharp, technically grounded presentation on how machine learning and structural geology can work hand in hand to model complexity in orogenic gold systems. Drawing on work completed during his tenure at Newmont—now presented in a personal capacity—Jordan unpacked a compelling case study from the Oberon deposit in the Tanami region of the Northern Territory.

Oberon, a structurally complex, pre-production asset near the tier-one Dead Bullock Soak operation, challenged conventional domaining due to its stockwork-style veining, lack of strong lithological contacts, and localised high-grade ore shoots. The outcome? A full re-think of how domains are defined and how estimation workflows can adapt to variability at scale.

The Challenge: When Structure Overrides Stratigraphy

Jordan’s study began with a peer review in 2021 that questioned the validity of the mineralised domains being used. The problem wasn’t grade control—it was geometry. Mineralisation didn’t follow the contacts. Instead, it cut across them, shaped by folds, intrusions, rheological contrast, and subtle chemical shifts.

“You look at it in section and think you’ve got a clean strat model,” Jordan said. “But once you start digging into the data, you realise you’re chasing plunging shoots through a folded, anisotropic rock mass. It became clear that the traditional approach wasn’t going to cut it.”

Data-First Domain Refinement

Jordan’s team started by combining standard drillhole logging with high-dimensional data: passive downhole gamma, oriented core measurements, and both gold-pathfinder assays (As, S) and ICP multi-element data. These were used not only to validate the existing stratigraphy but to go further—into true domain refinement.

Machine learning played a key role, with K-means clustering applied to ICP data subsets within logged geological units like the Europa Beds. One of the key outcomes was identifying a lower geochemical subdomain within the Europa sequence that wasn’t visually discernible from core logging alone.

“Without multi-element data and clustering,” Jordan explained, “we would’ve missed the lithogeochemical transition that became really important to the ore model.”
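For readers who want to experiment with this kind of step, a minimal Python sketch of per-unit K-means clustering on ICP data might look like the following. The file name, element columns, unit label, and the choice of k = 3 are illustrative assumptions, not details from the presentation:

```python
# Sketch: K-means on ICP multi-element data within one logged unit,
# assuming a flat drillhole assay table. Column names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

df = pd.read_csv("icp_assays.csv")                  # hypothetical export
elements = ["As_ppm", "S_pct", "Fe_pct", "K_pct", "Ca_pct"]

# Cluster within a single logged unit so the result reflects
# within-unit geochemical subdomains rather than lithology contrasts.
europa = df[df["unit"] == "Europa Beds"].copy()

# Log-transform (trace elements are right-skewed) and standardise so
# no single element dominates the Euclidean distance.
X = StandardScaler().fit_transform(np.log10(europa[elements] + 1e-6))

europa["geochem_cluster"] = KMeans(
    n_clusters=3, n_init=10, random_state=42
).fit_predict(X)

# Inspect cluster medians back in element space and give each cluster
# a geological interpretation before accepting it as a subdomain.
print(europa.groupby("geochem_cluster")[elements].median())
```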

When One Algorithm Isn’t Enough

However, not all parts of the deposit responded well to K-means. In intrusive and more geochemically variable units, K-means began producing artificial clusters. The team pivoted to HDBSCAN, a density-based clustering method better suited to handling noise and variable cluster density.

Jordan emphasised the importance of remaining flexible with tools and workflows. “One algorithm won’t solve every problem. We had to pivot constantly based on feedback—both from the data and the geology.”
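Swapping the algorithm is a small change in code. Below is a hedged sketch of that pivot using the standalone hdbscan package (recent scikit-learn versions also ship an HDBSCAN implementation); the unit label and tuning parameters are assumptions:

```python
# Sketch: density-based clustering where K-means forced artificial
# clusters. Unit label and parameters are illustrative assumptions.
import numpy as np
import pandas as pd
import hdbscan
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("icp_assays.csv")                  # hypothetical export
elements = ["As_ppm", "S_pct", "Fe_pct", "K_pct", "Ca_pct"]
intrusive = df[df["unit"] == "Intrusive"]           # illustrative name

X = StandardScaler().fit_transform(np.log10(intrusive[elements] + 1e-6))

labels = hdbscan.HDBSCAN(min_cluster_size=30, min_samples=10).fit_predict(X)

# Unlike K-means, samples that belong nowhere are labelled -1 (noise)
# rather than being forced into the nearest centroid.
n_noise = int((labels == -1).sum())
print(f"{labels.max() + 1} clusters, {n_noise} samples left as noise")
```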

Structural Interpretation, Reimagined

Orientation data—typically a headache when it’s noisy or inconsistently logged—was also put to work. The team used HDBSCAN to de-noise the oriented core data and isolate dominant structural trends. K-means was then applied a second time to sub-cluster these into high-resolution families of orientation.

The result? A spatial map of preferential structural trends across the deposit, which became critical in guiding anisotropy and variogram modelling.

“It gave us a clean, quantitative way to track plunge and strike variations deposit-wide,” Jordan said.
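A minimal sketch of that two-stage idea: convert planar measurements to pole vectors, let HDBSCAN discard scattered measurements as noise, then sub-cluster the survivors with K-means. The pole convention, table columns, and cluster counts are assumptions, and dedicated structural packages handle the spherical statistics more rigorously:

```python
# Sketch: HDBSCAN de-noising of oriented-core data, then K-means to
# resolve orientation families. Columns and parameters are hypothetical.
import numpy as np
import pandas as pd
import hdbscan
from sklearn.cluster import KMeans

def pole_vector(dip_dir_deg, dip_deg):
    """Unit pole (normal) to a plane from dip direction/dip.

    Convention: the pole plunges at (90 - dip) toward
    (dip direction + 180); z is up-positive, so poles have z <= 0.
    """
    trend = np.radians(dip_dir_deg + 180.0)
    plunge = np.radians(90.0 - dip_deg)
    x = np.cos(plunge) * np.sin(trend)              # east
    y = np.cos(plunge) * np.cos(trend)              # north
    z = -np.sin(plunge)                             # down is negative
    return np.column_stack([x, y, z])

structures = pd.read_csv("oriented_core.csv")       # hypothetical table
poles = pole_vector(structures["dip_dir"], structures["dip"])

# Stage 1: density-based de-noising; inconsistent measurements fall
# out with label -1 instead of contaminating the trend statistics.
keep = hdbscan.HDBSCAN(min_cluster_size=20).fit_predict(poles) != -1

# Stage 2: sub-cluster the cleaned poles into orientation families.
families = KMeans(n_clusters=4, n_init=10,
                  random_state=0).fit_predict(poles[keep])
```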

Clustering the Invisible: Defining Mineralisation with GMM

In the absence of clear lithological boundaries, defining mineralised zones becomes a statistical and interpretive exercise. Rather than relying on a single Au threshold, Jordan’s team used Gaussian Mixture Models (GMM) to cluster assay data (Au, As, S) into latent background and mineralised populations.

This approach produced a 70 percent match to the deterministic domains—and the ability to spatially visualise cluster probabilities in 3D. It was the kind of validation Jordan called “just enough” to reinforce confidence without relying on a binary thresholding model.
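A hedged sketch of the GMM step: fit a two-component mixture to log-scaled Au/As/S assays and recover per-sample mineralisation probabilities. Column names and the two-component choice are assumptions:

```python
# Sketch: GMM splitting assays into latent background and mineralised
# populations. File and column names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture

assays = pd.read_csv("assays.csv")
X = np.log10(assays[["Au_ppm", "As_ppm", "S_pct"]] + 1e-4)

gmm = GaussianMixture(n_components=2, covariance_type="full",
                      random_state=0).fit(X)

# The component with the higher mean Au is the mineralised population.
mineralised = int(np.argmax(gmm.means_[:, 0]))
assays["p_min"] = gmm.predict_proba(X)[:, mineralised]

# p_min can then be plotted at each sample's XYZ position and compared
# against deterministic wireframes, rather than a single Au threshold.
```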

Secondary Indicators: Creating Variables Geologists Can Trust

In addition to clustering, the team implemented a secondary indicator scoring system, combining variables like arsenic, sulphur, vein density, and alteration intensity. Each sample was assigned a cumulative score based on the number of indicators present.

“This score became one of our best tools for refining domains,” Jordan explained. “It forced us to ask: how many signs are actually pointing to mineralisation here?”
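In code, such a score can be as simple as counting threshold exceedances, as in the sketch below. The variables mirror those named above, but every cut-off here is an invented placeholder:

```python
# Sketch: cumulative indicator score; one point per indicator present.
# All thresholds are illustrative placeholders, not project values.
import pandas as pd

samples = pd.read_csv("samples.csv")                # hypothetical table

indicators = {
    "As_ppm": 100.0,              # pathfinder arsenic anomaly
    "S_pct": 1.0,                 # sulphur as a sulphide proxy
    "veins_per_m": 5.0,           # logged vein density
    "alteration_code": 2,         # ordinal alteration intensity
}

samples["indicator_score"] = sum(
    (samples[col] >= cutoff).astype(int) for col, cutoff in indicators.items()
)

# A high score means several independent lines of evidence agree.
```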

Modelling: From Nested Domains to Localised Variograms

The resource estimation approach mirrored the complexity of the geology. Rather than trying to flatten geometry into a single global model, Jordan described a nested modelling strategy:

  • Planar mineralised zones were modelled with oblate variograms.
  • Internal high-grade ore shoots were treated as prolate domains with plunging continuity.

All of this was managed using Locally Varying Anisotropy (LVA) to guide search orientations and variogram ellipses.

And instead of global capping, the team applied local capping, evaluating outliers based on their impact within individual search neighbourhoods.

“This meant we didn’t arbitrarily suppress valid high grades where they were geologically supported,” said Jordan.
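As a rough illustration only, local capping can be prototyped with a neighbourhood percentile rule, as below. The isotropic 50 m radius and 95th percentile are stand-ins; the presented workflow evaluated outlier impact within LVA-oriented search neighbourhoods, which this sketch does not reproduce:

```python
# Sketch: neighbourhood-based capping as a stand-in for the local
# capping described above. Radius and percentile are assumptions,
# and the search here is isotropic rather than LVA-rotated.
import numpy as np
import pandas as pd
from scipy.spatial import cKDTree

samples = pd.read_csv("samples.csv")                # hypothetical table
xyz = samples[["x", "y", "z"]].to_numpy()
au = samples["Au_ppm"].to_numpy()

tree = cKDTree(xyz)
capped = au.copy()
for i in range(len(au)):
    idx = tree.query_ball_point(xyz[i], r=50.0)     # local neighbourhood
    local_cap = np.percentile(au[idx], 95)
    # Cap only grades extreme relative to their own neighbourhood;
    # high grades supported by nearby samples are left untouched.
    if au[i] > local_cap:
        capped[i] = local_cap
samples["Au_capped"] = capped
```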

Practical Innovation Over Hype

One of the more refreshing aspects of Jordan’s presentation was its balance. Yes, machine learning was central—but it wasn’t hero-worshipped.

“We didn’t let the model run wild,” he said. “We stayed grounded in the geology at every step. Machine learning helped us see patterns we might have missed—but it wasn’t a replacement for domain knowledge.”

This philosophy extended to their use of LVA in variography. Instead of flattening geometries (and risking sample distortion), Jordan used LVA directly in experimental variogram calculations, improving structural fidelity.
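Conceptually, that means testing each sample pair against the local continuity direction rather than one global orientation. The sketch below is a simplified, hedged illustration of the idea (the per-sample direction vectors, angular tolerance, and lag binning are all assumptions), not the presented implementation:

```python
# Sketch: experimental semivariogram with locally varying directions.
# Each pair is accepted if it aligns with the *local* continuity
# vector at its tail sample, instead of flattening the geometry first.
import numpy as np

def lva_variogram(xyz, values, local_dirs, lags, lag_tol, ang_tol_deg=22.5):
    """local_dirs: (n, 3) unit vectors, e.g. local shoot plunge."""
    cos_tol = np.cos(np.radians(ang_tol_deg))
    gamma = np.zeros(len(lags))
    npairs = np.zeros(len(lags), dtype=int)
    for i in range(len(values)):
        sep = xyz - xyz[i]                          # vectors to all samples
        dist = np.linalg.norm(sep, axis=1)
        ok = dist > 0
        # Keep pairs roughly aligned with sample i's local direction.
        cosang = np.abs(sep[ok] @ local_dirs[i]) / dist[ok]
        for j in np.where(ok)[0][cosang >= cos_tol]:
            k = np.argmin(np.abs(lags - dist[j]))
            if abs(lags[k] - dist[j]) <= lag_tol:
                gamma[k] += 0.5 * (values[i] - values[j]) ** 2
                npairs[k] += 1
    return gamma / np.maximum(npairs, 1), npairs

# Example call with hypothetical inputs:
# gamma, n = lva_variogram(xyz, au, plunge_vecs,
#                          lags=np.arange(10.0, 101.0, 10.0), lag_tol=5.0)
```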

Takeaways for the Broader Industry

Jordan ended with several practical lessons:

  • High-dimensional data adds real value, even in gold-only systems.
  • Geological context must remain the foundation—unsupervised algorithms work best when deployed semi-supervised.
  • Flexibility is key: One workflow won’t work everywhere. Be prepared to pivot.
  • Local methods for local geology: From domain capping to variogram orientations, adapting to structure and plunge improves model fidelity.
  • Don’t assume simplicity where none exists: Stockwork systems don’t fit the same estimation moulds as stratabound ore bodies.

Final Thought

In Jordan’s words, “Just because it’s gold-only doesn’t mean your data should be.”

His case study at Oberon demonstrates a new path forward for resource geologists working in complex orogenic systems: one where deep domain knowledge, machine learning, and structural modelling aren’t in conflict—but in conversation.
