AI in mining will only earn its keep when it is built on transparency, tested to standards, and trusted by the engineers who stake their decisions on it

Dr Penny Stewart presenting at APCOM 2025 on how accountable and explainable AI can build trust in mining operations

AI can transform mining operations, but as Dr Penny Stewart warns, its real value will only be unlocked if the technology is transparent, tested and trusted.

Speaking to a packed audience at APCOM 2025 in Perth, the CEO of Petra Data Science didn’t mince words about the gap between AI’s technical capabilities and its acceptance in production environments.

“It’s not enough for an algorithm to work in a research paper or a case study,” she said. “When you put AI into operations, you’re asking engineers, geologists, and metallurgists to rely on it for high-stakes decisions. They need to understand it, trust it, and know it’s been tested for their specific context.”

Penny, a mining engineer and Fellow of the Australian Academy of Technological Sciences and Engineering, has spent her career bridging that gap between innovation and operational reality. She founded Petra in 2015 to extract value from mining data, building solutions like the company’s flagship MAXTA™ digital twin for value chain optimisation. In 2016, Petra collaborated with Newcrest Mining to deploy some of the first machine learning algorithms in the industry. Since then, her team has worked across continents, embedding AI tools into mine sites from Papua New Guinea to Chile, Scotland, and the United States.

The perception problem

One of Penny’s key points was that AI adoption is as much about language and framing as it is about algorithms. She took aim at the “personification” of AI - the habit of using terms like “trustworthy”, “hallucination”, or “agent” in ways that can obscure rather than clarify what’s actually happening.

“It might be fine to talk about ‘trustworthy AI’ at a conference or in the general media,” she said, “but when you’re speaking to a mining engineer who’s accountable for safety and production outcomes, they want specifics. What standards have you used? How have you tested it? Is it secure? What biases could be present? They want concrete answers, not marketing terms.”

Similarly, she questioned the industry’s casual use of the word hallucination to describe AI output errors. “Hallucination makes it sound mysterious or even excusable,” Penny noted. “In engineering terms, it’s just wrong - it’s an error. Calling it what it is builds more credibility.”

For her, aligning AI terminology with established engineering language is critical to building trust. That means talking about predictions or estimates instead of “responses,” and explaining recommendations in terms of optimisation logic rather than opaque machine judgment.

Dr Penny Stewart argues that AI in mining will only succeed when it is accountable, explainable and trusted by the engineers who rely on it in operations.

The stakes in production environments

Penny’s message carried the weight of someone who has seen firsthand what it takes to deploy AI in production. She recalled the launch of Petra’s first AI system at a major gold mine in Papua New Guinea in 2016 - a drill and blast optimisation model built on site-specific machine learning.

The site, like many in Australia and elsewhere, faced a shortage of experienced drill and blast engineers. Often, relatively junior staff were responsible for designing blast patterns, with limited access to senior technical guidance. Petra’s system ingested vast quantities of geological, operational, and material handling data, then surfaced the optimal blast designs for the site’s conditions.

It was a breakthrough - but also a litmus test for AI’s acceptance on site. “Imagine being the production engineer who has to sign off on a million-tonne blast based on what a piece of software is telling you,” Penny said. “You’re going to want to know exactly how that recommendation was generated, how it’s been tested, and whether you can override it if needed.”

This is where Penny’s philosophy of “accountable AI” comes in. Her team builds systems with flexible user experiences that let engineers choose how they interact with the model - from automated recommendations tied to polygons, to a full set of historical patterns for manual optimisation. This adaptability respects the professional judgment of the people using the tool, while still delivering the efficiency and precision gains AI can offer.
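As an illustration of those two interaction modes - every name and feature below is hypothetical, not Petra’s API - a recommendation function might either return the model’s design for a blast polygon or hand back the nearest historical patterns for manual optimisation:

```python
import numpy as np

def recommend(polygon_features, model, history, mode="auto", k=5):
    """Hypothetical sketch: 'auto' returns a model-recommended blast design
    for the polygon; 'manual' returns the k most similar historical
    patterns so the engineer can optimise by hand."""
    if mode == "auto":
        # A fitted sklearn-style model predicting design parameters
        # (e.g. burden, spacing, powder factor) from polygon features.
        return model.predict(polygon_features.reshape(1, -1))[0]
    designs, features = history  # past blast designs and their feature rows
    # Rank historical patterns by similarity in feature space.
    dist = np.linalg.norm(features - polygon_features, axis=1)
    return [designs[i] for i in np.argsort(dist)[:k]]
```

Either way, the engineer - not the software - makes the final call.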

Explainability as a design principle

Central to accountable AI is explainability - the ability to show not just what the model predicts, but why. Penny acknowledged that this has been a challenge, with the mining sector often relying on tools like SHAP (SHapley Additive exPlanations) plots that, while useful, have limitations in conveying causal relationships.
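To make that limitation concrete, here is a minimal sketch of the kind of SHAP workflow she is referring to, run on synthetic stand-in data rather than Petra’s models. SHAP attributes a prediction to its input features - a measure of association, not of the causal pathways Penny is asking for.

```python
# Minimal SHAP illustration on synthetic data (not Petra's models or data).
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # stand-ins for e.g. hardness, burden, spacing, charge
y = 2 * X[:, 0] - X[:, 1] + rng.normal(scale=0.1, size=500)  # e.g. fragmentation size

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Each row attributes one prediction to the four features. This shows
# which variables mattered, not the cause-and-effect chain between them.
print(shap_values[0])
```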

“There’s a big difference between correlation and causation,” she said. “Engineers don’t just want to know which variables are important - they want to know the causal pathways that led to a recommendation.”

To address this, Petra is exploring causal AI techniques that integrate human understanding with machine learning outputs. One example is the use of fuzzy cognitive maps - essentially structured representations of domain knowledge - to inform and interpret model behaviour. This hybrid approach, she explained, allows the AI to be tested against known cause-and-effect relationships, making its recommendations more transparent and easier to validate.
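As a rough sketch of the idea only: a fuzzy cognitive map can be represented as a signed weight matrix over named concepts, iterated to a steady state. The concepts and weights below are invented for illustration and are not drawn from Petra’s work.

```python
# Toy fuzzy cognitive map: concepts, signed causal weights, iterative update.
import numpy as np

concepts = ["powder_factor", "fragmentation", "dig_rate", "mill_throughput"]
# W[i, j]: assumed causal influence of concept i on concept j, in [-1, 1].
W = np.array([
    [0.0, 0.8, 0.0, 0.0],   # more powder -> finer fragmentation
    [0.0, 0.0, 0.7, 0.5],   # finer fragmentation -> faster digging, higher throughput
    [0.0, 0.0, 0.0, 0.3],   # faster digging -> higher throughput
    [0.0, 0.0, 0.0, 0.0],
])

def step(a, W, lam=1.0):
    # Modified-Kosko FCM update: keep the self-state, add weighted
    # influences, squash activations back into (0, 1) with a sigmoid.
    return 1.0 / (1.0 + np.exp(-lam * (a + a @ W)))

a = np.array([0.9, 0.5, 0.5, 0.5])   # scenario: high powder factor
for _ in range(20):                  # iterate toward a fixed point
    a = step(a, W)
print(dict(zip(concepts, np.round(a, 3))))
```

Because the weights encode explicit cause-and-effect claims, a model’s behaviour can be checked against them - which is what makes the hybrid approach testable.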

In mining, where operational changes can have cascading impacts across the value chain, this kind of causal insight is particularly valuable. “It’s not enough to say, ‘the model says this will work’,” Penny stressed. “You need to be able to articulate why it will work, based on both data and domain expertise.”

The integration challenge

Beyond explainability, Penny pointed to another barrier to AI adoption: integration with production systems. She noted that many AI tools in mining still function as “point solutions” - standalone applications that require manual data handling and are disconnected from live operational systems.

“When we built digital twin models in 2018, the vision was for them to be fully integrated into production,” she said. “But in practice, the data often isn’t production-ready, or there are cybersecurity concerns about exposing it.”

Her goal now is to close that loop - creating AI systems that are continuously fed by live data, integrated into operational workflows, and capable of delivering decision support (or automation) in real time. Achieving this, she believes, will be essential for AI to move from an advisory role to an operational one.

A standards-based approach

Throughout her keynote, Penny reinforced the importance of standards in building trust. That means adhering to established protocols for data security, model testing, retraining, and version control - and being transparent about those processes with end users.

For instance, Petra has moved away from fully automated model retraining, which was once considered best practice, in favour of human-in-the-loop validation before any retrained model goes into production. “International standards for industrial AI now recognise the need for human oversight,” she said. “It’s about balancing automation with control.”
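A promotion gate of that kind can be mechanically simple; what matters is that automation alone can never push a retrained model live. The sketch below is hypothetical and not Petra’s implementation:

```python
# Hypothetical human-in-the-loop promotion gate for a retrained model.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Candidate:
    version: str
    holdout_rmse: float
    approved_by: Optional[str] = None   # set only by a human reviewer

def promote(candidate: Candidate, production_rmse: float) -> bool:
    if candidate.holdout_rmse >= production_rmse:
        return False   # no measurable improvement; keep the champion model
    if candidate.approved_by is None:
        return False   # better metrics alone are not enough to go live
    print(f"{candidate.version} promoted, approved by {candidate.approved_by}")
    return True

# Usage: a model that beats production still waits for sign-off.
c = Candidate("candidate-2025-08", holdout_rmse=0.42)
assert not promote(c, production_rmse=0.55)
c.approved_by = "senior metallurgist"
assert promote(c, production_rmse=0.55)
```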

She also emphasised contractual clarity - defining who is responsible if data is lost or if a model underperforms, and setting out warranties and liabilities in commercial agreements. These measures, she argued, give operational teams the confidence that there is accountability not just within the software, but within the business relationships that support it.

Preparing for the next step

While much of her talk focused on the here-and-now of AI in mining, Penny acknowledged that the sector is on a path toward greater automation - including AI systems that could one day replace entry-level engineering roles. She believes this transition will only happen if AI earns the trust of those currently in those roles.

“Technically, we could upload optimal blast designs directly to autonomous drills today,” she said. “The barrier isn’t the technology - it’s whether people believe the system is reliable enough to make those decisions without human oversight.”

In her view, building that trust will require a staged approach: starting with decision-support tools that are transparent and user-controllable, then gradually increasing the level of automation as confidence grows.

The bottom line

Penny’s keynote was a timely counterbalance to the prevailing hype around AI in mining. While she is a strong advocate for its potential - and has built a career on delivering practical AI solutions - she is equally clear about the need for rigour, transparency, and respect for the professionals on the front line of operations.

“Accountability is not optional,” she concluded. “If we want AI to be more than a buzzword in mining, we have to hold it to the same standards we apply to any critical technology. That means clear language, clear processes, and clear responsibilities. Only then will AI be trusted to do the jobs we’re asking it to do.”

For mining companies weighing their next investment in digital transformation, her message was simple: the technology is ready - but trust must be earned.