AI generated image of Pope Francis in a Balenciaga jacket

The AI Climate Tools You Can Actually Trust: ChatNetZero and ChatNDC Explained

We’ve built AI that can write poetry, mimic voices, beat humans at strategy games, and generate images of the Pope in Balenciaga. But when it comes to something as consequential as climate policy, when entire net zero pledges hang in the balance, when decisions cascade through billions in funding, supply chains, and national strategies, suddenly, accuracy becomes more important than entertainment.

And yet, the most widely used AI tools today don’t just get climate facts wrong, they make them up.

That’s not a dig at ChatGPT or Gemini. It’s simply what they were designed to do: predict what sounds like a plausible response based on patterns in massive amounts of data. Truth and precision? Optional extras.

But when you ask, “Can a company still use fossil fuels and claim a credible net zero target?”, you don’t want a plausible-sounding answer. You want the one backed by the IPCC.

This is why the work of Dr. Angel Hsu and her team at the Data-Driven EnviroLab is so vital. They’ve created two AI tools, ChatNetZero.ai and ChatNDC.org, that are laser-focused on trustworthy climate data. These are not generalist bots riffing on Reddit threads. These are domain-specific, hallucination-resistant tools designed to support rigorous climate decision-making.

Because let’s be honest, climate policy doesn’t need more vibes. It needs verification.


Climate Action Needs Credibility

Let’s step back. Over 140 countries have now pledged some form of net zero commitment. Over 5,000 companies have made sustainability pledges. But how many of them are credible?

The UN High-Level Expert Group on Net-Zero Emissions Commitments was blunt in its 2022 report when it said that too many of the pledges are not aligned with the science. Too many rely on vague offsetting schemes. Too many lack interim targets or transparent reporting.

Translation? We’ve got a credibility gap the size of the carbon budget. And this gap is being widened, not closed, by AI tools that confidently regurgitate out-of-date or inaccurate climate information.

In a study Angel’s team conducted, they compared answers from ChatNetZero to those from ChatGPT, Gemini, and other general models. Even though the domain-specific tool was far more accurate, climate experts still preferred the generic bots’ answers, because they were longer, more eloquent, more confident. This isn’t just an AI problem; it’s a human psychology problem.

We like our lies polished. But the planet needs the truth – raw, referenced, and reliable.


The Invisible Made Visible

ChatNetZero is trained specifically on the Net Zero Tracker project, which evaluates over 4,000 actors, from countries to cities to corporations, based on whether their pledges are real or greenwash. It scores them on whether they have near-term targets, transparent plans, and limited use of questionable offsets.

Ask it: “Is Apple’s net zero target credible?” It won’t guess. It’ll pull the answer from the actual documents, give you the scorecard, and cite the source, right down to the page number.

ChatNDC does the same for the Nationally Determined Contributions (NDCs) that governments submit under the Paris Agreement. It answers questions like: “Has Egypt submitted a net zero target?” and “How has the UK’s NDC evolved over time?” Again, with citations. No hallucination, no invented policies, no fantasy pledges from imaginary government memos.

These tools are built using retrieval-augmented generation (RAG) – a technique that constrains the model to only pull from a verified, vetted dataset. That may sound less exciting than “train on the entire internet,” but that’s exactly the point. AI for climate isn’t about being exciting. It’s about being right (let’s be honest, there’s a lot of BS on the internet!).
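To make the idea concrete, here is a toy sketch of the RAG pattern: answers are constrained to a small, vetted corpus, and every answer carries a citation back to its source. The corpus entries, sources, and keyword-overlap retrieval below are all invented for illustration – this is not how ChatNetZero is actually implemented.

```python
# Toy retrieval-augmented generation (RAG): the model may only answer
# from a vetted corpus, and every answer cites its source document.
# All documents and sources here are hypothetical examples.

VETTED_CORPUS = [
    {"source": "Net Zero Tracker, 2023, p. 12",
     "text": "over 140 countries have pledged some form of net zero commitment"},
    {"source": "UN HLEG report, 2022, p. 7",
     "text": "too many pledges rely on vague offsetting schemes and lack interim targets"},
]

def retrieve(question: str, corpus: list) -> dict:
    """Return the corpus entry sharing the most words with the question
    (a stand-in for the embedding search a real RAG system would use)."""
    q_words = set(question.lower().split())
    return max(corpus, key=lambda d: len(q_words & set(d["text"].split())))

def answer(question: str) -> str:
    """Ground the reply in a retrieved passage and cite the source."""
    doc = retrieve(question, VETTED_CORPUS)
    return f'{doc["text"].capitalize()}. [Source: {doc["source"]}]'

print(answer("How many countries have a net zero pledge?"))
```

The key design choice is the constraint itself: if the corpus contains no relevant passage, a production system should say so rather than improvise, which is precisely what separates this approach from a general-purpose chatbot.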


Garbage Data In, Garbage Policy Out

Climate policy decisions are only as good as the data that underpins them. But the volume of climate-related data today is absolutely overwhelming. During the last global stocktake under the Paris Agreement, over 300,000 pages of documentation were submitted by governments, companies, and civil society actors. Synthesising that manually? Impossible.

We need tools that can process, cross-reference, and validate these documents at scale. But more importantly, we need those tools to be transparent about their sources and limitations.

Too often, corporate climate strategies rely on opaque consultants using proprietary models. Or policymakers are left to Google and guess from half-updated PDFs. This is where AI, if trained carefully and responsibly, can shine. Not as a substitute for expert judgement, but as a scalpel to cut through the fog.

And as Angel pointed out in our conversation, this isn’t theoretical. Her team is already using these tools to accelerate manual verification tasks, like identifying offset usage in corporate climate disclosures. Volunteers used to spend hours reading and scoring these documents by hand. Now, AI does the first pass, and a human double-checks the highlighted segments. That’s what a human-in-the-loop climate AI workflow looks like.
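That first-pass-plus-review loop can be sketched in a few lines. Here, simple keyword flagging stands in for whatever model the EnviroLab team actually uses – the terms and the sample disclosure are hypothetical:

```python
# A minimal human-in-the-loop first pass: the machine highlights
# sentences that may describe carbon offset usage, and a human
# reviewer reads only the flagged segments instead of the whole document.
# (Keyword matching is a deliberate simplification of the real workflow.)

OFFSET_TERMS = {"offset", "offsets", "offsetting", "carbon credits", "removals"}

def first_pass(disclosure: str) -> list:
    """Flag sentences that may mention carbon offsets."""
    flagged = []
    for sentence in disclosure.split("."):
        if any(term in sentence.lower() for term in OFFSET_TERMS):
            flagged.append(sentence.strip())
    return flagged

report = ("We will reach net zero by 2040. "
          "Residual emissions will be covered by carbon credits. "
          "We publish interim targets annually.")

for segment in first_pass(report):
    print("REVIEW:", segment)  # only these reach the human checker
```

The point is the division of labour: the machine narrows hundreds of pages down to the handful of passages that matter, and the human still makes the final judgement.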


The Risk of Getting This Wrong

Here’s the risk: if generic AI tools continue to be the default interface for climate questions, we could end up turbocharging misinformation.

Imagine a policymaker in a small island nation asking ChatGPT whether their country is eligible for loss and damage funds. Or a corporate sustainability officer asking whether they can continue burning gas and still claim carbon neutrality. If the AI’s answers are “plausible” but wrong, and no one checks the sources, we could see billions misallocated, targets misunderstood, and fossil fuel lock-in extended under a false banner of climate compliance.

Worse still, these tools often cite fake sources – something known as “AI hallucination.” Angel gave the example of ChatGPT confidently claiming that Egypt had a net zero target and citing specific documents and years, none of which existed.

This is why AI for climate must be narrow, grounded, and boring. It must cite its work. It must be fact-checked. It must favour truth over fluency. If you want a bot that sounds smart, use ChatGPT. If you want one that is smart about climate policy, use one trained by people like Angel Hsu.


Scaling Solutions, Not Snake Oil

There’s a fantastic phrase which, if I remember correctly, comes from the Princeton computer scientists who wrote the book AI Snake Oil: not everything that sounds intelligent is useful, and not everything useful sounds intelligent.

That’s the hard truth for AI in climate. It’s not glamorous. It’s not ChatGPT spitting Shakespearean sonnets about solar panels. It’s structured data extraction. It’s automated policy tracking. It’s verifiable scorecards. And yes, it’s citing the bloody page number in the IPCC report.

But here’s the exciting bit: get it right, and we could finally have the tools to bridge the credibility gap in climate pledges. We could turn vague net zero announcements into measurable action. We could empower policymakers to make evidence-based decisions without wading through thousands of PDFs. And we could equip civil society to call out greenwashing, armed with AI-powered facts.

In short, we could shift climate policy from faith-based to data-driven.


Listen, Learn, Act

If any of this has sparked your curiosity, if you want to hear more about how these tools were built, who they’re for, and what they unlock, then listen to the full conversation with Dr. Angel Hsu on the Climate Confident Podcast. It’s an eye-opener, and one of the most important episodes I’ve published this year.

🔗 ChatNetZero: https://chatnetzero.ai
🔗 ChatNDC: https://chatndc.org
🎧 Full episode: Listen here

This isn’t just about climate data. It’s about restoring trust. And that, more than anything, is what’s needed to turn pledges into progress.

