Alliander is a Dutch grid operator. I worked with the group on two separate assignments: first at Alliander itself, then at Liandon, its high-voltage subsidiary. The problems differed in scale and character, but the underlying challenge was the same: unreliable data.

Alliander — Meter Data at Scale

The first assignment started in the meter domain. Alliander manages gas and electricity infrastructure for a large part of the Netherlands, and the meter data underpinning billing and grid operations had accumulated quality issues over time — mismatches, outdated records, gaps. The volume was significant: hundreds of thousands of records across gas and electricity.

I started as a data engineer. The work was investigative: tracing quality problems back to their sources, understanding which business rules were being violated and why, then writing SQL and PL/SQL scripts to correct records at scale. Fixes were iterative — each round of corrections surfaced the next layer of problems. The work fed directly into a data quality dashboard built by the team, making quality visible to the organisation for the first time. The measurable results were fewer billing errors and improved customer satisfaction.
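The shape of that rule-check-then-correct loop can be sketched as follows. This is a minimal illustration, not Alliander's actual schema or rules — the field names and the business rule are hypothetical, and in practice the corrections ran as SQL/PL/SQL scripts against the database rather than in application code.

```python
# Hypothetical meter records; field names are illustrative only.
meters = [
    {"id": "M1", "type": "electricity", "status": "active", "removed_on": None},
    {"id": "M2", "type": "gas", "status": "active", "removed_on": "2012-03-01"},
]

def violates_removed_but_active(rec):
    # Example business rule: a meter with a removal date cannot still be active.
    return rec["status"] == "active" and rec["removed_on"] is not None

# Detect violations, then apply the correction the rule implies.
violations = [r for r in meters if violates_removed_but_active(r)]
for r in violations:
    r["status"] = "removed"
```

Each correction pass like this one tends to expose the next layer of problems — here, for instance, fixing statuses would surface every downstream record that assumed the meter was still active.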

Over time the role grew. I moved into managing the Data Loket — Alliander's internal data service, responsible for delivering reports and dashboards across the organisation in formats tailored to each department's needs. That meant planning, stakeholder management, and coordinating with the data warehouse release cycle, in an environment that was rarely calm.

Liandon — Back to the Grid

Later I came back to the Alliander group — this time at Liandon, the high-voltage subsidiary. After a period of significant growth through mergers and system migrations, the organisation had reached a point of consolidation, and that meant taking a close look at the data landscape that had formed across multiple source systems.

Errors in high-voltage asset data carry significant consequences downstream: maintenance planning, regulatory reporting, operational decisions. With multiple systems holding asset data — substations, cable connections, transformer units — the opportunity was to establish a single authoritative picture and build the quality programme that would keep it that way.

Mapping the Landscape

The first step was a thorough analysis of the IT landscape. I mapped every source system that held asset data, and for each data point established which system could be considered the authoritative source. This wasn't a technical exercise — it required conversations with domain specialists across the organisation. Who actually maintains this data? Where does it originate? What happens to it downstream?

Through this process I also extended and formalised the data definitions. Where fields had an understood meaning in practice that hadn't been fully documented, I wrote it down — informed by the people who worked with it daily. Lineage diagrams tracked where each data point was created, what rules it had to satisfy, and where it flowed from there. The result was a unified view of the data architecture: explicit, agreed, and usable.
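A lineage entry of the kind described above can be represented as a small structured record. This is a hypothetical sketch — the field name, system names, and rules below are invented for illustration; the real documentation captured the same three facts per data point: where it originates, which system is authoritative, and what rules it must satisfy on the way downstream.

```python
from dataclasses import dataclass, field

@dataclass
class FieldLineage:
    name: str
    origin_system: str          # where the value is first created
    authoritative_system: str   # the agreed source of truth
    rules: list = field(default_factory=list)      # constraints it must satisfy
    consumers: list = field(default_factory=list)  # where it flows downstream

# Illustrative entry; not an actual Liandon data point.
lineage = {
    "transformer_capacity_kva": FieldLineage(
        name="transformer_capacity_kva",
        origin_system="asset_registry",
        authoritative_system="asset_registry",
        rules=["must be positive", "must match nameplate record"],
        consumers=["maintenance_planning", "regulatory_reporting"],
    ),
}
```

Making entries like this explicit and agreed is what turned tribal knowledge into a usable data architecture.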

Building the Reconciliation System

With the landscape mapped, we built the reconciliation system. Working in a team of three, we designed and implemented a central process that pulled data from all source systems, matched records across systems using the authoritative source logic we'd established, detected inconsistencies, and generated corrections, which were pushed back to the relevant systems to bring each one back in line.
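The core comparison step can be sketched like this — a deliberately simplified illustration with invented system and asset names, which ignores the hard part discussed next (telling genuine errors apart from legitimate divergence):

```python
def reconcile(systems, authoritative):
    """Compare each system's value against the authoritative system's value
    and emit the corrections needed to bring the others back in line."""
    corrections = []
    for asset_id, per_system in systems.items():
        truth = per_system.get(authoritative)
        if truth is None:
            continue  # no authoritative value to reconcile against
        for system, value in per_system.items():
            if system != authoritative and value != truth:
                corrections.append((system, asset_id, value, truth))
    return corrections

# Hypothetical voltage-level data for two substations across three systems.
records = {
    "SUB-001": {"asset_registry": "150kV", "gis": "150kV", "scada": "110kV"},
    "SUB-002": {"asset_registry": "50kV", "gis": "50kV", "scada": "50kV"},
}
fixes = reconcile(records, authoritative="asset_registry")
# fixes → [("scada", "SUB-001", "110kV", "150kV")]
```

The real process layered matching logic and per-field authoritative-source rules on top of a comparison like this, but the principle is the same: one agreed source of truth per data point, and explicit corrections rather than silent overwrites.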

The technical complexity was manageable. The harder part was precision: identifying which differences were genuine data errors versus legitimate system-level divergence. A record that looked inconsistent might reflect a real-world transition — an asset being transferred, a maintenance activity in progress.

We tracked quality over time using an Oracle APEX dashboard. This gave the organisation continuous visibility: not just a consolidation effort with a defined end point, but an ongoing measurement of how well quality was being maintained as new records entered and existing ones evolved.

Data Ownership and Change

A reconciliation system without ownership structures is a temporary fix. Data will degrade again the moment no one is responsible for it. The technical work and the change work had to happen together.

We ran the change programme in parallel with the system build — working with domain specialists to define who owned which data domains, what the quality obligations of that ownership looked like, and how quality issues would be escalated and resolved. Domain specialists had deep knowledge of their own areas; the task was to make that knowledge the foundation of the central system, rather than something that lived alongside it.

The approach was direct: involve domain specialists in the design from the beginning, make their knowledge visible in the data dictionary and lineage documentation, and give them tools (the dashboard) that made their ownership meaningful. By the time the system went live, the people who needed to use it had helped build it.

Measuring Quality

The Oracle APEX dashboard was the closing piece. It tracked data quality metrics over time across all asset domains — completeness, consistency, and conformance to the defined rules. Quality checks ran automatically as data changed. The dashboard gave management visibility into trends, gave domain owners visibility into their own areas, and gave the project team a continuous feedback loop.
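Metrics like completeness and conformance reduce to simple ratios over the record set. The sketch below shows the idea with invented field names and a toy rule — the actual checks ran inside the database and fed the APEX dashboard, but the arithmetic is the same:

```python
def completeness(records, fields):
    # Fraction of expected cells that actually hold a value.
    filled = sum(1 for r in records for f in fields if r.get(f) is not None)
    return filled / (len(records) * len(fields))

def conformance(records, rules):
    # Fraction of records that satisfy every defined rule.
    passing = sum(1 for r in records if all(rule(r) for rule in rules))
    return passing / len(records)

# Hypothetical asset records; one is missing its voltage level.
records = [
    {"id": "A1", "voltage_kv": 150, "location": "Arnhem"},
    {"id": "A2", "voltage_kv": None, "location": "Duiven"},
]
rules = [lambda r: r["voltage_kv"] is None or r["voltage_kv"] > 0]

completeness(records, ["voltage_kv", "location"])  # 0.75
conformance(records, rules)                        # 1.0
```

Tracking these numbers over time, per domain, is what turned a one-off consolidation into an ongoing quality measurement.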

Quality wasn't a project deliverable that you tick off and move on. It was now a measurable, ongoing property of the organisation's data. That shift — from consolidation exercise to quality programme — was the real outcome of the work.

If your organisation is at a consolidation point and wants to turn that moment into a durable data quality capability, let's talk.