These smaller operators have been rapidly acquiring dozens of small and medium-sized fields and represent an exciting future for these mature basins and the sector at large. Some, like Harbour, Spirit and Ithaca, have grown fast through acquisition and are already delivering daily production figures in the hundreds of thousands of barrels of oil equivalent per day.

But they face a number of growing pains, not least because the supermajors that originally developed their newly acquired assets followed different field development practices and operational strategies. They installed, and in many cases designed and built, their own systems and applications.
The result is a set of complex, disconnected and heavily bespoke systems and data structures for each asset in their portfolios.
As one medium-sized operator told us: “we need a layer that abstracts and contextualises data across all our assets and a front-end that sits on top that integrates everything around the user”.
But stopping to rewire isn’t an option when you’re under pressure to deliver. Nor is a blank-canvas rebuild; they haven’t the time or capital to start again.
Over the past fifteen years, we have had success helping fast-growing operators as they seek to integrate their portfolios to maximise value. Here are four lessons we have learnt:
1. Truth is dynamic: faced with the challenge of integrating data from multiple applications and systems across multiple assets, it’s tempting to move all your data to a single place, like a data lake, and believe everything will be simpler. The proposition is compelling: copy all the data to the lake, normalise the context and ontology, and have all your data at arm’s length, guaranteeing fast access no matter the complexity of the queries. But operators who have gone down this route tell us the reality is less compelling: “you can’t create a reliable self-contained data repository for something as dynamic as live operations, particularly in a complex domain like oil and gas”.
Certainly not when coupled with an aggressive acquisition strategy – the goalposts are moving all the time. And the first time the quality of the data from the lake is questioned, confidence fades and everything begins to unravel. Nothing replaces the original systems of record for data fidelity.

2. Model the brain: millions of years of human evolution have shown how knowledge is acquired and processed via networks that can scale without boundaries – not by defining those boundaries up front. The combination of cloud and knowledge graph technology enables context-based knowledge to be built quickly and to scale over time. No data is copied; all data remains in the original systems of record, fully featured and rich with metadata. The knowledge graph provides the context and the standardised ontology, so that a top layer can reference objects in a standard way (see the sketch after this list). Read how we used the knowledge graph technology embedded in our Eigen Analytics Platform to radically reduce the time engineers spend verifying the safe depressurisation of blowdown events.
3. Experiment more: As data architects, we can be tempted to solve all problems at once. Far better to invest in building vertically integrated capabilities that each deliver one particular use case. This helps to prove value and utility case by case, which is vital for winning over sceptical clients and keeping the support of project sponsors. Knowledge graphs are quick to build and can be expanded in small increments to support more use cases. See a demo of what we built for an 8,000-well model in just a few days.
4. One Front-End to view it all: Many vendors offer a versatile front-end tool that connects to their data source and manages their typical data types. In today’s world, however, data comes in the form of real-time process data, relational databases, documents, series embedded in Excel sheets, API feeds and many other formats. The problem is that, in general, these front-end tools are tied to one vendor and a limited number of data types. For many years, users of real-time process data relied on OSIsoft’s PI ProcessBook – soon to become obsolete – and have found themselves forced to shift to another solution at considerable cost.
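To make point 2 a little more concrete, here is a minimal sketch of the pattern: the graph stores only context and pointers, and the process data itself is fetched from the system of record on demand. It assumes a Neo4j-style graph queried via the official neo4j Python driver; the labels, relationships, tag names and the fetch_timeseries stub are hypothetical illustrations, not Eigen’s actual schema or API.

```python
# A minimal sketch, assuming a Neo4j-backed knowledge graph. The node labels,
# relationship types and properties are illustrative, not Eigen's actual
# schema, and fetch_timeseries is a hypothetical stub: measurements stay in
# the original historian (the system of record) and are never copied.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://graph.example.com:7687",
                              auth=("reader", "secret"))

def resolve_tag(tag_name: str) -> dict:
    """Ask the graph for a tag's context and where its data actually lives."""
    query = (
        "MATCH (t:Tag {name: $tag})-[:MEASURES]->(e:Equipment)"
        "-[:PART_OF]->(a:Asset) "
        "RETURN t.source_system AS system, t.source_id AS source_id, "
        "e.name AS equipment, a.name AS asset"
    )
    with driver.session() as session:
        record = session.run(query, tag=tag_name).single()
        return record.data() if record else {}

def fetch_timeseries(system: str, source_id: str, start: str, end: str):
    """Hypothetical stub: call the relevant historian's own API here
    (PI Web API, OPC UA, SQL, ...) using the reference the graph returned."""
    raise NotImplementedError(f"connect to {system} for {source_id}")

ctx = resolve_tag("21-PT-1234")  # e.g. a pressure transmitter tag
if ctx:
    print(f"{ctx['equipment']} on {ctx['asset']} -> data held in {ctx['system']}")
```

The graph answers “what is this object and where does its data live”; the source systems keep answering “what are the values”, so data fidelity is never in question.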

This new generation of fast-growing operators, with multiple system vendors across multiple late-life assets, needs system-agnostic front-end tools that can communicate with the data abstraction layer to seamlessly access and combine data from different sources, including real-time data, enable collaboration and co-editing of content, and support a growing, industrial-scale operation.
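As an illustration of what “system-agnostic” can mean in practice, the sketch below shows a front-end coding against a single data-access abstraction, with each source system sitting behind its own adapter. The class and method names are hypothetical and the adapters are stubs, not a specific product’s API.

```python
# Illustrative adapter pattern for a system-agnostic data layer.
# The concrete sources are hypothetical stubs; the point is that the
# front-end only ever sees the DataSource interface.
from abc import ABC, abstractmethod
from datetime import datetime
from typing import Any

class DataSource(ABC):
    """Uniform contract the front-end codes against."""
    @abstractmethod
    def query(self, ref: str, start: datetime, end: datetime) -> list[dict[str, Any]]:
        ...

class HistorianSource(DataSource):
    def query(self, ref, start, end):
        # e.g. call the historian's API here and normalise to [{"t": ..., "v": ...}]
        return []

class RelationalSource(DataSource):
    def query(self, ref, start, end):
        # e.g. run SQL against the maintenance database
        return []

class DocumentSource(DataSource):
    def query(self, ref, start, end):
        # e.g. full-text search over the document store
        return []

def combined_view(sources: dict[str, DataSource], ref: str,
                  start: datetime, end: datetime) -> dict[str, list]:
    """One front-end call fans out to every system and returns one shape of answer."""
    return {name: src.query(ref, start, end) for name, src in sources.items()}

sources = {"historian": HistorianSource(),
           "maintenance": RelationalSource(),
           "documents": DocumentSource()}
print(combined_view(sources, "21-PT-1234",
                    datetime(2024, 1, 1), datetime(2024, 1, 2)))
```

Swapping a vendor then means replacing one adapter, not rebuilding the front-end.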
A word on PowerBI
To fulfil these requirements many operators resort to what feels like a safe bet for IT departments: Microsoft PowerBI.
PowerBI is a great tool when applied to the type of visualisation problems it was designed to solve, i.e. democratising the building and deployment of dashboards that access predefined datasets by visually configuring quite complex queries. At Eigen, we frequently find PowerBI excellent for fairly simple use cases or for building an MVP that we can demo to leadership.
But when it comes to more complex requirements, common in real-time domains like oil and gas, PowerBI falls short: it lacks support for interactive process graphics, point-and-click trending and visual trend analysis, manual data entry, and the configuration of user-defined alerts.
Moreover, valuable data transformations such as sliding aggregates and totalisers are missing, as is the automation of report generation and distribution.
And then there is the issue of scalability: You cannot have multiple users concurrently edit the same PowerBI file because any changes require downloading the file to a Windows-only laptop running the PowerBI desktop app, making the changes and then re-publishing.
Access to large datasets is also restricted: API responses are limited to 100,000 items – just 10,000 rows for a 10-column table – so long-term trend analysis of timestamped datasets can be severely limited. That is just under 14 hours of data sampled every 5 seconds, or just over a year of data sampled every hour. Not very “big data”.
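For reference, the arithmetic behind those figures is a throwaway calculation, assuming the 100,000-item response limit and a 10-column table of timestamped rows:

```python
# Back-of-envelope check on how far a 100,000-item API response stretches
# for a 10-column table of timestamped rows.
ITEM_LIMIT = 100_000
COLUMNS = 10
rows = ITEM_LIMIT // COLUMNS                      # 10,000 rows

for label, step_seconds in [("every 5 seconds", 5), ("every hour", 3600)]:
    span_hours = rows * step_seconds / 3600
    print(f"sampled {label}: {rows:,} rows cover {span_hours:,.1f} hours "
          f"({span_hours / 24:,.1f} days)")
# sampled every 5 seconds: 10,000 rows cover 13.9 hours (0.6 days)
# sampled every hour: 10,000 rows cover 10,000.0 hours (416.7 days)
```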
As always, follow the old adage ‘right tool, right job’!
Fast-growing operators may quite naturally feel overwhelmed by the challenge of integrating diverse asset portfolios, but help is at hand. Technology now offers choices without lock-in and without losing control; copying data into a data lake no longer has to be seen as the only way to ‘liberate’ and access all your data – and indeed it may be less flexible and less performant.
And that’s true whether you’re a supermajor divesting its portfolio or a fast-growing operator on the acquisition trail.
If we can help, please get in touch.
Eigen Ingenuity front-end visualisation and the Eigen Analytics Platform have been developed to fulfil the needs of engineers analysing datasets from multiple data sources and configuring automated workflows across data silos. The underlying knowledge graph provides a standardised way to access all the data, independent of the source system technology.