How do we find the balance between drowning in data and operating in a data driven world?

In the last of four exclusive essays for service leaders, authored by Field Service News Editor-in-Chief Kris Oldland, we look at how to navigate an ever-increasing ocean of data.

 

As we move into a new era of field service operations, data is undeniably set to become the single most important tool of modern service operations.

 

My position on this particular line of thinking is very straightforward. Put simply, I am of the opinion that those who embrace data-driven field service operations will flourish in the new world.

 

As for those that don’t? I believe that no matter how robust their processes are, no matter the strength of their brand and the loyalty of their customer base, without an effective data strategy, they will eventually be overtaken by more forward-looking competitors.

 

To put it as bluntly as I can, data will be at the heart of success in our industry – essentially, it already is. Our industry’s journey towards becoming a data-led sector began some time ago, and for those organisations that have yet to realise this, time is now starting to run out.

 

Yet there remains a critical paradox that has ensnared many field service organisations: while they see the vital importance of data to the future of their service operations, in the face of the tsunami of new data being created each year they cannot even begin to comprehend how to draw meaningful, actionable insight from it.

 

If we look at the amount of data created each year, an estimated 97 zettabytes (ZB) will be created this year, compared with just 6.5 ZB ten years ago in 2012. Further, estimates suggest that the amount of data created in 2022 will almost double by 2025, to 181 ZB (source: https://www.statista.com/statistics/871513/worldwide-data-created/).

 

Current data growth is genuinely exponential and carries the power to transform our industry. Yet, with so much data being generated, it is little wonder that many field service leaders find themselves frozen like rabbits in the headlights, with no idea what to do with it all.
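For those who like to sanity-check such figures, a rough back-of-the-envelope calculation – sketched here in Python purely for illustration, using only the Statista estimates quoted above – shows what that growth looks like as an annual rate:

```python
# Rough check of the Statista estimates cited above:
# 6.5 ZB created in 2012, ~97 ZB in 2022, ~181 ZB projected for 2025.
created_zb = {2012: 6.5, 2022: 97, 2025: 181}  # zettabytes created per year

def cagr(start_year: int, end_year: int) -> float:
    """Compound annual growth rate between two of the cited data points."""
    years = end_year - start_year
    return (created_zb[end_year] / created_zb[start_year]) ** (1 / years) - 1

print(f"2012-2022: {cagr(2012, 2022):.0%} per year")  # roughly 31% a year
print(f"2022-2025: {cagr(2022, 2025):.0%} per year")  # roughly 23% a year
```

Compound growth of around 30 per cent a year for a decade is exponential by any reasonable definition.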

 

I suggest the starting point for this discussion should be to identify how data offers value.

The common phrase often dropped into conversations when this topic comes up is that data is the new oil. At times, both in my writing and when discussing this on stage at various conferences, I have pushed back a little on this metaphor. My reason for doing so was that, in and of itself, data has no inherent value. It is just numbers on a screen – and, increasingly, an incomprehensible mass of numbers on a screen that can quickly overwhelm us.

 

However, the more I gave this position thought, the more I realised that, while it is casually thrown out as a smart soundbite, the analogy between data and oil actually holds up the more closely we examine it.


"Like oil, raw data doesn’t hold any value. It requires processing and refining. Data, in and of itself, offers us very little. However, when we can begin to interrogate that data effectively, we can draw insights, and we begin to see some value."

The primary purpose of the statement is, of course, to highlight the value of data. We naturally think of oil as a valuable commodity, and those who put forward the metaphor are generally trying to alert us to how valuable data truly is. Equally, for the last 100 years our global economy has been built around oil, and data certainly has the capacity to sit at the heart of the worldwide economy and play a similarly central role in our industry.

 

However, for me, the lightbulb moment was when I stopped and reflected on my initial pushback that data in and of itself held no value. Like oil, raw data doesn’t hold any value. It requires processing and refining. Data, in and of itself, offers us very little. However, when we can begin to interrogate that data effectively, we can draw insights, and we begin to see some value.

 

However, it is at the next step that we start to see a layer of value that aligns with grandiose statements such as those I placed at the start of this essay. That next step is to turn those insights into actionable, data-driven strategies at both the micro and macro level within service operations and beyond.

 

So the question that sits at the heart of the digital transformation we as an industry are going through can be boiled down to the following challenges:

 

With so much data being created, how do we define what data will potentially hold the greatest value for our business moving forward?

 

How can we extract insights that can lead to actionable data-driven strategies from the vast amounts of raw data we now generate?

 

How do we integrate these new data-driven actionable strategies into our processes while still managing the mission-critical nature of service operations with limited disruption?

 

I would put forward that when faced with what is, on its surface, a highly complex challenge – finding the aspects of data that can be refined into valuable, actionable insights while facing a deluge of data – the approach must be reverse engineered.

 

Finding a needle in a haystack is challenging if you do not know what the needle is made of or what it looks like. However, when we change our task to searching for a metallic object amongst a large cluster of organic material, we can apply the right tools and complete the task much more efficiently.

 

Similarly, if we are to work back from our end objective, we can identify the type of data we need.

 

Let us take a simple hypothetical example of a train operator who wants to minimise delayed trains. This is their top-line objective.

 

One of the most common failures that they face is automatic door failure. So better diagnosing this potential fault ahead of failure becomes a secondary objective within that top-line objective.

 

Across their fleet of trains, sensors on the automated door mechanisms record how quickly each door opens and closes. The train operator can now interrogate this specific data set across the fleet to establish the standard parameters for door opening and closing times.
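To make that idea concrete, here is a minimal sketch of how such standard parameters might be derived. The readings and the three-sigma control limit are hypothetical assumptions, there purely to illustrate the principle:

```python
from statistics import mean, stdev

def baseline_parameters(cycle_times_s: list[float]) -> tuple[float, float]:
    """Derive 'standard parameters' for door open/close cycles from
    fleet-wide sensor readings (cycle times in seconds).

    Returns the fleet mean and an upper control limit; a door whose cycle
    time drifts above the limit is flagged as outside standard parameters.
    The three-sigma limit is an illustrative choice, not a prescription.
    """
    mu = mean(cycle_times_s)
    return mu, mu + 3 * stdev(cycle_times_s)

# Hypothetical fleet readings, in seconds
fleet_readings = [3.1, 3.0, 3.2, 2.9, 3.1, 3.3, 3.0, 3.2]
mu, upper_limit = baseline_parameters(fleet_readings)
print(f"fleet mean: {mu:.2f}s, flag doors slower than {upper_limit:.2f}s")
```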

 

If they were to map this data set against recorded mechanism failures, they would now be able to predict how soon after falling outside the standard parameters the automated closing mechanism will fail.

 

This simple interrogation of two data sets allows them to generate a predicted mean time to failure on a common fault that is a barrier to their top-line objective.
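A sketch of that mapping might look something like the following. It assumes the operator holds historical records pairing the date a door first drifted outside its standard parameters with the date the mechanism eventually failed; the record structure and figures are invented for illustration:

```python
from datetime import date
from statistics import mean

# Hypothetical historical records: when a door first drifted outside the
# standard parameters, and when its closing mechanism actually failed.
history = [
    {"out_of_params": date(2021, 3, 1), "failed": date(2021, 3, 19)},
    {"out_of_params": date(2021, 6, 10), "failed": date(2021, 7, 2)},
    {"out_of_params": date(2021, 9, 5), "failed": date(2021, 9, 26)},
]

def predicted_time_to_failure_days(records) -> float:
    """Average days between drifting outside parameters and actual failure."""
    return mean((r["failed"] - r["out_of_params"]).days for r in records)

print(f"Predicted window: roughly {predicted_time_to_failure_days(history):.0f} "
      "days from drift to failure")
```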

 

Great. The train operator now has a clear window of opportunity to bring the train out of operation at a convenient time so that the door mechanism can be serviced before it fails. This shifts their workflow towards predictive maintenance rather than the more disruptive break-fix model, nudging the needle in the right direction for their top-line objective.

 

But let’s expand a little further on this hypothetical example.

 

To keep the example simple, let us work on the principle that there are two common causes of failure – one electronic and one mechanical. We can explore the data further for each of these issues.

"I am sure most service leaders reading this essay could picture at least one of their engineers who would be able to look at the door closing slightly slower than normal, perhaps imperceptibly slower to our layman’s eyes, and knowing not only that the door was beginning to fail, but also the likely cause of that failure..."

Let’s say that when the failure is mechanical, the delay is observed throughout the whole door-opening transition (i.e. the entire process is slightly slower). However, when the failure is electronic, the delay is front-loaded: there is a momentary pause where nothing happens, and then, after that pause, the door opens at its standard rate.
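Translated into a simple rule of thumb, those two signatures could be separated roughly as follows. The timing fields and thresholds are, again, purely illustrative assumptions:

```python
def likely_fault(startup_delay_s: float, travel_time_s: float,
                 normal_travel_time_s: float) -> str:
    """Classify a slow door cycle by the shape of the delay described above.

    A front-loaded pause before the door starts moving, followed by travel
    at the normal rate, points to the electronic fault; a uniformly slower
    travel with no initial pause points to the mechanical fault. The 0.5s
    pause and 10% slowdown thresholds are illustrative only.
    """
    if startup_delay_s > 0.5 and travel_time_s <= normal_travel_time_s * 1.1:
        return "electronic"
    if travel_time_s > normal_travel_time_s * 1.1:
        return "mechanical"
    return "within parameters"

print(likely_fault(startup_delay_s=0.9, travel_time_s=3.0, normal_travel_time_s=3.0))
# -> electronic
print(likely_fault(startup_delay_s=0.1, travel_time_s=3.8, normal_travel_time_s=3.0))
# -> mechanical
```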

 

Now, for the sake of the example, let us assume that the electronic fault is a straightforward fix, perhaps a firmware update that can be completed within 15 minutes, while the mechanical fix is a more complex undertaking that requires the door to be removed and its bearings replaced. That job requires two engineers to physically remove the door and takes an average of an hour and a half.

 

By interrogating the data further to gain an even more detailed understanding of the problem, the train operator can now make a further strategic decision. They know from the sensor data on the opening and closing speed of the door that the automated closing of the door will fail within a set period.

 

By applying an additional degree of operational knowledge, they can now also identify, with reasonable probability, whether the fault is likely electronic or mechanical. If it is the former, the maintenance can be completed within 15 minutes and undertaken while the train is in the station at a terminus point, as there is a 30-minute turnaround before the train needs to depart in the other direction.

 

Strategically, the train operator can make a decision based on the insight derived from the data, which significantly reduces the chances that the train will need to be removed from service to resolve this future fault. In turn, this reduces potential operational strain on the fleet, making it easier to achieve their top-line objective.
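Pulled together, the strategic decision described above amounts to a very small piece of logic. The repair durations and turnaround window in the sketch below simply restate the assumptions of this hypothetical example:

```python
# Hypothetical repair durations and turnaround window from the example above.
REPAIR_MINUTES = {"electronic": 15, "mechanical": 90}
TERMINUS_TURNAROUND_MINUTES = 30

def maintenance_plan(likely_fault_type: str) -> str:
    """Decide where the predictive repair can be scheduled.

    If the expected repair fits within the terminus turnaround, the fix can
    happen while the train stays in service; otherwise the train is planned
    out of service at a convenient time ahead of the predicted failure.
    """
    if REPAIR_MINUTES[likely_fault_type] <= TERMINUS_TURNAROUND_MINUTES:
        return "repair during terminus turnaround (train stays in service)"
    return "schedule the train out of service ahead of the predicted failure"

print(maintenance_plan("electronic"))   # fits the 30-minute turnaround
print(maintenance_plan("mechanical"))   # needs a planned withdrawal
```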

 

Now, this is, of course, a hugely simplified example. Still, it illustrates how working back systematically from the top-line objective, through the secondary goals that enable it, can give us a route towards identifying which data to search for in the vast data lakes we now own.

 

With this in mind, I believe that for effective data-driven service operations, we must have insight from both sides of the aisle.

 

We need input from our engineers to understand what the common things to look for are.

 

In our example above, I am sure most service leaders reading this essay could picture at least one of their engineers who would be able to look at the door closing slightly slower than normal, perhaps imperceptibly slower to our layman’s eyes, and know not only that the door was beginning to fail, but also the likely cause of that failure (or, of course, a similar scenario with the assets within your install base).

 

That level of deep tribal knowledge is, as we all know, hugely valuable.

 

However, if we can take that level of knowledge and combine it with our data teams, who can interrogate our data to find the identifiers and markers that allow that knowledge to be applied at scale in a processed, predictive manner, then both the knowledge and the data become genuinely invaluable.

 

As with all the essays in this series, I shall leave you with a series of questions for your reflection on your own organisation.

 

Further questions for your consideration on this topic:

 

  • Do you clearly understand what data is valuable to you in terms of service operations?
  • How well does your CDO/CTO/CIO (or equivalent) understand the challenges and aims of the service operation within your business? How well do you understand your organisation’s approach to data management?
  • Do you have the technology and systems in place to truly take advantage of digital transformation? If not, do you have an understanding of what is missing?
  • Can you think of a simplified example like the one in this essay where one piece of data, if better understood and married with conventional operational data, could help you achieve a top-line business objective?

Do you want to know more? 

 

For a limited time, the white paper this feature is taken from will be available on our forever-free subscription tier, FSN FREE, as well as to all FSN PRO subscribers.
 
 

If you are already a subscriber, you can access the report instantly via the ‘read now’ button below. If the button shows ‘Join FSN FREE’, please log in and refresh the page.

 

If you are yet to subscribe simply click the button below and complete the brief registration form to subscribe and you will get instant access to this report plus a selection of premium resources each month completely free. 
