Getting value from data-wrapped products – it takes more than pretty packaging
A colleague recently introduced me to the concept of data wrapping. It describes the augmentation of products with data to make them more valuable (think of how credit card companies tout their automatic fraud protection). The concept was developed by researchers at MIT CISR, and it’s a great way to encapsulate a practice some of us are already familiar with. But how do you make it happen? Too often, organisations decide to follow a technology trend and then fail to account for the original context in which a practice was developed.
Other data monetisation strategies, like selling data outright or using it to optimise internal processes, are the stuff of data transformation efforts from the past decade. But as we bring data teams deeper into the product development fold, organisations can hit many roadblocks.
Whether you’re an organisational leader thinking about data monetisation or a data professional, it’s important to look beyond the dollar signs and get a little practical. Below are some pitfalls to avoid and a deeper look into a successful introduction during a tough data transformation effort.
Read on, and may your data wrapping efforts bear more gifts than unpleasant surprises.
When data efforts focus on the wrong thing
Technology trends often run into problems when the rubber hits the road – data wrapping is no different.
As a concept, data wrapping is great – the authors’ conclusions about the value that comes from enhancing a product with data aren’t in question. But, as we’ve seen with agile and digital transformation efforts, things can go wrong when organisations adopt them in name only. Let’s start by looking a little closer at what we mean by data wrapping.
In their article, the authors describe two things that ‘make data wrapping unique’:
- product owners control the wrap
- data wrapping is highly coupled with a core offering
Both are absolutely correct, but they’re glossed over in favour of data about usefulness and financial returns. And it’s in this second part, the financial returns, where things tend to go wrong. With expansive data estates and quagmires of governance and bureaucracy, many enterprise organisations (and even many scale-ups) aren’t in any position for product owners to achieve either of these two prerequisites. What can happen in these cases is that ‘data wraps’ are built as static, once-and-done artefacts, while maintaining or evolving existing ones, or making new ones, is painfully slow and costly – negating the perceived benefits.
In practical terms, this scenario often plays out as organisational dysfunction, with executives, product leaders, and other ‘non-technical’ members of an organisation at odds with my dear friends in the data team. Product teams are left paralysed, and the data team is blamed for the slowness. At the executive level, the question then becomes “is the juice worth the squeeze?”
It doesn’t have to be so.
Focusing on outcomes, not technology
Life becomes specialised within the expansive technology estates of large enterprises. Thinking about, tending to, moving, and surfacing data becomes an all-encompassing, full-time job. Data can become disconnected from context, and it takes time, effort, and often a mandate to think about the bigger picture. As data technologists, it can be an uphill battle.
But it’s important to always think about how our customers experience data. Take the credit card fraud example:
You’ve just arrived in a foreign land and could use a meal and a stiff drink after a long flight. You get an alert warning you about a suspicious purchase on your card and, with a swipe, you wave it away. Your travelling companion, on the other hand, has their card blocked outright and has to call their credit provider. It’s nice to know that your credit card company has your back, and better still that you could deal with it with a fingertip.
As a consumer, do you even think about the data? Likely, the less you think about it the better. We should also recognise that it’s not just data at work in this example. We’ve described the automation of a hidden business process – fraud risk management (imagine a room full of risk auditors going through bills to check they’re not out of range and calling you up). Data plays only one part in the wrap.
So, data wrapping correctly describes the use of data to enhance a product, but it’s only part of the story. It misses the integration of existing business processes, or the creation of new ones. No small feat for many a lumbering enterprise.
In our example above, two organisations have implemented a fraud-protection data wrap, but one has potentially added to its customer service costs if its algorithm catches too many false positives.
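To make that trade-off concrete, here’s a minimal sketch in Python – the scoring rules, thresholds, and provider behaviours are entirely made up for illustration – of how two providers might wrap the very same fraud signal differently:

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    country: str
    home_country: str

def fraud_score(txn: Transaction) -> float:
    """Toy risk score: foreign purchases and large amounts look riskier."""
    score = 0.0
    if txn.country != txn.home_country:
        score += 0.5
    if txn.amount > 200:
        score += 0.4
    return score

def provider_a(txn: Transaction) -> str:
    # Wrap the signal into the product: let the customer resolve it in-app.
    return "push_dismissible_alert" if fraud_score(txn) >= 0.5 else "approve"

def provider_b(txn: Transaction) -> str:
    # Same signal, blunter process: block the card and force a phone call.
    return "block_card" if fraud_score(txn) >= 0.5 else "approve"

dinner = Transaction(amount=45.0, country="JP", home_country="GB")
print(provider_a(dinner))  # push_dismissible_alert – resolved with a swipe
print(provider_b(dinner))  # block_card – hello, customer service queue
```

Same data, same wrap on paper; the business process around the signal decides whether the customer swipes or queues.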
So, when we say that well-integrated product development and data teams are a requirement, we’re thinking about the happy path here. Let’s take a look at how you might do it.
Product and data teams working together in harmony (yes, it’s possible)
What do product-owner control of the wrap and close coupling between product and data look like in practice? Let’s look at outcomes from a real data transformation effort. In this example, an organisation was struggling across the data front (this may sound familiar to you):
- it was taking weeks to get a new dashboard
- data quality problems were wreaking havoc on operations
- their data was in disparate systems
- they had a huge data team, costing millions
As usual, “something” had to be done immediately, even as a total architecture change was underway at the enterprise scale. Product teams were on permanent hold, waiting for the data transformation effort to complete while also being pushed to create new products faster.
Luckily, one of the major architectural changes was to move to an event-driven architecture. This change forced each product team to think about the data that they were producing and ensure that it adhered to a domain model. It also presented an opportunity to combine technology and process to address some of our issues at the source.
Event-driven architecture is like a party where staff independently handle tasks like greeting guests when they arrive or getting them a drink, so guests don’t need to find the bar and wait in line. Each staff member can look at a guest and react independently (new arrival to be greeted! needs a drink! get them their swag bag!) to complete their tasks. The party feels more at ease.
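In code terms, the party boils down to publishers emitting events and independent subscribers each reacting in their own way. A minimal in-memory sketch (the event names and handlers are illustrative, not anyone’s production stack):

```python
from collections import defaultdict
from typing import Callable

# A tiny in-memory event bus: each handler reacts to events independently.
subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

def subscribe(event_type: str, handler: Callable[[dict], None]) -> None:
    subscribers[event_type].append(handler)

def publish(event_type: str, payload: dict) -> None:
    for handler in subscribers[event_type]:
        handler(payload)  # each "staff member" handles the guest in their own way

subscribe("guest.arrived", lambda e: print(f"Greeting {e['name']}"))
subscribe("guest.arrived", lambda e: print(f"Pouring a drink for {e['name']}"))
subscribe("guest.arrived", lambda e: print(f"Handing {e['name']} a swag bag"))

publish("guest.arrived", {"name": "Alex"})
```

The publisher never needs to know who is listening, which is exactly what forces each product team to be explicit about the data it produces.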
The technology half of the equation was the development of an event testing framework that would send test data to an event stream. Having the facilities to test data at the design stage brings data use to the surface and makes it visible to the organisation. When new data or proposed changes break things, that creates little bits of ongoing friction rather than one big bottleneck that stops everything.
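As a sketch of what design-stage testing can look like, here’s a candidate event validated against an agreed domain schema so that a breaking change surfaces immediately. This uses the jsonschema library for brevity; the schema, field names, and helper are assumptions rather than the team’s actual framework:

```python
# Sketch: validate a proposed event against the domain schema before it
# ever reaches production consumers. The schema and event are illustrative.
from jsonschema import validate, ValidationError

ORDER_PLACED_SCHEMA = {
    "type": "object",
    "required": ["order_id", "customer_id", "total"],
    "properties": {
        "order_id": {"type": "string"},
        "customer_id": {"type": "string"},
        "total": {"type": "number", "minimum": 0},
    },
}

def test_event(event: dict) -> bool:
    try:
        validate(instance=event, schema=ORDER_PLACED_SCHEMA)
        return True
    except ValidationError as err:
        print(f"Breaking change caught at design time: {err.message}")
        return False

# A proposed change that renames a field surfaces as friction now,
# not as a broken dashboard weeks later.
test_event({"order_id": "o-1", "customer": "c-9", "total": 12.5})
```

The real framework sent test data to an event stream rather than validating locally, but the principle is the same: the friction shows up at design time, in small doses.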
The process part of the solution was to ensure that data science and analytics teams were joined up when products were designed or changed. This meant that every product had to answer some basic questions at the design stage – like how success would be measured – and also consider the effects of changes to data. Finally, rather than centrally designing and enforcing a data domain model, we ensured that teams had representatives in ongoing meetings about the evolving domain model.
And because teams were talking to each other about data changes constantly, needs for new data or changes to existing data were wrangled on the spot, not after a team had made difficult-to-reverse decisions.
Giving visibility to data changes is also valuable because work doesn’t simply stop when something goes wrong. People across organisations have a tendency to dig in when they aren’t sure what’s happening. Then there are the other human factors: competing schedules, priorities, and pesky little things like holidays.
With this in mind, we can paint a picture of what good looks like for ownership and coupling of data and products by measuring (see the sketch after this list):
- time from when a product is developed to when the data science & analytics teams can see it
- time from when a new product announces a change that affects the organisation to when the organisation can respond
- days from launch when success measures can be analysed
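One lightweight way to make those lags visible, sketched below – the field names and dates are purely illustrative:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ProductDataCoupling:
    """Illustrative record of the three lags worth measuring per product."""
    launched: date
    visible_to_analytics: date    # when data science & analytics could first see it
    change_announced: date        # a change affecting the wider organisation
    org_responded: date
    success_measures_ready: date  # when launch success could be analysed

    def report(self) -> dict:
        return {
            "days_to_analytics_visibility": (self.visible_to_analytics - self.launched).days,
            "days_to_org_response": (self.org_responded - self.change_announced).days,
            "days_to_success_measures": (self.success_measures_ready - self.launched).days,
        }

# Illustrative dates only: a product whose data was usable on launch day.
p = ProductDataCoupling(
    launched=date(2023, 3, 1),
    visible_to_analytics=date(2023, 3, 1),
    change_announced=date(2023, 2, 1),
    org_responded=date(2023, 2, 3),
    success_measures_ready=date(2023, 3, 1),
)
print(p.report())  # {'days_to_analytics_visibility': 0, 'days_to_org_response': 2, ...}
```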
This team went from waiting weeks to start testing their analytics to using them from the day a product launched, but it required the right architecture, good (bespoke) tooling, and process change.
As I said, it was a bit of luck. Without the move to event-driven architecture (done for data movement efficiency), it’s not clear how we would have developed the tools. The tools, in turn, made the introduction of new processes nearly pain-free.
I’ve also been involved in data transformation efforts where architecture and process change were not significantly aligned, and the catchphrase outcomes couldn’t be achieved. The difference is significant.
If data is your life, always measure what you’re trying to do
Back to data wrapping: remember that when we’re talking about value, we must also think about measures, and the first measure is always whether you can measure at all. We call that measure friction, and it’s always assessable, even if only as a gut feeling – start your discussion here.
Friction assesses the cumulative time, effort, and complexity associated with delivering components in a data platform. Low friction means a repeatable, automated, self-service approach; high friction means pre-planning and no self-service.
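Even a crude rubric makes friction discussable rather than just felt. A sketch, with made-up dimensions and weights – the exact scale matters far less than agreeing on one:

```python
# Sketch: score friction for delivering one data-platform component.
# The dimensions and weights are assumptions for illustration only.
FRICTION_DIMENSIONS = {
    "needs_pre_planning": 3,    # no self-service path exists
    "manual_steps": 2,          # humans in the delivery loop
    "cross_team_approvals": 2,  # governance hand-offs
    "no_automation": 1,         # repeatable, but hand-run every time
}

def friction_score(component: dict[str, bool]) -> int:
    """Higher = more friction; 0 means repeatable, automated, self-service."""
    return sum(weight for dimension, weight in FRICTION_DIMENSIONS.items()
               if component.get(dimension, False))

new_dashboard = {
    "needs_pre_planning": True,
    "manual_steps": True,
    "cross_team_approvals": True,
    "no_automation": False,
}
print(friction_score(new_dashboard))  # 7 – start your discussion here
```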
The risk of trying to apply the data wrapping concept, as with a lot of other technology jargon, is that rather than thinking about the whole group of organisational capabilities behind it, organisations are likely to envision that adding functionality to products is a silver bullet.
Disconnected from context, the friction that can prevent data wraps from providing true value gets buried, so always ensure that you’re thinking about the things that create data wrap friction:
- venturing a guess about what extensions to make requires good cross-organisational communication
- utilising data from across the organisation can be difficult for operational and governance reasons
- making changes is painful without good dependency mapping and knowledge of lineage
- getting data from outside the organisation to combine with your own can be frustrating
- changing or creating business processes has nothing to do with data – and carries the most friction
That’s before it’s even up and running. Now get the thing live and try to measure it. More friction. More costs.
Getting to work – start with the risks
It’s fair to say that any data monetisation effort relies on minimising friction in data movement and usage. Beyond that, data wrapping tends to get into trouble when you ignore the existing or new business processes behind it (as with our fraud protection example). In all cases, getting continuous value from data is a process you can measure up-front as friction – you don’t have to wait until the data gets delivered.
What’s difficult for an organisation to understand is when the total costs outweigh the benefits, because each member only sees one part. Whether you’re on the data, product, or leadership team, make sure that you’re thinking about your whole organisation’s capability and measuring that it’s in place and performing.
If you go into your effort with a good model for the risks involved, then it’s easier to discuss them when they come up, and people are less likely to take them personally.
Remember: data monetisation isn’t just something that you do, it’s a measurable organisational capability. To reap the rewards of data wrapping, make sure your organisation has the right technology and processes in place.