How do you make multi-speed BI implementation work?

Originally published March 15, 2020. Updated March 24, 2022
Søren Block Olsen
4 min read

Traditional BI implementation typically follows a straight and narrow path:

  • Establish the data you believe to be most important for analyzing KPIs
  • Validate that data
  • Create a data warehouse
  • Begin modeling and visualizing data from there

There’s a glaring problem with this strategy, though, and it’s often not realized until it’s too late. We have many years of experience implementing BI solutions, and we’ve seen companies struggle here again and again.

Have a clear map

Companies map out this process, and then, once presented with the data sets, they see that these are not in fact the right requirements for their BI goals. That’s because the true utility of data requirements can’t be properly evaluated until users can see the types of results they produce. By that point, significant time and resources have already been pumped into the data warehouse design stage.


This makes the quest for the ideal data warehouse setup a seemingly endless process of trial and error. I’ve seen BI implementations take so long that, before any proper analytics objectives were finally reached, company priorities had shifted and all the BI work done up to that point was rendered useless.

We talk about a bimodal BI environment a lot here at TARGIT, but what I’m proposing is a bimodal implementation to kick that off. By that I mean the traditional data warehouse is established while one or more modern deployment options, such as Data Discovery or InMemory, are implemented concurrently.

These modern deployment options get data to a usable state significantly faster than a data warehouse. I’m talking hours, not months. And ultimately, the faster users are able to acquire, model, visualize, and share data, the faster they are able to improve it and leverage real value.


Think of multi-speed implementation as a highway

Think of BI development as a highway. One lane is reserved for the data warehouse, another for data discovery, and perhaps a third for in-memory capabilities. “Vehicles” in the data warehouse lane are traveling at a steady 30 mph. Those in the other lanes are traveling at 60 and 90 mph. These outer lanes are where the data experimentation takes place with a “fail fast” mentality.

This is a multi-speed implementation.


Take, for example, an enterprise retail corporation that needs the structure of a traditional data warehouse back end. Its BI development pipeline extends half a year for new data models. That “slow lane” is fine for monitoring the part of the supply chain that’s hauling goods in shipping containers across the ocean. You don’t need real-time access to data there.

But say there’s a problem somewhere in the supply chain that results in a drop in inventory of a particular item stocked on store shelves. Employees need real-time insight into the state of their stock, whether high, low, or on target. Users can’t wait six months for a report to be developed via the traditional data warehouse lane. This type of “fast lane” BI is necessary for full insight into the entire supply chain.

Overall, the company’s most important KPIs live in the data warehouse lane for consistent and regular monitoring. But departmental processes such as budgeting and forecasting require experimentation on both existing and new data. Multi-speed BI supports all the needs of a company that must run concurrently: traditional, real-time, and everything in between.

How do you make multi-speed BI implementation work?

There are three arms to a successful multi-speed BI implementation and bimodal environment:

One: The tools

As I mentioned above, these are the ad-hoc data discovery and in-memory technologies that enable rapid movement of data to a usable state. These tools allow otherwise out-of-reach data to be explored, mashed up with current data, compared, and visualized in new ways. And in-memory technologies allow for massive amounts of data prototyping in very little time, so new models can be easily investigated.

Because in-memory technology doesn’t impose strict schemas, you can load data quickly, build a model on top, and start experimenting almost instantly. This is the Ferrari to the traditional data warehouse’s long-haul 18-wheeler.

Getting your hands around data is one thing, but getting your head around it is quite another. Data doesn’t do you any good if you aren’t monitoring the right KPIs. BI users must be able to visualize and understand the data that is relevant to them. Getting data into this usable state is the hard part. So the faster users can prototype by acquiring, modeling, visualizing, and sharing data, the faster your team will get to something useful.


Two: The security

I’m talking about data governance. While the key driver in going multi-speed is opening up the highway to work for different tempos of BI, there must always be a sense of order and control. Imagine if actual highways were lawless.

Just as it’s the law for slower cars to keep right and faster cars to pass on the left (in the U.S., anyway), so must data governance be applied. While data discovery tools are rapidly acquiring, modeling, and visualizing data, it’s understood that the data will not be perfect and things will be missed. That’s okay. Data governance serves to sound the siren when something is amiss. Once the data has proven to be valuable, the process can be slowed, the data can be moved through the right channels and cleaned, and ultimately added to the proper data model within the system.

Data governance ensures the right data makes it into the hands of the right people. It’s a technology, but it’s also a strategy. At TARGIT, we often preach the importance of improving data quality with sandbox analytics and general data governance best practices. The fail-fast mentality must be kept in check with the closed loop of the BI lifecycle.

Three: The embrace

Getting the data you need into the system is only the first step in propagating a data-driven environment. There must be a strategy in place to embrace the analytical use cases. Once data is loaded and modeled, it must be shared throughout the organization. To do this, you need a tool that puts data into the hands of decision-makers in every department. Know your users and create a BI system that makes the most sense to them.


Ride or die

This is just the beginning of multi-speed implementation. As more and more companies refuse to accept slow querying and years-long BI implementation projects, the BI platforms that don’t support the speed and agility companies need simply won’t survive. Many of the big vendor names today offer one or the other, often with a major price tag attached.

According to research by BARC, slow query speed was the number one reason for BI platform abandonment in recent years. Companies must ride that multi-speed implementation highway or risk the death of their BI project entirely.

Written by Søren Block Olsen, Director of Marketing & Sales Operations