Every MSP has at least some dirty data in their PSA. Some of it matters, some of it doesn’t, but ultimately dirty data is expensive and you’re better off getting rid of it when you get the chance.
The 1-10-100 Rule says it costs roughly $1 to verify a record up front, $10 to clean it after the fact, and $100 if you do nothing and let the bad record cause problems downstream. In other words, it's cheapest to deal with data before it gets dirty, with a little quality assurance up front. In order to do that, you'll need to understand where the problems come from in the first place.
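As a back-of-the-envelope sketch, here's what the 1-10-100 Rule looks like applied to a batch of questionable PSA records. The per-record dollar figures are the rule's classic illustrative values, not measurements from any particular MSP:

```python
# Illustrative per-record costs from the 1-10-100 Rule
# (not measured figures from any real PSA).
COST_TO_VERIFY = 1     # catch the problem at entry
COST_TO_CLEAN = 10     # fix the record after the fact
COST_OF_FAILURE = 100  # let the bad record cause downstream damage

def total_cost(records: int, strategy: str) -> int:
    """Rough total cost of handling `records` dirty-data candidates."""
    per_record = {
        "prevent": COST_TO_VERIFY,
        "remediate": COST_TO_CLEAN,
        "ignore": COST_OF_FAILURE,
    }[strategy]
    return records * per_record

# 500 questionable records in a PSA:
print(total_cost(500, "prevent"))    # 500
print(total_cost(500, "remediate"))  # 5000
print(total_cost(500, "ignore"))     # 50000
```

Same 500 records, two orders of magnitude of difference depending on when you deal with them.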
This one’s not rocket science. Nobody tells you when a record gets old and outdated. It just gets old and outdated. In a perfect world, your client tells you about staff turnover when it happens. In a perfect world, you record changes to records in a timely fashion. We do not live in a perfect world. The more time passes, the more likely stuff just gets outdated.
Major dirty data type: outdated data
When multiple systems feed into your PSA, it’s easy for fields to be either missing or redundant, just because of different nomenclature. If you did this work by hand, you could probably catch this stuff, but in an automated world, having multiple sources feeding into your PSA is going to create some dirty data.
Major dirty data type: duplicate data
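To make the nomenclature problem concrete, here's a minimal sketch of how the same company can arrive from three source systems under three different names, and how normalizing a key field surfaces the duplicates. The records, field names, and normalization rules here are hypothetical and deliberately naive:

```python
# Hypothetical records from three systems feeding one PSA.
# The normalization below is deliberately naive; real matching
# needs more care (e.g. "incorporated" would trip this version).
from collections import defaultdict

def normalize(name: str) -> str:
    """Collapse common nomenclature differences into one canonical key."""
    key = name.lower().strip()
    for noise in (",", ".", " inc", " llc", " ltd"):
        key = key.replace(noise, "")
    return " ".join(key.split())

records = [
    {"source": "RMM", "company": "Acme, Inc."},
    {"source": "billing", "company": "ACME Inc"},
    {"source": "CRM", "company": "Acme"},
]

# Group records by normalized company name to spot duplicates.
groups = defaultdict(list)
for rec in records:
    groups[normalize(rec["company"])].append(rec["source"])

for key, sources in groups.items():
    if len(sources) > 1:
        print(f"possible duplicate '{key}' fed by: {sources}")
```

A human scanning the list would merge those three on sight; an automated feed happily keeps all of them.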
Mergers & Acquisitions
Merging businesses is even more chaotic. Between absorbing a new customer base, a different staff and culture, and dozens of systems, PSA dirty data is hardly the biggest priority. That’s why M&A activity ends up being a major source of dirty data. Eventually, however, you’ll have to deal with the mess of inconsistent data, and try to figure out what you don’t know.
Major dirty data type: non-standardized data, poor data visibility
Of course, there’s always room for someone entering something incorrectly. In fact, the point of entry is one of the biggest sources of dirty data. Close attention to the data entry process is important here, but until your processes and staff are perfect, dirty data is likely to result from human activity.
Major dirty data type: incorrect or incomplete data
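One way to pay close attention to the point of entry is to validate a record before it ever lands in the PSA. Here's a hedged sketch of that idea; the field names and validation rules are hypothetical, not any particular PSA's schema:

```python
# A sketch of catching dirty data at the point of entry:
# validate a new contact before it lands in the PSA.
# Field names and rules are hypothetical.
import re

def validate_contact(contact: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is clean."""
    problems = []
    if not contact.get("name", "").strip():
        problems.append("missing name")
    email = contact.get("email", "")
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        problems.append(f"bad email: {email!r}")
    return problems

print(validate_contact({"name": "Pat Doe", "email": "pat@client.example"}))  # []
print(validate_contact({"name": " ", "email": "not-an-email"}))
```

A check like this costs the $1 up front instead of the $10 (or $100) later.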
To err is human, to really foul things up requires a computer. If a human error setting up a process isn’t caught up front, and the process is then automated, you could have that error replicated across every record. Worse yet, you can’t always CTRL+Z your way out of it. You could find yourself fixing records individually.
Major dirty data type: mass scale dirty data
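When the bad value is at least identifiable, a scripted sweep beats fixing records one at a time. Here's a sketch of digging out of a mass-scale error where an automated process stamped the wrong billing rate on a batch of records; the record layout and fix logic are hypothetical:

```python
# Hypothetical cleanup after an automated process stamped the
# wrong billing rate across a batch of records.
def fix_record(record: dict, wrong_rate: int, correct_rate: int) -> bool:
    """Repair one record in place; return True if it needed fixing."""
    if record.get("rate") == wrong_rate:
        record["rate"] = correct_rate
        return True
    return False

records = [
    {"id": 1, "rate": 150},   # stamped by the bad automation
    {"id": 2, "rate": 125},   # entered correctly by hand
    {"id": 3, "rate": 150},
]

fixed = sum(fix_record(r, wrong_rate=150, correct_rate=175) for r in records)
print(f"repaired {fixed} of {len(records)} records")
```

The catch, of course, is that this only works when you can tell the automated mistake apart from legitimate values; when you can't, you really are back to fixing records individually.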
Having tidy data makes compliance easier, but when you have to structure your data to conform to multiple different compliance systems, you can lose data, or end up with duplicate records, or worse. If you end up with some data in one system, and complementary data in another system, reconciling all that data into a single coherent record can be a hideous task.
Major dirty data type: insecure or inconsistent data
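The reconciliation headache can be sketched in a few lines: two systems each hold part of the same client record, and merging them means deciding field by field, with disagreements kicked back to a human. The field names and records below are hypothetical:

```python
# Hypothetical partial records for the same client held in two
# compliance systems, merged field by field.
def reconcile(a: dict, b: dict) -> tuple[dict, list[str]]:
    """Merge two partial records; report fields where they disagree."""
    merged, conflicts = {}, []
    for field in sorted(set(a) | set(b)):
        va, vb = a.get(field), b.get(field)
        if va is not None and vb is not None and va != vb:
            conflicts.append(field)  # needs a human decision
            merged[field] = va       # arbitrary tiebreak for the sketch
        else:
            merged[field] = va if va is not None else vb
    return merged, conflicts

system_a = {"client": "Acme", "dpo_email": "dpo@acme.example"}
system_b = {"client": "Acme", "dpo_email": "privacy@acme.example", "region": "EU"}

merged, conflicts = reconcile(system_a, system_b)
print(merged)
print("conflicts:", conflicts)  # conflicts: ['dpo_email']
```

Multiply that by every field, every client, and every system pair, and "hideous" starts to look like the right word.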
There’s no time like the present to start thinking about the dirty data you may have built up over the years. Chances are pretty good that you don’t know what you don’t know.
Sign up for regular updates on how Gradient is going to help MSPs of all sizes tame the Data Monster, in all its insidious forms.