The other day at dinner, I wasn’t paying much attention to my Diet Coke, reached for it, took a big drink, and almost ended up spitting lemonade across the table. It turns out I had grabbed my son’s glass and taken a healthy swig. It’s not that I don’t like lemonade; it was just unexpected. My mouth was expecting to process Diet Coke, got lemonade instead, and wanted to reject it.
When dealing with unexpected system results, IT staff will often quip, “Garbage in, garbage out.” We’ve all heard that expression, and no one wants to learn they’ve had a “pay-impacting incident” or any other incident that significantly affects employees. Data errors happen, but where do they come from and how do we avoid them? It all starts with making sure the files we load are what we want.
The fact is that most data errors are driven by change or by incorrect data in the files loaded into our HRIS or payroll systems. The key is well-constructed system controls that catch data-driven errors before they can have a significant impact on your end users. Finding the balance between risk and reward can be a challenge, though, and is unique to nearly every organization and to each data feed.
There are some basic things that need to be verified for each inbound data feed. First, we need to ask, “Did the sender send me the correct file?” It’s common for an error to result in a previous file being resent, or in just plain bad data being sent. Ideally, we would have both a control email and a header or footer in the file carrying the totals. If the file originator sends us an email with control totals, we can use that as the basis for our downstream checks. The first check would be to validate that the control totals in the email and in the file footer match.
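That first check can be sketched in a few lines. The trailer layout below (a pipe-delimited `TRAILER|<row count>|<amount>` record) is a hypothetical example, not a standard; your feed’s footer format will differ, but the comparison is the same.

```python
from decimal import Decimal

def footer_matches_email(footer_line, email_row_count, email_total):
    """Parse a trailer record in the (hypothetical) layout
    'TRAILER|<row count>|<amount>' and compare it to the control
    totals that arrived separately in the sender's email."""
    tag, rows, amount = footer_line.strip().split("|")
    if tag != "TRAILER":
        raise ValueError("last line of the file is not a trailer record")
    # Decimal avoids float rounding surprises when comparing money.
    return int(rows) == email_row_count and Decimal(amount) == email_total
```

If the two don’t agree, stop the load right there and go back to the sender; nothing downstream can be trusted.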
We home-school our kids. Occasionally, my oldest daughter will try to teach her younger brother something. As a result, he’ll produce work for me that is based completely on misinformation (there are times I’m not sure she doesn’t teach him incorrectly on purpose). With that in mind, I’d like to take a minute to point out that control totals pulled from the file itself can’t be assumed to be valid. To ensure validity, we need an independent audit of sorts: a report generated at the source, separately from the file, to validate the control totals in the file.
Getting back to our tests, the next question to be asked is, “Does the file contain what the sender expected?” When the file is generated, the export program should keep a running total of key data elements as each row of data is processed. If we use the example of a bonus file, you would expect a control total for the dollars associated with each earnings code, as well as a grand total for the file and the total number of rows. The inbound data processor would then do a preliminary pass to ensure that the data in the file matches the control totals. This may seem like overkill, but I can’t tell you the number of times I’ve seen files that didn’t tie out within themselves for one reason or another.
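Here is a minimal sketch of that preliminary pass for the bonus-file example. The field names (`earnings_code`, `amount`) and the shape of the `control` dict are assumptions for illustration; the point is tallying every row and comparing the result to the sender’s stated totals before anything is loaded.

```python
from collections import defaultdict
from decimal import Decimal

def verify_bonus_file(rows, control):
    """Tally each data row and compare against the sender's control totals.

    `rows` is an iterable of dicts with 'earnings_code' and 'amount' keys
    (e.g. from csv.DictReader). `control` is an assumed structure like:
      {"row_count": 2,
       "grand_total": Decimal("1500.00"),
       "by_code": {"BONUS": Decimal("1000.00"), "SPOT": Decimal("500.00")}}
    Returns a list of discrepancy messages; an empty list means the
    file ties out within itself.
    """
    totals = defaultdict(Decimal)
    row_count = 0
    for row in rows:
        totals[row["earnings_code"]] += Decimal(row["amount"])
        row_count += 1

    problems = []
    if row_count != control["row_count"]:
        problems.append(f"row count {row_count} != control {control['row_count']}")
    grand_total = sum(totals.values(), Decimal("0"))
    if grand_total != control["grand_total"]:
        problems.append(f"grand total {grand_total} != control {control['grand_total']}")
    for code, expected in control["by_code"].items():
        actual = totals.get(code, Decimal("0"))
        if actual != expected:
            problems.append(f"{code}: {actual} != control {expected}")
    return problems
```

Rejecting the whole file on any non-empty result is usually the safer design choice: a file that doesn’t tie out within itself shouldn’t get a second chance to load partially.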
Once you’ve proven that the file is the one the sender intended and that it contains the data they intended it to have, your system can continue with the load. Depending on the type of data being loaded, there are different things that should be done to validate each step of the load.
Once the data has been loaded, there should be a tie-back to the original control report. If the totals don’t match, there is an issue, and the data will need to be backed out or corrected. That’s never any fun, but it can often be avoided if you take these simple steps to ensure you are loading what you think you are.