It’s the day before your board reporting package is due, and your finance team is still manually preparing key data points, slide decks, and discussion topics. The team lead reports that their spreadsheets cannot handle the millions of records coming from several systems, and that the template used in prior reporting cycles isn’t yielding the expected results.
The monthly or quarterly rinse-and-repeat of this process wears on you, as the same issues arise each cycle. Your team is exhausted, too, and ready to adjust course: the manual effort is simply too much, and it is delaying timely visibility into your organization’s financial performance.
We have all experienced delays in getting timely information. Maybe you have no real indicator of how the company is performing intra-month and can only identify performance issues in a key business unit at month-close, some 15 to 25 days after the month ends. There’s no room for agility or responsiveness.
Similar delays can impact all aspects of the organization, limiting the effectiveness of your sales team, procurement processes, fleet tracking and maintenance, and employee utilization. The manual spreadsheet processes are simply too burdensome to provide visibility into most of your business functions.
The result? You have no actionable information to operate the business and effect change.
Return to the scenario above, where we learn after the prior month’s close that a key business unit is underperforming: it may take several days to diagnose why. Any question requiring a timely answer rests on the hope that your source system provides decent information out of the box. At that point, we are a full month behind on information that could have been available the next day, within the month.
Numerous organizations find themselves in this manual reporting process far too often, with systems that are not integrated and no centralized location where data can be stored, cleaned, and treated as an asset. Automated data processing can help ease this burden.
Data automation aims to remove manual processes when possible to provide timely and actionable information when and where you need it. Automation of data processing includes running scheduled jobs that extract data from source systems at a specified frequency.
Data quality and cleaning tasks are typically handled in the same process, taking the manual effort out of your reporting. Data across source systems can be combined, and the complicated business logic required for reporting can be built into these data jobs so that the outputs are exactly what you need.
The end goal is to automate when and where possible, freeing up time and resources. Some refer to a set of automated data jobs as a “data pipeline”, reminiscent of sources coming together in a pre-determined and precise manner to move information from point to point.
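As a rough illustration, here is a minimal sketch of one run of such a pipeline in Python, following the extract, transform, load shape described above. The source-system endpoint, credentials, and field names are hypothetical placeholders, not a specific product’s API.

```python
import pandas as pd
import requests

# Hypothetical source-system endpoint; real credentials belong in a secrets manager.
SOURCE_URL = "https://erp.example.com/api/v1/gl_entries"
API_TOKEN = "replace-me"

def extract() -> pd.DataFrame:
    """Pull raw general-ledger entries from the source system."""
    resp = requests.get(
        SOURCE_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=60,
    )
    resp.raise_for_status()
    return pd.DataFrame(resp.json())

def transform(raw: pd.DataFrame) -> pd.DataFrame:
    """Apply the typing and business logic that was previously manual."""
    df = raw.copy()
    df["posted_date"] = pd.to_datetime(df["posted_date"])        # one date format
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")  # force numeric
    return df.dropna(subset=["amount"])

def load(df: pd.DataFrame) -> None:
    """Land the cleaned data where reporting tools can reach it."""
    df.to_parquet("gl_entries_clean.parquet", index=False)

if __name__ == "__main__":
    # A scheduler (cron, an orchestration tool, etc.) invokes this on a set cadence.
    load(transform(extract()))
```

In practice, an orchestration tool adds scheduling, retries, and logging on top, but the point-to-point shape of the pipeline stays the same.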
By leveraging a toolset consisting of a database plus data orchestration (movement of data) and transformation (structuring of data) tools, you can automate data extraction, loading/storage, and transformation to keep your critical reports as fresh as possible.
A typical toolset deployed by SVA Consulting includes Snowflake, Matillion, and Tableau. Snowflake acts as the cloud database that houses the raw, transformed, and structured data used in reporting.
Matillion provides a graphical interface to quickly produce data orchestration and transformation jobs that build out the Snowflake database(s).
Tableau acts as a data visualization and exploration tool, allowing interactive dashboard development and ad-hoc exploration of data.
Each tool can be secured and governed to ensure data are protected at each step.
Committing to the strategic investment of centralizing data across source systems eliminates work a person previously had to do manually, often spending hours in Excel. Examples of manual tasks that are automated in a typical data pipeline build include the following (a brief sketch of the cleaning and deduplication step follows the table):
| MANUAL TASK | AUTOMATED |
| --- | --- |
| Download files from numerous source systems | Connections established to extract data on a schedule |
| Cleaning of fields and values in a non-repeatable manner (date formatting, extra spaces, uncontrolled text or numeric data type fields) | Calculations or logic established to handle vast quantities of data in a repeatable manner |
| Complex lookups, duplicate checks, etc. to ensure data are in a quality state | In-tool deduplication, record update and insert processes based on logic, and handling of modified or deleted records |
| Updating reports, including copying templates and maintaining numerous versions of each file | Version control, and a process that establishes single sources of truth for key data |
| Resource-constrained processes limit the volume of data that can be processed and stored | Cloud database technology scales as you do, providing storage and processing capacity on demand |
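To make the cleaning and deduplication rows concrete, here is a small sketch of the kind of repeatable logic a pipeline encodes, written with pandas. The column names (invoice_id, customer, invoice_date, amount) are illustrative assumptions, not a prescribed schema.

```python
import pandas as pd

def clean_invoices(df: pd.DataFrame) -> pd.DataFrame:
    """A repeatable version of cleanup once done by hand in Excel."""
    out = df.copy()
    # Strip stray whitespace from text fields.
    out["customer"] = out["customer"].str.strip()
    # Force a single, consistent date format; bad values become NaT, not surprises.
    out["invoice_date"] = pd.to_datetime(out["invoice_date"], errors="coerce")
    # Coerce amounts to a numeric type; non-numeric entries surface as NaN.
    out["amount"] = pd.to_numeric(out["amount"], errors="coerce")
    # Deduplicate, keeping the latest version of each invoice.
    out = (
        out.sort_values("invoice_date")
           .drop_duplicates(subset="invoice_id", keep="last")
    )
    return out
```

Because the same function runs on every load, every record is cleaned the same way every time, which is exactly what a manual spreadsheet process cannot guarantee.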
Now that we’ve identified what data automation is and its general benefits, let’s discuss the crucial role it plays specifically in financial processes and reporting.
Instead of finding out after a month’s close that a business unit is underperforming, automated data processing can surface timely information. You could know within the month how you are tracking to budget by projecting your current revenue amounts or leveraging budget-to-date values.
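As a simple illustration of that intra-month check, here is a sketch of a straight-line run-rate projection. The even-accrual assumption and the figures are illustrative only; your own projection logic may differ.

```python
from calendar import monthrange
from datetime import date

def run_rate_projection(mtd_revenue: float, as_of: date) -> float:
    """Project full-month revenue by extrapolating month-to-date actuals.

    Assumes revenue accrues roughly evenly across the month (illustrative only).
    """
    days_in_month = monthrange(as_of.year, as_of.month)[1]
    return mtd_revenue / as_of.day * days_in_month

# Example: $1.2M booked through June 18 of a 30-day month.
projected = run_rate_projection(1_200_000, date(2024, 6, 18))
print(f"Projected month-end revenue: ${projected:,.0f}")  # -> $2,000,000
```

Comparing that projection against budget mid-month is what turns the data pipeline into an early-warning system.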
Opportunities arise to motivate sales team members to continue closing deals, or for operations team members to increase employee utilization to maximize revenue-generating activities.
Many organizations rely on siloed spreadsheet magic that individuals own and that other people have little to no visibility into. As each finance team member maintains their own spreadsheet connected to your financial systems, multiple sources of truth begin to circulate.
These hidden inconsistencies often surface at inopportune moments, and they can be avoided by centralizing data processing logic within a database. Data prepared for reporting are handled in a consistent manner, and the act of centralizing prompts the strategic conversations a finance team needs to agree on the logic to implement.
The end result? Consistent analyses free from ad-hoc human intervention. (Human inputs remain common in automated data processes, but they ideally occur in a structured, known manner.)
In one recent success story, we helped a customer in a service-related industry centralize data from Salesforce, QuickBooks, Paycom, Excel, Holman, and Verizon Reveal into a Snowflake database using Matillion.
We established a data model in which crucial activities and entries from each system could be related to provide next-day information on revenue and costs at any of the company’s operating locations. Instead of waiting 20+ days after a month closes to identify financial results, they can now see in Tableau how locations are performing and adjust and refine as needed.
The finance and accounting teams can also leverage this timely information in their analyses instead of combining data in spreadsheets, saving countless hours each week.
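As a hedged sketch of what such a data model can look like once the sources land in Snowflake, the view below relates revenue and cost records on a shared location key. The table and column names (SALESFORCE_REVENUE, OPERATING_COSTS, location_id) are hypothetical stand-ins, not the customer’s actual schema.

```python
import snowflake.connector  # Snowflake's official Python connector

# Connection details are placeholders; real credentials belong in a secrets manager.
conn = snowflake.connector.connect(
    account="your_account",
    user="your_user",
    password="replace-me",
    warehouse="REPORTING_WH",
    database="ANALYTICS",
    schema="FINANCE",
)

# Aggregate each source first, then join on location and date, so that
# neither side fans out and double-counts the other.
conn.cursor().execute("""
    CREATE OR REPLACE VIEW LOCATION_DAILY_PNL AS
    WITH rev AS (
        SELECT location_id, activity_date, SUM(amount) AS revenue
        FROM SALESFORCE_REVENUE
        GROUP BY location_id, activity_date
    ),
    cost AS (
        SELECT location_id, activity_date, SUM(amount) AS cost
        FROM OPERATING_COSTS
        GROUP BY location_id, activity_date
    )
    SELECT rev.location_id,
           rev.activity_date,
           rev.revenue,
           cost.cost,
           rev.revenue - COALESCE(cost.cost, 0) AS margin
    FROM rev
    LEFT JOIN cost
      ON cost.location_id = rev.location_id
     AND cost.activity_date = rev.activity_date
""")
```

A visualization tool such as Tableau can then read directly from this view, so every dashboard shares one definition of revenue, cost, and margin.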
Do you have current information on whether a pricing strategy, product launch, marketing campaign, incentive compensation plan, or employee is performing as expected? If you typically analyze the financial performance of a marketing campaign or product launch weeks after the fact, you are too late to effect change.
Establishing data architecture that permits timely visualization of key initiatives allows you to respond to conditions that may suggest you alter course.
When looking to add automation to your financial processes, several factors should be reviewed to increase the chances of a successful implementation.
Your organization needs to assess the current health of its financial processes. Knowing where you are starting from and where you aim to go are two important pieces to establish before beginning a data automation effort.
We typically guide clients through an organizational assessment via a Data Strategy engagement to identify the key business questions and metrics that make the organization thrive, focusing on the data that would have the greatest organizational impact.
Once you’ve defined the key elements to include in your data automation build, you will need either an in-house data team or a consulting partner. Selecting a toolset that fits your unique scenario is also key, and it is worth taking the time to evaluate solutions to ensure fit.
Knowing the security rules and regulations specific to your data is another key consideration. Lastly, you will need a champion for the solution who can initiate change within the organization.
Change is part of the data automation process, as many finance team members are comfortable with their spreadsheet processes. Consider having a change management plan to support proper and timely communication along the way. The more lead time people have to adjust to new processes, the better, and transparently including teams in the process typically leads to better adoption and outcomes.
When people see their day-to-day improving with less manual effort, and have a say in the process, it helps garner ownership. Speak openly about the process and what it means for each team member, and provide ample training opportunities on the new database and visualization tools.
Ensure old reports or processes are directly addressed in some way by a new process or visualization, so that teams do not lose information or access but rather gain more timely information.
It is likely that you will encounter barriers when centralizing data and automating processes within your organization. Here are a few we commonly see, along with how we typically help organizations move through them efficiently.
| COMMON BARRIERS TO AUTOMATION | OVERCOMING THESE BARRIERS |
| --- | --- |
| Outdated financial (or other) systems that prevent proper data capture or do not allow data extraction. | The process of automating typically unearths pain points. Identify whether the system still supports you as you need it to, or leverage a partner to migrate to a newer platform. |
| Inconsistent data quality and business logic. | An automation project is a perfect time to identify cleaning and quality rules for your data, both within source systems and in a database. It’s also an opportune time to clarify business logic within the organization. Address this barrier head-on and early on. |
| Scope creep or a sprawling build with little efficiency and focus. | Identify your data strategy upfront, which defines the key business questions, metrics, and data sources that are in scope for a Phase 1 build. Prioritize with executive leadership and begin. Identify a strong champion stakeholder to guide the data team through roadblocks. |
| Low adoption of new processes and visualizations. | Typically, the prioritization process didn’t happen, or a change management program was not in place. If you’re done building and at this point, circle back with executive leadership and your key stakeholders to define the vision, implement training for small groups, and provide regular office hours or check-ins to help adoption proceed. This sometimes requires adjusting what was built to better capture daily needs and use cases. |
Deciding to automate data processing within an organization is a big step, with known risks but high rewards. It takes time and investment up front, through the build, and continually afterward to ensure the organization keeps getting quality outputs from the established automation solutions.
Automating financial processes in particular can have a large impact on the bottom line, as it equips numerous functions within the organization to readily effect change with timely, actionable information.