How data can help tech companies thrive amid economic uncertainty


In many ways, tech companies are victims of their own success. They have become synonymous with the type of innovation that reshapes whole industries as well as our daily routines—all while enjoying an extended stretch of impressive growth. However, falling share prices suggest that investors want tech companies to move beyond growth at all costs. Organizations are now seeking more sustainable growth by generating steady cash flows, pursuing cost reductions, improving productivity, and providing a better customer experience. Better data management is foundational to excelling across all of these levers for sustainable growth.

But therein lies the problem. It might seem counterintuitive, but the very tech companies that frequently use customer data to power their business models or enhance their offerings often struggle with managing their own operational data. In fact, they suffer from many of the same issues as organizations in other industries, including poor-quality data and a lack of governance processes. Harnessing organizational data to support more disciplined growth will require tech companies to address the underlying problems hindering their data management.

Four data-driven levers for sustainable growth

Four data-driven areas could help companies generate growth while maximizing productivity and operational effectiveness.

Transforming business models to achieve steady cash flows

Tech companies face ongoing pressure to generate more stable cash flows. As a result, many are considering shifting to a variety of service models that create a reliable stream of monthly income. This approach is being applied not only to infrastructure, platforms, and software but also to devices such as laptops or printers.

These new service models require higher levels of information sharing between customers, external partners, and internal teams. Engagement would need to shift from onetime transactions (such as buying a piece of hardware) to multiple intraday communications (for example, monitoring cloud usage and scaling up or down based on demand). To support this transition, companies would likely need to improve their data-integration capabilities and aggregate data typically trapped in Excel spreadsheets to provide customers with on-demand access to usage data and other information.

Tech companies can pursue two steps to ease this transition. First, they can establish a set of clear data-integration rules (for example, requiring every new application to expose an API for accessing its data) for internal data-source systems as well as external partners. Some companies go so far as to strictly enforce compliance, declining to do business with suppliers that fail to adhere to these rules. Second, they can move data out of Excel spreadsheets by implementing standardized frameworks for how Excel templates are used and stored within key business processes.
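
As a rough illustration of the first step, the sketch below shows what a minimal usage-data API contract might look like if built with a framework such as FastAPI. The endpoint, field names, and service types are hypothetical, not drawn from any specific company.

```python
# Minimal sketch of a data-integration rule in practice: every new application
# exposes a standard API for its usage data instead of exporting spreadsheets.
# All names and fields are hypothetical.
from datetime import date

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Usage Data API")


class UsageRecord(BaseModel):
    customer_id: str
    service: str        # e.g., "compute", "storage", "print-as-a-service"
    usage_date: date
    quantity: float     # units consumed (vCPU-hours, GB, pages, ...)
    unit: str


# In-memory stand-in for the application's own data store.
_USAGE: list[UsageRecord] = []


@app.get("/v1/usage/{customer_id}", response_model=list[UsageRecord])
def get_usage(customer_id: str, start: date, end: date) -> list[UsageRecord]:
    """Return a customer's usage records for a date range, on demand."""
    return [
        r for r in _USAGE
        if r.customer_id == customer_id and start <= r.usage_date <= end
    ]
```

A shared contract like this is what allows customers and internal teams to move from one-time data handoffs to the on-demand, intraday access described above.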

Proactively managing internal costs

With rising inflation and stretched supply chains among the top factors contributing to margin pressure, tech companies are seeking alternative levers to reduce their costs. However, organizations often lack clarity on which data solutions can optimize costs across the value chain.

To create a road map, tech companies can identify the most critical data use cases for internal operations. They can then build momentum and buy-in for their data initiatives by calculating the value at stake and quantifying how much data solutions could affect the cost baseline. For example, an organization could simulate the most efficient configuration for conducting R&D or assess data centers to decrease energy use (for example, by using digital twins for key assets). Data-driven insights can also help an organization determine whether it is overpaying a certain supplier, predict business outcomes more accurately, and optimize resource allocation.
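
As a simple illustration of how such a road map might be prioritized, the sketch below ranks candidate data use cases by their estimated value at stake against a cost baseline. All figures, areas, and use-case names are invented for illustration only.

```python
# Toy illustration: rank candidate data use cases by estimated value at stake
# against the current cost baseline. All numbers and names are hypothetical.
COST_BASELINE = {
    "logistics": 40_000_000,
    "data_centers": 25_000_000,
    "procurement": 60_000_000,
}

USE_CASES = [
    {"name": "digital_twin_energy_optimization", "area": "data_centers", "expected_savings_pct": 0.08},
    {"name": "supplier_price_benchmarking", "area": "procurement", "expected_savings_pct": 0.03},
    {"name": "freight_route_simulation", "area": "logistics", "expected_savings_pct": 0.05},
]


def value_at_stake(use_case: dict) -> float:
    """Estimated annual savings if the use case delivers as expected."""
    return COST_BASELINE[use_case["area"]] * use_case["expected_savings_pct"]


# Prioritize the use cases with the largest estimated impact on the baseline.
road_map = sorted(USE_CASES, key=value_at_stake, reverse=True)
for uc in road_map:
    print(uc["name"], round(value_at_stake(uc)))
```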

One European hardware manufacturer assessed the value that data solutions could generate and uncovered more than $250 million in annual savings within its supply chain. With these data solutions, the company was able to better predict outsourcing capacity and inventory levels, improving the accuracy of its forecasts by more than 30 percent and reducing inventory levels by 2 to 3 percent. Business leaders used the savings to finance a multiyear data transformation program to completely digitalize the supply chain network.

Raising productivity

Technology manufacturers are identifying process bottlenecks and activities that require excessive manual effort and are exploring the use of robotic process automation (RPA) to free up capacity. These solutions have historically been limited by the accuracy of detailed process-level data.

Companies often do not consider data to be one of the most important enablers for RPA and digitalization. Some tech companies have invested millions in automation solutions that failed to make much of an impact because of incomplete or low-quality data. With proper data management, companies can automate workflows (for example, enterprise-resource-planning [ERP] data entries) to increase efficiency and accuracy while redirecting employees to higher-value tasks.
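
The sketch below illustrates the general point in miniature; it is not tied to any specific RPA product. An automated workflow posts ERP entries only when the underlying records pass basic data quality checks and routes everything else to a human review queue. The record fields and rules are hypothetical.

```python
# Illustrative sketch: automation is only as good as the data feeding it, so
# records are gated on simple quality checks before any automated ERP posting.
from dataclasses import dataclass


@dataclass
class InvoiceRecord:
    invoice_id: str
    supplier_id: str
    amount: float
    currency: str


VALID_CURRENCIES = {"USD", "EUR", "GBP"}


def is_clean(record: InvoiceRecord) -> bool:
    """Minimal quality gate; real rules would come from the data owners."""
    return (
        bool(record.invoice_id)
        and bool(record.supplier_id)
        and record.amount > 0
        and record.currency in VALID_CURRENCIES
    )


def process(records: list[InvoiceRecord]) -> tuple[list[InvoiceRecord], list[InvoiceRecord]]:
    """Split records into auto-postable entries and items needing human review."""
    auto, review = [], []
    for record in records:
        (auto if is_clean(record) else review).append(record)
    return auto, review
```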

Improving customer experience through better data security and privacy

In the past three years, a handful of North American technology companies were fined a total of more than $5 billion due to noncompliance with data regulations and experienced more than 100 data breaches.1 In addition, some companies took extended periods of time to provide insight into the size of the breach and the data affected. For example, one organization spent weeks on a manual exercise to determine what types of sensitive information it stored.

These penalties and breaches can take a toll on a company’s reputation and customer base. A McKinsey survey found that nearly half of respondents, when making a purchase with a company, will frequently consider another brand if they are unsure how their data will be used (“Why digital trust truly matters,” McKinsey, September 12, 2022).

One large cloud software company created an internal data privacy framework and technology foundation that could monitor what, where, and how data was being used and flag misconduct automatically. In the event of future leakage, the company would be able to pinpoint in just hours what data was affected. In addition, this framework allowed data owners to be more confident in sharing information that could enable the organization to pursue value-generating solutions—such as a cross-selling engine across multiple business units and product lines.
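
A minimal sketch of the monitoring idea follows, with hypothetical data sets, users, and purposes: every access to sensitive data is logged with who, what, and why, so out-of-policy use can be flagged and the scope of a leak established quickly.

```python
# Sketch of purpose-based access monitoring. Data set names, purposes, and the
# flagging rule are hypothetical examples, not a specific company's framework.
from dataclasses import dataclass
from datetime import datetime, timezone

# Purposes each sensitive data set may be used for (set by data owners).
ALLOWED_PURPOSES = {
    "customer_pii": {"billing", "support"},
    "usage_telemetry": {"capacity_planning", "product_analytics"},
}


@dataclass
class AccessEvent:
    user: str
    dataset: str
    purpose: str
    timestamp: datetime


ACCESS_LOG: list[AccessEvent] = []


def record_access(user: str, dataset: str, purpose: str) -> bool:
    """Log the access and return False if it falls outside allowed purposes."""
    ACCESS_LOG.append(AccessEvent(user, dataset, purpose, datetime.now(timezone.utc)))
    return purpose in ALLOWED_PURPOSES.get(dataset, set())


def affected_datasets(suspect_user: str) -> set[str]:
    """In the event of a leak, quickly list what data a given user touched."""
    return {e.dataset for e in ACCESS_LOG if e.user == suspect_user}
```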

How to elevate data management

To capture value from the levers noted above, tech companies could strive to become data-driven organizations. A focus on three common areas could help elevate their data management capabilities.

Create a central repository of internal data and curated data products that increases availability without limiting speed

The tech industry has been a wellspring of M&A, yet serial acquirers have often failed to prioritize harmonization of internal data such as usage, billing, and servicing records. In the absence of a common approach, technically skilled teams with an entrepreneurial culture have taken matters into their own hands (for example, by building their own data ecosystems), leaving organizations with disparate and inconsistent sources of internal data.

In contrast, some tech companies have attempted to address this challenge centrally by investing millions of dollars in data lakes that ultimately failed to improve data access for individual business units or functions due to slow time to market, dependencies among data assets, and costs.

Tech companies could create an ecosystem that takes the best of both approaches. Raw data is ingested from priority internal sources only (such as customer-relationship-management [CRM] and ERP systems) into a centralized data lake that is accessible by internal consumers. For example, a hardware manufacturer implemented a central platform that ingested more than 300 sources in the first year. A team of data engineers with deep technical expertise is tasked with building and maintaining reliable data pipelines. With ready access to data, business teams now have the potential to reduce time to market for solutions by 50 percent.
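
The sketch below illustrates this ingestion pattern in simplified form: a small, reusable pipeline pulls raw data from priority source systems and lands it in the lake with basic lineage metadata. The source names, paths, and fields are hypothetical placeholders, not a reference implementation.

```python
# Simplified sketch of central ingestion into a raw zone with lineage tags.
import json
from datetime import datetime, timezone
from pathlib import Path
from typing import Callable, Iterable

LAKE_ROOT = Path("lake/raw")  # hypothetical lake location


def ingest(source_name: str, extract: Callable[[], Iterable[dict]]) -> Path:
    """Extract records from one source and write them to the raw zone."""
    run_ts = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    target = LAKE_ROOT / source_name / f"{run_ts}.jsonl"
    target.parent.mkdir(parents=True, exist_ok=True)
    with target.open("w") as f:
        for record in extract():
            # Attach minimal lineage so consumers can trace a record's origin.
            f.write(json.dumps({"_source": source_name, "_ingested_at": run_ts, **record}) + "\n")
    return target


# Each priority source registers its own extractor; two stubs shown here.
SOURCES: dict[str, Callable[[], Iterable[dict]]] = {
    "crm_accounts": lambda: [{"account_id": "A-1", "segment": "enterprise"}],
    "erp_orders": lambda: [{"order_id": "O-9", "amount": 1200.0}],
}

if __name__ == "__main__":
    for name, extractor in SOURCES.items():
        print("wrote", ingest(name, extractor))
```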

At the same time, companies can construct data products,3 which would be owned and built by teams within business units using best practices and technologies defined for the whole organization by a central data team. Teams across the company can then use these data products for their particular use cases. Data would be available through APIs, and data governance could be built into the design. One telco operator created a product that provides comprehensive data on cellular-network equipment to support investment decisions, scenario planning, and network optimization. The company estimates that the use of the data product across 150 use cases could generate hundreds of millions of dollars in cost savings and additional revenues in its first three years.
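
One simplified way to picture a data product, using purely illustrative names: the owning domain team declares the schema, owner, and approved consumers up front, so governance travels with the product rather than being bolted on afterward.

```python
# Hypothetical sketch of a data product definition with governance by design.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class DataProduct:
    name: str
    owner_team: str
    schema: dict[str, str]                # column -> type, agreed enterprise-wide
    allowed_consumers: set[str] = field(default_factory=set)

    def read(self, consumer: str, rows: list[dict]) -> list[dict]:
        """Serve only schema-conformant columns, and only to approved consumers."""
        if consumer not in self.allowed_consumers:
            raise PermissionError(f"{consumer} is not approved for {self.name}")
        return [{k: r.get(k) for k in self.schema} for r in rows]


# Illustrative product loosely inspired by the telco example above.
network_equipment = DataProduct(
    name="cellular_network_equipment",
    owner_team="network_domain",
    schema={"site_id": "str", "equipment_type": "str", "utilization_pct": "float"},
    allowed_consumers={"capex_planning", "network_optimization"},
)
```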

Establish a data governance program

Many tech companies have not implemented an enterprise data governance program. Without systematic data governance in place, data security becomes a siloed effort handled by different teams. Compared with more regulated industries such as banking, tech companies are several years behind in implementing advanced data governance processes and tools, such as data ownership, traceability, and catalogs.

To address these shortcomings, tech companies could consider launching or accelerating their data governance initiatives to cover their critical data assets first. This effort involves defining data domains and roles (including data owners) while working to identify the most important data elements for each domain. For the governance effort to be truly effective and able to scale, companies will want to consider deploying digital tools to automate data governance and embed solutions into end-to-end processes.
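
As an illustration, this governance scaffolding could be captured in a machine-readable catalog that automation tools consume; the domains, owners, and critical data elements below are hypothetical examples.

```python
# Hypothetical governance catalog: domains, accountable owners, and the
# critical data elements identified for each domain.
GOVERNANCE_CATALOG = {
    "customer": {
        "data_owner": "head_of_sales_operations",
        "critical_data_elements": ["customer_id", "billing_address", "contract_tier"],
        "sensitivity": "high",
    },
    "supply_chain": {
        "data_owner": "vp_supply_chain",
        "critical_data_elements": ["sku", "supplier_id", "lead_time_days"],
        "sensitivity": "medium",
    },
}


def owner_of(element: str) -> str | None:
    """Trace a data element back to its accountable data owner."""
    for domain in GOVERNANCE_CATALOG.values():
        if element in domain["critical_data_elements"]:
            return domain["data_owner"]
    return None
```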

One financial institution took an aggressive approach to freeing up data by using a data governance framework (“Designing data governance that delivers value,” McKinsey, June 26, 2020). Leaders first determined domains and data elements and then agreed on the sensitivity of each data set. This measure gave all employees access to the roughly 60 percent of enterprise data deemed low risk. The team then built solutions to monitor how data was being used and to flag misconduct automatically. Every team had to use a common set of libraries, and code that did not adhere to the standards could not be pushed into production. By scaling data governance end to end, the company reduced time to market for data solutions by 40 percent, reduced risk, and expanded its innovation opportunities.

Launch ‘data quality at scale’ programs

Organizations often grapple with low-quality data. In a recent McKinsey survey, about 60 percent of tech executives highlighted poor data quality as the main roadblock to scaling data solutions.5 This can have wide-ranging implications. For example, according to one report, data scientists spend 45 percent of their time, on average, preparing data for use rather than building and tuning analytics models, a task they are uniquely trained—and highly paid—to perform.6

To scale data quality efforts, tech companies could prioritize three actions.

  1. Organizations can use AI-based solutions to automatically detect data quality issues and propose fixes; several organizations have already applied this approach successfully (a simple sketch follows this list). A medtech company used data-source triangulation and custom logic trees to detect and remediate quality issues in freight data. These tools increased the accuracy of container weight and dimension measurements from 60 percent to 90 percent and captured more than $5 million in annual savings.
  2. Companies can implement an incentive model to rally the entire workforce around data quality. One tech company incorporated a data quality score (DQS) for the most important domains as part of its scorecard, and a targeted set of employees received a bonus based on their contribution to DQS improvement. Thanks to this complete alignment, data quality became a priority for the organization, and its DQS rose significantly.
  3. Organizations can enforce quality controls where data is generated to prevent errors at the outset. This approach has a precedent: many tech companies already have such controls in customer-facing applications (such as ensuring that a zip code contains five digits) but do not enforce the same checks in internal applications. One effective fix is providing a set of standardized controls to the internal IT development teams responsible for applications that generate data.
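
The toy sketch below combines the first and third actions: a rule-based check that flags suspect freight records by triangulating container weights across two sources, plus an at-source control that rejects malformed records (such as an invalid zip code) before they enter the system. Thresholds and field names are hypothetical.

```python
# Toy data quality sketch: cross-source anomaly detection plus at-source validation.
import re


def flag_weight_mismatches(carrier_rows: list[dict], erp_rows: list[dict],
                           tolerance: float = 0.10) -> list[str]:
    """Flag containers whose carrier-reported and ERP weights disagree by more than 10%."""
    erp_weights = {r["container_id"]: r["weight_kg"] for r in erp_rows}
    flagged = []
    for row in carrier_rows:
        erp_w = erp_weights.get(row["container_id"])
        if erp_w and abs(row["weight_kg"] - erp_w) / erp_w > tolerance:
            flagged.append(row["container_id"])
    return flagged


ZIP_PATTERN = re.compile(r"^\d{5}$")


def validate_at_source(record: dict) -> None:
    """At-source control: reject records with malformed zip codes outright."""
    zip_code = str(record.get("zip_code") or "")
    if not ZIP_PATTERN.match(zip_code):
        raise ValueError(f"Invalid zip code: {zip_code!r}")
```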

Tech companies are at a turning point. Investing in internal data initiatives offers a significant opportunity to maintain consistent growth, adopt new business models that support cash flow, improve efficiency, and provide the level of data security that customers demand.
