A simple strategy for standardizing metadata can improve decision-making, data governance and data security.

Mobility, cloud and big data all promise to help enterprises increase efficiency and productivity, improve decision-making and lower costs. The laudable goal is to make your business more competitive, but for your IT, legal and compliance teams, these new technologies often lead to increased complexity, loss of control and even increased costs, as massive amounts of data move to an ever-increasing number of endpoints, including mobile devices and third-party hosting services. These challenges can be overcome with a new approach to standardizing information metadata.

If IT doesn't fully understand what data exists and where various types of information are located, it can't ensure that the right people have the right access at the right time, and it certainly can't adequately secure the data against breaches and theft, or delete private information as required by new privacy laws. E-discovery costs can skyrocket as the amount of data that needs to be collected grows. Even business users suffer as the information they need for daily activities and the data they want to use for big data analytics become harder to find and control, leading to lower productivity and redundant effort while undercutting the hoped-for improvements in decision-making.

To maintain control over their burgeoning data stores, organizations need to develop insight across all data, no matter who creates it, where it lives and with whom it's shared. Unfortunately, most companies see this as a hugely expensive and disruptive challenge. In fact, there is a simple and cost-effective way to do it, as long as you're willing to do it over time, which is still far better than not doing it at all.
The strategy is to apply the same metadata standardization typically used on structured databases to all other data across the enterprise, on-premises and in the cloud, including all message types (email, text and SMS messaging, social media, etc.), documents (word processing, spreadsheets, presentations, etc.) and even log files. In some regulated industries, such as financial services, metadata standardization could also be applied to voice communications data, such as recorded conversations and voicemail files.

Let's say you have a master "worker" ID database (e.g., employees and onboarded external personnel). Using this ID to tag every document, message and database record with who created, revised or deleted it would make it possible to relate data back to particular people at every stage of a business process, whether the data makes its way onto cloud storage or travels from mobile device to mobile device. This one step alone could make e-discovery processes more efficient and facilitate data protection and privacy efforts. It would also make it possible to identify the complete "data footprint" of every individual across all data sources (applications, shared services, on-premises, cloud, etc.).

While standardizing metadata makes it easier to find and retrieve data, it also offers significant value for big data analytics initiatives. For example, if you also begin consistently tagging data involving clients and products with standardized client IDs and product IDs, you automatically add analytical value: identifying market demand for which your firm has no product (yet), improving support for employees who contribute to revenue-generating products, determining the relationship between client communications and client investment, and pursuing many other opportunities that may currently be difficult or impossible.
Enriching the data this way, and reducing or eliminating the normalization, reconciliation, mapping and other time- and resource-intensive manual work, would have very positive effects.

Let's look at another important use case. The migration of data beyond the firewall has exacerbated what was already a major challenge for CIOs: distinguishing valuable information from the approximately 75% of enterprise data that is useless debris. If you want to manage your data regardless of where it lives, and if you want to retire data centers and efficiently move data to the cloud, it's vital to identify what's there, what's important and what lacks any value. Applying standardized metadata to all enterprise data can dramatically improve the identification of key data, in conjunction with its business, legal, records, compliance and security value, and begin to shine a light on the firm's dark data.

Evolution, not revolution

The number of tags you need to dramatically improve data management, and to support initiatives such as e-discovery, regulatory compliance, data debris disposal, and cybersecurity and threat response, is not at all insurmountable. As noted above, employee ID, client ID and product ID are a great place to start. The key is to establish enough tags to be useful, but not so many that it becomes burdensome to apply them to all types of data in all locations the firm can influence or control.

You'll also most likely want to apply standardization over time, evolving systems and user behavior rather than disrupting them. One strategy is to evolve with the natural life cycle of IT: each time you change an application, platform or server, require that standardized metadata be embedded. Eventually the use of standardized metadata becomes habitual, systematic and pervasive. Then, once the value is established and you've shown ROI, you can go back and change legacy systems.
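The life-cycle strategy implies a checkpoint: whenever data moves to a new application, platform or server, verify that the standardized tags are present. The sketch below assumes tagged items carry a "_meta" dict holding the standardized IDs; the tag names and the required/recommended split are illustrative choices, not a prescribed policy.

```python
# Assumed convention: each item carries a "_meta" dict with standardized IDs.
REQUIRED_TAGS = {"worker_id"}                    # start small: one mandatory ID
RECOMMENDED_TAGS = {"client_id", "product_id"}   # add value as adoption grows

def migration_check(item: dict) -> tuple[bool, set]:
    """Gate for a migration candidate: does it carry the required tags?
    Returns (passes, missing_recommended_tags)."""
    meta = item.get("_meta", {})
    present = {k for k, v in meta.items() if v is not None}
    passes = REQUIRED_TAGS <= present            # subset test
    return passes, RECOMMENDED_TAGS - present

ok, missing = migration_check({"_meta": {"worker_id": "W-1001"}})
print(ok, sorted(missing))  # True ['client_id', 'product_id']
```

Running a gate like this during routine platform changes is how the tagging habit becomes systematic without a disruptive big-bang retrofit; items that fail the gate are tagged (or flagged for review) in transit.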
With a disciplined approach to metadata standardization, you can prepare your company to take fuller advantage of new mobility, cloud and big data opportunities. More complete knowledge of and control over the information you have will create huge opportunities for action across all business processes, including revenue generation, sustainability, risk and compliance, cybersecurity and e-discovery.

Richard Kessler is executive director and head of Group Information Governance at UBS and a faculty member of the Compliance, Governance and Oversight Counsel (CGOC).