Clarabridge Engages Customer Experience


Analysts have been talking and writing about a “360-degree” view of the customer for years. Our own benchmark research into customer relationship management shows that only 37 percent of organizations are able to produce analysis and reports that yield such a comprehensive view. Other research into next-generation customer analytics reveals that the main issue in this area for nearly two-thirds (63%) of organizations is data availability. To make the situation worse, customer-related data is becoming ever more voluminous and complex. A principal reason for this growth is the number of communication channels consumers now use to engage with organizations and the types of data these channels produce: call recordings, text messages, email, social media posts, customer feedback surveys, chat scripts and event data such as videos that users download. All of these types of data are unstructured, which makes them harder for conventional analytics tools to access and analyze.

Clarabridge is an established analytics vendor that over the last few years has focused on helping companies deal with such data. Its product portfolio, the Clarabridge CX Suite, includes CX Analytics, CX Social and CX Survey. The products capture data from a variety of sources; a big data platform provides the core tools to analyze large volumes of structured and unstructured data; analytics tools execute specific types of analysis; and a set of tools enables organizations to take action based on the results of the analysis. The focus on social media engagement with CX Social was recognized with a 2016 Ventana Research Technology Innovation Award.

Clarabridge offers three sets of tools to capture specific categories of data. One captures data from multiple types of surveys such as post-call surveys, NPS surveys, Web-based surveys and employee surveys. A second captures social feedback from Facebook, Twitter, LinkedIn and other platforms. A third captures interaction and related customer data from email, chat scripts, contact center agents’ notes, voice recordings, CRM data and other sources. Clarabridge calls these tools the “listening layer” because they enable organizations to capture data from these customer-related sources and connect it to a specific customer.
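Clarabridge does not publicly document its internal data model, but the core idea of a listening layer, normalizing records from different channels and keying each one to a customer, can be illustrated with a minimal Python sketch. The field names and channel labels below are hypothetical, not Clarabridge's schema.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical unified record; field names are illustrative,
# not Clarabridge's actual schema.
@dataclass
class Interaction:
    customer_id: str
    channel: str          # e.g. "survey", "social", "email", "call"
    text: str
    timestamp: Optional[str] = None

def from_survey(row: dict) -> Interaction:
    # Survey exports typically carry a respondent id and a free-text comment.
    return Interaction(customer_id=row["respondent_id"],
                       channel="survey",
                       text=row.get("comment", ""),
                       timestamp=row.get("submitted_at"))

def from_social(post: dict) -> Interaction:
    # Social handles must first be matched to a known customer id,
    # e.g. via a lookup table maintained elsewhere.
    return Interaction(customer_id=post["matched_customer_id"],
                       channel="social",
                       text=post["message"],
                       timestamp=post.get("created_at"))

# Records from every channel end up in one collection, queryable by customer.
interactions = [
    from_survey({"respondent_id": "C042", "comment": "Great support call."}),
    from_social({"matched_customer_id": "C042", "message": "Still waiting..."}),
]
history = [i for i in interactions if i.customer_id == "C042"]
```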

The big data platform and analytics tools are what the company calls its “analyze” layer. An advanced text analytics tool uses natural-language processing and other techniques to extract insights from unstructured text data. It allows users to set up rules to categorize interactions based on words or phrases they include, to derive caller sentiment at a more detailed level than I have seen in other products, and to spot trends. This layer also includes tools that allow users to create their own analysis using any of the data captured at the listening layer. I especially like the ability to produce customer journey maps that focus on the customer life cycle as customers search for, acquire and use products and then seek support – in other words, on the journey from marketing through sales to service rather than on channel use, which many other products focus on.
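Clarabridge's text analytics are proprietary and considerably more sophisticated, but a toy example conveys how rule-based categorization and lexicon-based sentiment scoring work in principle. The categories, keywords and word scores here are invented for illustration.

```python
# A minimal sketch of rule-based categorization and lexicon sentiment,
# illustrating the general technique, not Clarabridge's implementation.
CATEGORY_RULES = {
    "billing": ["invoice", "charge", "refund"],
    "delivery": ["shipping", "late", "tracking"],
}
SENTIMENT_LEXICON = {"great": 1, "love": 1, "slow": -1, "terrible": -2}

def categorize(text: str) -> list[str]:
    lowered = text.lower()
    return [cat for cat, keywords in CATEGORY_RULES.items()
            if any(kw in lowered for kw in keywords)]

def sentiment(text: str) -> int:
    # Sum per-word scores; real products also weight phrases,
    # negation and surrounding context.
    return sum(SENTIMENT_LEXICON.get(w.strip(".,!?"), 0)
               for w in text.lower().split())

msg = "The refund was slow and the tracking page is terrible."
print(categorize(msg))   # ['billing', 'delivery']
print(sentiment(msg))    # -3
```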

The “act” layer is the one I find most important. It is divided into proactive support of front-line operations and business optimization. In principle these halves provide similar capabilities to put outputs from the analyze layer to use. For front-line operations this goes beyond visualizing the information in different ways for different uses to recommending actions to, for example, contact center agents. For business optimization, it likewise goes beyond visualization to provide analysis across multiple data sources, role-based dashboards, side-by-side comparison of information and root-cause analysis. Together these features allow organizations to make use of the insights they gain from analytics rather than just producing pretty charts.

Clarabridge recognizes that advanced analytics tools are not easy for many business people to use. It therefore provides extensive support services that range from setting up access to data sources, customer segmentation and journey mapping; defining topics, themes and categorization rules; interpreting emotion and sentiment analysis; applying root-cause analysis; and customizing reports and analysis, to redesigning interaction processes and using the outputs to design a customer engagement strategy. Taken together, these services extend from help in overcoming the initial hurdles of using the tools properly to helping organizations get full business value from the products. These services and the product set provide a firm foundation and an ongoing process for improving business performance.

Our research into next-generation contact centers in the cloud shows that customer experience (CX) has become the true business differentiator: 70 percent of participants said that it is the primary way they expect to compete for customers. I believe a comprehensive view of customers that makes use of all available data, an understanding of their business journeys and insight into the business impact of customer engagement are essential to starting a CX initiative and gaining maximum business benefit from it. So I recommend that organizations wanting to maximize the value of their customers assess how Clarabridge can help those efforts.

Regards,

Richard Snow

VP & Research Director Customer Engagement

Follow Me on Twitter and Connect with me on LinkedIn

Investigating the Potential of Data Preparation


Data preparation is critical to the effectiveness of both operational and analytic business processes. Operational processes today are fed by streams of constantly generated data. Our data and analytics in the cloud benchmark research shows that more than half (55%) of organizations spend the most time in their analytic processes preparing data for analysis – a situation that reduces their productivity. Data now comes from more sources than ever, at a faster pace and in a dizzying array of formats; it often contains inconsistencies in both structure and content.

In response to these changing information conditions, data preparation technology is evolving. Big data, data science, streaming data and self-service all are affecting the way organizations collect and prepare data. Data sources used in analytic processes now include cloud-based and external data. Many data sources now include large amounts of unstructured data, in contrast to just a few years ago when most organizations focused primarily on structured data. Our big data analytics benchmark research shows that nearly half (49%) of organizations include unstructured content such as documents or Web pages in their analyses.

The ways in which data is stored in organizations are changing as well. Historically, data was extracted, transformed and loaded, and only then made available to end users through data warehouses or data marts. Now data warehouses are being supplemented with, or in some cases replaced by, data lakes, which I have written about. As a result, the data preparation process may involve not just loading raw information into a data lake, but also retrieving and refining information from it.
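The practical difference is that a lake defers schema decisions to read time. As a minimal sketch, assuming a simple file-based lake and pandas for refinement, the load-raw-then-refine-on-read pattern might look like this:

```python
import json
import pathlib

import pandas as pd

lake = pathlib.Path("lake/raw/events")   # hypothetical lake location
lake.mkdir(parents=True, exist_ok=True)

# Load: raw records are written as-is; no schema is imposed at write time.
records = [{"user": "C042", "action": "download", "item": "video-17"},
           {"user": "C099", "action": "search", "q": "return policy"}]
(lake / "2016-10-01.json").write_text(json.dumps(records))

# Retrieve and refine: read raw JSON back and impose structure on read.
raw = json.loads((lake / "2016-10-01.json").read_text())
df = pd.json_normalize(raw)                    # flatten varying record shapes
df = df.rename(columns={"user": "customer_id"})
print(df)
```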

The advent of big data technologies such as Hadoop and NoSQL databases intensifies the need to apply data science techniques to make sense of these volumes of information. At such scale, simple querying and reporting are both inefficient and ineffective analytical techniques. And using data science means addressing additional data preparation requirements such as normalizing, sampling, binning and dealing with missing or outlying values. For example, in our next-generation predictive analytics benchmark research 83 percent of organizations reported using sampling in preparing their analyses. Data scientists also frequently use sandboxes – copies of the data that can be manipulated without affecting operational processes or production data sources. Managing sandboxes adds yet another challenge to the data preparation process.
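For concreteness, here is a brief sketch on synthetic data, using pandas and NumPy, of the preparation steps named above: sampling, handling missing values, capping outliers and binning.

```python
import numpy as np
import pandas as pd

# Synthetic skewed "spend" data with a few missing values injected.
rng = np.random.default_rng(7)
df = pd.DataFrame({"spend": rng.lognormal(3, 1, 100_000)})
df.loc[df.sample(frac=0.02, random_state=7).index, "spend"] = np.nan

sample = df.sample(n=10_000, random_state=7)                # sampling
sample["spend"] = sample["spend"].fillna(sample["spend"].median())  # missing values
cap = sample["spend"].quantile(0.99)
sample["spend"] = sample["spend"].clip(upper=cap)           # cap outliers at p99
sample["spend_band"] = pd.qcut(sample["spend"], 4,          # bin into quartiles
                               labels=["low", "mid", "high", "top"])
print(sample["spend_band"].value_counts())
```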

Data governance is always a challenge; in this new world it has, if anything, grown even more difficult as the volume and variety of data grow. At the moment most big data technologies trail their relational database counterparts in providing data governance capabilities. The developers of data preparation processes must adapt them to these new environments, supplementing them with processes that support governance of and compliance with standards for handling personally identifiable information (PII), payment card information (PCI), protected health information (PHI) and other sensitive, restricted data.
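What such supplementary processes look like in code varies by environment. The sketch below shows, with invented field names and rules, two common moves in preparation pipelines: pseudonymizing identifiers so records stay joinable, and dropping or redacting regulated fields outright. It is illustrative, not a compliance implementation.

```python
import hashlib
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def pseudonymize(value: str, salt: str = "rotate-me") -> str:
    # One-way hash so records remain joinable without exposing identity.
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def scrub(record: dict) -> dict:
    out = dict(record)
    out["customer_id"] = pseudonymize(out["customer_id"])   # PII: pseudonymize
    out.pop("card_number", None)                            # PCI: drop outright
    out["notes"] = EMAIL_RE.sub("[email]", out["notes"])    # redact in free text
    return out

print(scrub({"customer_id": "C042",
             "card_number": "4111111111111111",
             "notes": "Reached me at jane@example.com"}))
```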

In the emerging self-service approach to data preparation, three user personas typically are involved. Operational teams need to derive useful information from data as soon as it is generated to complete business transactions and keep operations flowing smoothly. Analysts need access to relevant information to guide better decision-making. And the IT organization is often called upon to support either or both of these roles when the complexities of data access and preparation exceed the skills of those in the lines of business. While IT departments probably welcome the opportunity to enable end users to perform more self-service tasks, they cannot do so in ways that ignore enterprise requirements. Nonetheless, the trend toward deploying tools that support self-service data preparation is growing. These two forces, the demand for self-service speed and the need for enterprise control, can conflict in organizations that want to derive maximum business value from their data as quickly as possible while still maintaining appropriate data governance, security and consistency.

To help understand how organizations are tackling these changes, Ventana Research is conducting benchmark research on data preparation. This research will identify existing and planned approaches and related technologies, best practices for implementing them and market trends in data preparation. It will examine the current challenges associated with innovations in data preparation, including self-service capabilities and architectures that support big data environments. The research will assess the extent to which tools and processes for data preparation support superior performance and determine how organizations balance the demand for self-service capabilities with enterprise requirements for data governance and repeatability. It will uncover ways in which data preparation and supporting technologies are being used to enhance operational and analytic processes.

This research also will provide new insights into the changes now occurring in business and IT functions as organizations seek to capitalize on data preparation to gain competitive advantage and to support regulatory compliance, risk management and governance processes. The research will investigate how organizations are implementing data preparation tools to support all types of operational and business processes, including operational intelligence, business intelligence and data science.

Data is an essential component of every aspect of business, and organizations that use it well are likely to gain advantages over competitors that do not. Watch our community for updates. We expect the research to reveal impactful insights that will help business and IT. When it is complete, we’ll share education and best practices about how organizations can tackle these challenges and opportunities.

Regards,

David Menninger

SVP & Research Director

Follow Me on Twitter @dmenningerVR and Connect with me on LinkedIn.