IBM Watson and Cognitive Computing to Streamline Compliance


Today’s proponents of artificial intelligence (AI) tend to focus on its spectacular uses such as self-driving cars and uplifting ones such as medical treatment. AI also has the potential to aid humanity in more modest ways such as eliminating the need for individuals to do tedious repetitive work in white-collar areas. Along these lines, at its recent Vision users conference, IBM displayed an application of its Watson cognitive computing technology designed to automate important aspects of regulatory and legal compliance. Should it prove workable, the application of cognitive computing to compliance could be the first step in achieving what various “Paperwork Reduction Act” legislation has failed to do: substantially cutting the time needed to comply with rules imposed by government entities.

Regulatory compliance requires plenty of effort, especially in heavily regulated industries and especially during periods of rapid change in rules. Regulatory burdens on business in the United States have been increasing and growing more complex. For example, the number of pages added to the U.S. Federal Register, a rough measure of rule-making, grew 38 percent, from 529,223 pages in the 1980s to 730,176 in the 2000s, and the total is on pace to reach 800,000 for the decade ending in 2019. Not all of these additions apply to a specific company’s business, and not all changes are relevant. But poring through pages of laws, rules and judicial rulings to identify relevant new requirements or changes to existing ones requires expertise and often considerable effort. Determining how to address regulatory changes and ensuring that these requirements are being met also entails knowledge and experience and consumes time. While necessary, virtually none of this work adds to the bottom line (except to the extent that it avoids fines or penalties) or improves a company’s competitiveness.

In concept, cognitive computing is well suited to help manage compliance because it has the ability to continuously scan all sources of rule-making, identify those that may be relevant to an organization, and provide suggestions on how best to comply with rules and oversee the compliance program. It can improve the effectiveness of the compliance process by reducing the risk that a company will overlook regulations that apply to it or will implement a compliance program that does not adequately address requirements. In short, by using automation, cognitive computing can increase the efficiency with which a company addresses its compliance requirements. Our benchmark research on governance, risk and compliance (GRC) finds that this is important: Companies most often focus on GRC to contain overall risk and the risk of failure to comply with regulations (77% and 74%, respectively) and much less often to cut costs (31%).

The primary steps any company faces in addressing regulatory compliance are identifying and understanding regulations that apply to it; determining how to address each of them; creating the appropriate measures and governance to achieve compliance; ensuring that the necessary documentation is created to confirm conformance; and guaranteeing that issues that arise are handled properly. Companies face challenges in doing this correctly and in a timely fashion. The process of interpreting the regulations and linking them to the appropriate controls is difficult and costly. Expertise is necessary, whether from internal staff, external consultants or legal counsel. Historically, companies have devolved responsibility for regulatory compliance to the individual business units most closely affected because it was the practical approach. However, decentralized approaches make it difficult to gauge overall compliance, and as the scope of regulation increases over time they lead to duplicate controls and increased costs of compliance.

IBM Watson is potentially a good fit for managing regulatory compliance because it pools knowledge of a topic. As in the case of medicine, the collective efforts of all companies using Watson to assist in managing regulation help all of the participants. Because their combined learning processes are cumulative, Watson can build a knowledge base and absorb new facts and conditions quickly. It’s to all participants’ advantage to expand the capabilities of the system cooperatively. In both disciplines, learning involves mastering a technical language and syntax and being able to link its meaning to specific recommended actions.

Watson’s approach to cognitive compliance starts by parsing the body of regulations in a fashion similar to the work it has done in consuming the scientific literature in the field of medicine. It would then identify all compliance requirements that may be relevant to a specific financial institution. The company would vet the list it produces to arrive at a validated set of compliance requirements. The cognitive compliance system would then use Watson to generate a recommended set of controls and procedures based on accepted practices (which may be rooted in anything from black-letter law to actions taken by similar companies). The user company would select those that it deems appropriate. These decisions would be made by trained individuals – for example, those with compliance responsibilities in a particular area, internal counsel or attorneys specializing in a relevant practice area. Once established, a cognitive compliance system could automate the process of monitoring regulatory actions and rule-making that is relevant to the company and flagging anything that requires review.
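The ongoing monitoring step can be pictured with a deliberately simple sketch. IBM has not published Watson’s internals, and its actual models are far more sophisticated than keyword matching; the Python below is purely illustrative, and all names and terms in it are hypothetical.

```python
# Hypothetical sketch of the monitoring step: scan incoming rule-making items
# and flag those relevant to a company's compliance profile for human review.
# A real cognitive system would use trained language models, not keyword overlap.

# Terms drawn from the company's validated compliance requirements (illustrative).
COMPANY_PROFILE = {"proprietary trading", "deposit insurance", "capital", "stress test"}

def relevance_score(rule_text: str, profile: set[str]) -> float:
    """Fraction of profile terms that appear in the rule text."""
    text = rule_text.lower()
    hits = sum(1 for term in profile if term in text)
    return hits / len(profile)

def flag_for_review(rules: list[str], profile: set[str], threshold: float = 0.25) -> list[str]:
    """Return the rules scoring at or above the threshold; these need human vetting."""
    return [r for r in rules if relevance_score(r, profile) >= threshold]

rules = [
    "Agencies finalize rule restricting proprietary trading by insured banks.",
    "New labeling requirements for packaged food importers.",
    "Updated capital and stress test requirements for large bank holding companies.",
]
flagged = flag_for_review(rules, COMPANY_PROFILE)
```

The essential division of labor is the same as described above: the system narrows the stream of rule-making to likely-relevant items, and trained individuals make the final call on each flagged item.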

IBM intends to focus Watson’s cognitive compliance efforts initially on the financial services sector. In part this is because the company already has a significant presence in this market segment, but the main reason is that in the United States the complexity of the rules governing this industry has mushroomed since the financial crisis of the past decade. For example, the so-called Volcker Rule, intended to prevent banks from engaging in speculations that put government deposit insurance and the financial system at risk, was spelled out in just 165 words in the 2010 Dodd-Frank Act. However, translating that concept into practice required the collaboration of five regulatory agencies: The Federal Reserve, the Securities and Exchange Commission (SEC), the Commodity Futures Trading Commission (CFTC), the Federal Deposit Insurance Corporation (FDIC) and the Office of the Comptroller of the Currency (OCC). It took about five years for this group to assemble a 71-page rule (not written in plain English) that has an 891-page preamble. As to the cost of dealing with this complexity, in 2015 the OCC estimated that the cost of complying with Dodd-Frank for the seven largest U.S. banks in 2014 was US$400 million. In another example, 13 Europe-based banks spent between $100 million and $500 million each to achieve compliance with a rule requiring them to create umbrella legal structures for their local operations and take part in the Fed’s annual stress tests. To be sure, the current regulatory environment for banks is an extreme example. However, for that reason it’s an attractive potential market.

If applying cognitive computing to regulatory compliance works for financial services, there are likely to be many other industries where regulatory requirements are demanding enough to make its use worthwhile. One intriguing possibility for the longer term is Watson’s potential to identify duplicate or conflicting regulations and laws and enable legislators and regulatory bodies to streamline or rationalize them. We recommend that financial services organizations and perhaps others look into this possibility.

Regards,

Robert Kugel

Senior Vice President Research

Follow Me on Twitter @rdkugelVR and

Connect with me on LinkedIn.

Transforming Tax Departments into Strategic Entities


The steady march of technology’s ability to handle ever more complicated tasks has been a constant since the beginning of the information age in the 1950s. Initially, computers in business were used to automate simple clerical functions, but as systems have become more capable, information technology has been able to substitute for increasingly higher levels of human skill and experience. A turning point of sorts was reached in the 1990s when ERP, business intelligence and business process automation software reduced the need for middle managers. Increasingly, organizations used software to coordinate activities as well as communicate results and requirements up and down the organizational chart. Both were once the exclusive role of the middle manager. Consequently, almost every for-profit organization eliminated management layers so that today corporate structures are flatter than they once were. Technology automation also eliminated the need for administrative staff to perform routine reporting and analysis. Meanwhile, over the course of the 1990s, the cost of running the finance department measured as a percentage of sales was cut almost in half as a result of eliminating staff and because automation enabled companies to scale without adding headcount. During the last recession, companies in North America and Europe once again made deep reductions to their administrative staffs, relying on information technology to pick up the slack.

Given this history, the best career choice that an individual can make today is to stay ahead of the trend. Information technologies, especially cognitive computing, will continue to eliminate relatively high-paying white-collar jobs in corporate life, especially in the finance and accounting function. Executives and others working in tax departments in particular should recognize that a major shift is under way in their field. Automation will transform their work over the next five years, driving a fundamental change in what they do. To succeed (or even survive), they will have to embrace automation.

Spreadsheets are a major impediment to making the tax function more strategic for a company and more remunerative for those working in the department, as I have noted. Our Office of Finance benchmark research finds that half (52%) of tax departments use only spreadsheets for tax provisioning and another 38 percent mainly use spreadsheets; just one in 10 utilize a third-party tax application. One well-known issue with spreadsheets is that they are error-prone – not a risk that tax professionals can be comfortable with. To be certain that the tax provision and other tax-related calculations are correct, individuals must double- and even triple-check the numbers. This overlaps with a second major issue with spreadsheets: They are time-consuming. Our spreadsheet research finds that those working heavily with spreadsheets spend on average 18 hours a month (equivalent to more than two full workdays) just maintaining their most important spreadsheet. Spreadsheets are so time-consuming that they prevent individuals from doing more valuable work, in this case tax analysis and planning.

Another related issue is that using spreadsheets for the tax function diminishes visibility into a company’s tax provision in at least two respects. First, using them takes so long that executives get the numbers late in the financial close process. This matters because of the impact that tax expense has on a company’s profits. Second, spreadsheets are black boxes: That is, they are difficult to control, and it’s difficult for anyone other than the spreadsheet’s owner to understand their construction. Often, assumptions are buried in formulas and therefore hard to uncover. If these formulas are inconsistent or wrong, it’s not easy to spot them. (This was an important factor behind J.P. Morgan’s multibillion-dollar trading loss, which I discussed.) When a spreadsheet is constructed with a given formula repeated in multiple cells, each of these must be updated when circumstances change, and it’s difficult to be certain that all of the changes have been made. Even with advanced techniques designed to make updates consistent, it’s hard to be sure that some cell wasn’t overwritten with another number.
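The repeated-formula problem is easy to demonstrate outside a spreadsheet. The sketch below is a purely illustrative Python analogy (the rates and figures are invented, not anyone’s actual tax model): when a rate is hard-coded in every “cell,” an update can miss a copy, whereas defining the rate once makes the change atomic.

```python
# Illustrative analogy for the repeated-formula problem: the statutory rate
# changed from 35% to 21%, but the "spreadsheet" had the rate pasted into
# every cell and one copy was missed during the update.
spreadsheet_cells = [100_000 * 0.21, 250_000 * 0.35, 80_000 * 0.21]

# Defining the rate in one place makes the update atomic and verifiable.
TAX_RATE = 0.21
incomes = [100_000, 250_000, 80_000]
provisions = [income * TAX_RATE for income in incomes]

# The stale cell now disagrees with the single-source calculation.
stale = [i for i, (a, b) in enumerate(zip(spreadsheet_cells, provisions))
         if abs(a - b) > 1e-9]
```

Dedicated tax software applies the single-definition discipline by design, which is one reason it is less error-prone than a lattice of copied formulas.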

Some people who work intensively with spreadsheets still view them as a form of job security because of their opacity. They think they’re indispensable because they are the only ones who understand how their spreadsheets work. This is one of several reasons why their use persists in functions where they constitute more of a problem than a solution. However, these spreadsheet jockeys should recognize that their tools’ inherent inefficiency, lack of visibility and proneness to error make them vulnerable to being replaced by better technology. The real value of tax professionals is not their ability to overcome spreadsheet limitations. It’s in their training in understanding income taxes. Once freed from the drudgery of performing computations, massaging data and checking (two or three times) for errors, tax professionals can turn their attention to performing analytical work aimed at optimizing a company’s tax spend – and thus ensuring their value as employees.

Midsize and larger organizations, especially those that operate in multiple direct (income) tax jurisdictions and that have an even moderately complex legal entity structure, must use dedicated software to automate their income tax provision and analysis functions. They must manage their tax-sensitized data using what I call a tax data warehouse of record. Tax departments must be able to tightly control the end-to-end process of taking numbers from source systems, constructing tax financial statements, calculating taxes owed and keeping track of cumulative amounts and other balance sheet items related to taxes. Transparency is the natural result of having a controlled process that uses a unified set of all relevant tax data. An authoritative data set makes tax department operations more efficient. As noted, reducing the time and effort to execute the tax department’s core functions frees up the time of tax professionals for more useful analysis. Having tax data and tax calculations that are immediately traceable, reproducible and permanently accessible provides company executives with greater certainty and reduces the risk of noncompliance and the attendant costs and reputation issues. Having an accurate and consistent tax data warehouse of record enables corporations and their tax departments to better execute tax planning, provisioning and compliance. Using dedicated software rather than relying on spreadsheets helps the tax department, and those working in it, increase their strategic value today so they won’t be obsolete tomorrow.
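No particular implementation of a tax data warehouse of record is prescribed here, but the traceability idea can be sketched in miniature: every derived tax figure is stored append-only together with the source inputs and the rule that produced it, so any number can be traced and reproduced later. All class and field names below are hypothetical, and real systems add access controls, sign-off workflow and audit features omitted here.

```python
# Minimal sketch of a "tax data warehouse of record": derived figures are
# recorded with their inputs and calculation rule, making them traceable
# and reproducible. Purely illustrative; rates and names are invented.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class TaxRecord:
    name: str          # e.g. "US federal current provision"
    value: float
    inputs: dict       # source-system figures the number was derived from
    rule: str          # the calculation applied, stated for traceability
    recorded_at: str

class TaxWarehouse:
    def __init__(self):
        self._records: list[TaxRecord] = []  # append-only: never overwritten

    def record(self, name: str, value: float, inputs: dict, rule: str) -> TaxRecord:
        rec = TaxRecord(name, value, dict(inputs), rule,
                        datetime.now(timezone.utc).isoformat())
        self._records.append(rec)
        return rec

    def trace(self, name: str) -> list[TaxRecord]:
        """Every recorded version of a figure, oldest first: an audit trail."""
        return [r for r in self._records if r.name == name]

wh = TaxWarehouse()
pretax = 1_000_000
wh.record("US federal current provision", pretax * 0.21,
          {"pretax_income": pretax}, "pretax_income * 21% statutory rate")
```

Because each record carries its inputs and rule, the question “where did this number come from?” has a mechanical answer, which is exactly the transparency a controlled end-to-end process is meant to deliver.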

Regards,

Robert Kugel – SVP Research