May/June 2019 Edition


Artificial intelligence is being touted as the answer to all the stresses and dissatisfaction currently found in the workplace. Technology prognosticators have long promised that work can be reimagined to be more flexible, faster and more adaptable, and AI is being developed as that promised advancement. A benefit, or so we are told, is the elimination of work that is standardized and repetitious.

So, is this good for mortgage lending?  After reading numerous articles in industry trade magazines and discussing these ideas with current mortgage lenders, it is apparent that we actually know very little about AI. Every article I have read claims to have found the answer and gives us a short, simplified explanation of what it actually does.  To add more confusion, artificial intelligence technology is sprouting a completely new set of acronyms which are anything but helpful unless you thoroughly understand the field.  Another issue I found prevalent in the discussion is the fear about its usage and the elimination of the jobs we now hold. But are there risks in using AI that we don't know about?  Do we understand how technology can make better decisions than humans?  What do we need in place to make sure the technology works accurately?  How can we implement it into our production and servicing processes?  Before we rush headlong into the next great thing, we need to take the time to really understand what this technology is and does.

This series of articles provides some basic definitions utilized by artificial intelligence technology providers, identifies the various iterations now in use and describes what critical elements are necessary for it to do what it claims to do. In addition to the technology itself, the articles will focus on the risks and opportunities associated with AI as well as discussing the potential for process and job function disruptions the industry will face as it is implemented. 

Beginning with the basics

“Work”, the basis of economic betterment, is changing again.  In what is known as the “Human Economy”, work involves people performing activities that are considered economically productive.  From the earliest days of economic activity, humans have felt the need to improve productivity.  Work as we think of it today began with the development of wheels, wagons and animals for transportation, as well as the use of small hand tools for creating individual products one at a time. Next came the industrialization of production, and then the introduction of automation to conduct the actual work. Ultimately this has led to better products, more satisfied customers and greater profitability, while providing employed individuals a better income and more time for leisure and family. We are now in the initial stages of the next advancement, the use of artificial intelligence, which will once again change what work is and what it isn't. Despite numerous naysayers, this new technology is bound to bring significant and fundamental change to our business. We, as an industry, need to know more.

As defined by Paul Daugherty and James Wilson in their book, Human + Machine: Reimagining Work in the Age of AI, artificial intelligence is a system that extends human capability by sensing, comprehending, acting and learning.  In other words, it is the simulation of intelligent behavior in computers, sometimes described as a machine with human cognition and the ability to carry out tasks as a human would.

Within this field there are various levels of “intelligent” program types, with names such as Expert Systems, Machine Learning, Deep Learning, Robotic Process Automation (RPA), and Neural Networks. Another common term describing how AI works is Natural Language Processing. All of these terms reflect a variation on the basic purpose of AI: having machines do the work of humans.  But how that is done, and what functions each addresses, are quite different.

Defining Types of Artificial Intelligence

Today, many in our industry discuss implementing AI as a single effort, confusing terms like Expert Systems, Machine Learning and Deep Learning and seeing it all as one new technology.  When you hear these buzzwords tossed around in conversation, it is obvious that many do not recognize them as very different types of AI.  To clear up the confusion, each type must be defined by what it does and how it can be used, beginning with the simplest and moving to the most advanced.

All artificial intelligence products are based on one core requirement: the ability of the system to collect and learn facts.  Just as individuals are not born with the knowledge they have today, a machine must learn what it needs to know.  First these machines must be taught the basic facts. Even once these basic facts are learned, the machine is not innately able to associate them with the problem presented to it; it must be taught how to associate these facts with the problem it is trying to solve. As the complexity of the problem increases, this association becomes more complex as well, hence the varying levels of AI.

Because AI is a relatively new technology, scientists do not necessarily agree on the terms and definitions used to describe the capabilities of current AI products.  The definitions and examples given below are most consistent with the terminology we use today, along with terms frequently used within the scientific community to describe specific AI functionality.

Expert Systems – The most basic of AI products, this technology is a method of automated reasoning based on a very specific set of facts, rules and principles.  The automated underwriting systems we use today are an example. When we ask a question, such as whether a loan meets our credit guidelines, the program takes the facts provided in the application and from external data, compares them to the facts it has been taught define an acceptable loan, and filters this data to arrive at an answer.

If any of the data is inconsistent with the “facts” taught to the machine, or there is no applicable rule, it will be unable to decide.  These systems do not “learn”.  For example, if an AUS has a rule that limits DTI ratios to 43%, it will not approve a loan at 44%, even if the human who then reviews the application approves it 95% of the time.  The system has not “learned” that 44% is acceptable.
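The rule-filtering behavior described above can be sketched in a few lines of code. This is a minimal illustration, not any actual automated underwriting system; the field names and guideline values (a 43% DTI cap, a minimum credit score of 620) are invented for the example.

```python
# Sketch of expert-system-style reasoning: fixed rules, no learning.
# Guideline values here are illustrative, not any investor's actual rules.

def underwrite(application, rules):
    """Apply each rule in order. Missing data means the system cannot
    decide; a failed rule means decline. The rules never change on
    their own -- the system does not "learn" from overrides."""
    for name, check in rules.items():
        field = check["field"]
        if field not in application:
            return f"Refer: no data for {field}"  # no fact -> no decision
        if not check["test"](application[field]):
            return f"Decline: failed {name}"
    return "Approve"

rules = {
    "max_dti": {"field": "dti", "test": lambda v: v <= 0.43},
    "min_fico": {"field": "fico", "test": lambda v: v >= 620},
}

print(underwrite({"dti": 0.41, "fico": 700}, rules))  # Approve
print(underwrite({"dti": 0.44, "fico": 700}, rules))  # Decline: failed max_dti
```

Note that the 44% DTI loan is declined every time, no matter how often a human underwriter would approve it; changing that behavior requires a person to edit the rule.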

Robotic Process Automation – RPA was designed and is utilized to automate processes that are routine and labor intensive.  Today businesses find that repetitive tasks, such as inputting data, making calculations and answering standard customer questions, can be handled by basic RPA technology.  Your home-based Alexa system is an RPA device that answers questions and conducts some limited analyses based on the input.  For example, when you ask Alexa to play music, it will find a music program and begin to play it. The robot has learned what music is, the types of music and which performers are associated with each.  If you do not like the type of music selected, you simply tell Alexa and the music is changed.  The same applies to individual songs.  Alexa then “learns” what type or piece of music you do not like and will not play it again.  The “bot”, as it is now called, has learned something new and will apply it in the future.

Today, companies with workflows that are consistently repetitive and simple are using this technology.  For example, a warehouse containing thousands of products that must be compiled into individual orders uses these bots to find and deliver the items to a central location.  These bots, which have the appearance of what we think of as robots, move up and down narrow rows of products and quickly and accurately collect the necessary items. This has resulted in the elimination of personnel who performed this function less accurately and more slowly.  It even allows warehouses to be built higher, eliminating the need for long, low buildings with rows of goods kept low enough for humans to reach.

One of the critical features for these bots is the ability to recognize what the user is saying in the user's own vernacular. Known as Natural Language Processing (NLP), its purpose is to allow bots to learn the various languages, accents and idioms used by customers. The development of NLP requires that this occur for industry-specific terms as well.  Asking a bot a question in which the terms are used in a different scenario will most likely not get the answer you need.  Likewise, if the bot has been trained in the Northeast and is asked a question by someone in the South, without NLP adaptation to that accent it will have problems answering.

Narrow A.I. – This term has come to be used by many of the individuals working in this field. As its name suggests, it is focused on executing a single task.  Human interactions with a Narrow A.I. are limited because it cannot think for itself. This is why you will sometimes get a nonsensical answer back when using it: it lacks the ability to understand context.

General A.I. – Sometimes called Strong A.I., this technology provides the ability to understand context and make judgments based on that context. Over time it learns from experience and is able to make decisions, even in times of uncertainty.  Even with no prior available data it can use reason and be creative. Intellectually, these computers operate much like the human brain. This is where we are headed when we talk about AI functioning in place of a human.  To understand the abilities of this technology and how it operates, the basic functioning of the human brain needs to be understood.

Neural Networks. Each individual has within their brain a series of networks which transmit data. This can be as simple as learning what tastes you like and what you don't.  Before you learn, by tasting, that you don't like fried liver, there is nothing in your thought processes telling you it is not acceptable to you.  However, once you taste it and realize it tastes awful, you reject this food choice because your brain has learned from this observational data.  Our internal neural network processes the data we observe and alerts us to the fact that fried liver tastes awful to us before we make an unacceptable choice.

Developing these neural networks in our brains takes time. My son recently bought a 2-month-old puppy.  Obviously the first thing he needed to do was house-train him, so he spent a great deal of time making sure Bogey understood what was expected and what he must do when he needs to go.  Developing this neural network in the dog's mind is not an easy task, but over time the training took hold.  In other words, a neural pathway had been developed.

Machine Learning. Machine Learning, or “ML”, is the field of computer science in which algorithms learn from data that is “taught” to the technology.  The technology uses this data and incorporates algorithms that learn from and make predictions on data; it can learn from humans, or it can learn from other data. In other words, this technology conducts some of the analytic thought used in bringing various components together, allowing it, to some extent, to predict a result.  One utilization of ML is known as “supervised learning”.  In this scenario, data is broken into categories such as inputs and outputs, and humans use it to develop an expected output from the technology. When the output is inconsistent with what the algorithm expects, the system sends an alert to the user.
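A toy sketch of the supervised-learning idea: rather than being handed a fixed rule, the program derives a DTI cutoff from labeled past decisions. The loan data and the simple threshold search below are invented for illustration; real ML systems use far richer features and models.

```python
# Supervised learning in miniature: labeled examples (input, output)
# are used to fit a decision rule, instead of a human hard-coding one.
# All data values here are invented for illustration.

def learn_cutoff(examples):
    """examples: list of (dti, approved) pairs from past decisions.
    Search cutoffs from 0.00 to 1.00 and keep the one that best
    separates approved from declined loans in the training data."""
    best_cut, best_correct = 0.0, -1
    for i in range(101):
        cut = i / 100
        correct = sum((dti <= cut) == approved for dti, approved in examples)
        if correct > best_correct:
            best_cut, best_correct = cut, correct
    return best_cut

history = [(0.35, True), (0.40, True), (0.44, True),
           (0.47, False), (0.52, False)]
print(learn_cutoff(history))  # 0.44 -- learned from the labels, not decreed
```

The point of the sketch: the 44% loans that the expert system rejected are approved here, because the model learned from the humans' past approvals rather than from a fixed rule.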

One of the most common uses of Machine Learning is found in fraud investigation. In these cases, the machine is fed information and searches for a pattern in data, such as spending habits or income, and uses its algorithmic tools to determine whether the data is what would be expected.  If, based on previously received data, it determines that the new data does not fit the pattern, the output reflects this inconsistency and potential fraud is identified. This is what happens before you get that phone call asking if you recently made a trip to Saudi Arabia.
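That pattern check can be illustrated with a simple statistical sketch: charges far outside the cardholder's historical spending pattern get flagged for review. Real fraud models are far more sophisticated; the z-score threshold and the transaction amounts here are illustrative assumptions.

```python
# Sketch of pattern-based anomaly flagging, in the spirit of the
# card-fraud example. Threshold and amounts are invented.
import statistics

def flag_anomalies(history, new_charges, z_limit=3.0):
    """Flag any new charge more than z_limit standard deviations
    away from the mean of the cardholder's past charges."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [amt for amt in new_charges if abs(amt - mean) / stdev > z_limit]

past = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0]   # typical monthly charges
print(flag_anomalies(past, [49.0, 1250.0]))    # [1250.0] -- the outlier
```

The $49 charge fits the learned pattern and passes silently; the $1,250 charge does not, and that mismatch is what triggers the phone call.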

Deep Learning. Deep Learning specifically refers to the ability of the machine to learn from other data. Deep Learning consists of a multi-layered network of algorithms in which data is processed through multiple layers, with each layer's output serving as the next layer's input.  It also allows data to flow back and forth between these layers, thereby expanding the system's knowledge base.  Deep Learning, or “DL”, machines can analyze patterns and people to identify potential problems or opportunities far enough in advance to allow for early intervention or resolution.  Imagine the ability to know the probability that a borrower will default even before the loan has closed and been set up in servicing.
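The "multiple layers" idea can be shown schematically: each layer transforms the previous layer's output before passing it on. The weights below are fixed toy values, and the "default score" reading of the output is purely illustrative; a real deep-learning system would learn its weights from large volumes of training data.

```python
# Schematic of a layered network: data flows through stacked layers,
# each layer's output becoming the next layer's input.
# Weights and feature values are invented toy numbers.
import math

def layer(inputs, weights):
    """One layer: a weighted sum per output node, squashed through a
    sigmoid so every output lands between 0 and 1."""
    return [1 / (1 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
            for row in weights]

# Two stacked layers: raw loan features -> hidden features -> one score
features = [0.44, 0.62]                          # e.g. scaled DTI and LTV
hidden = layer(features, [[1.0, -0.5], [0.3, 0.8]])
score = layer(hidden, [[1.2, -0.7]])[0]          # a value between 0 and 1
print(round(score, 3))
```

Stacking more layers lets the network build intermediate representations of the data, which is where the "deep" in Deep Learning comes from.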

These types of artificial intelligence systems, machine and deep learning, are both predictive systems. They find relationships between variables in historical data, identify a pattern and then develop a model to predict future outcomes.  Developing these patterns requires the use of all available data: borrower information from sources other than the application; property information at the local, state and national level; economic conditions; and more. This breadth of data provides the opportunity to find correlations never before considered.  By understanding these different approaches to artificial intelligence, it is much easier for management to evaluate the type needed to successfully implement such a program.  There are, however, other issues to be understood and addressed before taking the leap to artificial intelligence.  The next article will explore the risks and opportunities of AI along with new developments addressing these issues.  In addition, it will discuss what it means to “reimagine” how work will be done. Finally, we will discuss what a reimagined mortgage operation looks like and the impact its inclusion will have on workflow and job functions.  After all, don't you want to know if a robot will be taking your job?