The New Sheriff In Town
To date, much of the AI adoption and experimentation underway has felt like the Wild West: exploring uncharted territory with seemingly no rules. While many of the discoveries made along the way have proven useful across industries, the need for clearer rules and guidance is apparent.
Nowhere is that need more apparent than in the mortgage industry. The first to respond was Freddie Mac, which recently issued Selling Bulletin 2025-16, establishing new requirements, effective March 3, 2026, for Freddie Mac sellers who leverage AI and machine learning systems in connection with any loan sold to the GSE. The message to lenders: there’s a new sheriff in town, and anything you let AI do for you is still your responsibility.
This requirement is a good reminder that AI demands a mindful approach, as this new sheriff will likely have deputies that follow suit. The right governance has to be in place for any AI effort to be truly successful. So what does that governance look like, and how should lenders be thinking about risk?
Build Your Governance Framework
Compliance in mortgage is all about showing your work. Each decision needs to come with a clear reason to back it up. While people can explain their reasoning for certain decisions, the same can’t always be said for AI. Lenders have to put the right guardrails and guidelines in place.
AI must be two things to work well for lenders – or anyone, for that matter. First, it must be predictable. Lenders should be confident that their AI can arrive at the same conclusion or produce the same output twice. If it results in different credit decisions for two very similar borrowers, that is a significant issue. Just as they expect employees to be consistent and dependable, lenders should require the same of their AI.
Next, it has to be transparent. This goes hand in hand with predictability. Humans can walk through their reasoning step by step to show their work. If AI is not able to explain how or why it came to a certain credit decision, that’s risky for the lender, especially if it has made different decisions in similar situations.
Weigh Your Risks
When implementing AI, lenders should always seek to balance risk with reward. What do you stand to lose by using AI, and what do you stand to gain? There are low-risk areas where AI can step in and deliver a huge payoff for the lender, and there are high-risk areas where it can deliver substantial benefit as well. The real question is which risks are worth taking. Lenders might consider steering clear of areas that offer little reward, though a series of small wins in low-risk areas can still add up over time.
It’s not always about what you’re gaining, but what you’re giving up. AI is often approached as a time-saver, a way to automate time-consuming work. For example, it can contact borrowers on the lender’s behalf. Viewed through the lens of what a lender is giving up, however, turning communication over to AI removes an important relationship-building touchpoint. Yes, there is efficiency gained, but at the expense of personal connection. For most lenders, relationships are their key differentiator. Without their unique brand of personal service, they stop standing out from the competition. Anyone considering AI should make sure what they gain outweighs what they give up.
Consider Your Options
Let me be clear: there are great use cases for AI. We don’t need to run for the hills simply because there are risks involved. However, the way we approach those risks has to be carefully thought out.
It’s easy to get caught up in AI hype and forget that there are times when it may not be the best tool for the job. While AI can help lenders in many areas, they should weigh every option. There are plenty of helpful technologies and tools beyond AI and machine learning. And while technology is incredibly useful, sometimes inefficiencies are simply due to training issues, communication breakdowns or processes that need improvement.
Ultimately, this new sheriff in town isn’t here to crack down just yet. The goal is to ensure that lenders using AI do so mindfully and apply it where it really matters. The industry will see the most value from AI when we maximize where it helps and minimize where it hurts. It won’t be the Wild West for long, so the time to start thinking about AI governance is now.

John Haring is Head of Compliance at Wilqo, a mortgage production optimization provider. Wilqo is redefining loan manufacturing with its Production Optimization Platform (POP), named Charlie, designed to help lenders close more loans with less effort. Built on a modern, scalable architecture, Charlie combines loan origination, point-of-sale, automation, and analytics into one seamless platform. With Wilqo’s recent acquisition of Brimma Tech, the platform has the ability to integrate even more AI-powered tools to drive lender efficiency and profitability.
