AI for People Who Use WhatsApp
Our AI is used by people whose entire digital life is WhatsApp and Instagram. They disburse $100M in loans every month. Here is what we learned building the human side of the harness.
The Loan Officer in Sangli
She has never written a prompt. She does not know what a token is. Her entire digital life is WhatsApp and Instagram. She works at a bank branch in a small town in Maharashtra, and every month she and people like her use our AI to approve loans worth $100M.
The AI runs the credit analysis, verifies the documents, flags the risks, and presents her with a judgment call. She makes that call. The loan goes out. This is not "AI-assisted" or "AI-suggested" in the way the industry uses those terms. This is AI doing the work and the human making the decision.
She has never once typed a prompt.
What Happens Without a Harness
We know what happens without a human-side harness because we lived it. In the early days, we put a chat interface in front of the model and gave it to loan officers. The same model, the same data, the same backend. The results were terrible.
Officers would type something like "check this loan" and get back a wall of analysis they couldn't parse. The model gave them everything it knew. It didn't structure the output around what they needed to decide. It didn't flag the one thing that mattered. It just answered, thoroughly and uselessly, like a consultant who doesn't know when to stop talking.
Some officers started copy-pasting from the AI into their own notes, manually reorganizing the information into a format they could use. They were doing integration work. The AI was making them busier, not less busy.
A few just stopped using it. The chat window felt like another task on top of the ones they already had. They went back to their spreadsheets.
This is the AI adoption gap. It's not that people don't have access to AI. It's that the AI makes them do work they shouldn't have to do.
What We Built Instead
We stopped building a chat interface. We started building a harness around what the officer can do: make a judgment call on a loan application.
Before the officer sees anything, the harness has already done the work. It pulls the applicant's credit history, verifies the documents, cross-references the loan amount against regional risk profiles, checks for inconsistencies in the application, and runs the internal model's assessment. All of this happens automatically. The officer never has to ask for it.
What the officer sees is a structured decision: here is the loan, here is the risk profile, here are the flags, here is our recommendation. Approve or reject.
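To make the shape concrete, here is a minimal sketch of that pre-work in Python. The field names, thresholds, and stubbed checks are illustrative assumptions, not our production pipeline:

```python
from dataclasses import dataclass

# Illustrative sketch only: field names and thresholds are assumptions,
# not the production schema described in this post.

@dataclass
class Flag:
    severity: str  # "info" | "warn" | "block"
    message: str

@dataclass
class LoanDecision:
    application_id: str
    risk_profile: str    # "low" | "medium" | "high"
    flags: list[Flag]
    recommendation: str  # "approve" | "reject"

def prepare_decision(app: dict) -> LoanDecision:
    """Everything runs before the officer sees the application."""
    flags: list[Flag] = []

    # 1. Document verification (stubbed: the real system calls a verifier).
    if not app.get("documents_verified", False):
        flags.append(Flag("block", "Documents could not be verified"))

    # 2. Regional risk cross-reference (stubbed cap, assumed value).
    if app["amount"] > app.get("regional_cap", 500_000):
        flags.append(Flag("warn", "Amount exceeds regional risk cap"))

    # 3. Internal model assessment (stubbed: the real system calls
    #    the credit model with the applicant's history).
    risk = "high" if any(f.severity == "block" for f in flags) else "low"
    rec = "reject" if risk == "high" else "approve"

    return LoanDecision(app["id"], risk, flags, rec)

decision = prepare_decision({"id": "A-1042", "amount": 350_000,
                             "documents_verified": True})
print(decision.recommendation)  # the officer sees this, not a chat box
```

The point of the structure is what it leaves out: there is no prompt field anywhere, because there is nothing for the officer to ask.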
The officer does not prompt. The officer judges.
If the officer disagrees with the recommendation, she overrides it. That override is logged. The harness learns from it. Next time, the recommendation accounts for patterns in overrides from that branch, that region, that officer.
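A sketch of that feedback loop, with an in-memory log standing in for real storage. The override-rate threshold and the "review" fallback are assumptions for illustration:

```python
from collections import defaultdict

# Sketch of the override feedback loop. Storage and the
# adjustment rule are illustrative assumptions.

class OverrideLog:
    def __init__(self):
        # (branch, loan_type) -> list of "was this overridden?" booleans
        self._log = defaultdict(list)

    def record(self, branch: str, loan_type: str,
               recommended: str, decided: str) -> None:
        """Log every decision; an override is any disagreement."""
        self._log[(branch, loan_type)].append(recommended != decided)

    def override_rate(self, branch: str, loan_type: str) -> float:
        hist = self._log[(branch, loan_type)]
        return sum(hist) / len(hist) if hist else 0.0

def calibrated_recommendation(base: str, log: OverrideLog,
                              branch: str, loan_type: str) -> str:
    # If a branch consistently overrides a recommendation for a
    # loan type, stop repeating the rejected advice and defer to
    # local judgment. (The 0.5 threshold is an assumption.)
    if log.override_rate(branch, loan_type) > 0.5:
        return "review"
    return base

log = OverrideLog()
log.record("sangli-01", "crop", recommended="reject", decided="approve")
log.record("sangli-01", "crop", recommended="reject", decided="approve")
print(calibrated_recommendation("reject", log, "sangli-01", "crop"))  # review
```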
This is the human-side harness. It is not a chat window. It is a delegation protocol.
The Two Axes
The AI industry has converged on the idea that the harness matters more than the model. This is correct, but the harness discourse only talks about one axis: making the agent reliable. Context retrieval, tool orchestration, error recovery, sub-agent coordination. All model-side problems. All necessary.
We operate on a second axis: making the agent usable. How little does the human have to do to get value? How much of the integration work does the system handle? Where does the human intervene, and is that intervention a judgment (which humans are good at) or a specification task (which most humans are bad at)?
Both axes matter. Right now the industry is building on only one of them.
Before, During, After
The human-side harness has three phases. They are the same three phases you go through when you delegate a project to a junior team member.
Before the task: You don't hand a junior a one-sentence brief and disappear. You discuss the approach. You define what success looks like. You break it into steps. Our harness does this with the officer: it structures the credit analysis, defines the risk thresholds, and surfaces the decision points before the officer ever sees the application.
During the task: You check in. You redirect if the work is going off track. You don't wait until everything is done to discover it went wrong. Our harness checks in with the officer at decision points. If the model detects an inconsistency mid-analysis, it flags it immediately rather than burying it in a final report.
After the task: You review the result against the criteria. If you make changes, the person learns from them. Our harness logs every override and feeds it back into the model's calibration. The officer doesn't have to re-explain why she overrides certain loan types. The system remembers.
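Read together, the three phases form a single interface. A minimal sketch; the hook names here are our illustration of the protocol, not a published spec:

```python
from abc import ABC, abstractmethod

# The delegation protocol as an interface. Hook names are
# illustrative assumptions, not an existing API.

class HumanSideHarness(ABC):
    @abstractmethod
    def before(self, task):
        """Structure the work: run the analysis, set the thresholds,
        surface the decision points before the human sees anything."""

    @abstractmethod
    def during(self, task, event):
        """Check in at decision points: flag an inconsistency the
        moment it appears instead of burying it in a final report."""

    @abstractmethod
    def after(self, task, human_decision):
        """Review and learn: log the override and feed it back into
        calibration so the human never has to re-explain herself."""
```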
The $100M Proof
This is not theoretical. $100M in loans every month. 200,000+ applications processed on the platform. Users who had never used AI before. No training program. No prompt engineering workshop. No "AI literacy" course.
The system works because it was designed around what the human can do, not what the AI can do. The AI can analyze a credit profile ten ways from Sunday. The human can look at the recommendation and say "yes" or "no" based on local context the AI doesn't have. That's the division of labor. The harness enforces it.
What the Industry Is Missing
The AI industry is building remarkable model-side harnesses. Verification architectures, self-healing loops, tool orchestration protocols. Real engineering advances, and they matter.
The human-side harness is barely being discussed. The industry assumes that if the agent is reliable, usability will follow. Our experience says the opposite. Reliability without usability gives you a very capable tool that nobody uses correctly.
The Forbes piece on AI productivity captured the symptom ("employees are experiencing technology overload"), but its prescription was better training and more intentionality. Our experience says the prescription is a harness that doesn't require training. Our officers didn't take a course. They opened the app and made judgment calls, because that's what they already knew how to do.
The Real Test
The test of a human-side harness is not whether a power user can get value from AI (I burn 10M tokens a day on my personal setup; of course I get value, because I'm willing to do the integration work that most people aren't).
The test is whether someone who only uses WhatsApp can get value from AI without changing anything about how they work. Can they make the same judgment they were already making, but with better information, presented in a way that supports exactly that judgment and nothing more?
If the answer is yes, you've built a human-side harness. If the answer is "well, they need to learn to prompt," you haven't.
That's the metric. Can the WhatsApp user get value without learning anything new?
We have 200,000+ data points that say yes. But we are one company in one market. The human-side harness needs to become a discipline, not an accident. Right now it is an accident.