Overview
At AWS Summit, UKI Director Joe Carroll referenced our partnership project with Funding Xchange (FXE) as a successful example of how agentic AI is being applied to real, regulated workloads in financial services. This case study unpacks what we built, why it works, and how the system operates on AWS.
About the client
A UK-based credit decisioning specialist and SME lending marketplace connecting small businesses to more than 70 lenders through a single application. FXE Technologies provides white-label credit decisioning software to over 80 bank brokers and other financial institutions, including tier 1 banks. Serving more than 10,000 customers per month, their mission is to transform customer engagement through innovation in technology.
Customer Challenge
Commercial lending is much harder to automate than consumer lending. Borrowers vary widely in structure, financial profile, and funding purpose, so score-only decisioning can often fall short. Many cases sit in a grey area where genuine risks coexist with legitimate mitigating factors. Lenders need a workflow that can gather evidence, reason through ambiguity, and produce a clear, reviewable underwriting narrative.
FXE's challenge was twofold: the platform had to generate useful underwriting analysis across complex commercial cases while preserving the role of the human underwriter, and it had to learn systematically from reviewer changes instead of leaving that expertise trapped in manually edited reports.
The Solution at a Glance
Cloud Combinator built a multi-agent AI underwriting system on AWS that mirrors the way experienced underwriters assess commercial finance applications. Running inside Amazon Bedrock AgentCore, the workflow follows a structured decision path:
• A Risk Profiler classifies each application by business risk and proposal risk.
• Specialist agents review borrower, conduct, business, and property factors.
• Mixed-risk cases trigger an adversarial debate between agents arguing opposing positions.
• A final Arbiter agent produces a structured verdict and formal credit paper for human review.
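The decision path above can be sketched as a small routing function over the Risk Profiler's two classifications. This is an illustrative sketch only: the risk levels, route names, and thresholds are assumptions, not FXE's actual implementation.

```python
# Illustrative sketch: risk levels and route names are hypothetical,
# not FXE's actual implementation.

LOW, MEDIUM, HIGH = "low", "medium", "high"

def route_case(business_risk: str, proposal_risk: str) -> str:
    """Decide how deep the review should go based on the Risk Profiler output."""
    if business_risk == LOW and proposal_risk == LOW:
        return "light_touch"            # straightforward case, lighter-touch path
    if business_risk == HIGH and proposal_risk == HIGH:
        return "deep_investigation"     # specialist agents plus full evidence review
    return "adversarial_debate"         # mixed-risk: agents argue opposing positions

# Whichever path runs, the Arbiter agent consumes its findings and
# produces the structured verdict and credit paper.
```

The point of the routing step is that only mixed-risk cases pay for the adversarial debate; clear-cut cases stay on the cheaper path.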
Not every application needs the same depth of review. Straightforward applications move through a lighter-touch path, whilst complex cases receive deeper investigation. Cost and processing time align with case complexity, rather than applying the same analysis to every application.
For example, a negative account conduct signal doesn’t trigger an automatic decline. The system retrieves the mitigation logic, checks the supporting evidence, and decides whether it reflects a structural issue or a temporary, explainable one.
Human review remains a crucial function built into the workflow. Once an AI-generated report is produced, an underwriter can review and refine it. The platform then compares the AI version with the human-reviewed version, identifies the meaningful deltas, and stores those differences as structured records.
Recurring patterns are aggregated into proposed updates to the risk-to-mitigation knowledge base, with human approval required before any rule change takes effect. The system doesn’t just generate reports; it learns from reviewer judgement in a governed way.
To support evidence-based reasoning, the platform uses AWS Lambda functions exposed as tools. These retrieve mitigation logic and supporting information from tenant-scoped stores in Amazon DynamoDB and Amazon S3. Keeping mitigation rules outside the prompt itself lets the underwriting logic evolve through data-layer updates rather than code changes, and makes tool usage explicit and traceable, which is crucial in a regulated decisioning context.
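A minimal sketch of such a Lambda tool is shown below. The table name, key schema, and field names are illustrative assumptions, not FXE's actual data model; the point is that the mitigation rule lives in the data layer and the lookup is an explicit, loggable tool call.

```python
# Hedged sketch of a Lambda tool that retrieves mitigation logic from a
# tenant-scoped DynamoDB table. Table name, key schema, and field names
# are illustrative assumptions, not FXE's actual schema.

def mitigation_key(tenant_id: str, risk_code: str) -> dict:
    """Build the tenant-scoped key, keeping rules isolated per tenant."""
    return {
        "pk": {"S": f"TENANT#{tenant_id}"},
        "sk": {"S": f"RISK#{risk_code}"},
    }

def handler(event, context):
    # boto3 is available in the Lambda runtime; imported lazily here so
    # the pure key-building logic stays testable without AWS credentials.
    import boto3
    client = boto3.client("dynamodb")
    item = client.get_item(
        TableName="RiskMitigation",  # hypothetical table name
        Key=mitigation_key(event["tenant_id"], event["risk_code"]),
    ).get("Item", {})
    # The mitigation text is returned as the tool result for the agent.
    return {"mitigation": item.get("mitigation", {}).get("S", "")}
```

Because the rule text is fetched at call time, updating a mitigation is a DynamoDB write, not a prompt or code change.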
Workflow on AWS
Step 1: AI report generation
The Bedrock underwriting agent generates the initial report from input data, risk signals, and the tenant-specific risks-and-mitigations table. The generated report is uploaded to a tenant-specific S3 path, and the case is registered in the Underwriter Reports DynamoDB table with status waiting review.
Step 2: Human Review
An underwriter reviews and edits the AI-generated report. The reviewed version is uploaded to a second tenant-specific S3 path. The DynamoDB record is updated with the human report and the status moves to human reviewed.
Step 3: AI-versus-Human Comparison
The upload triggers an S3 event notification, which invokes a Lambda comparison function. The Lambda retrieves both reports and calls a Bedrock reviewer agent to compare them. The record then moves to either success or failure, so failed cases can be reprocessed with full operational visibility.
The reviewer agent returns its findings as structured output. Each result identifies the specific deltas between the two reports, links them to the relevant risk category, indicates whether the change calls for updating an existing rule or creating a new one, and includes the proposed revised mitigation logic. This turns reviewer differences into machine-readable data, rather than leaving them buried inside edited documents where no system can learn from them.
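The structured output described above might look like the following record. The field names are assumptions for illustration, not FXE's actual schema, but they capture the four elements the text describes: the delta, its risk category, the rule action, and the proposed mitigation.

```python
# Illustrative shape of a structured delta record; field names and the
# sample content are assumptions, not FXE's actual schema.
from dataclasses import dataclass, asdict

@dataclass
class ReportDelta:
    tenant_id: str
    risk_category: str          # e.g. borrower, conduct, business, property
    ai_statement: str           # what the AI report said
    human_statement: str        # what the reviewer changed it to
    change_type: str            # "update_rule" or "new_rule"
    proposed_mitigation: str    # revised mitigation logic from the reviewer agent

    def to_item(self) -> dict:
        """Flatten into a dict ready to store in the deltas table."""
        return asdict(self)

delta = ReportDelta(
    tenant_id="tenant-a",
    risk_category="conduct",
    ai_statement="Unexplained overdraft excesses indicate poor account conduct.",
    human_statement="Excesses coincide with a one-off VAT payment; conduct acceptable.",
    change_type="update_rule",
    proposed_mitigation="Treat overdraft excesses as mitigated when matched to a one-off tax event.",
)
```

Because each delta is a typed record rather than a red-lined document, downstream aggregation can group and count them mechanically.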
Step 4: Delta Storage
Comparison output is then stored in a Report Deltas DynamoDB table, with fields holding the AI statement, the human statement, the identified changes, and the proposed rule change.
Step 5: Weekly Aggregation
To move from single-case changes to repeatable improvement, Amazon EventBridge runs a scheduled weekly aggregation. Deltas are grouped by tenant, risk type, and mitigation-change pattern. The grouped results are written into a candidate rules table as pending review, and the source deltas move to aggregated.
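The grouping step can be sketched as follows. The grouping keys and the minimum-occurrence threshold are assumptions; the idea is simply that only recurring patterns graduate into candidate rules.

```python
# Sketch of the weekly aggregation: group deltas by tenant, risk type, and
# proposed mitigation, and emit candidate rules for pending review.
# Grouping keys and the min_occurrences threshold are assumptions.
from collections import defaultdict

def aggregate_deltas(deltas: list[dict], min_occurrences: int = 2) -> list[dict]:
    groups = defaultdict(list)
    for d in deltas:
        key = (d["tenant_id"], d["risk_category"], d["proposed_mitigation"])
        groups[key].append(d)
    # Only recurring patterns become candidate rules awaiting human approval.
    return [
        {
            "tenant_id": tenant,
            "risk_category": risk,
            "proposed_mitigation": mitigation,
            "supporting_deltas": len(items),
            "status": "pending_review",
        }
        for (tenant, risk, mitigation), items in groups.items()
        if len(items) >= min_occurrences
    ]
```

A one-off reviewer edit therefore never becomes a rule proposal on its own; it has to recur before it reaches a human approver.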
Step 6: Human Approval of Proposed Rule Changes
Amazon SNS notifies the relevant reviewers of new candidate rules. Reviewers can approve, reject, or edit before approval. This design choice is key: the system learns from human judgement, but it never autonomously changes the underwriting knowledge base.
Step 7: Knowledge-Base Update
When a candidate rule moves to approved, a DynamoDB stream triggers a Lambda that reads the approved rule content and writes it into the tenant-scoped Risk Mitigation table. The loop is then closed: future AI-generated reports draw on a knowledge base improved through operational human feedback.
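The stream-handling logic can be sketched like this. The attribute names mirror the hypothetical delta schema used above and are assumptions; the filtering pattern — react only to records whose status newly became approved — is the standard way to consume a DynamoDB Streams event.

```python
# Hedged sketch of the stream-triggered update: filter a DynamoDB Streams
# event for candidate rules that have just moved to "approved" and shape
# the write for the tenant-scoped Risk Mitigation table.
# Attribute names are illustrative assumptions.

def approved_rules(stream_event: dict) -> list[dict]:
    """Extract newly approved candidate rules from a DynamoDB Streams event."""
    updates = []
    for record in stream_event.get("Records", []):
        if record.get("eventName") != "MODIFY":
            continue  # inserts and deletes are not approvals
        new = record["dynamodb"].get("NewImage", {})
        old = record["dynamodb"].get("OldImage", {})
        became_approved = (
            new.get("status", {}).get("S") == "approved"
            and old.get("status", {}).get("S") != "approved"
        )
        if became_approved:
            updates.append({
                "tenant_id": new["tenant_id"]["S"],
                "risk_category": new["risk_category"]["S"],
                "mitigation": new["proposed_mitigation"]["S"],
            })
    return updates
```

Checking the old image as well as the new one makes the handler idempotent with respect to unrelated edits of already-approved rules.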

Security, Governance and Tenant-Isolation
The platform is designed for regulated financial-services environments, so governance is built in rather than bolted on.
Tenant isolation is enforced across identity, storage, and workflow state. Amazon Cognito handles sign-in and group membership. IAM policies ensure each tenant can only access its own resources in DynamoDB and S3. Tenant-specific runtime configuration lives in AWS Systems Manager Parameter Store. Inside the agent runtime, every request is checked against an approved tenant identifier, and gateway connections are cached per tenant, so runtime state never leaks across boundaries.
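The per-request tenant check can be sketched as below. The claim name and the in-memory allowlist are assumptions; in practice the approved identifiers would be loaded from Parameter Store and the token would already have been verified by Cognito.

```python
# Minimal sketch of the per-request tenant check inside the agent runtime.
# The claim name and allowlist literal are assumptions; in a real deployment
# the approved tenant identifiers would come from Parameter Store.

APPROVED_TENANTS = {"tenant-a", "tenant-b"}   # hypothetical allowlist

def resolve_tenant(claims: dict) -> str:
    """Validate the tenant claim from an already-verified Cognito token payload."""
    tenant_id = claims.get("custom:tenant_id", "")
    if tenant_id not in APPROVED_TENANTS:
        # Fail closed: unknown tenants never reach tools or storage.
        raise PermissionError(f"unknown tenant: {tenant_id!r}")
    return tenant_id
```

Failing closed here is what lets downstream tool calls and cached gateway connections be keyed safely by tenant.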
Amazon Bedrock Guardrails at the system entry point enforce content safety and policy controls, adding protection against unsafe or non-compliant outputs. Combined with Amazon CloudWatch and AWS X-Ray, the platform gives FXE end-to-end visibility into how cases are processed, which services were called, and how decisions were assembled.
Governance is reinforced through explicit workflow states. Transitions create an auditable trail across report generation, human review, comparison, aggregation and knowledge-base update.
The Result
FXE can now assess commercial lending applications using a contextual and scalable approach, rather than traditional score-based methods alone. AI agents generate structured underwriting reports and investigate evidence efficiently. Human underwriters remain in control of review and final refinement. Reviewer changes are captured in a structured way, recurring patterns are aggregated, and approved changes are fed back into the tenant-specific risk-to-mitigation knowledge base.
Benefits include a reduction in manual burden on grey-area cases, stronger consistency across underwriting outputs, better reuse of underwriter expertise, and a governed mechanism for continuous improvement. FXE now has the technical foundations needed for regulated use cases: tenant-aware storage, approval-based knowledge updates, explicit workflow states and end-to-end operational traceability.
The combination of agentic reasoning with human governance is at this platform's core, and it is what made the project a fitting example for the AWS Summit keynote stage.
Why AWS?
FXE selected AWS for the managed services needed to support enterprise-grade agentic AI without adding operational complexity.
Amazon Bedrock provides the foundation models for reasoning, synthesis and structured decision support without managing model infrastructure. Amazon Bedrock AgentCore provides the runtime and orchestration layer for the multi-agent workflow, enabling controlled execution, conditional routing and secure tool access.
AWS Lambda separates business logic from prompting by exposing mitigation lookups and document-processing functions as secure tools, and powers the event-driven comparison workflow.
Amazon DynamoDB provides scalable, low-latency storage for mitigation mappings, flag catalogues, report metadata and structured delta records.
Amazon S3 provides durable storage for supporting documents and generated reports, maintaining a clean separation between evidence, reasoning and outputs.
Amazon Cognito and AWS Systems Manager Parameter Store support secure, multi-tenant operation.
Amazon Bedrock Guardrails, Amazon CloudWatch and AWS X-Ray strengthen governance, policy enforcement and end-to-end traceability.
AWS Services used
Amazon Bedrock, Amazon Bedrock AgentCore, Amazon Bedrock Guardrails, AWS Lambda, Amazon DynamoDB, Amazon S3, Amazon EventBridge, Amazon SNS, Amazon Cognito, AWS Systems Manager Parameter Store, Amazon CloudWatch, and AWS X-Ray.



