What types of open banking data products exist?
Open banking data products fall into six categories, each with different buyers, data requirements, and commercial models.

| Product type | What it does | Primary buyer | Data requirement |
|---|---|---|---|
| Account aggregation | Consolidates accounts from multiple banks into a single view | Consumers, wealth managers, PFM apps | Account balances, transaction history across institutions |
| Income verification | Confirms income source, amount, and stability from transaction data | Lenders, landlords, employers | 3-6 months of salary credits, categorised by source |
| Affordability assessment | Calculates disposable income after committed expenditure | Consumer lenders, mortgage providers | 3-6 months of transactions, categorised into income and committed spend |
| Credit decisioning | Provides risk scores or lending recommendations based on financial behaviour | Banks, credit unions, alternative lenders | 6-12 months of transactions, debt payments, gambling markers, income stability |
| Transaction categorisation | Classifies raw transaction descriptions into spending categories | Any data product provider; white-label for banks | Ongoing transaction feeds, merchant data, category taxonomies |
| Financial health insights | Delivers personalised financial guidance based on spending patterns | Banks (embedded), PFM providers, insurers | Ongoing consented access with categorisation and trend analysis |
Why does use case design determine commercial viability?
The most common failure in open banking product development is building from data availability rather than buyer need. Founders see the data available through the API and ask “what can we build?” The correct question is: “what decision does the buyer need to make, and what data makes that decision better?” Use case design determines viability because:

- Willingness to pay tracks decision value, not data volume. A lender will pay for income verification because it directly reduces default risk. The same lender will not pay for a spending visualisation because it does not change a lending decision.
- Regulatory scope tracks use case, not product. An affordability assessment for lending falls under FCA consumer credit regulation. A personal finance management tool may not. The regulatory burden — and therefore the cost to build — is determined by what the buyer uses the product for.
- Data scope must be proportionate. GDPR requires data minimisation. A product that accesses 12 months of full transaction history for a use case that only needs 3 months of income data is over-scoped and harder to sell.
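The proportionality point can be made concrete with a small sketch. The use-case names, history windows, and field sets below are illustrative assumptions, not a standard taxonomy; the idea is simply that a consent request should be validated against what the use case actually needs.

```python
# Hypothetical scope-proportionality check. The use cases, history
# windows, and field sets are illustrative assumptions for this sketch.
REQUIRED_SCOPE = {
    "income_verification": {"months": 6, "fields": {"credits"}},
    "affordability": {"months": 6, "fields": {"credits", "debits"}},
    "credit_decisioning": {"months": 12, "fields": {"credits", "debits", "standing_orders"}},
}

def is_proportionate(use_case: str, requested_months: int, requested_fields: set[str]) -> bool:
    """True if the requested scope does not exceed what the use case needs."""
    need = REQUIRED_SCOPE[use_case]
    return requested_months <= need["months"] and requested_fields <= need["fields"]
```

A product requesting 12 months of full history for a 3-month income check would fail this test before it ever reaches a consent screen.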
How is transaction data transformed into a data product?
The transformation pipeline has five stages. Each stage adds commercial value and introduces quality risk.

Stage 1: Data ingestion
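A minimal sketch of the normalisation work this stage performs. The prefix, suffix, and abbreviation tables are assumptions for illustration; real ingestion layers maintain per-bank rule sets.

```python
import re

# Illustrative normalisation rules -- these tables are assumptions,
# not a standard; production pipelines maintain per-bank variants.
PREFIXES = re.compile(r"^(DD|SO|FPO|CARD PAYMENT TO)\s+")
SUFFIXES = re.compile(r"\s+(LTD|LIMITED|PLC|CO)$")
ABBREVIATIONS = {"INS": "INSURANCE"}

def normalise_description(raw: str) -> str:
    """Strip bank prefixes and legal suffixes, expand common abbreviations."""
    text = PREFIXES.sub("", raw.upper().strip())
    text = SUFFIXES.sub("", text)
    return " ".join(ABBREVIATIONS.get(w, w) for w in text.split())
```

Under these rules, “DD ACME INS”, “ACME INSURANCE LTD”, and “ACME INS CO” all normalise to the same merchant string.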
Raw transaction data arrives through open banking APIs in different formats across banks. Field names, transaction descriptions, date formats, and status codes vary by institution. Practical challenge: a direct debit to “ACME INSURANCE” may appear as “DD ACME INS”, “ACME INSURANCE LTD”, or “ACME INS CO” depending on the bank. The ingestion layer must normalise these before downstream processing.

Stage 2: Categorisation
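A minimal sketch of this stage using keyword rules. The keyword lists are assumptions; production categorisers combine merchant databases, MCC codes, and ML models to reach the accuracy levels discussed below.

```python
# Illustrative rule-based categoriser. Keyword lists are assumptions.
# Note: substring matching is naive (e.g. "CURRENT" contains "RENT");
# shown only to illustrate the shape of the problem.
CATEGORY_KEYWORDS = {
    "salary": ["SALARY", "PAYROLL"],
    "gambling": ["BET365", "CASINO", "LOTTERY"],
    "rent": ["RENT"],
    "utilities": ["BRITISH GAS", "WATER", "ELECTRIC"],
}

def categorise(description: str) -> str:
    desc = description.upper()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(k in desc for k in keywords):
            return category
    return "uncategorised"
```

The naive-matching caveat is exactly why the accuracy ceiling discussed below is so hard to raise: each additional point of accuracy requires handling more edge cases.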
Transaction descriptions are classified into categories: salary, rent, utilities, groceries, subscriptions, gambling, debt repayment. This is where most of the value — and most of the error — is created. Categorisation accuracy directly affects product quality. If a salary credit is miscategorised, income verification fails. If a gambling transaction is missed, credit risk assessment is wrong. Industry-standard categorisation accuracy ranges from 85% to 95%, and pushing beyond 95% towards 99% requires significant investment.

Stage 3: Enrichment
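Two of the derived metrics named in this stage can be sketched as follows. The coefficient-of-variation definition of income stability is an assumption for illustration; providers define these metrics in their own ways.

```python
from statistics import mean, pstdev

def income_stability(salary_credits: list[float]) -> float:
    """Coefficient of variation of salary credits (lower = more stable).
    One possible definition, assumed here for illustration."""
    return pstdev(salary_credits) / mean(salary_credits)

def debt_to_income(monthly_debt_payments: float, monthly_income: float) -> float:
    """Share of income committed to debt repayments."""
    return monthly_debt_payments / monthly_income
```

A perfectly regular salary produces a stability score of 0; irregular freelance income scores higher, which downstream scoring can penalise or flag for referral.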
Categorised data is enriched with derived metrics: income stability (variance in salary credits over time), debt-to-income ratio, essential-versus-discretionary spending ratio, recurring payment patterns. These derived metrics are the product. The raw data and categories are the inputs. Buyers pay for the output — the score, the verification, the recommendation.

Stage 4: Scoring and decisioning
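A sketch of the explainability requirement in this stage: each rule that fires is recorded as a human-readable reason, so the buyer sees why a decision was made, not just what it was. The thresholds are illustrative assumptions, not regulatory values.

```python
# Explainable rule-based decisioning sketch. Thresholds are assumptions.
def decide(dti: float, income_cv: float, gambling_ratio: float) -> tuple[str, list[str]]:
    """Return (decision, reasons) so every outcome is explainable."""
    reasons = []
    if dti > 0.45:
        reasons.append(f"debt-to-income {dti:.2f} exceeds 0.45")
    if income_cv > 0.30:
        reasons.append(f"income variability {income_cv:.2f} exceeds 0.30")
    if gambling_ratio > 0.10:
        reasons.append(f"gambling spend ratio {gambling_ratio:.2f} exceeds 0.10")
    if not reasons:
        return "approve", ["all thresholds passed"]
    return ("refer" if len(reasons) == 1 else "decline"), reasons
```

Returning the reasons alongside the decision is what distinguishes this from a black-box score: the same structure works whether the underlying model is rules, a scorecard, or an ML model with attributions.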
Enriched data feeds a decisioning model: approve/decline/refer for lending, income-verified/unverified for landlords, affordable/unaffordable for credit providers. The scoring model must be explainable. In regulated markets, the buyer needs to understand why a decision was made — not just what the decision was. Black-box models are harder to sell in regulated environments.

Stage 5: Delivery
The data product is delivered via API, dashboard, or embedded integration. The delivery format affects adoption: lenders want API integration into existing workflows; landlords want a PDF report; banks want a white-label widget.

What commercial models work for open banking data products?
Three commercial models dominate:

- Per-query pricing. The buyer pays per API call, per verification, or per decisioning request. Works for income verification, affordability checks, and credit decisioning. Typical range: £0.50 to £5 per query depending on data depth and buyer segment.
- Platform subscription. The buyer pays a monthly fee for access to the API and a defined volume of queries. Works for larger buyers with predictable volume. Typical range: £5,000 to £50,000 per month.
- Revenue share. The data product provider takes a percentage of the value generated — for example, a share of loans originated using the product’s decisioning. Higher risk, higher upside. Requires trust and transparency on both sides.
Common mistakes
- Building from data, not from use case. The API provides access to many data fields. Accessing all of them and building “something interesting” produces a product nobody pays for.
- Underestimating categorisation complexity. Transaction categorisation looks simple until you handle multi-currency accounts, split payments, bank-specific description formats, and edge cases. It is the hardest technical problem in open banking.
- Ignoring the 90-day re-consent requirement. Products requiring ongoing data access must build PSD2 re-consent into the user experience. Failing to do this causes data access to lapse silently when consent expires.
- Over-scoping data access. Requesting more data than the use case requires reduces consent rates and increases regulatory risk. See consent and risk.
- Treating the API as the product. The API provides raw data. The product is the transformation. If the buyer could use the raw data directly, they would not need a data product provider.
- Neglecting multi-bank consistency. Data formats, field availability, and API reliability vary across banks. A product that works well with one bank but fails with another has a limited market.
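The 90-day re-consent point above can be sketched as a simple consent-expiry check that the user experience hooks into. The 14-day warning window is an illustrative assumption; the 90-day validity period comes from the PSD2 requirement discussed above.

```python
from datetime import date, timedelta

CONSENT_VALIDITY = timedelta(days=90)   # PSD2 re-consent period
WARNING_WINDOW = timedelta(days=14)     # illustrative assumption

def consent_status(granted_on: date, today: date) -> str:
    """Classify a consent so the product can prompt re-authorisation
    before data access lapses silently."""
    expiry = granted_on + CONSENT_VALIDITY
    if today >= expiry:
        return "expired"
    if today >= expiry - WARNING_WINDOW:
        return "renewal_due"
    return "active"
```

Surfacing `renewal_due` in the product — an email, an in-app prompt — is what prevents the silent-loss failure mode: by the time a consent is `expired`, the data feed is already broken.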
Key takeaways
- Open banking data products transform raw transaction data into decisions. The value is in the transformation, not the data.
- Start from the buyer’s decision, not from the available data. The decision determines what to build, what data to access, and what commercial model works.
- Categorisation accuracy is the quality bottleneck. Invest in getting it right.
- Consent scope, retention policy, and the 90-day re-consent requirement must be designed into the product from the start.
- Three commercial models work: per-query, subscription, and revenue share. Match the model to the buyer’s procurement structure.