
The Wallet RFP Guide

Christopher Grilhault des Fontaines
José Aguinaga
April 28, 2026

This is Part II of our wallet guide series for fintechs and institutions.

In Part I, The Wallet Service Guide, we argued that the industry has outgrown the habit of reducing wallets to binary choices like “custodial vs non-custodial.” We established that a wallet can no longer be seen as just a key holder, a signer, a smart contract, or an app interface. It’s a layered system that combines key management, node connectivity, transaction building, transaction management, policy enforcement, user and entitlement controls, and, increasingly, orchestration across third-party services. We also introduced a broader framework for understanding wallets through three lenses at once: their technical components, their control and deployment model, and the interfaces through which users, developers, and systems interact with them.

This Part II builds on that foundation and turns taxonomy into a decision-making process. If Part I was about understanding what a wallet service is, Part II is about understanding how to adopt one. The challenge for banks, fintechs, and financial institutions is not simply to compare features or security claims. It is to determine where wallet infrastructure should sit inside a broader financial architecture, how authority and operational control should be distributed, how transactions should be governed and delivered, and how this new layer should complement, extend, or reshape existing systems. In other words, this is the point where theory becomes operational. 

Because the truth is that most financial institutions are not evaluating wallet infrastructure as a nice new feature, or as a sidecar developer tool, or as a crypto-native experiment sitting safely outside the real business. They are evaluating it as one of the newest and most consequential components they may have to introduce into their core architecture if they want to remain relevant as value, money, securities, collateral, and payments move onchain. In 2025, the Bank for International Settlements explicitly framed tokenized platforms, with central bank money, commercial bank money, and government bonds at the centre, as part of a next-generation monetary and financial system. In parallel, the Eurosystem has already run dozens of DLT settlement trials with market participants and has since moved forward with a plan to enable DLT transactions to settle in central bank money. At the same time, institutions in Europe are making these decisions under the pressure of MiCA, which means wallet infrastructure is no longer only a product or engineering question. It is also a resilience, governance, outsourcing, and supervisory question. That is why wallet evaluations feel unusually difficult.

A bank does not assess one “wallet.” It assesses several possible wallet types. It assesses several possible places for wallet infrastructure to sit. It assesses whether the wallet layer should remain mostly complementary to existing systems or gradually become competitive with some of them. It assesses whether the wallet should behave like a channel feature, a control plane, a transaction engine, a signing boundary, a custody layer, an integration hub, or some combination of all of them. And it must do so while taking one of the boldest technical adoption moves in its recent history: introducing blockchain-native execution, asset handling, and cryptographic control into systems that were built for accounts, ledgers, files, batch processing, and tightly managed perimeter security.

That is why a quality wallet RFP has to do more than compare vendors. It has to teach the buyer how to think. This guide is written for exactly that purpose. It is for banks, fintechs, brokerages, custodians, payment companies, exchanges, tokenization platforms, and treasury teams that want to migrate flows and business onchain without losing clarity in the process. It is not a feature checklist. It is a way to move from confusion to structure, from slogans to architecture, and from vendor demos to real decision-making.

The first mistake most teams make

The most common mistake in wallet evaluations is that teams begin with the technology before they have defined the job. They begin by comparing MPC and HSMs. Or they ask which vendors support Solana, Ethereum, Bitcoin, and Cosmos. Or they look for SOC 2, ISO 27001, FIPS modules, raw signing APIs, staking support, webhook streams, and dashboards. Those questions matter, but they matter later.

The first question is simpler and more demanding: What is the job of your wallet?

That question sounds almost too basic, but it is the one that determines whether the rest of the RFP will be useful or mostly noise. A wallet is not just a place where keys live. In a financial institution, it becomes an execution layer for organization-wide intents. It may sit at the intersection of ownership, permissions, transaction authorization, policy enforcement, compliance checks, customer entitlements, ledger updates, reconciliation, and service automation. If you do not define what the wallet is supposed to power, every vendor presentation will sound plausible.

We recommend beginning your process with six simple descriptions written in full sentences.

  1. Describe the business line.
    What role does this wallet play in your institution? Is this supporting treasury, custody, trading, payments, tokenized deposits, or securities? Is it internal infrastructure or a client-facing product? What revenue or operational function depends on it? Is this a standalone use case or part of a broader roadmap?
  2. Describe the actors.
    Who interacts with the system, directly or indirectly? Who initiates transactions: individuals, teams, or automated systems? Who approves them: treasury, risk, compliance, or management? Are there end users involved, or only internal operators? What roles must be separated, and which can overlap?
  3. Describe the velocity and criticality.
    What kind of activity profile are you designing for? Are you processing thousands of low-value transactions or a few high-value ones? What is the acceptable latency for execution and approval? Which actions are routine, and which are exceptional? What is the financial and reputational impact of a mistake?
  4. Describe the user experience.
    What should users never have to think about? Should users see blockchain elements like addresses, gas, and signatures? Should transactions feel like traditional banking flows? Where do you abstract complexity, and where do you expose it? Who needs full transparency versus simplified views?
  5. Describe the flow of funds.
    What are the actual flows of value over time? Where do funds enter and exit the system? How are assets distributed across cold, warm, and segregated wallets? How often are balances rebalanced or swept? What are the typical transaction paths?
  6. Describe the regulatory obligations.
    What constraints must the system satisfy by design? Who is legally in control of the assets? What approvals, controls, or segregation are required by regulation? Where must keys, data, or systems be located? What must be logged, auditable, and reportable at all times? What compliance checks must occur before execution?

If you cannot answer those six points clearly, you may not be ready to evaluate wallet infrastructure.

Why financial institutions think differently about wallets

Crypto-native companies often begin with the wallet close to the center of the system. They start with signing, asset movement, and chain interaction, then build operations, controls, and user experience around that core. 

Banks and established fintechs usually come at the problem from the opposite direction. They already have a core banking system, payment rails, treasury tooling, sanctions screening, fraud systems, general ledger, reconciliation stack, IAM, HSM or cloud KMS estates, data warehouses, reporting systems, and established control functions. For them, wallet infrastructure is not arriving into a vacuum, but into a crowded architecture with existing owners, controls, politics, and definitions of what a critical system is. 

The wallet infrastructure may be complementary to a bank’s core ledger because it handles blockchain execution while the ledger remains the accounting system of record. It may also be complementary to payments systems because it extends payout rails into stablecoins and blockchain networks. But it may be competitive with some existing middleware because it starts to absorb transaction orchestration, policy checks, routing logic, and asset operations that previously sat elsewhere.

This is why simplistic wallet debates often go nowhere in banks. The real issue is not merely “Which wallet is safest?” The real issue is “What role is this new layer going to play inside our architecture, who is going to own it, how much of our operating model will bend around it, and how reversible will that decision be?”

That is a much better starting point.

How to structure the benchmark in five layers

Most wallet RFPs become large because they are not properly layered. They mix existential questions with secondary questions and then overwhelm everyone involved. A much more useful approach is to move through five layers in order.

  1. The first layer is business purpose. What is the wallet there to enable?
  2. The second layer is architectural placement. Where will wallet infrastructure sit relative to your ledger, channels, controls, data, and operations?
  3. The third layer is the control model. Who can do what, under what conditions, with what cryptographic and organizational guarantees?
  4. The fourth layer is execution and resilience. How are transactions created, simulated, signed, delivered, monitored, retried, and reconciled?
  5. The fifth layer is vendor fitness. Does this provider have the product maturity, operational discipline, documentation, support, roadmap, and economics to be trusted with that role?

If your process moves in that order, you will make better choices. If it starts with vendor rankings and ends with a vague sense of “They seemed strong,” you will not.

Layer one: define what the wallet is really powering

The strongest sentence in any wallet RFP is usually not technical, but operational. For example: “We need a wallet platform that supports USD stablecoin accounts, automates on-chain withdrawals, enforces approvals for large transactions, connects to our ledger, and lets support teams act safely on behalf of users.” That sentence is already more valuable than three pages of disconnected feature requests.

From there, you can get specific. Are you building a consumer-facing app or an internal treasury platform? Do you need wallets per user, per organization, per product line, or per settlement corridor? Do you need transaction approvals, automation, or human-in-the-loop validation? Is the wallet supporting payments, custody, trading, token issuance, or internal market infrastructure? Do you need users, teams, and agents to coexist under the same model? Do you need segregated environments for production, staging, client-specific setups, and regulated entities?

A wallet is not the product; it is the execution layer for the product (i.e., the financial service). If you miss that distinction, you will overbuy features that do not matter and under-specify the capabilities that do.

Layer two: decide where the wallet sits in your architecture

This is where many institutions finally realize they are not evaluating one thing, but several possible shapes of the same thing. A wallet infrastructure layer can sit in at least four different places, sometimes simultaneously:

  1. In one model, it behaves mainly as an end-user or application-facing wallet layer. It supports onboarding, user interactions, account views, approvals, and mobile or web experiences. In that configuration, the key question is how much blockchain complexity is abstracted and how safely the user-facing layer maps to the control layer underneath.
  2. In a second model, it behaves as a policy and orchestration layer. It sits between channels and execution systems, evaluating who can initiate, approve, simulate, route, or cancel transactions. In this model, the wallet is less a UI and more a programmable trust system.
  3. In a third model, it behaves as the signing and transaction engine itself. This is where key management, transaction assembly, raw signing, broadcasting, nonce and UTXO handling, rebroadcasts, and finality monitoring become central.
  4. In a fourth model, it becomes part of an asset operations layer. It handles wallet provisioning, address management, asset coverage, staking, token operations, chain indexing, and integrations with compliance, accounting, liquidity venues, and reporting.

The point is not that one of these is right and the others are wrong. The point is that your vendor may be strong in one of these roles and weak in another. A provider built around dashboard-driven treasury approvals may not be the strongest programmable transaction engine. A provider strong in raw signing and low-latency execution may not provide the enterprise controls, reporting surfaces, or delegated support flows your bank needs. A provider with beautiful end-user abstractions may be too rigid to fit the rest of your system. 

This is why you should ask every vendor, in one form or another: what role do you really want your platform to play inside my architecture? The answer is often more revealing than the feature set.

Layer three: choose the control model, not just the vendor label

The labels custodial, non-custodial, and co-custodial are only a starting point. What actually matters is the distribution of authority and failure. Who can move assets? Can any single admin, region, or system bypass policy? What happens if a device is compromised, a signer is unavailable, or infrastructure fails? This layer defines how your system behaves under stress, not how it looks in a demo.

At its core, the market splits into two architectural models:

  • In the first, more common model—the user-hosted key (UHK) model—private keys (or key shares) effectively act as authentication credentials. Access to a device, API token, or signing surface is tied directly to control over the key, which in turn grants authority to move funds. Identity, approval, and execution are coupled into the same layer. This approach became the industry default over a decade ago, driven by the principle “not your keys, not your coins.” While it reinforced ownership, it also shifted the burden of key management onto users and operators. The result has been significant loss and fragility—estimates suggest that around 20% of all Bitcoin is permanently lost due to private key mismanagement (New York Times). Structurally, the model creates weak boundaries: if credentials are exposed—through a leaked API key, compromised device, or operational error—policy can often be bypassed and assets moved directly. Many platforms today still rely on variations of this model, even when wrapped in MPC or mobile-based signing flows.
  • In the second model—the network-hosted key (NHK) model—authentication and key control are strictly separated. Users authenticate using credentials such as passkeys or hardware factors, policies are evaluated independently, and only then is a signing process triggered within a controlled, isolated environment. Private keys are never used as login primitives and are never exposed to user devices. Signing becomes the final step of a governed workflow, not the immediate consequence of access. This layered design enforces clear separation between identity, policy, and cryptographic execution, reducing the risk that a single compromised component can propagate across the system. It aligns closely with established security practices, including NIST cybersecurity and key management frameworks, where sensitive key material is isolated from user-facing environments and access is continuously verified rather than implicitly trusted.

This distinction matters more than the underlying cryptography. Whether you use MPC, HSM, TEE, or a combination of them all, the real question is how intents are processed and how authority is structured, constrained and protected. MPC favors distribution and availability. HSMs favor hardware-backed control and familiarity. TEEs isolate execution. Air-gapped setups protect reserves. In practice, institutions combine these approaches but the control model determines how they fit together.

You also need to be precise about who can do what. How many roles exist? Who can initiate, approve, sign, or override? Are there delegated flows for support or incident response? How are devices, users, and service accounts modeled? How are credentials issued and revoked? The depth of these answers quickly reveals whether a system is designed for real operations or just basic custody.

At this stage, you are not choosing a vendor. You are defining who holds power, how that power is enforced, and how failures are contained. Everything else follows from that decision.

Layer four: map the real transaction lifecycle

This is the section many teams underestimate, and it is where some of the most painful production failures occur. A serious financial transaction does not begin and end at signing. It has a lifecycle. Someone or something initiates it, data is enriched, policies are checked, logs are populated, the transaction may be decoded and simulated, approvals may be collected, rules may inspect amount thresholds, destinations, geography, counterparties, devices, or time windows. Then signing occurs, then broadcasting, then monitoring, then confirmation (unless blocked by AML/KYT checks or last-mile checks), exception handling, reconciliation, audit archival, and often downstream accounting or customer messaging. A wallet platform should therefore be evaluated as part of a transaction lifecycle, not as an isolated signing service.
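One way to make that lifecycle auditable is to model it as an explicit state machine, so every transaction is always in exactly one named state and illegal jumps (e.g. signing before policy checks) are impossible by construction. The states and transitions below are an illustrative simplification, not any platform's actual model.

```python
# Illustrative transaction lifecycle as an explicit state machine.
TRANSITIONS = {
    "initiated":      {"enriched"},
    "enriched":       {"policy_checked"},
    "policy_checked": {"approved", "rejected"},
    "approved":       {"signed"},
    "signed":         {"broadcast"},
    "broadcast":      {"confirmed", "failed"},
    "failed":         {"broadcast"},          # rebroadcast / retry path
}
TERMINAL = {"confirmed", "rejected"}

def advance(state: str, next_state: str) -> str:
    """Move a transaction to next_state, rejecting illegal jumps."""
    if state in TERMINAL:
        raise ValueError(f"{state} is terminal")
    if next_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {next_state}")
    return next_state
```

A vendor whose API exposes states this granular makes reconciliation and audit far easier than one that reports only "pending" and "done".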

Ask yourself who initiates the transaction, who reviews it, who can amend it, who can cancel it, how simulations are run, how policies are enforced, what external checks are called, whether the action is one blockchain transaction or a chain of actions, how errors are surfaced back to your system, and whether your evidence is structured for auditability from the start. Then ask the vendor to map those steps onto their architecture.

This becomes even more important when the wallet is meant to support programmable flows. Do you need automated withdrawals and settlements? Onchain policy enforcement? Scheduled execution? Conditional actions based on external inputs such as exchange rates, oracle values, or internal risk flags? Can the vendor tell you what is configurable, what is programmable, and what remains hardcoded?

A growing dimension in this lifecycle is the role of AI systems and autonomous agents. Increasingly, transactions are not initiated by humans but by software: risk engines, treasury optimizers, market-making bots, customer support tools, or AI agents acting on behalf of users or institutions. This raises new questions. Can an agent initiate a transaction? Under what identity? With what permissions? Can it propose actions but require human approval? Can it act within predefined limits autonomously? How are its decisions logged, explained, and audited? What prevents an agent from escalating its own privileges or acting outside its intended scope?

In this context, the lifecycle must account for machine actors alongside human actors. Agents should be treated as first-class participants with clearly defined roles, scoped permissions, and verifiable identities. Their actions should go through the same policy checks, approval layers, and audit pipelines as any other transaction. API-first systems are usually better at this than UI-first ones. Also, if AI is used to enrich or recommend actions, such as flagging suspicious flows, suggesting rebalancing, or optimizing execution, those interventions must remain observable, reversible, and bounded by deterministic controls.
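A minimal sketch of what "scoped permissions for machine actors" can look like in code, assuming a per-transaction cap and a human-approval threshold. The field names and the two-value result are hypothetical illustrations of the principle that an agent can never grant itself more authority than its scope allows.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentScope:
    """Illustrative scoped permission grant for a machine actor."""
    agent_id: str
    max_amount: int                        # per-transaction cap, minor units
    allowed_actions: frozenset             # e.g. {"transfer", "rebalance"}
    requires_human_approval_above: int     # autonomy limit within the cap

def check_agent_intent(scope: AgentScope, action: str, amount: int):
    """Return (allowed, needs_human). The scope is immutable, so an
    agent cannot escalate its own privileges at runtime."""
    if action not in scope.allowed_actions:
        return (False, False)
    if amount > scope.max_amount:
        return (False, False)
    return (True, amount > scope.requires_human_approval_above)
```

The key property is that the check runs in the policy layer, outside the agent's control, and its outcome (including escalations to a human) can be logged like any other approval decision.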

Many systems today are not designed for this. They either assume a human operator at every step, or they expose raw primitives without guardrails, making automation risky. As AI-driven workflows expand, the wallet layer must evolve from a passive executor to a governed orchestration system where humans, machines, and policies interact safely.

Last but not least, we see that transaction delivery is where “wallet security” breaks down. A key distinction in any RFP is between key safety and transaction delivery reliability. Vendors focus heavily on the former. In practice, many failures happen in the latter. Transactions don’t fail at signing, they fail after. They get dropped, fees go stale, nonces collide, UTXOs are reused, nodes desync, broadcasts fail, chains reorganize, or systems record success too early. Users think money has moved when it hasn’t. That’s why a wallet must be evaluated not just as a key manager, but as a transaction delivery system.

Ask what happens when things go wrong: Do they rebroadcast automatically? Adjust fees dynamically? Handle reorgs? Track confirmation state? Detect duplicates or nonce issues? Can they distinguish between signature success, broadcast success, and settlement success? If the answer is “we sign and return a hash,” it’s not a complete system. This is also where the difference becomes clear between a wallet that does everything for you and one that gives you control. You want visibility and flexibility to manage reliability, not hidden assumptions that surface under stress.
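The three successes mentioned above are distinct, trackable states, and retry logic usually needs a fee-escalation rule rather than blind rebroadcasts. The sketch below is illustrative: the enum names and the percentage-bump strategy are assumptions, not a prescription for any particular chain.

```python
from enum import Enum

class DeliveryStatus(Enum):
    SIGNED = "signed"          # a valid signature was produced
    BROADCAST = "broadcast"    # the transaction was accepted into a mempool
    SETTLED = "settled"        # confirmed past the chosen finality threshold

def rebroadcast_fee(base_fee: int, attempt: int, bump_pct: int = 15) -> int:
    """Illustrative fee escalation: raise the fee by bump_pct percent
    on each retry so a stuck transaction can replace itself instead of
    lingering with a stale fee."""
    fee = base_fee
    for _ in range(attempt):
        fee += fee * bump_pct // 100
    return fee
```

A system that only ever reports the SIGNED state is exactly the "we sign and return a hash" pattern to be wary of.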

Many wallet products are either too rigid, because they embed too many assumptions, or too loose, because they expose primitives without giving enough operational structure. You want something better than both. You want a system that exposes the right surfaces to code, including agents and automation, while still enforcing discipline, control, and auditability across the entire transaction lifecycle.

Layer five: security cannot be a late-stage checkbox

Security comes next, but not as a detached section. It has to frame everything else. A wallet platform’s value diminishes very quickly if its security assumptions fail under pressure. This is especially true in financial institutions, where the wallet layer is not just exposed to external attackers but also to insider risk, process breakdowns, configuration drift, rushed operational workarounds, and third-party dependency failures. If you want a practical way to structure this part of the RFP, use three lenses.

  1. The first is architecture. Ask how risk is distributed. Are keys split, replicated, or hardware-bound? Can any single admin or service bypass policy? Is infrastructure separated by environment, client, region, and administrative domain? Are approval policies enforced by a runtime engine or by people coordinating manually? If a vendor says “institutional-grade MPC” but cannot explain how policy, identity, infrastructure, and recovery interact, the answer is not mature enough.
  2. The second is controls. Ask about end-to-end encryption, phishing-resistant authentication, role-based access, session controls, transaction limits, destination whitelisting, device restrictions, audit logging, breach response, webhook integrity, and SIEM export. Is authentication done only via an opaque mobile application, or can it be done through strong hardware-based authentication controls, as suggested by NIST SP 800-63B Authentication Assurance Levels? Ask whether logs are immutable or tamper-evident, whether log chaining or equivalent integrity measures exist, and whether sessions can be constrained tightly enough for regulated operational environments.
  3. The third is evidence. This is where certifications and audits matter. NIST’s Cybersecurity Framework 2.0 is useful here because it gives institutions a way to translate cybersecurity outcomes into governance, risk, and operational language. NIST SP 800-57 remains a reference point for thinking about key lifecycle and key management maturity. MAS’s Technology Risk Management Guidelines are especially useful for institutions that want to see wallet infrastructure through the lens of enterprise technology governance rather than crypto marketing. FFIEC outsourcing guidance remains highly relevant for banks evaluating third-party technology providers whose resilience could affect critical services.

Corporate security cannot be an afterthought in wallet design. It must be treated as an inherent feature of the product. Be cautious of companies that claim to “rely on Ethereum’s security” by offloading responsibility to a smart contract. That should raise a red flag. A platform can have excellent cryptography and still be operationally weak if its admin model is sloppy, its support access is unclear, its approval boundaries are social rather than deterministic, or its logs cannot support an internal investigation.

Certifications and audits can help, but only if you understand what they actually prove. SOC 2, ISO 27001, ISO 27017, ISO 27018, CCSS, FIPS validations, penetration tests, and privacy controls are all useful signals, but they are often misunderstood. A SOC 2 Type I shows that controls existed at a point in time. A SOC 2 Type II shows that those controls operated over a period. ISO 27001 confirms the existence of an information security management system, but not necessarily that the most critical aspects of key management were in scope. A penetration test tells you someone looked for vulnerabilities, but the real questions are when, with what scope, by whom, and what changed as a result. Always ask for the reports. Don’t trust, verify.

The right approach is to connect the dots between controls, audits, and real operations. Ask whether key generation and key lifecycle were in scope. Ask whether production systems were tested, what exclusions were made, how often independent assessments are run, and whether reports can be shared, even in redacted form. Ask about major findings, remediation timelines, and whether any security incidents occurred in recent years, and what changed afterward. Security claims only become credible when a vendor can link controls to audits and audits to operational outcomes.

This is especially critical for financial institutions and fintechs. You are not just buying cryptography, you are buying a system that must fit into enterprise security environments. Your evaluation should therefore go beyond wallet mechanics and include identity and access management, such as SSO (SAML or OIDC), SCIM provisioning, device trust models, passkeys and phishing-resistant MFA, admin session controls, approval entitlements, just-in-time (JIT) access, and privileged access management (RBAC, PolP, etc.). It should also include system-level protections such as webhook signing, replay protection, SIEM integration, incident response workflows, evidence collection, and strict separation of duties.
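Webhook signing and replay protection, mentioned above, are simple to probe in a vendor evaluation. A sound scheme typically combines an HMAC over the payload with a timestamp window so a captured event cannot be re-delivered later. The sketch below is a generic illustration; the message format and header conventions are assumptions, not any specific vendor's scheme.

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, body: bytes, signature_hex: str,
                   timestamp: int, now: int, max_skew: int = 300) -> bool:
    """Check webhook integrity (HMAC-SHA256) and freshness (timestamp
    window) before trusting the event. Rejecting stale timestamps is
    what blocks replays; compare_digest avoids timing side channels."""
    if abs(now - timestamp) > max_skew:
        return False                      # stale: possible replay
    msg = str(timestamp).encode() + b"." + body
    expected = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```

If a vendor signs webhooks without binding a timestamp (or nonce) into the signature, every captured event remains replayable forever.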

The systems that will last are the ones that treat money movement as enterprise control infrastructure, not just as interaction with a blockchain.

Resilience matters as much as confidentiality

The Digital Operational Resilience Act (DORA) has made this much harder to ignore for European institutions, many of which are also operating under MiCA, but the principle is broader. Similar requirements can be found in the USA under FISMA. A financial institution must assume that severe ICT disruptions will happen and that critical external providers will sit inside the blast radius of those disruptions. DORA’s application from January 2025 and the EBA’s focus on oversight of critical ICT third-party providers reinforce the idea that resilience is now a first-order selection criterion, not a supporting one.

This is why your RFP should ask for recovery time objectives, recovery point objectives, evidence of drills, region and provider separation, and concrete answers about failover. A useful baseline in many evaluations is an RTO of four hours or less and an RPO measured in seconds rather than hours. Those are not universal thresholds, but they are a good test of whether a vendor thinks like a critical system provider or like a generic SaaS company.

Operational resilience also has to cover chain-specific failure modes. Wallets do not only fail at the key level. They fail at the network level, the mempool level, the indexing level, the coordination level, and the observability level. A vendor that cannot speak comfortably about congestion, gas volatility, broadcast reliability, reorgs, nonce and UTXO management, and dropped webhooks is not ready for serious transaction operations.

Learn more about DORA here: dfns.co/article/dora-eu-regulation 

99% benchmarks fall short: performance is not one number

If your use case involves trading, payment operations, automated treasury, or any workflow where user or market expectations are time-sensitive, you have to break performance into distinct components.

You should separate at least seven categories:

  1. Key generation latency: How long does it take to create wallets or cryptographic material, especially under concurrency?
  2. Signing latency: How fast can the platform produce a valid signature under realistic conditions, including policy checks and approvals?
  3. Infrastructure latency: How much delay comes from the architecture itself, including region placement, service orchestration, approval routing, or cold-start behavior?
  4. Data indexing latency: How quickly can the system detect and expose chain events, balances, confirmations, and state changes?
  5. Throughput: How many transactions, signatures, or wallet operations can the system handle in parallel? What happens under load?
  6. Failure & recovery latency: How long does it take to detect, surface, and recover from failures such as dropped transactions, failed broadcasts, stuck approvals, or node outages?
  7. Human-in-the-loop latency: Often much larger than raw cryptographic latency, but frequently omitted from demos. How long do approvals actually take in practice?
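When you collect vendor numbers, insist on raw samples and percentiles rather than averages, and report raw signing and end-to-end latency side by side: the gap between the two columns is where policy checks, approvals, and network round trips live. A minimal measurement sketch (nearest-rank percentiles; the report keys are illustrative):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile over raw latency samples (ms)."""
    ordered = sorted(samples)
    k = math.ceil(p / 100 * len(ordered)) - 1
    return ordered[max(0, k)]

def latency_report(sign_ms, end_to_end_ms):
    """p50/p99 for raw signing vs the full pipeline. A large gap
    between sign_* and e2e_* columns means the quoted 'signing
    latency' hides most of the real delay."""
    return {
        "sign_p50": percentile(sign_ms, 50),
        "sign_p99": percentile(sign_ms, 99),
        "e2e_p50": percentile(end_to_end_ms, 50),
        "e2e_p99": percentile(end_to_end_ms, 99),
    }
```

p99 matters more than p50 for payment SLAs: users experience the tail, not the median.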

This is why you should ask vendors for chain-specific and geography-specific numbers, and why you should always ask what exactly is being measured. Is the quoted number raw signing only? Does it include policy checks? Does it include network round trips? Does it reflect load? Is it measured in a lab, or in production?

Good engineering shows up very clearly here. So does weak engineering. A microservice-first architecture with clean internal separation often behaves differently from a monolithic system under concurrent load. Cluster location matters. RPC strategy matters. Region strategy matters. If you are doing high-throughput or geographically distributed operations, all of this becomes real very quickly.

Scale should be modeled across the next 24 months, not the next 30 days

Many wallet selections are made based on present-day volume. That is another common mistake. A wallet platform that works for 500 daily actions may struggle at 50,000. A system designed for manual approvals may collapse once automation becomes central. A chain support model that seems fine when you use two networks may become a bottleneck when you need eight. A vendor that looks affordable at low usage may become one of your most expensive infrastructure lines as API calls and signatures scale.

So model growth deliberately. Estimate wallet count, transaction count, approval count, webhook volume, supported chains, active assets, internal users, customer support workflows, and compliance events across at least the next 12 to 24 months. Include optimistic and pessimistic cases. Then ask vendors how their pricing, architecture, support, and operating model behave under those cases. Scale is not only TPS. It is organizational complexity.
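Even a crude compound-growth projection makes the pricing conversation concrete: feed the same monthly curve into each vendor's pricing model and compare year-two costs, not month-one costs. The growth rates below are placeholders, not forecasts.

```python
def project_monthly(start: float, monthly_growth: float, months: int):
    """Compound monthly growth; returns projected volume at each
    month end. Run once with an optimistic rate and once with a
    pessimistic rate to bracket the pricing discussion."""
    out, volume = [], start
    for _ in range(months):
        volume *= (1 + monthly_growth)
        out.append(round(volume))
    return out
```

For example, 500 daily actions growing 20% per month crosses tens of thousands of daily actions within 24 months, which is exactly the 500-to-50,000 jump described above.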

Multi-chain support should be evaluated as depth, not breadth

One of the easiest ways to be misled is to ask, “How many chains do you support?” and accept the number. That number is almost meaningless on its own.

What matters is how deeply those chains are supported. Does support include native assets only, or also token standards? Does it include transfers, contract calls, indexing, history, webhooks, simulation, fee estimation, staking, governance, and operational monitoring? How quickly can the vendor add new chains? Are chains onboarded through a reusable abstraction layer, or by hardcoded engineering work each time? What happens when a chain becomes slow, unstable, or commercially less interesting to the vendor? Are encrypted transactions supported for privacy-preserving use cases?
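One way to turn these questions into an evaluation artifact is a capability-depth matrix that scores each chain per capability rather than as a single yes/no. The chain names, capability list, and support sets below are invented for illustration:

```python
# Score chain support by capability depth rather than a single yes/no.
# Chains, capabilities, and support sets are illustrative, not vendor data.
CAPABILITIES = [
    "native_transfers", "token_standards", "contract_calls",
    "indexing", "webhooks", "simulation", "fee_estimation", "staking",
]

vendor_support = {
    "ChainA": {"native_transfers", "token_standards", "contract_calls",
               "indexing", "webhooks", "simulation", "fee_estimation", "staking"},
    "ChainB": {"native_transfers", "token_standards", "contract_calls", "indexing"},
    "ChainC": {"native_transfers"},  # still a "supported" chain in the marketing grid
}

def depth(chain: str) -> float:
    """Fraction of evaluated capabilities actually supported on a chain."""
    return len(vendor_support[chain] & set(CAPABILITIES)) / len(CAPABILITIES)

for chain in vendor_support:
    print(f"{chain}: {depth(chain):.0%} depth")
```

A "20 chains supported" claim often decomposes into a handful of ChainA-grade integrations and a long tail of ChainC-grade ones.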

These questions matter because most institutions will not remain on one chain. Even those that begin with Ethereum-compatible networks soon encounter Bitcoin settlement needs, Solana-based use cases, Tron-based stablecoin flows, Cosmos ecosystems, or newer L1 and L2 demands such as Canton, Tempo, or Arc. The vendor’s ability to absorb that heterogeneity is a strong indicator of architectural maturity.

A useful operational metric here is time-to-support. Ask how long it took the vendor to add the last three major chain launches that mattered to their customers, and whether they managed to support the newest blockchains at all, since that measures how quickly they adopt non-incumbent networks. That number tells you more than a marketing grid.

Advanced asset support changes the wallet conversation

The moment your business handles more than native tokens, your wallet RFP becomes much more interesting. Tokenized securities, stablecoins with administrative controls, LP positions, staked assets, tokenized deposits, governance rights, NFTs, and DeFi-connected positions all place different demands on the infrastructure. They affect indexing, policy models, asset discovery, metadata handling, transaction building, reconciliation, reporting, and user experience.

A serious wallet service today should be able to answer not only whether it supports ERC-20 or SPL tokens, but how it supports token standards operationally. Can it enforce asset-specific policies? Can it present correct metadata? Can it handle staking or governance workflows? Can it support token issuance and redemption flows? Can it participate in airdrops? Can you retrieve the wallets' public keys? Can it reflect asset restrictions that arise in regulated tokenization contexts?

This is one reason why wallet infrastructure is increasingly being seen by institutions as a platform rather than a utility. The wallet is the place where the operational meaning of on-chain assets is expressed.

Reporting and auditability deserve a central role in the RFP

Many wallet evaluations are still too engineering-centric. That is understandable early on, but it becomes a mistake the moment finance, compliance, auditors, and regulators enter the room.

A financial institution should ask what the audit log contains, whether it is exportable, whether it is tamper-evident, how long it is retained, what deletion policies exist, whether it can be connected to SIEM and monitoring systems, whether abnormal behavior can trigger alerts, and whether regulators or auditors can be given intelligible evidence of what happened.

The same goes for reporting. Can balances, transaction histories, and policy events be exported in a structured way? Can this data be reconciled with the general ledger? Can it support tax, accounting, or regulatory requests? Can the platform help distinguish between operational actions, blockchain outcomes, and user-facing states? A wallet that moves money but cannot explain itself clearly becomes a governance burden.

Developer experience is one of the most honest signals in the market

This deserves emphasis because it is too often treated as secondary. The quality of SDKs, docs, changelogs, sample code, mock APIs, rate limit documentation, test environments, and error messages tells you a great deal about how a vendor thinks. It tells you whether they see developers as a strategic constituency or as a support cost. It tells you whether the platform is really designed to be integrated or merely claimed to be.

A simple practical exercise works well here. Run a one-hour or half-day internal sprint with one engineer who did not attend the vendor demo. Ask that person to create a wallet, configure a policy, simulate a transaction, obtain an approval, sign it, and track it. If that flow cannot be completed with the documentation and test environment, you have found something important.
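That sprint works best when it is tracked as an explicit pass/fail checklist rather than a general impression. A minimal sketch, with step names mirroring the flow above and an invented example outcome:

```python
# Track the integration sprint as an explicit pass/fail checklist.
# Step names mirror the flow in the text; the example outcome is invented.
SPRINT_STEPS = [
    "create_wallet",
    "configure_policy",
    "simulate_transaction",
    "obtain_approval",
    "sign_transaction",
    "track_to_finality",
]

def sprint_result(completed: set[str]) -> tuple[bool, list[str]]:
    """Return overall pass/fail plus the steps that blocked the engineer."""
    blocked = [step for step in SPRINT_STEPS if step not in completed]
    return (not blocked, blocked)

# Example: the engineer got stuck at approval because the docs never
# explained how approvals are requested through the API.
passed, blockers = sprint_result({
    "create_wallet", "configure_policy", "simulate_transaction",
})
print("passed" if passed else f"blocked at: {blockers}")
```

The value of the exercise is the `blockers` list: each blocked step is a concrete documentation or API gap to raise with the vendor before signing anything.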

Poor developer experience rarely stays confined to the integration phase. It spills into operational risk, because ambiguous APIs and fragile wrappers create real production debt. If you need to spin up separate infrastructure just to host co-signers for simple automated approvals, that overhead can easily grow tenfold in production.

Programmability is what separates infrastructure from software you work around

One of the biggest differentiators between wallet platforms is not how they store keys, but how much of the system is programmable. What can you control through APIs? What can be expressed through policies? What is configurable versus hardcoded? What can be automated by service accounts? What can trigger webhooks or event streams? Can you version and test policies? Can you simulate transactions before approval? Can workflows be scheduled? Can rules change by asset, amount, counterparty, geography, device, risk score, or user class? This is where many vendors expose their real architecture. 
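To make "what can be expressed through policies" concrete, here is a minimal sketch of a deterministic, declarative policy engine. The rule schema, thresholds, and outcome names are invented for illustration; real platforms express this differently:

```python
# A sketch of a declarative transaction policy, evaluated deterministically.
# The rule schema, thresholds, and outcomes are invented for illustration.
POLICY = [
    # (condition, outcome) -- first matching rule wins
    (lambda tx: tx["amount_usd"] > 1_000_000, "reject"),
    (lambda tx: tx["amount_usd"] > 100_000, "two_human_approvals"),
    (lambda tx: tx["counterparty_risk"] == "high", "compliance_review"),
    (lambda tx: True, "auto_approve"),  # default for low-risk, low-value flows
]

def evaluate(tx: dict) -> str:
    """Return the outcome of the first rule whose condition matches."""
    for condition, outcome in POLICY:
        if condition(tx):
            return outcome
    raise RuntimeError("policy must end with a catch-all rule")

print(evaluate({"amount_usd": 250_000, "counterparty_risk": "low"}))
# a $250k transfer requires two human approvals under this sketch
```

The useful questions for a vendor are whether rules like these live as versionable, testable configuration you control through an API, or as settings buried in a dashboard.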

Some platforms are essentially dashboards with APIs bolted on. Some are APIs with fragmented operational surfaces. Some are true programmable control planes. That matters because serious financial businesses rarely want a wallet that does everything exactly one way. They want a wallet platform that enables their own product logic, risk models, compliance routines, and service design. The best wallet services do not try to become your entire business. They make your business easier to build.

Cost should be thought of as total ownership and strategic dependence

List price is the wrong lens. You should ask for a three-year model that includes onboarding, migration, support tiers, usage-based charges, chain expansion, feature unlocks, compliance modules, exit support, and professional services. Then you should add your own internal costs: engineering adaptation, operations overhead, support dependence, audit support, and future migration work.
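A minimal sketch of such a model, where every figure is an illustrative placeholder and the point is the structure, not the numbers:

```python
# A minimal three-year total-cost-of-ownership sketch.
# Every figure is an illustrative placeholder to show the structure.
vendor_costs = {
    "onboarding": 25_000,               # one-off
    "annual_platform_fee": 60_000,
    "per_signature": 0.02,
    "annual_compliance_module": 20_000,
}
internal_costs = {
    "annual_engineering": 150_000,      # integration upkeep, adapters
    "annual_operations": 40_000,        # monitoring, support, audit evidence
}
signatures_per_year = [500_000, 1_500_000, 4_000_000]  # growth assumption

tco = vendor_costs["onboarding"]
for sigs in signatures_per_year:
    tco += vendor_costs["annual_platform_fee"]
    tco += vendor_costs["annual_compliance_module"]
    tco += sigs * vendor_costs["per_signature"]
    tco += internal_costs["annual_engineering"]
    tco += internal_costs["annual_operations"]

print(f"3-year TCO: ${tco:,.0f}")
```

Note how the internal line items dominate the vendor's list price in this sketch; that is common, and it is why comparing sticker prices alone misleads.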

Pricing in this category is often less transparent than it appears. Some vendors rely on projected transaction volumes, layered setup fees, and add-ons that only become visible once you start integrating. Integrating a KYT provider may mean paying both the provider and the wallet platform. Webhooks, advanced APIs, or basic operational features may be packaged as premium tiers. Long-term contracts can start at reasonable levels and scale aggressively over time, making it difficult to forecast costs or compare vendors on a consistent basis. These structures may optimize for vendor revenue predictability, but they often introduce uncertainty and friction on the client side.

A more robust approach is to look for pricing that is transparent, usage-based, and fully documented upfront. Costs should be estimable before integration begins, without hidden dependencies or implicit assumptions about growth. Wallet infrastructure is not a financial service; it is technology. Charging basis points on assets under management (AUM) or taking fees on transaction flows blurs that boundary and can create misaligned incentives between the platform and the institution operating on top of it.

Beyond pricing, you should examine the cost of lock-in. If key export is painful, policy logic is not portable, logs are hard to extract, or transaction history requires bespoke conversion, the future exit cost may far exceed any initial discount. Migration is not just about keys; it includes policies, workflows, audit trails, user entitlements, and operational continuity.

A simple but effective test is this: if a vendor cannot clearly explain how you would export your keys, retrieve your logs, and continue operating your application elsewhere, you should reduce its strategic score sharply, regardless of how attractive the product or pricing may seem. Many vendors are betting on their ability to lock you into their platform, which makes this line of questioning critical to follow.

Ultimately, cost is not just what you pay; it is how dependent you become. The right choice minimizes surprises, preserves optionality, and gives you a clear path forward as your system evolves.

“Build vs buy” comes down to “where do you want to own complexity?”

Because we are talking about APIs, infrastructure, control planes, and cryptographic systems, this guide is naturally aimed at technical leaders, developers, security teams, and architects. And technical teams have strong instincts here. Builders build. That is what they do.

Over time, a few recognizable patterns show up.

  • There are the overachievers, who take on everything at once, promise themselves they will build the whole stack, and later discover that operating cryptography, resilience, compliance surfaces, and chain integration all at once is far heavier than it looked.
  • There are the purists, who believe the company’s deepest value lies in owning infrastructure all the way down to the kernel or the enclave boundary, and who therefore resist managed services unless absolutely forced.
  • There are the speedrunners, who integrate as much as possible, move fast, and accept dependency because time-to-market matters more than theoretical control in the early stage.
  • There are the tinkerers, who stitch together several providers, build orchestration around them, and create a hybrid architecture that gives them flexibility at the cost of complexity.
  • And there are the perfectionists, who spend too long deciding, choose one provider too absolutely, get burned, and later find themselves rebuilding under pressure.

None of these archetypes is always wrong. But all of them become dangerous when they are unconscious.

The healthier framing is to ask where complexity creates real differentiation for your business, and where it merely consumes scarce attention. Build what gives you leverage. Buy what is expensive to reinvent and hard to operate safely. Orchestrate in a way that preserves optionality. Keep enough of the control plane and business logic on your side that you can evolve or migrate later. The most durable institutional architectures are often hybrid for exactly this reason.

What an institutional wallet service should look like

By this point, a robust wallet service for businesses begins to look less like a product category and more like a set of architectural behaviors.

It should help you move money and assets securely, but also efficiently. It should support your business logic without trying to replace it. It should let you express policy clearly and enforce it deterministically. It should expose enough programmability that you can build differentiated workflows, but not leave you alone with raw primitives and operational chaos. It should make recovery and evidence possible. It should integrate with the rest of your stack rather than force the stack to orbit around it blindly. It should be flexible enough to support multiple wallet types and multiple roles, because one business almost always needs many wallets, not one.

Most importantly, it should help you decide what to reveal and what to keep operationally invisible. Good wallets do not just manage keys. They manage complexity.

Vendor quality is not just technical quality

You are not just buying software. You are choosing a strategic dependency. That means your RFP should include questions about the company itself.

  • How long have they existed? 
  • Where is the legal entity? 
  • Who are the investors? 
  • What is the product-to-sales ratio? 
  • Do they have customers in production that look like you? 
  • Can they provide references in your vertical? 
  • How transparent are they about incidents?
  • How much of the supply chain have they built themselves? 
  • Do they publish roadmap updates? 
  • How often do they ship? 
  • Are executives and engineers visible in the market and technical community? 
  • What happens if the company is acquired, restructured, or shut down?

These are not secondary questions. Wallet infrastructure providers sit close to your money movement, your operational continuity, and your customer experience. Vendor fragility can become your fragility. Longevity does not guarantee quality, but opacity is often a warning sign.

Open ecosystem or closed ecosystem?

This is another major strategic choice. Open ecosystems tend to expose APIs cleanly, support standards, publish usable SDKs, and let you build without relying on the vendor’s UI. Closed ecosystems often centralize more functionality in proprietary interfaces, opaque data formats, and tightly controlled workflows.

Closed systems can be simpler in the short term. Open systems can be more work at first. But over time, closed ecosystems often create more dependency and less freedom to integrate, compose, or migrate.

So ask direct questions.

  • Can I use your platform fully through APIs?
  • Can I export keys, logs, and transaction history cleanly?
  • Do you support open standards where applicable?
  • Can I integrate your system into my own compliance, ERP, treasury, and analytics stack?
  • Can I operate without your dashboard if I choose?
  • Do I understand the entire key lifecycle within your systems?
  • What happens if I want to replace one part of your stack but keep the rest?

The more evasive the answers, the more likely lock-in is part of the business model.

Things that could go wrong

Every wallet RFP should include scenario testing, not just capability checklists.

Ask vendors to walk through what happens when:

  • a signer is compromised
  • an admin account is phished
  • a region goes down
  • an RPC provider becomes inconsistent
  • mempool conditions spike suddenly
  • a chain reorganizes after apparent success
  • a webhook endpoint is unavailable
  • a user needs emergency offboarding
  • a policy is misconfigured
  • a key share or hardware boundary fails
  • the vendor suffers an acquisition or corporate event
  • you want to exit the platform

What you are looking for is not perfection. It is maturity. Good vendors answer these questions concretely, with architecture, logs, boundaries, and procedures. Weak vendors fall back on reassurance.

Geopolitical risk and sovereignty

There is one more dimension that is increasingly hard to ignore: where your system lives, and who ultimately has control over it. Wallet infrastructure does not exist in a vacuum. It sits within jurisdictions, cloud providers, legal frameworks, and geopolitical realities. You should ask:

  • Where is the infrastructure hosted, and in which jurisdictions?
  • Where does key material reside, and under whose legal control?
  • Can the system be deployed in a specific region or sovereign environment if required?
  • What happens if cross-border data flows are restricted?
  • Could a regulator, government, or third party compel access, shutdown, or restrictions?
  • Can you operate independently if access to the vendor is disrupted?
  • Does the architecture allow for hybrid or on-premise control if your requirements evolve?

This is not theoretical. Financial institutions increasingly operate under data residency rules, outsourcing regulations, and political constraints that can affect infrastructure choices. A system that cannot adapt to these realities may work in normal conditions but fail under regulatory or geopolitical pressure.

Sovereignty is not just about where data is stored. It is about who can ultimately influence, restrict, or interrupt your ability to operate. The strongest wallet infrastructures are designed with this in mind. They allow you to localize control where needed, maintain operational continuity across jurisdictions, and avoid single points of geopolitical dependency.

How to make the RFP more actionable in practice

The best wallet RFPs are not laundry lists. They are filters that reveal alignment.

A useful working model is to combine four practical tools.

  1. The first is a RACI map for the transaction lifecycle. Write down who is responsible, accountable, consulted, and informed at each step from initiation to finality. Do this for at least one normal transaction, one high-value exception, and one incident response flow. This exercise exposes hidden assumptions quickly.
  2. The second is a weighted decision matrix. Give the highest weight to the categories that could create irreversible pain later: control model, transaction reliability, auditability, exit flexibility, and operational resilience. Don’t let cool UX or low sticker price compensate for weaknesses in those categories.
  3. The third is scenario testing. Ask every shortlisted vendor to walk through a mempool failure, a compromised signer, an unavailable region, a failed webhook, a key rotation, and a clean migration. This is where canned answers fall apart.
  4. The fourth is a two-horizon architecture map. Horizon 1 is what you need to ship in the next 12 months. Horizon 2 is what the system must support if stablecoin flows, tokenized deposits, tokenized securities, treasury operations, and new chains expand over the following 24 months. A vendor that looks perfect for horizon 1 and brittle for horizon 2 is not necessarily the right choice.
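Tool 2, the weighted decision matrix, can be sketched in a few lines. The category weights and vendor scores below are illustrative; the point is that a small weight on UX and sticker price prevents them from masking weaknesses in the heavy categories:

```python
# A sketch of the weighted decision matrix from tool 2.
# Weights and vendor scores (1-5) are illustrative only.
WEIGHTS = {
    "control_model": 0.25,
    "transaction_reliability": 0.20,
    "auditability": 0.20,
    "exit_flexibility": 0.20,
    "operational_resilience": 0.10,
    "ux_and_price": 0.05,  # deliberately small: it must not mask weaknesses
}

def weighted_score(scores: dict) -> float:
    """Weighted sum of per-category scores; weights sum to 1.0."""
    return sum(WEIGHTS[category] * scores[category] for category in WEIGHTS)

vendor_a = {"control_model": 5, "transaction_reliability": 4, "auditability": 4,
            "exit_flexibility": 4, "operational_resilience": 4, "ux_and_price": 2}
vendor_b = {"control_model": 2, "transaction_reliability": 3, "auditability": 2,
            "exit_flexibility": 1, "operational_resilience": 3, "ux_and_price": 5}

print(f"vendor A: {weighted_score(vendor_a):.2f}")
print(f"vendor B: {weighted_score(vendor_b):.2f}")  # polished UX cannot rescue it
```

Agree on the weights with the cross-functional committee before any demos, so the matrix filters vendors rather than rationalizing a favorite.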

You can also borrow from broader enterprise risk methods. STRIDE can help security teams structure threat scenarios. FAIR can help risk teams think about probable loss and exposure. NIST CSF can help frame outcomes and control maturity. Recovery time and recovery point objectives (RTO and RPO) force resilience into concrete terms rather than vague assurances. None of these models is wallet-specific, which is precisely why they are useful: they let the wallet decision be evaluated in the same language as the rest of the bank.

The RFP checklist you cannot ignore

Before you finalize a decision, make sure your process includes all of the following:

  1. A clear statement of what the wallet is meant to power.
  2. A defined role model across users, bots, admins, and teams.
  3. A mapped transaction lifecycle.
  4. A required control model, including policy and approval boundaries.
  5. A detailed view of key management and recovery.
  6. Real-world failure mode review, not just happy-path demos.
  7. Performance evidence across relevant chains and regions.
  8. Chain support broken down by capability depth.
  9. Reporting, reconciliation, and audit requirements.
  10. Enterprise security integration requirements.
  11. A 36-month TCO model.
  12. A migration and exit plan.
  13. A cross-functional scoring committee.
  14. Live technical validation, not just slideware.
  15. Reference calls focused on incidents, support quality, and hidden pain.

If any of these are missing, the evaluation is probably incomplete.

Suggested reading and reference points for a serious wallet RFP

If you want the RFP process to feel less improvised and more institutionally grounded, it helps to anchor it in a few external references.

  • NIST Cybersecurity Framework 2.0 is useful for turning technical wallet questions into governance and risk language that executives and control functions can work with. NIST SP 800-57 remains one of the most helpful ways to think about key management as a lifecycle rather than a feature. (NIST Computer Security Resource Center)
  • For regulated financial institutions, DORA is essential reading because it reframes wallet vendors not only as product providers but also as ICT dependencies whose resilience can become a supervisory concern. MiCA matters because it provides a common EU frame for crypto-asset activity and crypto-asset services. (ESMA)
  • MAS’s Technology Risk Management Guidelines and FFIEC guidance on outsourced technology services are especially useful for institutions that want to evaluate wallet infrastructure with the same seriousness they apply to other critical third parties. (Monetary Authority of Singapore)
  • And for anyone still wondering whether this is really a core banking issue rather than a crypto side topic, the BIS and the ECB make the broader direction hard to dismiss. The conversation is no longer about whether tokenisation and DLT-based settlement belong in the future of finance. It is about how institutions connect to that future without losing control of their systems in the process. (Bank for International Settlements)

Closing thoughts

The best wallet vendor is not the one with the smoothest demo or the most polished architecture diagram. It is the one that understands the depth of the decision you are actually making.

Because for a serious financial institution, choosing wallet infrastructure is not simply choosing where signatures happen. It is choosing how a new on-chain execution layer enters the bank, how authority will be distributed, how resilience will be proven, how existing systems will be complemented or challenged, how customers will experience digital assets, and how much of the institution’s future operating model will depend on one vendor’s design choices.

That is why this decision deserves more than a checklist and more than a slogan. It deserves structure, patience, honest tradeoffs, and a clear architectural point of view. 

Wallets are not just endpoints anymore. They are programmable trust systems sitting at the frontier between banking architecture and on-chain finance.

Choose accordingly.
