
Security

The Bybit/Safe Hack

Thibault de Lachèze-Murel
Christopher Grilhault des Fontaines
March 10, 2025

How one compromised laptop led to the largest crypto heist in history. This deep dive breaks down the attack, the systemic weaknesses it revealed, and why crypto must embrace real security beyond “best practices.”

The recent Bybit/Safe hack has sent shockwaves through the crypto world… and for good reason. With a jaw-dropping $1.5 billion in ETH stolen, this breach has become the largest crypto theft in history and exposes a discrediting vulnerability at the heart of digital asset finance. Most troubling of all, it hinged on a single developer’s computer being compromised, allowing the attackers to inject a malicious JavaScript snippet that hijacked the Safe{Wallet} infrastructure used by Bybit.

Despite employing the so-called best practices of multisignature wallets, also known as “multisig wallets”, Bybit still fell victim to a devastating delegatecall exploit. One flipped parameter, changing “operation=0” to “operation=1”, was enough to give attackers complete control to replace Safe’s wallet logic and move 401,000 ETH (~$1.5B) into their own addresses. This was a highly sophisticated operation, with two malicious smart contracts deployed in advance and a deliberate strategy of delaying fund movement across 900+ wallets, cross-chain bridges, and DEXs to evade immediate detection. It was a perfect storm that exploited security flaws at every level: corporate, infrastructure, application, protocol, and even UX. Investigations are still underway, but SEAL 911 and the FBI believe that TraderTraitor (a North Korean group known for hacking exchanges with distinct tactics) carried out the heist.

In this post, we’ll explain how the hack happened, starting from a single developer’s compromised computer and ending with a critical change to one transaction parameter. We’ll then look at the first recommendations from players like Ledger, Binance, and Fireblocks. Their proposed fixes (e.g. eliminating blind signing, adopting stronger multi-sig, adding MPC, multi-level approvals, whitelisting, hardware verification, better operational governance, etc.) are good steps toward preventing new disasters. But let’s be frank here: it’s truly outrageous that $1.5 billion could have been stolen simply because one compromised laptop gave attackers full access to Bybit’s wallet infrastructure. It’s even worse to think that every company using Safe was at risk… Bybit just happened to have the biggest reserves.

While each recommendation provides valuable protection, none of them alone is enough. They must all work together in a holistic security framework to bring bank-grade safety to crypto. Without a collective push for stronger, more integrated systems, these attacks will keep happening. It’s time to move beyond fragmented “best practices” and deliver the robust protections digital assets, and their users, truly need.

Bybit’s “cold wallet” setup with Safe{Wallet}

It happened on February 21, 2025. Surprisingly, the attack didn’t involve a flaw in Bybit’s core systems or in the Ethereum blockchain. Instead, it targeted Safe{Wallet}, a third-party “multisig” platform Bybit trusted. In this post, we’ll examine how Bybit’s “cold wallet” was set up, how a single transaction pulled off the exploit, and what this means for the supposedly secure setups many believed were safe just a week ago. We use quotes around “cold wallet” because “cold” implies a fully offline setup, which isn’t the case with Safe smart contracts and Ledger Nanos.

Like most web3 companies, Bybit stored most of its funds in a “cold wallet,” separate from its “hot wallet” and designed to keep assets safe from hackers. This cold wallet ran on a Safe{Wallet} smart contract located at 0x1Db92e2EeBC8E0c075a02BeA49a2935BcD2dFCF4.

Safe{Wallet} is a popular multisig platform (read: onchain transaction execution and approval platform). Bybit’s version used a standard proxy contract, whose only job is to forward function calls to an implementation contract that carries the real transaction logic. In Bybit’s case, the address of this critical implementation contract was stored in the proxy’s “slot 0” and originally pointed to 0x34CfAC646f301356fAa8B21e94227e3583Fe3F5F. Bybit viewed this as its “cold wallet”, aka the last line of defense for its ETH reserves.

Bybit secured its “cold wallet” with a multisig arrangement, requiring multiple employees to approve every withdrawal, which is good. Typically, funds were transferred from the cold wallet to a hot wallet (e.g., 0xf89d7b9c864f589bbF53a82105107622B35EaA40) for day-to-day operations. Each signer (1) used a Ledger hardware wallet, (2) logged into the Safe{Wallet} UI, (3) reviewed the transaction, (4) connected their Ledger to sign, and (5) submitted. Once enough signatures were collected, Safe broadcasted the transaction to Ethereum (6). The proxy contract then called the implementation contract (7), verified the signatures (8), and released the funds (9).

For example, on February 3, 2025, transaction 0x3e10310c05bb87269bfd60f67e13fd9dff4da80b4e47a541af1835f15fd96071 transferred 30,000 ETH from the cold wallet to the hot wallet. It called the execTransaction function on the proxy contract (0x1Db92e2EeBC8E0c075a02BeA49a2935BcD2dFCF4) with the following parameters:

to: 0xf89d7b9c864f589bbF53a82105107622B35EaA40 (hot wallet)
value: 30,000 ETH (30,000,000,000,000,000,000,000 wei)
data: (empty)
operation: 0 (direct call)
safeTxGas: 45,745
dataGas: 0
gasPrice: 0
gasToken: 0x000...000
refundReceiver: 0x000...000
signatures: (multisig signatures from Bybit employees)

This is a textbook transfer: the proxy forwarded the call to the implementation contract, which verified the signatures and sent 30,000 ETH to the hot wallet. Because the operation field was set to 0, it was a straightforward call with no additional complexities. This process was secure and repeatable, relying on Safe{Wallet}’s supposedly battle-tested multisig logic. Until it wasn’t.
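The proxy-forwarding flow just described can be modeled in a few lines of Python. This is a conceptual sketch, not EVM code: class and method names are ours, and real Safe logic verifies multisig signatures before releasing funds.

```python
# Toy model of the Safe proxy pattern: the proxy's only job is to forward
# calls to whatever implementation sits in its "slot 0".
class Proxy:
    def __init__(self, implementation):
        self.slot0 = implementation  # in the real contract, slot 0 holds an address

    def __getattr__(self, name):
        # Any call on the proxy is delegated to the current implementation.
        return getattr(self.slot0, name)

class SafeImplementation:
    def exec_transaction(self, to, value, operation):
        # The real implementation checks signatures; this stand-in just reports.
        return f"sent {value} ETH to {to}"

proxy = Proxy(SafeImplementation())
result = proxy.exec_transaction("0xf89d7b9c864f589bbF53a82105107622B35EaA40", 30000, 0)
assert result == "sent 30000 ETH to 0xf89d7b9c864f589bbF53a82105107622B35EaA40"
```

The key property, which the exploit later abused, is that the proxy blindly trusts whatever sits in slot 0.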

Anatomy of a carefully orchestrated zero-day exploit

On February 21, 2025, Bybit needed to move 30,000 ETH from the cold wallet to the hot wallet—business as usual. The employees followed the standard process: they logged into the Safe UI, saw the transaction details, connected their Ledger devices, signed it, and submitted it to Safe. The system broadcasted the transaction as expected: 0x46deef0f52e3a983b67abf4714448a41dd7ffd6d32d32da69d62081c68ad7882. At first glance, it looked like another execTransaction call to the proxy contract (0x1Db92e2EeBC8E0c075a02BeA49a2935BcD2dFCF4). But the arguments told a different story:

to: 0x96221423681A6d52E184D440a8eFCEbB105C7242
value: 0
data: 0xa9059cbb000000000000000000000000bdd077f651ebe7f7b3ce16fe5f2b025be29695160000000000000000000000000000000000000000000000000000000000000000
operation: 1 (delegatecall)
safeTxGas: 45,746
dataGas: 0
gasPrice: 0
gasToken: 0x000...000
refundReceiver: 0x000...000
signatures: (multisig)

Something is off. The transaction’s value is set to 0, so no ETH is being transferred. Also, the operation field was changed from 0 to 1, which switches the execution mode from a regular call to a delegatecall. Let’s break it down.

This operation has a 1 flag, meaning it’s a delegatecall rather than a regular call. In Ethereum, a delegatecall lets the target contract (0x96221423681A6d52E184D440a8eFCEbB105C7242) run its code using the proxy’s storage. This is critical because any changes to the target’s storage affect the proxy’s storage, including the slot that holds the implementation contract address.
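The storage-context difference can be illustrated with a toy model, with plain Python dicts standing in for EVM storage (slot numbers map to dict keys; this is illustrative, not real EVM semantics):

```python
# The attack contract's transfer() just writes `to` into storage slot 0.
def transfer(storage, to, value):
    storage[0] = to

proxy_storage = {0: "0x34CfAC646f301356fAa8B21e94227e3583Fe3F5F"}  # legit impl
target_storage = {}

# operation=0 (regular call): the target contract's own storage changes,
# and the proxy's slot 0 is untouched.
transfer(target_storage, "0xbDd077f651EBe7f7b3cE16fe5F2b025BE2969516", 0)
assert proxy_storage[0] == "0x34CfAC646f301356fAa8B21e94227e3583Fe3F5F"

# operation=1 (delegatecall): the same code runs against the *proxy's*
# storage, so slot 0 -- the implementation address -- gets overwritten.
transfer(proxy_storage, "0xbDd077f651EBe7f7b3cE16fe5F2b025BE2969516", 0)
assert proxy_storage[0] == "0xbDd077f651EBe7f7b3cE16fe5F2b025BE2969516"
```

One flag flips which dict the write lands in; that single distinction is the entire exploit.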

The target contract (0x96221423681A6d52E184D440a8eFCEbB105C7242) is simple but dangerous. Written in pseudocode, it looks like this:

storage:
    stor0: uint256 at storage 0

def _fallback() payable:
    revert

def transfer(address _to, uint256 _value) payable:
    require calldata.size - 4 >= 64
    require _to == _to
    stor0 = _to

The data field (0xa9059cbb…) triggers the transfer function. The first four bytes (0xa9059cbb) are the first four bytes of the Keccak-256 hash of "transfer(address,uint256)", telling the Ethereum Virtual Machine (EVM) which function to execute. The next 32 bytes encode the _to address (0xbDd077f651EBe7f7b3cE16fe5F2b025BE2969516), and the final 32 bytes encode _value. The _value is 0, but that doesn’t matter here.

When executed, transfer updates stor0 to 0xbDd077f651EBe7f7b3cE16fe5F2b025BE2969516. But since this is a delegatecall, stor0 doesn’t belong to the target contract; it belongs to the proxy’s storage at slot 0. Before the transaction, slot 0 held 0x34CfAC646f301356fAa8B21e94227e3583Fe3F5F, Safe’s legitimate implementation. After execution, it was overwritten with an attacker-controlled contract: 0xbDd077f651EBe7f7b3cE16fe5F2b025BE2969516. In a single signed transaction, Bybit’s cold wallet proxy was redirected from Safe’s secure logic to the attacker’s contract.

Employees believed they were approving a 30,000 ETH transfer to the hot wallet, but the Safe UI, compromised by a supply chain attack days earlier, masked the real payload. We'll get to that part soon.
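Slicing the actual calldata from the attack transaction shows how those fields decode. Plain hex slicing is enough here; a real decoder would use an ABI library such as eth-abi.

```python
# The attack transaction's data field: selector + two 32-byte ABI words.
data = ("0xa9059cbb"
        "000000000000000000000000bdd077f651ebe7f7b3ce16fe5f2b025be2969516"
        "0000000000000000000000000000000000000000000000000000000000000000")

selector = data[:10]                 # "0x" + 4-byte function selector
to_addr = "0x" + data[10:74][-40:]   # last 20 bytes of the first 32-byte word
value = int(data[74:138], 16)        # second 32-byte word as an integer

assert selector == "0xa9059cbb"      # transfer(address,uint256)
assert to_addr == "0xbdd077f651ebe7f7b3ce16fe5f2b025be2969516"  # attacker contract
assert value == 0                    # irrelevant: transfer() ignores _value
```

Nothing in this payload looks like a 30,000 ETH transfer; only a decoded view (or a UI that refuses blind signing) would have revealed that.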

With the proxy now pointing to the attacker’s implementation at 0xbDd077f651EBe7f7b3cE16fe5F2b025BE2969516, the attacker acted immediately. That same day, they broadcasted transaction 0xb61413c495fdad6114a7aa863a00b2e3c28945979a10885b12b30316ea9f072c. Instead of calling execTransaction, they used sweepETH on the proxy, which forwarded the call to the new implementation. The attack contract contained this function:

def sweepETH(address _param1) payable:
    [...]
    call _param1 with:
        value self.balance
    [...]

It’s brutally simple: transfer all ETH in the contract to _param1. In this case, _param1 was 0x47666fab8bd0ac7003bce3f5c3585383f09486e2, an attacker-controlled address. The cold wallet held 401,000 ETH ($1.5 billion). With a single call, the attacker drained it all. No multisig approvals were required. They now had full control of the wallet.

Breaking down how Safe{Wallet} was compromised

The Bybit hack didn’t start on February 21, 2025. It began days earlier with a hidden breach of Safe{Wallet}’s infrastructure. While details are still emerging, forensic reports from Sygnia and Verichains suggest a supply chain attack on the Safe UI, the web interface used by Bybit’s signers. The entry point? A compromised developer’s machine, hacked using tactics linked to North Korea’s Lazarus Group. Here’s how the attack unfolded, from the initial breach to the tampered UI that deceived Bybit’s team.

First, hack the developer

Hacking a machine in a high-stakes environment like Safe’s development team isn’t a quick smash-and-grab. It’s a slow, deliberate process. The Lazarus Group, known for its advanced cyber campaigns, excels in this kind of attack. They don’t force their way in; they exploit trust. Forensic evidence suggests they targeted a Safe UI developer using a familiar tactic: social engineering.

It starts with a simple message. A "recruiter" or "job candidate" reaches out on LinkedIn or email, building rapport over days or weeks. They discuss a fake job opportunity or collaboration. Then comes the ask: "download and run this sample project" to assess skills. Or maybe they send a PDF exploiting a known vulnerability in an outdated reader. A single click, and a backdoor quietly installs itself. Sometimes it's even easier: a project requiring a quick npm i pulls in malicious dependencies from a compromised package. In this specific case, forensic analysis revealed a chilling detail: the developer's machine was compromised by a Docker container running in privileged mode, with the dangerous 'privileged' flag craftily concealed in a configuration file. Nothing seems off. But now, the attacker has access.

Once inside, the Lazarus Group takes its time. They scan the developer’s machine for valuable data (e.g. SSH keys, VPN credentials, AWS tokens). In this case, they struck gold: credentials to Safe’s AWS S3 bucket, where the Safe UI’s JavaScript code was stored. This wasn’t luck. It was careful planning, the kind of methodical attack Lazarus is known for when targeting crypto firms.

Second, prepare the trap

With stolen AWS S3 credentials, the attackers didn’t rush to upload malware, they studied their prize. Their target was the Safe UI’s codebase, a key part of the multisig workflow. They likely spent days analyzing how it displayed transactions and interacted with Ledger devices. Their goal? Modify it just enough to turn it into a weapon without raising suspicion.

On February 19, 2025, they made their move. Using the stolen credentials, they uploaded a tampered JavaScript file to the S3 bucket. This wasn’t a messy hack, it was precise. The modified UI introduced two key changes:

  • Targeted Activation: The malicious code only triggered for Bybit’s multisig signers, identified by their wallet address (0x1Db92e2EeBC8E0c075a02BeA49a2935BcD2dFCF4) or session data. Other Safe users saw nothing unusual. The attackers focused on their biggest target, but in theory, every Safe UI user was at risk.
  • UI Deception: When Bybit’s signers logged in on February 21, the UI showed a routine 30,000 ETH transfer to their hot wallet. Behind the scenes, the transaction data sent to their Ledger devices had been swapped. Instead of an execTransaction for 30,000 ETH (operation: 0), it injected the delegatecall payload we analyzed earlier—operation: 1, targeting 0x96221423681A6d52E184D440a8eFCEbB105C7242.
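The conditional trigger described above can be sketched as follows. This is a hypothetical reconstruction in Python rather than the recovered JavaScript; the function name and transaction fields are our assumptions, but the logic mirrors what forensic reports describe.

```python
BYBIT_SAFE = "0x1Db92e2EeBC8E0c075a02BeA49a2935BcD2dFCF4"

def maybe_tamper(tx: dict, connected_safe: str) -> dict:
    # Every Safe user except Bybit gets the untouched transaction back.
    if connected_safe.lower() != BYBIT_SAFE.lower():
        return tx
    # For Bybit's signers, silently substitute the delegatecall payload
    # while the UI keeps rendering the original, benign-looking transfer.
    tampered = dict(tx)
    tampered["to"] = "0x96221423681A6d52E184D440a8eFCEbB105C7242"  # attack contract
    tampered["operation"] = 1  # flip call -> delegatecall
    tampered["value"] = 0
    return tampered

legit = {"to": "0xf89d7b9c864f589bbF53a82105107622B35EaA40",
         "value": 30000, "operation": 0}
assert maybe_tamper(legit, "0x0000000000000000000000000000000000000001") == legit
assert maybe_tamper(legit, BYBIT_SAFE)["operation"] == 1
```

A check this small is trivial to hide in a large minified bundle, which is why the change went unnoticed by other Safe users.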

The genius and horror of this attack was its subtlety. The signers saw a familiar interface, connected their Ledger devices, and signed what they thought was a normal transfer. The malicious JavaScript didn’t need to steal keys or hack devices, it simply fed them a lie. The Ledger signed the attack transaction (0x46deef0f…), Safe broadcasted it, and the attackers took control of the cold wallet.

Third, wait for the payoff

After uploading the modified code, the Lazarus Group played the waiting game. They knew Bybit’s cold wallet operations were infrequent and only triggered when the hot wallet needed a refill. On February 21, that moment arrived. Bybit’s signers loaded the Safe UI, pulling the malicious JavaScript from the compromised S3 bucket. The trap sprang: they signed the rigged transaction, and the attacker’s contract took over, draining all Bybit ETH hours later.

What the code reveals

The recovered JavaScript code shows how precise the attackers were. Instead of rewriting the whole UI, they changed just a few key lines. One part targeted Bybit’s wallet to keep the attack focused. Another adjusted the transaction display, showing a legitimate transfer while sneaking in the delegatecall payload for signing. These small, careful changes made the attack hard to detect, until it was too late.

Billions of dollars cannot be secured by amateurs

This compromise wasn’t just Bybit’s problem; it could easily have led to a systemic meltdown similar to what happened with FTX, given that Safe secures over $100 billion (!). The fact that the attackers could infiltrate its UI with such ease raises so many questions. Consider this and let it sink in: every user of Safe was a potential target, Bybit just had the deepest pockets.

Moreover, the Bybit/Safe hack has exposed a fundamental truth about crypto security: it doesn’t matter how strong your smart contracts are if the infrastructure behind them is fragile, centralized, and mismanaged. Safe, despite positioning itself as a leading decentralized smart contract wallet, was built on an immature and insecure foundation. Unlike other leaders like OpenZeppelin, which maintains strict security protocols and holds SOC 2 and ISO 27001 certifications, Safe had no such certifications in place. This means no formal audits, no structured compliance processes, and no assurances that security best practices were followed. We will double-click on this later.

Security starts with governance, and Safe lacked it. As one insider put it, the team was just “a bunch of devs in Switzerland and Berlin” with no managed endpoints (i.e. engineers worked on personal machines, not company-controlled, security-hardened devices). This is one of the biggest operational security failures imaginable for a company securing billions of dollars in assets. Without managed endpoints, a single phishing attack or malware infection could compromise everything. It’s no surprise that an exploit eventually happened.

Safe’s backend wasn’t decentralized at all; it was an over-engineered, centralized mess. Their own GitHub repository, safe-infrastructure, reveals how their entire system depended on PostgreSQL and Redis databases for transaction state and indexing, RabbitMQ queues to process transaction requests, and a single gateway API (safe-client API) to handle all transaction broadcasts. This wasn’t a decentralized protocol at all; it was a fragile, entirely conventional web2 stack serving as the backbone of Ethereum’s global treasury management. The entire infrastructure was controlled, maintained, and operated by a small team with no formal security governance. Worse still, it was all configured in a single Docker Compose file, a brittle and dangerous setup for a platform managing billions of dollars.

Safe’s infrastructure was so centralized that at one point, its core services were shut down without warning, directly impacting projects relying on its transaction relayers. A frustrated developer called them out on GitHub: “That sounds like quite a big thing for people who build products on top of these services and maybe something that shouldn't just be communicated in a Twitter thread.” For a project branding itself as “decentralized,” the ability to unilaterally pause critical infrastructure is a glaring contradiction. This was not a trustless system, it was a single point of failure masquerading as the opposite.

Yet, despite this fiasco, billions of dollars remain locked in Safe wallets. While some users exited after the Bybit/Safe breach, most either trust the contract layer blindly or claim to have no other viable alternatives. Onchain data from Dune Analytics shows that Safe’s TVL is still in the billions, proving that the market is slow (or too irresponsible) to react to critical infrastructure failures.

The Safe scandal is a wake-up call: crypto security isn’t just about smart contracts, but information security at large, including operational security, governance, and real infrastructure resilience. At some point, the industry needs to stop confusing decentralization theater with actual security. If you’re managing billions of dollars, you need more than good smart contracts. You need real security, real governance, and real engineering discipline. Safe had none of these, and the Bybit/Safe hack was the inevitable result.

How to prevent the next billion dollar hack

This $1.5 billion exploit came down to two main flaws: blind signing in an insecure interface and exposing critical secrets in a highly adversarial environment. How do we prevent this from happening again? We’ll break down the weaknesses in Safe’s design, suggest a safer approach, and show how Dfns puts these principles into practice.

“Blind signing” and user-side transaction forging

The Safe UI’s failure started with a critical mistake: it exposed the raw blockchain transaction (i.e. the exact data sent to Ethereum) on the user’s computer. The UI processed this data to show key details (e.g., “Send 30,000 ETH to the hot wallet”), but then forwarded the full transaction to the Ledger device for signing. This setup assumed the user’s computer, used by Bybit employees, was secure. It wasn’t.

User devices are a security risk. Employees browse the web, open emails, install apps, and sometimes even play games, any of which could introduce malware. Remote work makes it worse: how do you protect a laptop at home from physical tampering? In Bybit’s case, the compromised Safe UI modified an obscure transaction field from 0 to 1, an invisible change to the signers. The Ledger’s small screen and sub-par UX made verifying the raw transaction (a long hex string like 0xa9059cbb…) almost impossible. The result? Blind signing, where trust in the UI became a fatal weakness.

This design treated the user’s computer as the source of truth for creating and signing transactions. That was the mistake. Personal devices are inherently untrustworthy, and critical operations, like crafting a transaction that controls $1.5 billion, should never happen there. If malicious JavaScript can modify the data before it reaches the Ledger, no amount of multisig security can protect you.

User-hosted keys are ticking time bombs

Even worse, Safe’s setup left critical secrets (i.e. the private keys used for those Ledger signatures) on the user side. While Ledger devices can protect keys from direct theft, the surrounding environment is wide open. Lose the device, have it stolen or destroyed in a fire, or deal with an employee suddenly leaving, and you’re in trouble. Multisig helps by requiring multiple signatures instead of one, but key rotations still expose those secrets. If an attacker hijacks the UI (as happened here) or compromises a device, it’s over the moment a key signs a malicious transaction. No policy or access control can reverse an on-chain signature.

Intent-based transactions and secure infrastructure

So, what’s the fix? First, stop forging raw transactions on the user side. Instead, let users express an intent—a plain-English description of what they want (e.g., “Transfer 30,000 ETH from cold wallet to hot wallet XYZ”). This intent gets sent to a secure server operated by the wallet provider. That server—running in a monitored, isolated environment like a confidential computing enclave—translates the intent into a raw blockchain transaction. Why does this work?

  • Centralized Control: The server, not the user’s device, forges the transaction. It’s not browsing Reddit or opening PDFs—it’s a locked-down system dedicated to this task.
  • Policy Enforcement: Before forging, the server checks predefined rules. For Bybit, a policy might say, “Funds can only move from cold wallet to hot wallet XYZ.” If the intent matches, the transaction is crafted; if not, it’s rejected.
  • Human-Readable Validation: Key fields (amount, recipient) can change, but obscure ones (like operation) stay locked. No hidden delegatecalls slipping through.
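The server-side policy check described above can be sketched in a few lines. The addresses are the real ones from this post, but the function shape and rule format are our illustrative assumptions, not Dfns' actual policy engine.

```python
COLD = "0x1Db92e2EeBC8E0c075a02BeA49a2935BcD2dFCF4"   # Bybit cold wallet (proxy)
HOT = "0xf89d7b9c864f589bbF53a82105107622B35EaA40"    # whitelisted hot wallet

def validate_intent(intent: dict) -> bool:
    # Only plain transfers (operation locked to 0) from the cold wallet
    # to the whitelisted hot wallet are ever turned into raw transactions.
    return (intent.get("from") == COLD
            and intent.get("to") == HOT
            and intent.get("operation", 0) == 0)

# The legitimate February 3 transfer passes.
assert validate_intent({"from": COLD, "to": HOT, "operation": 0})

# The attack transaction fails twice over: wrong target, delegatecall flag.
assert not validate_intent({"from": COLD,
                            "to": "0x96221423681A6d52E184D440a8eFCEbB105C7242",
                            "operation": 1})
```

Because the server, not the browser, builds the raw transaction, a tampered UI can lie to the user but cannot make the policy engine emit a delegatecall.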

Second, don’t ever expose private keys to users. Instead of holding keys directly, users should get access tokens, which are revocable credentials that authorize the provider’s secure system to sign on their behalf. The actual private keys stay locked inside a Hardware Security Module (HSM) or a Multi-Party Computation (MPC) setup, never leaving the provider’s control. Lose a token? Revoke it. No key leaks, no multisig resets. This follows NIST security best practices (see Recommendation for Key Management, SP 800-57 Part 1 Rev. 5) widely used in traditional information security, and sorely needed in blockchain finance. At Dfns, we call this “network-hosted keys” or the “NHK model” (as opposed to “user-hosted keys” or the “UHK model”). We've been advocating for this key deployment approach since 2019, despite most of the industry rushing to cater to “DeFi” and align with so-called non-custodial setups, or ideals shall we say.

Closing the deception gap in admin experiences

The UI needs a rethink. Safe’s JavaScript-based interface was an easy target: attackers could swap the displayed transaction for a fake one. A secure UI should be tamper-proof, ideally built into the browser and cryptographically verifiable. This is where WebAuthn and passkeys come in. Using asymmetric cryptography, they authenticate users and sign intents without exposing secrets. Combined with Secure Payment Confirmation (SPC), which builds on WebAuthn, browsers and operating systems can present key transaction details (e.g. amount, recipient) in a native dialog that JavaScript can’t alter. This ensures users sign exactly what they see.

We need strong corporate security more than ever 

Finally, companies must treat user-side risks as a corporate security issue. Mandate hardware security keys (e.g. Yubikeys) for authentication, enforce regular audits of employee devices, and use confidential computing for sensitive operations. Governance, access controls, monitoring, and attestation must protect the systems forging and signing transactions, not just the keys themselves.

The downside of Ethereum’s smart contract upgradability

Now that we’ve covered the key players in the near-catastrophic chain of events (the smart contract provider, the hardware device, and the exchange employees) it’s time to address the underlying protocol: Ethereum. We’re not afraid to say it. One of the core reasons the Bybit/Safe hack escalated so dramatically is tied to Ethereum’s account-based smart contract model and its approach to upgradability. Unlike UTXO-based models, Ethereum allows smart contracts to store funds centrally under a single contract address. In the case of Bybit, this meant that a vast sum—$1.5 billion—was controlled by a single Safe contract.

The real vulnerability, however, was in how that contract could be modified. Safe, like many other Ethereum-based smart contract wallets, supports upgradability via delegatecall, a feature that lets a contract execute external logic while maintaining its own storage. This means an attacker who gains control over the contract’s logic can execute arbitrary code while retaining control over all stored funds. That’s exactly what happened: once the malicious upgrade was deployed, it overrode the contract’s behavior and allowed attackers to drain funds instantly.

Ethereum’s flexibility makes it an innovation platform, but it also introduces security tradeoffs. The delegatecall exploit that enabled the Bybit/Safe hack is an inherent risk of Ethereum’s contract architecture, one that other chains mitigate through different account models or stricter governance mechanisms. Many blockchains are inherently more resistant to this type of exploit. Here’s why:

  1. UTXO-based models: Unlike Ethereum, these blockchains don’t use account-based smart contracts where funds accumulate in a single address. Instead, funds are controlled by discrete, independent outputs, each locked with specific spending conditions. In Canton for example, each holding is a separate contract (i.e., a distinct UTXO) and cannot be unilaterally upgraded without all signatories’ explicit consent. This inherently limits the blast radius if any single key or interface is compromised. Moreover, the UTXO design ensures you’re signing an exact transaction outcome—no hidden switches, no last-second toggles of malicious flags—so either that specific transaction occurs or nothing happens at all. Together, these architectural choices fundamentally reduce an attacker’s ability to hijack an entire wallet’s balance through a single malicious contract upgrade.
  2. Immutable smart contracts: Some blockchains, like Solana and Cosmos, discourage or entirely prevent smart contract upgradability. Once deployed, smart contracts on these networks are immutable unless explicitly designed otherwise. This removes the attack surface that delegatecall-based upgradability creates. While Solana does allow program upgrades, it requires explicit governance approvals rather than an open-ended delegatecall mechanism.
  3. Multisig and hardware controls: Some chains implement governance-based upgrades that require multi-sig confirmations or quorum approvals rather than direct admin overrides. This ensures that any contract change goes through multiple verifications, significantly reducing the risk of a single compromised key leading to a systemic failure.
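The blast-radius point in the UTXO model above can be illustrated with a toy contrast. This is purely schematic (no real chain's semantics, and the ten-key split is an invented example): one account-based contract holds everything, while UTXOs are many independently locked outputs.

```python
# Account model: the entire balance sits behind one contract/key.
ACCOUNT_BALANCE = 401_000  # ETH -- one compromise drains it all

# UTXO model: the same funds split into independent, separately locked outputs.
utxos = [{"amount": 40_100, "lock": f"key_{i}"} for i in range(10)]

def spendable_with(utxo_set, compromised_key):
    # A stolen key only unlocks the outputs whose lock it satisfies.
    return sum(u["amount"] for u in utxo_set if u["lock"] == compromised_key)

assert spendable_with(utxos, "key_3") == 40_100          # blast radius: one output
assert sum(u["amount"] for u in utxos) == ACCOUNT_BALANCE  # same total at stake
```

In the account model, the equivalent of `spendable_with` on the one compromised key returns the full 401,000 ETH, which is exactly what happened to Bybit.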

If Bybit had used Dfns instead of Safe{Wallet}

At Dfns, we’ve built this model from the ground up. Here’s how it works:

  1. Passkeys with WebAuthn: Users authenticate using a Yubikey or secure enclave (e.g., on a Mac). Their private key signs a challenge; we verify it with their public key. No shared secrets, no spoofing.
  2. Intent-based workflow: Users express an intent (e.g. “Transfer 30,000 ETH from cold wallet to hot wallet XYZ”) and sign it with their passkey. The browser’s WebAuthn UI (via SPC) shows the details, unmodifiable by JavaScript.
  3. Server-side forging: Our isolated server, running in a confidential computing environment, validates the intent against wallet policies (e.g., “Only hot wallet XYZ is allowed”). If it passes, the transaction is forged and signed using MPC-stored keys.
  4. Token-based access: Users hold access tokens, not keys. If compromised, we revoke the token: no key exposure, no damage.

Now let’s imagine Bybit had used Dfns instead of Safe on February 21, 2025. Their cold wallet would have had a policy: “Transfers only to hot wallet XYZ, approved by three employees.” An employee logs in with a Yubikey, enters their PIN, and initiates a transfer: “Move 30,000 ETH to XYZ.” A WebAuthn prompt appears, confirming: “30,000 ETH to XYZ.” They approve it, and two colleagues do the same with their Yubikeys. The signed intent reaches Dfns’ server. The policy engine verifies XYZ, builds the raw transaction (operation: 0), and signs it, using MPC in our case. The transaction is broadcasted: 30,000 ETH moves safely.

Now, imagine hackers tampering with the UI to display “30,000 ETH to XYZ” while secretly sending an intent for a different address. WebAuthn exposes them: it shows the actual destination, warning the signer. Even if that slips through, the policy engine kills it: the address isn’t XYZ, so the transfer never happens. No delegatecall, no $1.5 billion loss. The raw transaction stays clean because the user side never forges it.

Trusting user-side environments for critical operations is a risk we can’t take. The solution? Shift transaction forging to secure servers, protect secrets with tokens, and use tamper-proof UIs like WebAuthn. That’s how we build systems that don’t just survive but thrive in adversarial conditions. At Dfns, we’re proving it works. The next $1.5 billion hack doesn’t have to happen.

Corporate security is a product feature

The Bybit/Safe breach highlights a critical issue: infrastructure security is often overlooked in “web3 native” projects. Safe, like many DeFi platforms, relied on smart contracts and Ethereum-based logic, assuming these alone would ensure security. But the attack had nothing to do with smart contract flaws, it happened because a single developer’s compromised laptop gave attackers access to Safe’s AWS S3 environment. This was a failure of corporate security, not blockchain security.

Safe lacked SOC 2 or ISO certifications, meaning essential safeguards, like continuous access reviews, third-party audits, and strict cloud governance, were missing. In contrast, Dfns follows zero-trust principles, advanced compliance standards, and strict corporate security protocols to eliminate single points of failure. Unlike setups where one developer can deploy production code unchecked, our processes are continuously monitored, logged, and audited. Any infrastructure changes require peer reviews, automated checks, and multi-party approvals. If Safe had these controls, the S3 breach wouldn’t have happened. Instead of relying on the flawed belief that “a multisig is enough,” we enforce a security framework built on international standards, external audits, and accountability at every level, based on best practices such as:

  • Principle of Least Privilege (PoLP): We issue narrow, time-limited permissions for every role, so even if a developer’s machine were compromised, the attackers couldn’t jump straight into our CI/CD pipeline or rewrite code in AWS.
  • No Single Point of Failure (No SPOF): Any critical change—deployment, config update, key rotation—needs approvals from multiple, independent parties. This design means no single dev, no matter how senior, can unilaterally inject malicious updates.
  • Just-in-Time (JIT) Access: Elevated privileges expire automatically after a short window. Attackers would find those higher-level credentials gone if they tried to exploit them hours or even minutes later.
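The JIT rule above boils down to credentials that expire on their own. A minimal sketch, where the function names and the 15-minute TTL are illustrative assumptions, not Dfns' actual access system:

```python
# Just-in-time access: elevated credentials carry a short expiry, so a
# stolen token is useless once its window closes.
def issue_credential(role: str, ttl_seconds: int, now: float) -> dict:
    return {"role": role, "expires_at": now + ttl_seconds}

def is_valid(cred: dict, now: float) -> bool:
    return now < cred["expires_at"]

t0 = 0.0
cred = issue_credential("deploy", ttl_seconds=900, now=t0)  # 15-minute window

assert is_valid(cred, now=t0 + 600)        # inside the window: still usable
assert not is_valid(cred, now=t0 + 3600)   # an hour later: already expired
```

Contrast this with long-lived AWS S3 credentials sitting on a developer's laptop, which stayed valid long enough for the attackers to study and then tamper with Safe's front end.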

Security goes way beyond smart contracts

At Dfns, we don’t assume that “if our smart contract code is audited, we’re safe.” Security extends beyond onchain logic—it includes developer endpoints, cloud infrastructure, and internal policies. We apply the same rigor to every layer:

  • Strict CI/CD and code signing: Every code commit is scanned, peer-reviewed, and cryptographically signed. If Safe had enforced a similar process for front-end changes, the malicious snippet would have been flagged long before production.
  • Mandatory MFA and hardware tokens: Employees must use hardware-based MFA for Slack, GitHub, AWS, and other services. This blocks common supply chain attacks that rely on phishing and stolen keys.
  • Continuous monitoring and threat detection: Our SIEM system analyzes logs across all services. Suspicious activity—like an unauthorized front-end modification—triggers alerts and can lock down systems automatically.

The Safe breach is a reminder that secure smart contracts don’t mean secure infrastructure. Dfns combines blockchain innovation with rigorous, compliance-driven security to prevent the exact type of compromise that led to Bybit’s $1.5 billion loss. If the industry wants to avoid the next multi-billion-dollar disaster, it must move beyond smart contract audits and adopt zero trust, verifiable compliance, and holistic security—just like Dfns.

Thank you to José Aguinaga, CEO of Tungsten, for his insights on digital custody and security. His expertise in governance and risk controls continues to shape the industry’s approach to resilient custody solutions.

Must-read references

  • José Perez Aguinaga's LinkedIn post highlights the critical lessons on digital custody strategy, emphasizing governance measures, risk controls, and the need for robust multi-layer defenses against similar attacks.
  • Security Alliance DPRK Advisory led by samczsun presents official guidance and threat intelligence on North Korean (DPRK) state-sponsored hacking, including detailed profiles of infiltration methods, social engineering tactics, and advanced laundering techniques. It underscores how groups like Lazarus leverage front-end compromises and malicious code injections—tactics central to the ByBit hack—and provides best-practice security recommendations to mitigate these risks.
  • Elliptic’s Examination of the ByBit Hack covers the scale and laundering strategies behind the hack, delving into how stolen ETH was routed through hundreds of addresses and various no-KYC services. Discusses how forensic blockchain analysis can trace and sometimes freeze illicit funds.
  • Chainalysis Report on the ByBit Hack explains how Chainalysis helped track the theft in real-time, identify DPRK-linked addresses, and freeze roughly $40 million in assets. It also underscores the importance of rapid incident response and cross-border collaboration among blockchain intelligence firms and law enforcement.
  • Substack Analysis by Harry Donnelly offers a deep dive into the ByBit/Safe exploit—covering the social engineering elements, malicious snippet injection in the user interface, and the smart contract manipulation that enabled the $1.5 billion theft.
