# Why is quantum computing unsuitable for mining?

The idea that quantum computers could one day “revolutionize” Bitcoin mining is a recurring theme in the media. This anticipation rests on a confusion between two distinct fields: post-quantum cryptanalysis (which concerns the security of digital signatures) and proof of work (which concerns the search for valid SHA-256 hashes). Recent scientific research shows that quantum computing offers **no competitive advantage for mining**, either in theory or in practice. The following analysis explains the specific reasons: algorithmic limitations, hardware constraints, energy costs, protocol neutralization, and lack of real economic impact.

**Key figures to know beforehand:**

- **256 bits**: size of the SHA-256 hash used for Bitcoin mining.
- **1 in 2²⁵⁶**: the raw probability of hitting one specific hash value in the full 256-bit search space.
- **10 minutes**: the average block discovery time targeted by the Bitcoin protocol.
- **2016 blocks**: the interval for automatic recalculation of network difficulty.
- **≈ 1.45 × 10¹⁹**: average number of theoretical Grover iterations for a difficulty equivalent to 128 bits.
- **100 to 400 TH/s**: computing power of modern ASICs (hundreds of trillions of hashes per second).
- **12 to 35 joules per terahash**: average energy efficiency of a current ASIC miner.
- **< 1 nanojoule per hash**: individual energy cost of a SHA-256 hash on an ASIC.
- **10⁻¹⁴ seconds**: average execution time of a SHA-256 hash on an ASIC.
- **10⁻³ to 1 second**: estimated duration of a quantum SHA-256 oracle per iteration (even in an optimistic scenario).
- **10¹¹ to 10¹⁵ times slower**: performance gap between a quantum oracle and a conventional ASIC.
- **10³ to 10⁶ physical qubits**: required to stabilize a single error-corrected logical qubit.
- **> 10⁹ T gates**: estimated depth of a complete fault-tolerant quantum SHA-256 circuit.
- **10 to 15 millikelvins**: typical operating temperature of superconducting quantum systems.
- **Several kilowatts**: power consumption of a single cryogenic dilution refrigerator.
- **Several hundred physical qubits**: maximum capacity of the best quantum processors (Google, IBM, 2025).
- **Several million corrected qubits**: required to break a 256-bit ECDSA key with Shor's algorithm.
- **2²⁵⁶ ≈ 1.16 × 10⁷⁷**: total search space of the SHA-256 hash, which Grover cannot exploit beyond a symbolic improvement.
- **O(2ⁿ)** → **O(2ⁿ⁄²)**: Grover's maximum theoretical gain, i.e., only a quadratic speed-up.
- **10⁶ to 10⁸ times more expensive**: estimated energy cost of a quantum computation equivalent to one classical hash.

### Definition of a quantum SHA-256 oracle

A quantum SHA-256 oracle is the translation, into the formalism of quantum computing, of the SHA-256 hash function used in Bitcoin mining. It is a central component of Grover's algorithm when the algorithm is applied to a hash function. In classical computing, SHA-256 is a deterministic function: it takes an input (a block of data) and produces a 256-bit hash. In quantum computing, this function must be represented as a **reversible unitary operation**, i.e., a logic circuit that transforms an input quantum state |x⟩ and an output register |y⟩ according to the rule:

|x, y⟩ → |x, y ⊕ SHA-256(x)⟩

where ⊕ denotes bitwise addition (XOR).
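To make this condition concrete, here is a minimal classical Python sketch of the check that such an oracle would have to encode reversibly: a double SHA-256 of a candidate block header, compared against a target. The placeholder header bytes and the 20-leading-zero-bit target are illustrative assumptions, not real network values.

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def meets_target(header: bytes, target: int) -> bool:
    """Classical version of the condition a Grover oracle would have to mark:
    the double-SHA-256 of a candidate header, read as a 256-bit integer,
    must be below the target.  (Simplified: real Bitcoin treats the hash as a
    little-endian integer, but the principle is identical.)"""
    return int.from_bytes(sha256d(header), "big") < target

# Illustrative values only: 76 placeholder header bytes + a 4-byte nonce,
# and an easy target requiring the first 20 bits to be zero.
fake_header = bytes(76) + (42).to_bytes(4, "little")
easy_target = 1 << (256 - 20)
print(meets_target(fake_header, easy_target))
```

A quantum oracle would have to perform the equivalent of `sha256d` and the comparison as one reversible circuit, for every Grover iteration.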
This operator is called a **quantum oracle** because it “guides” Grover's search by marking entries whose hash satisfies a given condition (for example, being less than the network target). During each iteration of Grover's algorithm, the quantum SHA-256 oracle:

1. Computes the SHA-256 hash of all possible entries **in superposition**.
2. Compares the result to a condition (e.g., “the first 20 bits are equal to zero”).
3. Flips the phase of the states that satisfy this condition.

This operation amplifies, through constructive interference, the probability of measuring a valid input at the end of the computation. Building a realistic quantum SHA-256 oracle involves:

- Converting the **irreversible operations** of classical SHA-256 (modular addition, shifts, XOR, AND, OR) into **reversible quantum gates**.
- Ensuring **quantum coherence** over millions of successive gates.
- Maintaining **fault tolerance** (error correction) over thousands of logical qubits.

In practice, each quantum SHA-256 oracle would correspond to an extremely deep circuit, comprising billions of elementary operations and requiring millions of physical qubits.

**In summary**, a quantum SHA-256 oracle is the reversible, unitary version of the hash function used in Bitcoin, used to mark valid solutions inside Grover's algorithm. It is the theoretical element that links classical cryptography to quantum computing, but also the main practical barrier that makes quantum mining unfeasible.

### Nature of the computational problem

Mining is based on the **SHA-256 hash function**, applied twice to each block header: the miner must find a nonce value such that the hash of the block is less than a target set by the protocol. This process is an exhaustive search in which each attempt is statistically independent. The probability of success of a single attempt is:

p = T / 2^256

where T is the network target. The average number of attempts required to find a valid block is therefore:

N_classic = 1 / p

In this model, each attempt is one hash computation, and current ASIC miners perform several hundred **trillion hashes per second**, thanks to a massively parallel architecture with an energy efficiency of a few dozen joules per terahash.

### The illusion of quantum acceleration

Grover's algorithm (1996) accelerates the search for a particular element in an unstructured space. Its complexity drops from O(2^n) to O(2^(n/2)). Applied to mining, this would reduce the average number of attempts to:

N_Grover ≈ (π/4) × 1 / √p

which is a theoretical gain of only a quadratic factor. Take a simple example: if the probability of success is p = 2⁻¹²⁸, then:

– N_classic = 2¹²⁸
– N_Grover ≈ (π/4) × 2⁶⁴ ≈ 1.45 × 10¹⁹

Even in this best-case scenario, the gain remains marginal in view of the physical constraints of implementation (a timing sketch is given below). Quantum mining does not multiply the speed by 10⁶ or 10⁹; it only reduces the exponential complexity by a quadratic factor. This improvement is **arithmetically insufficient** to compete with ASIC farms running millions of parallel circuits.

### Actual implementation of quantum SHA-256

The main obstacle lies in the depth and stability of the circuits needed to execute SHA-256 in quantum form. A benchmark study (Amy et al., SAC 2016) estimates that implementing SHA-256 with quantum error correction would require **several billion T gates** and **millions of physical qubits**.
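To put these orders of magnitude side by side, here is a hedged back-of-the-envelope sketch in Python. It reuses the figures quoted above (an optimistic 10⁻³ s per quantum oracle call, 10⁻¹⁴ s per ASIC hash, a one-million-unit ASIC farm) and assumes a difficulty of roughly 78 leading zero bits, a rough stand-in for today's network rather than the 128-bit textbook example; all of these numbers are assumptions for illustration, not measurements.

```python
from math import pi, sqrt

# Order-of-magnitude assumptions taken from the figures above (not measurements):
ORACLE_TIME     = 1e-3         # optimistic seconds per quantum SHA-256 oracle call
ASIC_HASH_TIME  = 1e-14        # seconds per hash on a ~100 TH/s ASIC
FARM_SIZE       = 1_000_000    # parallel ASICs in a large farm
DIFFICULTY_BITS = 78           # assumed network difficulty (~78 leading zero bits)

p         = 2.0 ** -DIFFICULTY_BITS        # success probability per attempt
n_classic = 1.0 / p                        # expected classical attempts
n_grover  = (pi / 4.0) / sqrt(p)           # expected Grover iterations

t_farm   = n_classic * ASIC_HASH_TIME / FARM_SIZE  # classical attempts run in parallel
t_grover = n_grover * ORACLE_TIME                   # each Grover iteration follows the previous one

print(f"classical attempts : {n_classic:.2e}")   # ~3e23
print(f"Grover iterations  : {n_grover:.2e}")    # ~4e11, the quadratic reduction
print(f"ASIC farm time     : {t_farm:.2e} s")    # on the order of an hour per block
print(f"Grover time        : {t_grover:.2e} s")  # on the order of a decade per block
```

Even granting the oracle a millisecond per call, the sequential Grover search is several orders of magnitude slower than an ordinary parallel farm at a realistic difficulty.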
By comparison, the best experimental quantum processors (Google, IBM, Rigetti) currently handle **a few hundred physical qubits**, with gate error rates between 10⁻³ and 10⁻² and coherence times on the order of microseconds. Even assuming the availability of a fault-tolerant quantum computer (FTQC), the circuit depth of Grover's algorithm on SHA-256 would far exceed the coherence window of current qubits. The cost of error correction, which requires 10³ to 10⁶ physical qubits per logical qubit, makes any industrial application impractical.

### Energy and hardware limitations

Contrary to popular belief, a quantum computer **does not consume “zero energy”**. Superconducting or trapped-ion devices require cooling to **temperatures close to absolute zero (10 to 15 mK)**, using expensive and energy-intensive dilution refrigerators. The consumption of a single cryogenic system already exceeds several kilowatts for a few hundred qubits, not counting microwave control instruments and high-frequency power supplies. Mining, however, is a **massively parallel process**: billions of independent calculations must be performed per second. Quantum computing, on the other hand, is **sequential**, with each Grover iteration depending on the previous one. Thus, even if a quantum computer could perform a “smarter” hash, its overall throughput would be orders of magnitude lower than that of specialized ASICs, whose energy cost per operation is less than 1 nanojoule. A 2023 study (“Conditions for advantageous quantum Bitcoin mining,” _Blockchain: Research and Applications_) confirms that the energy cost and latency of quantum control negate any theoretical advantage. In other words, **quantum computing is unsuited to the PoW structure**, which is based on the ultra-fast repetition of a simple function, not on deep, coherent computation.

### Difficulty adjustment: protocol neutralization

Even if an actor discovered a faster quantum method, the Bitcoin protocol's **difficulty adjustment mechanism** would make this advantage temporary. The difficulty is recalculated every 2016 blocks to maintain an average interval of 10 minutes. If a “quantum” miner doubled the network's overall hash rate, the difficulty would double at the next adjustment period, bringing the yield back to normal (a simplified retarget calculation is sketched below). Thus, quantum computing could never “break” mining: it would simply be integrated into the economic equilibrium of the network and then neutralized. The only residual risk would be **centralization**: the possession of exceptionally powerful quantum hardware by a single player could temporarily unbalance the hashpower market. But this risk is economic in nature, not cryptographic, and remains unlikely given the necessary investment costs (cryogenic infrastructure, maintenance, advanced engineering).

### Differentiating risks: signatures vs. hashing

Two distinct threats must be distinguished:

- **Hashing (SHA-256)**: used for mining, it is resistant to quantum attacks because Grover only confers a quadratic gain.
- **Signatures (ECDSA)**: used to prove ownership of an address, they would be vulnerable to **Shor's algorithm (1994)**, which can compute discrete logarithms.

It is therefore the signature layer, not the mining layer, that justifies post-quantum transition work. Recent estimates put the resources needed to break a 256-bit ECDSA key at several **million corrected qubits**. As of 2025, no system comes close to this scale: error-corrected logical qubits are counted in units, not in thousands or millions.
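Returning to the difficulty-adjustment point made above, the sketch below approximates Bitcoin's retarget rule in Python: the difficulty is scaled by the ratio of the intended 2016-block timespan to the observed one, clamped to a factor of four (the real implementation has a well-known off-by-one quirk in how it measures the interval, ignored here). The starting difficulty is an arbitrary illustrative number.

```python
TARGET_TIMESPAN = 2016 * 600  # 2016 blocks at 10 minutes, in seconds

def retarget(old_difficulty: float, actual_timespan: float) -> float:
    """Bitcoin-style retarget: scale difficulty by target/actual timespan,
    with the observed timespan clamped to a factor of 4 either way."""
    clamped = min(max(actual_timespan, TARGET_TIMESPAN / 4), TARGET_TIMESPAN * 4)
    return old_difficulty * TARGET_TIMESPAN / clamped

# Illustrative scenario: a "quantum" miner doubles the network hash rate,
# so the 2016 blocks are found in half the intended time.
difficulty = 100.0e12                 # arbitrary starting difficulty
observed   = TARGET_TIMESPAN / 2      # blocks found twice as fast
difficulty = retarget(difficulty, observed)
print(difficulty)                     # 2e14: the advantage is absorbed in one period
```

Whatever the source of extra hash rate, the protocol folds it back into a 10-minute average within one adjustment period.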
### The real progress of 2024-2025: advances with no impact on mining

Recent announcements of progress (for example, the stabilization of **error-corrected logical qubits**) are important steps, but they concern experimental reliability, not computing power. Quantum computing useful for mining would involve billions of coherent, repeated operations, which current qubits cannot sustain. Even a major breakthrough in error correction or modularity would not change the fact that quantum architecture remains incompatible with the massively parallel, shallow-depth, high-frequency nature of mining.

### The following explanations are a little more complex, so here are some prerequisites

The concepts of bits, pool mining, and difficulty bounds may seem abstract. Here is a clear explanation of these three essential elements for understanding how mining actually works.

**MSB and LSB**

In a 256-bit binary number (such as the result of a SHA-256), the **MSB** (_Most Significant Bits_) are the bits on the left: they carry the most significant values in the number. The **LSB** (_Least Significant Bits_) are those on the right, which change most often but have little influence on the overall value. When we talk about finding a hash “with leading zeros,” it means that the MSBs must be zero: the hash begins with a long series of zeros. Miners vary a small data field called a _nonce_ so that the final hash meets this constraint. In simplified terms, the network difficulty determines how many of these most significant bits must be zero.

**How pools work**

Mining is now organized into **pools**, groups of miners who work together and share the reward. Each miner is given simplified tasks: they do not try to meet the full network difficulty, but to produce _shares_, i.e., hashes that meet a target much easier than the network's. These shares serve as proof of participation: the more a miner provides, the greater their share of the final block reward. The pool server constantly adjusts the individual difficulty (vardiff) to balance speeds: a miner who is too fast is given more difficult tasks, which prevents any unfair advantage.

**Lower and upper mining limits**

In practice, two difficulty thresholds govern the mining process. The **upper limit** corresponds to the network target: for a block to be validated, its header hash must be less than this value. The lower the target, the more zeros are required at the beginning of the hash, and the harder the block is to find. Conversely, the **lower limit** corresponds to the work difficulty assigned by the pools to each miner, which is much easier to achieve. It is used solely to measure individual participation. The pool server constantly adjusts these limits. If a miner finds too many shares too quickly, the pool increases the difficulty of their tasks; if they find them too slowly, it reduces it. This mechanism, called vardiff, effectively eliminates extreme behavior: miners who are too fast gain no advantage beyond their proportional contribution, while those who are too slow are naturally excluded, as their shares become too rare to be profitable. Thanks to this balancing system, each miner's reward remains proportional to their actual contribution, with no possibility of a lasting advantage. The upper and lower limits thus ensure overall network stability and local fairness in the distribution of work. A simplified sketch of this share-versus-network threshold check is given below.
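Here is that simplified sketch: a small Python routine that counts the leading zero bits of a double-SHA-256 digest and classifies each attempt as a pool share, a valid block, or nothing. The thresholds (8 and 24 bits) and the placeholder header bytes are illustrative assumptions; the real network target requires far more leading zero bits (several dozen additional ones).

```python
import hashlib
from collections import Counter

def leading_zero_bits(digest: bytes) -> int:
    """Number of most-significant zero bits in a 256-bit digest."""
    value = int.from_bytes(digest, "big")
    return 256 - value.bit_length() if value else 256

def classify(header: bytes, share_bits: int = 8, network_bits: int = 24) -> str:
    """Classify one candidate: pool share, full block, or nothing.
    share_bits mimics the pool (vardiff) threshold, network_bits the network target."""
    digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    zeros = leading_zero_bits(digest)
    if zeros >= network_bits:
        return "valid block"
    if zeros >= share_bits:
        return "pool share"
    return "nothing"

# A pool's vardiff simply raises or lowers share_bits per miner so that
# shares arrive at a steady rate, regardless of the miner's raw speed.
counts = Counter()
for nonce in range(10_000):
    header = bytes(76) + nonce.to_bytes(4, "little")
    counts[classify(header)] += 1
print(counts)  # roughly 1 in 2^8 attempts is a share; a "valid block" is far rarer
```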
### Understanding the “partial Grover” illusion

One idea comes up often: applying Grover's algorithm not to the entire 256 bits of the SHA-256 hash, but only to a portion of the most significant bits (the “MSBs”), then completing the rest in the traditional way. This approach, known as “partial Grover,” seems logical: if the search covers a smaller space (for example, 40 bits instead of 256), the number of iterations required decreases accordingly, following the rule √(2^r). In theory, this could make it possible to obtain low-difficulty shares more quickly in a mining pool.

In practice, this approach does not change the reality of the computation. Each Grover iteration requires executing **the entire SHA-256** to evaluate the condition on the most significant bits. It is impossible to “truncate” the hash or partially test a cryptographic hash function without computing it entirely. In other words, fewer iterations are needed, but each one still costs just as much, which is millions of times more than a conventional hash on an ASIC.

Furthermore, Grover does not allow multiple correlated solutions to be produced. The quantum state collapses after the first measurement: to find another solution, you have to start all over again. Unlike classical computation, you cannot reuse the result to generate nearby variants or several close shares.

Finally, even if a quantum miner achieved a slight local acceleration on shares, this difference would be immediately neutralized by the pools' automatic regulation mechanisms, which dynamically adjust the difficulty for each miner. The protocol is designed to maintain a balance between all participants, regardless of their speed.

In summary, “partial Grover” offers no practical advantage: the quadratic gain remains purely theoretical, negated by the slowness, decoherence, and physical constraints of quantum computing. Even when applied to a small portion of the hash, the energy, time, and structural costs of such a process exceed those of conventional miners by several orders of magnitude.

### Other possible objections

**“Grover's algorithm can process multiple solutions (multiple-solutions search).”**

Source: the PennyLane Codebook entry “Grover's Algorithm | Multiple Solutions” explains the generalization of the algorithm to find M solutions in a space of size N.

**Response**: In theory, finding M solutions reduces the complexity to O(√(N/M)). However:

- In the context of mining, “solutions” would correspond to valid hashes for the difficulty target. But the quantum oracle must still evaluate the entire hash function for each input, so the cost per iteration remains maximal.
- Having M solutions does not change the **latency** or the **circuit depth**: we remain limited by error correction and coherence.
- For large values of N (≈ 2²⁵⁶) and small M (a very rare target), √(N/M) remains astronomical.

Therefore, even with Grover's “multiple-solutions” variant, hardware and time constraints still make its application to mining impractical.

**“If a quantum miner appeared, it could cause more forks/reorganizations.”**

Source: the academic article “On the insecurity of quantum Bitcoin mining” (Sattath, 2018) suggests that the correlation of measurement times could increase the probability of forking.

**Response**: This argument is interesting but largely speculative, and it rests on the assumption that an ultra-fast quantum miner would actually work.
However:

- The scenario requires a quantum miner capable of reaching a speed comparable to or greater than the best ASICs, which is not realistic today.
- Even if such a miner existed, the increase in forks would not necessarily result from a generalized mining advantage but from an opportunistic strategy. This does not call into question network adaptation, difficulty adjustment, or security measures.
- The fact that forks can occur does not mean that quantum mining is viable or advantageous: the cost remains prohibitive.

In summary, this objection can be formalized, but it does not constitute proof of an effective quantum advantage in the real world.

### Economic and energy consequences

Modern ASIC farms operate at an energy efficiency of around **12 to 35 J/TH**. A cryogenic quantum computer, even if perfectly optimized, would have an **efficiency several orders of magnitude lower**, due to the costs of cooling, control, and error correction. Quantum computing is therefore **uneconomical** for mining:

- it requires a centralized architecture;
- it does not allow for large-scale duplication;
- it does not reduce total energy consumption;
- it does not improve network security.

### Conclusion

Quantum computing, in its current and foreseeable state, is **fundamentally unsuitable for Bitcoin mining**:

1. **Algorithmically**, Grover's quadratic acceleration remains insufficient in the face of the exponential complexity of hashing.
2. **In terms of hardware**, error correction and decoherence limit any attempt at large-scale parallelization.
3. **In terms of energy**, cryogenic cooling and the complexity of control make any industrial operation inefficient.
4. **In terms of protocol**, the difficulty adjustment mechanism neutralizes any transient advantage.
5. **Economically**, the centralization required to maintain a quantum infrastructure would undermine the network's resilience, and in any case blocks are only rewarded if the nodes (which decide) accept them.

The quantum threat to Bitcoin concerns exclusively **cryptographic signatures (ECDSA)**, not **proof of work (SHA-256)**. Based on current knowledge and technological projections, **there is no credible prospect** of quantum computing offering any advantage for mining, or even any energy efficiency gain. The myth of the “quantum miner” is therefore more a matter of media speculation than applied science. Bitcoin, designed to adapt and adjust its difficulty, remains today and for the foreseeable future **resilient in the face of the quantum revolution**.

[Source]()

#Bitcoin #QuantumComputing #ProofOfWork #SHA256 #Grover #Mining #PostQuantum #Decentralization
# What does a Bitcoin address reveal before and after a transaction?

Reusing a Bitcoin address is often presented as a privacy issue. However, it also poses a **real cryptographic risk** related to the security of the private key itself. This issue concerns both older P2PKH addresses and newer SegWit (bc1q...) or Taproot (bc1p...) formats: when an address is reused after having already been used to spend a UTXO, all funds associated with that same key now depend on cryptographic material that has been exposed multiple times on the blockchain. This article explains the structural reasons for this risk, the cryptographic mechanisms involved, and the practical way to observe the public key revealed during a transaction.

### Exposure of the public key: a critical moment

Before any spend, a Bitcoin address **does not reveal the public key**, but only a hash:

```
HASH160(pubkey) = RIPEMD160(SHA-256(pubkey))
```

This hash offers no way to recover the public key. As long as a UTXO remains unspent, the associated key remains mathematically inaccessible. As soon as a UTXO is spent:

- the **signature** is published,
- the **complete public key** is revealed,
- the validity of the signature is verified against this key.

From this point on, the address no longer offers the same cryptographic protection: the public key is exposed to offensive analysis, and any reuse of this same key multiplies the data that can be exploited by an attacker.

### Where is the public key located at the time of spending?

The exact location depends on the type of address:

### P2PKH (addresses beginning with 1)

In **P2PKH** transactions, the public key appears:

- **in the scriptSig**,
- immediately after the signature,
- in hexadecimal form, usually as a compressed key (33 bytes, prefix 02 or 03) or uncompressed (65 bytes, prefix 04).

### P2WPKH (SegWit v0, bc1q addresses, etc.)

In **P2WPKH** transactions, the public key appears in the **witness**:

- witness[0] → signature (DER format),
- witness[1] → **compressed public key** (33 bytes, starting with 02 or 03).

### Taproot (P2TR, bc1p addresses, etc.)

**Taproot** transactions use Schnorr signatures and **x-only** public keys. The public key appears:

- in the **witness script**,
- usually under the “key path spending” line,
- in **x-only** format: 32 bytes (64 hex characters) without the 02/03 prefix.

### On mempool.space

[mempool.space]() does **not display “Public Key” in plain text**. You have to read the raw hexadecimal fields and recognize the format:

- **33 bytes** → compressed pubkey: starts with 02 or 03.
- **65 bytes** → uncompressed pubkey: starts with 04.
- **32 bytes** → Taproot x-only pubkey.

The public key is therefore still visible, but as a hexadecimal field in the Inputs.

### Why does reuse weaken security?

### Revealing the public key once is not critical

Security relies on the difficulty of the elliptic-curve discrete logarithm problem (ECDLP). As long as an attacker only has a single signature produced by the key:

- they cannot reconstruct anything,
- they have no statistical material,
- ECDLP remains intact.

### Revealing the same key multiple times multiplies the attack surface

Each spend of a UTXO associated with the same address publishes:

- an identical public key,
- a new, distinct signature.

In ECDSA (P2PKH, P2WPKH), each signature requires a random number: the **nonce k**. k must be:

- unique,
- unpredictable,
- perfectly generated.
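The sketch below shows, using textbook ECDSA arithmetic over secp256k1 in Python, why these three requirements matter: if the same k is reused for two signatures, anyone can recover the private key from public data alone. The values (d_priv, z1, z2, k_bad) are made-up illustrative numbers and the code is deliberately minimal, not production signing code.

```python
# Minimal, illustrative demonstration of why an ECDSA nonce k must never be
# reused: two signatures sharing the same k leak the private key.
P = 2**256 - 2**32 - 977                      # secp256k1 field prime
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def inv(a, m):
    return pow(a, -1, m)                      # modular inverse (Python 3.8+)

def point_add(p1, p2):
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    lam = (3 * x1 * x1) * inv(2 * y1, P) % P if p1 == p2 else (y2 - y1) * inv(x2 - x1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def scalar_mult(k, point=G):
    result = None
    while k:
        if k & 1:
            result = point_add(result, point)
        point = point_add(point, point)
        k >>= 1
    return result

def sign(z, d, k):
    """Textbook ECDSA with an explicit (and here, reused) nonce k."""
    r = scalar_mult(k)[0] % N
    s = inv(k, N) * (z + r * d) % N
    return r, s

# Hypothetical private key, message hashes, and a reused nonce (illustrative only).
d_priv = 0xC0FFEE
z1, z2 = 0x1111, 0x2222
k_bad  = 0xABCDEF

r, s1 = sign(z1, d_priv, k_bad)
_, s2 = sign(z2, d_priv, k_bad)

# Recovery from public data alone: k = (z1 - z2)/(s1 - s2), then d = (s1*k - z1)/r (mod N)
k_rec = (z1 - z2) * inv((s1 - s2) % N, N) % N
d_rec = (s1 * k_rec - z1) * inv(r, N) % N
print(hex(d_rec), d_rec == d_priv)            # True: private key fully recovered
```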
> A flaw in the generation of k (a well-documented class of incidents) allows exactly this recovery: the private key leaks if two signatures use the same k or correlated values of k.

Real-world examples:

- the Android SecureRandom bug of 2013,
- faulty hardware RNGs,
- old OpenSSL libraries,
- entropy weakness when a device boots,
- smartcards producing biased nonces.

Reusing addresses **multiplies the signatures produced** by the same key → increases the probability of a cryptographic incident.

### Taproot improves the situation but does not eliminate it

Taproot uses Schnorr:

- deterministically derived nonce → eliminates the “same k” risk,
- a more robust, linear signature structure.

However:

- the x-only key remains unique and exposed,
- multiple signatures remain exploitable for statistical analysis,
- hardware risks remain,
- a sufficiently powerful quantum computer would compromise any exposed public key (hence the interest in post-quantum cryptography).

### Risk concentration

An HD wallet (BIP32) allows each UTXO to be isolated behind a different derived key. Reusing addresses negates this advantage:

- a bug in a single signature → compromises all UTXOs dependent on that key.

This is the worst possible configuration in terms of compartmentalization.

### What about cryptographic advances (quantum or otherwise)?

If an attacker gained the ability to solve ECDLP:

- any public key **already exposed** would become vulnerable,
- all reused addresses would be particularly fragile,
- an address that has never been spent from would remain protected by HASH160.

Address reuse thus concentrates a future risk that the ecosystem explicitly seeks to avoid.

### Concrete example: key revealed in a real transaction

For the transaction:

```
7ee6745718bec9db76390f3a4390b9e7daeeb401e8c666a7b261117a6af654a1
```

This is a P2WPKH input. In the witness:

- the signature is in witness[0],
- the compressed public key is in witness[1].

The revealed public key is:

```
02174ee672429ff94304321cdae1fc1e487edf658b34bd1d36da03761658a2bb09
```

> Before spending: only HASH160(pubkey) was visible.
> After spending: the actual public key is visible, permanently.

### Conclusion

Reusing Bitcoin addresses represents a tangible cryptographic risk. It is not just a matter of poor privacy hygiene, but a structural problem: **a public key should only be exposed once**, and signatures should never be multiplied on the same key if maximum robustness is desired. Current cryptographic mechanisms are robust, but experience shows that:

- implementations are never perfect,
- nonces can be biased,
- devices can lack entropy,
- hardware attacks exist,
- cryptanalysis keeps advancing.

Minimizing the exposure of public keys remains a fundamental best practice, today and tomorrow, and it starts with a simple rule: **never reuse an address that has already spent a UTXO**.

[Source]()

#Bitcoin #Privacy #Cryptography #ECDSA #Schnorr #Taproot #SegWit #UTXO #Decentralized #BitcoinPrivacy #CryptoEducation #BIP32 #HDWallet #QuantumThreat
Interview from #PlanBLugano. Check out what the guys are doing: a cool demo with an NFC tag to log in to a website or to open a door 🚪 They should go live with their project in Q1 2026, so don't hesitate to contact them to beta test. PS: there was a lot of background noise in the room, so I had to use AI to remove it. #PlanB
At #PlanBLugano I interviewed Daniel. His company supports the circular economy in developing countries: he buys products from them and pays them in sats. Keychains and mini surfboards, upcycled from used boards the kids can no longer surf on, with Bitcoin-related street art from StreetCyber in Barcelona. #PlanB
Guess who I ran into at #PlanBLugano? Uncle Rockstar Dev. He gave me a cool NFC card with his image on it, and when you tap it on your phone, you get laser eyes! 🤩 Read my feedback from the conference here:
On my way to the Lugano PlanB conference with @Saidah - Ask a Bitcoiner 21 Questions
HAARP: injecting energy into the ionosphere in order to control it
The HAARP project, a reality. HAARP: the climate under control? #HAARP #SNOWDEN
**SHOCKING REVELATION: SNOWDEN EXPOSES HAARP'S GLOBAL WARFARE PROGRAM**

Edward Snowden has just dropped the information bomb the deep state feared most. HAARP is not a research station; it is a weapon of global control. Leaked documents show that HAARP can trigger strokes and heart attacks by targeting the brainstem with very-high-frequency radio waves. The deaths appear natural, leaving no trace of murder. Snowden confirms that NATO uses HAARP to suppress dissent, manipulate thoughts, and provoke psychotic breaks in order to discredit targets. This is not a theory: during his escape from Hong Kong, Snowden and WikiLeaks staff had to protect themselves from homicidal impulses induced by radio waves. Snowden built a Faraday cage to block all incoming signals, proof that HAARP is still targeting him today. He provided emails from admirals and Air Force generals confirming HAARP's capabilities. Intelligence insiders have verified their authenticity.

The HAARP array in Alaska generates 36 million watts of directed energy, enough to manipulate the ionosphere, disrupt the climate, and turn entire regions into disaster zones. These facilities exist around the world, in Alaska, Sweden, and Russia, all part of a coordinated system capable of causing famine, earthquakes, and chaos to crush resistance. Insiders claim that cell towers and DARPA's TrapWire system are connected to this grid, creating a planetary web of surveillance and control. Earlier whistleblowers such as Nick Begich warned that emotional manipulation via HAARP was “frighteningly easy.”

This is a war against humanity itself. Our weather, our minds, even our deaths can be scripted by those who operate HAARP. The globalist regime hides it under the pretext of “scientific research” while waging a silent war. Snowden's warning is clear: HAARP is the ultimate weapon. It can destabilize nations without a single bullet being fired. It can erase patriots without leaving fingerprints. It is the invisible hand guiding world events: storms, wars, collapses, all orchestrated to force submission.

Patriots must wake up. Demand the shutdown of these facilities. Demand the exposure of those who authorize their use. The clock is ticking. Every second HAARP remains active is another second of psychological and environmental warfare against the people. The storm is not coming; it is already here.

#HAARP #SNOWDEN