REPO Cleaned.

This commit is contained in:
Captain 2021-11-07 12:47:30 +00:00
parent 16fad834a1
commit d73164bced
No known key found for this signature in database
GPG Key ID: 18CDB3ED5E85D2D4
4459 changed files with 1 additions and 1328673 deletions


@@ -1,52 +0,0 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQSuBFpgP9IRDAC5HFDj9beW/6THlCHMPmjSCUeT0lKtT22uHbTA5CpZFTRvrjF8
l1QFpECuax2LiQUWCg2rl5LZtjE2BL53uNhPagGiUOnMC7w50i3YD/KWoanM9or4
8uNmkYRp7pgnjQKX+NK9TWJmLE94UMUgCUach+WXRG4ito/mc2U2A37Lonokpjb2
hnc3d2wSESg+N0Am91TNSiEo80/JVRcKlttyEHJo6FE1sW5Ll84hW8QeROwYa/kU
N8/jAAVTUc2KzMKknlVlGYRcfNframwCu2xUMlyX5Ghjrr3PmLgQX3qc3k/eTwAr
fHifdvZnsBTquLuOxFHk0xlvdSyoGeX3F0LKAXw1+Y6uyX9v7F4Ap7vEGsuCWfNW
hNIayxIM8iOeb6AOFQycL/GkI0Mv+SCd/8KqdAHT8FWjsJUnOWcYYKvFdN5jcORw
C6OVxf296Sj1Zrti6XVQv63/iaJ9at142AcVwbnvaR2h5IqyXdmzmszmoYVvf7jG
JVsmkwTrRvIgyMcBAOLrwQ7I4JGlL54nKr1mIvGRLZ2lH/2sfM2QHcTgcCQ5DACi
P0wOKlt6UgRQ27Aeh0LtOuFuZReXE8dIpD8f6l+zLS5Kii1SB1yffeSsQbTD6bvt
Ic6h88iUKypNHiFcFNncyad6f4zFYPB1ULXyFoZcpPo3jKjwNW/h//AymgfbqFUa
4dWgdVhdkSKB1BzSMamxKSv9O87Q/Zc2vTcA/0j9RjPsrRIfOCziob+kIcpuylA9
a71R9dJ7r2ivwvdOK2De/VHkEanM8qyPgmxdD03jLsx159fX7B9ItSdxg5i0K9sV
6mgfyGiHETminsW28f36O/WMH0SUnwjdG2eGJsZE2IOS/BqTXHRXQeFVR4b44Ubg
U9h8moORPxc1+/0IFN2Bq4AiLQZ9meCtTmCe3QHOWbKRZ3JydMpoohdU3l96ESXl
hNpD6C+froqQgemID51xe3iPRY947oXjeTD87AHDBcLD/vwE6Ys2Vi9mD5bXwoym
hrXCIh+v823HsJSQiN8QUDFfIMIgbATNemJTXs84EnWwBGLozvmuUvpVWXZSstcL
/ROivKTKRkTYqVZ+sX/yXzQM5Rp2LPF13JDeeATwrgTR9j8LSiycOOFcp3n+ndvy
tNg+GQAKYC5NZWL/OrrqRuFmjWkZu0234qZIFd0/oUQ5tqDGwy84L9f6PGPvshTR
yT6B4FpOqvPt10OQFfpD/h9ocFguNBw0AELjXUHk89bnBTU5cKGLkb1iOnGwtAgJ
mV6MJRjS/TKL6Ne2ddiv46fXlY05zJfg0ZHehe49BIZXQK8/9h5YJGmtcUZP19+6
xPTF5zXWs0k3yzoTGP2iCW/Ksf6b0t0fIIASGFAhQJUmGW1lKAcZTTt425G3NYOc
jmhJaFzcLpTnoqB8RKOTUzWXESXmA86cq4DtyQ2yzeLKBkroRGdpwvpZLH3MeDJ4
EIWSmcKPxm8oafMk6Ni9I4qQLFeSTHcF2qFoBMLKai1lqLd+NAzQmbXHDw6gOac8
+DBfIcaj0f5AK/0G39dOV+pg29pISt2PWDDhZ/XsjetrqcrnhsqNNRyplmmy0xR0
srQwQ2FwdGFpbiBEZXJvIChodHRwczovL2Rlcm8uaW8pIDxzdXBwb3J0QGRlcm8u
aW8+iJAEExEIADgWIQQPOeQljGU5R3AqgjQIsgNgoDqd6AUCWmA/0gIbAwULCQgH
AgYVCAkKCwIEFgIDAQIeAQIXgAAKCRAIsgNgoDqd6FYnAQChtgDnzVwe28s6WDTK
4bBa60dSZf1T08PCKl3+c3xx1QEA2R9K2CLQ6IsO9NXD5kA/pTQs5AxYc9bLo/eD
CZSe/4u5Aw0EWmA/0hAMALjwoBe35jZ7blE9n5mg6e57H0Bri43dkGsQEQ1fNaDq
7XByD0JAiZ20vrrfDsbXZQc+1SBGGOa38pGi6RKEf/q4krGe7EYx4hihHQuc+hco
PqOs6rN3+hfHerUolKpYlkGOSxO1ZjpvMOPBF1hz0Bj9NoPMWwVb5fdWis2BzKAu
GHFAX5Ls86KKZs19DRejWsdFtytEiqM7bAjUW75o3O24faxtByTa2SVmmkavCFS4
BpjDhIU2d5RqhJRkb9fqBU8MDFrmCQqSraQs/CqmOTYzM7E8wlk1SwylXN6yBFX3
RAwq1koFMw8yRMVzswEy917kTHS4IyM2yfYjbnENmWJuHiYJmgn8Lqw1QA3syIfP
E4qpzGBTBq3YXXOSymsNKZmKH0rK/G0l3p33rIagl5UXfr1LVd5XJRu6BzjKuk+q
uL3zb6d0ZSaT+aQ/Sju3shhWjGdCRVoT1shvBbQeyEU5ZLe5by6sp0FH9As3hRkN
0PDALEkhgQwl5hU8aIkwewADBQv/Xt31aVh+k/l+CwThAt9rMCDf2PQl0FKDH0pd
7Tcg1LgbqM20sF62PeLpRq+9iMe/pD/rNDEq94ANnCoqC5yyZvxganjG2Sxryzwc
jseZeq3t/He8vhiDxs3WwFbJSylzPG3u9xgyGkKDfGA74Iu+ASPOPOEOT4oLjI5E
s/tB7muD8l/lpkWij2BOopiZzieQntn8xW8eCFTocSAjZW52SoI1x/gw3NasILoB
nrTy0yOYlM01ucZOTB/0JKpzidkJg336amZdF4bLkfUPyCTE6kzG0PrLrQSeycr4
jkDfWfuFmRhKD2lDtoWDHqiPfe9IJkcTMnp5XfXAG3V2pAc+Mer1WIYajuHieO8m
oFNCzBc0obe9f+zEIBjoINco4FumxP78UZMzwe+hHrj8nFtju7WbKqGWumYH0L34
47tUoWXkCZs9Ni9DUIBVYWzEobgS7pl/H1HLR36klfAHLut0T9PZgipKRjSx1Ljz
M78wxVhupdDvHDEdKnq9E9lD6018iHgEGBEIACAWIQQPOeQljGU5R3AqgjQIsgNg
oDqd6AUCWmA/0gIbDAAKCRAIsgNgoDqd6LTZAQDESAvVHbtyKTwMmrx88p6Ljmtp
pKxKP0O5AFM7b7INbQEAtE3lAIBUA31x3fjC5L6UyGk/a2ssOWTsJx98YxMcPhs=
=H4Qj
-----END PGP PUBLIC KEY BLOCK-----


@@ -1,46 +0,0 @@
### Welcome to the DEROHE Testnet
[Explorer](https://testnetexplorer.dero.io) [Source](https://github.com/deroproject/derohe) [Twitter](https://twitter.com/DeroProject) [Discord](https://discord.gg/H95TJDp) [Wiki](https://wiki.dero.io) [Github](https://github.com/deroproject/derohe) [DERO CryptoNote Mainnet Stats](http://network.dero.io) [Mainnet WebWallet](https://wallet.dero.io/)
### DERO HE Changelog
[From Wikipedia: ](https://en.wikipedia.org/wiki/Homomorphic_encryption)
### At this point in time, the DERO blockchain has the first-mover advantage in the following
* Private SCs (no one knows who owns which tokens, who is transferring to whom, or how much is being transferred).
* Homomorphic protocol.
* Ability to do instant sync (takes a couple of seconds or minutes, depending on network bandwidth).
* Ability to deliver encrypted license keys and other data.
* Pruned chains are the core.
* Ability to model 99.9% of the world's financial use cases.
* Privacy by design, backed by crypto algorithms with many years of research behind them.
### 3.3
* Private SCs are now supported (90% complete).
* A sample token contract is available with a guide.
* Multi-send is now possible: sending to multiple destinations per tx.
* A few more ideas have been implemented and will be available for review in an upcoming technology preview.
### 3.2
* Open SCs are now supported.
* Private SCs, which keep their balance encrypted at all times, are under implementation.
* SCs can now update themselves; however, new code will only run on the next invocation.
* Multi-send is under implementation.
### 3.1
* TXs now save around 31 × ringsize bytes for every tx.
* Daemon now supports pruned chains.
* Daemon by default bootstraps a pruned chain.
* Daemon can sync a full node using the --fullnode option.
* P2P has been rewritten for various improvements and an easier-to-understand state machine.
* The address specification now allows embedding various RPC parameters for easier transactions.
* The DERO blockchain reaches transaction finality within a couple of blocks (less than 1 minute), unlike other blockchains.
* Proving and parsing of embedded data are now available in the explorer.
* Senders and receivers both have proofs which confirm data sent on execution.
* All txs now have inbuilt space of 144 bytes for user-defined data.
* The user-defined space has an inbuilt RPC format which can be used to implement most practical use cases. All user-defined data is encrypted.
* The model currently defines data on chain while execution is deferred to wallet extensions. A dummy pongserver extension showcases how to enable purchases/delivery of license keys/information privately.
* Burn transactions, which destroy value, are now working.
### 3.0
* DERO HE implemented

LICENSE

@@ -1,90 +0,0 @@
RESEARCH LICENSE
Version 1.1.2
I. DEFINITIONS.
"Licensee " means You and any other party that has entered into and has in effect a version of this License.
“Licensor” means DERO PROJECT(GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8) and its successors and assignees.
"Modifications" means any (a) change or addition to the Technology or (b) new source or object code implementing any portion of the Technology.
"Research Use" means research, evaluation, or development for the purpose of advancing knowledge, teaching, learning, or customizing the Technology for personal use. Research Use expressly excludes use or distribution for direct or indirect commercial (including strategic) gain or advantage.
"Technology" means the source code, object code and specifications of the technology made available by Licensor pursuant to this License.
"Technology Site" means the website designated by Licensor for accessing the Technology.
"You" means the individual executing this License or the legal entity or entities represented by the individual executing this License.
II. PURPOSE.
Licensor is licensing the Technology under this Research License (the "License") to promote research, education, innovation, and development using the Technology.
COMMERCIAL USE AND DISTRIBUTION OF TECHNOLOGY AND MODIFICATIONS IS PERMITTED ONLY UNDER AN APPROPRIATE COMMERCIAL USE LICENSE AVAILABLE FROM LICENSOR AT <url>.
III. RESEARCH USE RIGHTS.
A. Subject to the conditions contained herein, Licensor grants to You a non-exclusive, non-transferable, worldwide, and royalty-free license to do the following for Your Research Use only:
1. reproduce, create Modifications of, and use the Technology alone, or with Modifications;
2. share source code of the Technology alone, or with Modifications, with other Licensees;
3. distribute object code of the Technology, alone, or with Modifications, to any third parties for Research Use only, under a license of Your choice that is consistent with this License; and
4. publish papers and books discussing the Technology which may include relevant excerpts that do not in the aggregate constitute a significant portion of the Technology.
B. Residual Rights. You may use any information in intangible form that you remember after accessing the Technology, except when such use violates Licensor's copyrights or patent rights.
C. No Implied Licenses. Other than the rights granted herein, Licensor retains all rights, title, and interest in Technology, and You retain all rights, title, and interest in Your Modifications and associated specifications, subject to the terms of this License.
D. Open Source Licenses. Portions of the Technology may be provided with notices and open source licenses from open source communities and third parties that govern the use of those portions, and any licenses granted hereunder do not alter any rights and obligations you may have under such open source licenses, however, the disclaimer of warranty and limitation of liability provisions in this License will apply to all Technology in this distribution.
IV. INTELLECTUAL PROPERTY REQUIREMENTS
As a condition to Your License, You agree to comply with the following restrictions and responsibilities:
A. License and Copyright Notices. You must include a copy of this License in a Readme file for any Technology or Modifications you distribute. You must also include the following statement, "Use and distribution of this technology is subject to the Java Research License included herein", (a) once prominently in the source code tree and/or specifications for Your source code distributions, and (b) once in the same file as Your copyright or proprietary notices for Your binary code distributions. You must cause any files containing Your Modification to carry prominent notice stating that You changed the files. You must not remove or alter any copyright or other proprietary notices in the Technology.
B. Licensee Exchanges. Any Technology and Modifications You receive from any Licensee are governed by this License.
V. GENERAL TERMS.
A. Disclaimer Of Warranties.
TECHNOLOGY IS PROVIDED "AS IS", WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, WARRANTIES THAT ANY SUCH TECHNOLOGY IS FREE OF DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR PURPOSE, OR NON-INFRINGING OF THIRD PARTY RIGHTS. YOU AGREE THAT YOU BEAR THE ENTIRE RISK IN CONNECTION WITH YOUR USE AND DISTRIBUTION OF ANY AND ALL TECHNOLOGY UNDER THIS LICENSE.
B. Infringement; Limitation Of Liability.
1. If any portion of, or functionality implemented by, the Technology becomes the subject of a claim or threatened claim of infringement ("Affected Materials"), Licensor may, in its unrestricted discretion, suspend Your rights to use and distribute the Affected Materials under this License. Such suspension of rights will be effective immediately upon Licensor's posting of notice of suspension on the Technology Site.
2. IN NO EVENT WILL LICENSOR BE LIABLE FOR ANY DIRECT, INDIRECT, PUNITIVE, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH OR ARISING OUT OF THIS LICENSE (INCLUDING, WITHOUT LIMITATION, LOSS OF PROFITS, USE, DATA, OR ECONOMIC ADVANTAGE OF ANY SORT), HOWEVER IT ARISES AND ON ANY THEORY OF LIABILITY (including negligence), WHETHER OR NOT LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. LIABILITY UNDER THIS SECTION V.B.2 SHALL BE SO LIMITED AND EXCLUDED, NOTWITHSTANDING FAILURE OF THE ESSENTIAL PURPOSE OF ANY REMEDY.
C. Termination.
1. You may terminate this License at any time by notifying Licensor in writing.
2. All Your rights will terminate under this License if You fail to comply with any of its material terms or conditions and do not cure such failure within thirty (30) days after becoming aware of such noncompliance.
3. Upon termination, You must discontinue all uses and distribution of the Technology, and all provisions of this Section V shall survive termination.
D. Miscellaneous.
1. Trademark. You agree to comply with Licensor's Trademark & Logo Usage Requirements, if any and as modified from time to time, available at the Technology Site. Except as expressly provided in this License, You are granted no rights in or to any Licensor's trademarks now or hereafter used or licensed by Licensor.
2. Integration. This License represents the complete agreement of the parties concerning the subject matter hereof.
3. Severability. If any provision of this License is held unenforceable, such provision shall be reformed to the extent necessary to make it enforceable unless to do so would defeat the intent of the parties, in which case, this License shall terminate.
4. Governing Law. This License is governed by the laws of the United States and the State of California, as applied to contracts entered into and performed in California between California residents. In no event shall this License be construed against the drafter.
5. Export Control. You agree to comply with the U.S. export controls and trade laws of other countries that apply to Technology and Modifications.
READ ALL THE TERMS OF THIS LICENSE CAREFULLY BEFORE ACCEPTING.
BY CLICKING ON THE YES BUTTON BELOW OR USING THE TECHNOLOGY, YOU ARE ACCEPTING AND AGREEING TO ABIDE BY THE TERMS AND CONDITIONS OF THIS LICENSE. YOU MUST BE AT LEAST 18 YEARS OF AGE AND OTHERWISE COMPETENT TO ENTER INTO CONTRACTS.
IF YOU DO NOT MEET THESE CRITERIA, OR YOU DO NOT AGREE TO ANY OF THE TERMS OF THIS LICENSE, DO NOT USE THIS SOFTWARE IN ANY FORM.

Readme.md

@@ -1,239 +1,4 @@
### Welcome to the DEROHE Testnet
[Explorer](https://testnetexplorer.dero.io) [Source](https://github.com/deroproject/derohe) [Twitter](https://twitter.com/DeroProject) [Discord](https://discord.gg/H95TJDp) [Wiki](https://wiki.dero.io) [Github](https://github.com/deroproject/derohe) [DERO CryptoNote Mainnet Stats](http://network.dero.io) [Mainnet WebWallet](https://wallet.dero.io/)
### DERO HE [DERO Homomorphic Encryption]
[From Wikipedia: ](https://en.wikipedia.org/wiki/Homomorphic_encryption)
**Homomorphic encryption is a form of encryption allowing one to perform calculations on encrypted data without decrypting it first. The result of the computation is in an encrypted form, when decrypted the output is the same as if the operations had been performed on the unencrypted data.**
Homomorphic encryption can be used for privacy-preserving outsourced storage and computation. This allows data to be encrypted and out-sourced to commercial cloud environments for processing, all while encrypted. In highly regulated industries, such as health care, homomorphic encryption can be used to enable new services by removing privacy barriers inhibiting data sharing. For example, predictive analytics in health care can be hard to apply via a third party service provider due to medical data privacy concerns, but if the predictive analytics service provider can operate on encrypted data instead, these privacy concerns are diminished.
**DERO is pleased to announce release of DERO Homomorphic Encryption Protocol testnet.**
DERO will migrate from the existing CryptoNote protocol to its own DERO Homomorphic Encryption Blockchain Protocol (DHEBP).
REPO cleaned here.
### Table of Contents [DEROHE]
1. [ABOUT DERO PROJECT](#about-dero-project)
1. [DERO HE Features](#dero-he-features)
1. [DERO HE TX Sizes](#dero-he-tx-sizes)
1. [DERO Crypto](#dero-crypto)
1. [DERO HE PORTS](#dero-he-ports)
1. [Technical](#technical)
1. [DERO blockchain salient features](#dero-blockchain-salient-features)
1. [DERO Innovations](#dero-innovations)
1. [Dero DAG](#dero-dag)
1. [Client Protocol](#client-protocol)
1. [Dero Rocket Bulletproofs](#dero-rocket-bulletproofs)
1. [51% Attack Resistant](#51-attack-resistant)
1. [DERO Mining](#dero-mining)
1. [DERO Installation](#dero-installation)
1. [Installation From Source](#installation-from-source)
1. [Installation From Binary](#installation-from-binary)
1. [Next Step After DERO Installation](#next-step-after-dero-installation)
1. [Running DERO Daemon](#running-dero-daemon)
1. [Running DERO wallet](#running-dero-wallet)
1. [DERO Cmdline Wallet](#dero-cmdline-wallet)
1. [DERO WebWallet](#dero-web-wallet)
1. [DERO Gui Wallet ](#dero-gui-wallet)
1. [DERO Explorer](#dero-explorer)
1. [Proving DERO Transactions](#proving-dero-transactions)
#### ABOUT DERO PROJECT
&nbsp; &nbsp; &nbsp; &nbsp; [DERO](https://github.com/deroproject/derosuite) is a decentralized DAG (Directed Acyclic Graph) based blockchain with enhanced reliability, privacy, security, and usability. The consensus algorithm is PoW based on [DERO AstroBWT: ASIC/FPGA/GPU resistant CPU mining algorithm](https://github.com/deroproject/astrobwt). DERO is industry leading and the first blockchain to have bulletproofs and a TLS encrypted network.
&nbsp; &nbsp; &nbsp; &nbsp; DERO is the first crypto project to combine a Proof of Work blockchain with a DAG block structure and fully anonymous transactions based on [Homomorphic Encryption](https://en.wikipedia.org/wiki/Homomorphic_encryption). The fully distributed ledger processes transactions with a sixty-second average block time and is secure against majority hashrate attacks. DERO will be the first Homomorphic Encryption based blockchain to have smart contracts on its native chain without any extra layers or secondary blockchains. At present DERO has Smart Contracts on the old CryptoNote protocol [testnet](https://github.com/deroproject/documentation/blob/master/testnet/stargate.md).
#### DERO HE Features
1. **Homomorphic account-based model.** First privacy chain to have this (see blockchain/transaction_execute.go, lines 82-95).
2. Instant account balances [only 66 bytes of data need to be fetched from the blockchain].
3. No more chain scanning or wallet scanning to detect funds, no key images, etc.
4. Truly lightweight and efficient wallets.
5. Fixed per-account cost of 66 bytes in the blockchain [immense scalability].
6. Perfectly anonymous transactions with many-out-of-many proofs [bulletproofs and sigma protocol].
7. Deniability.
8. Fixed transaction size, e.g. ~2.5 KB (ring size 8) or ~3.4 KB (ring size 16), based on the chosen anonymity group size [logarithmic growth].
9. Anonymity group size can be chosen in powers of 2.
10. Allows homomorphic assets (programmable SCs with fixed overhead per asset), with open smart contracts but encrypted data [internal testing/implementation, not on this current testnet branch].
11. Allows open assets (programmable SCs with fixed overhead per asset) [internal testing/implementation, not on this current testnet branch].
12. Allows chain pruning on daemons to control the growth of data on daemons.
13. Transaction generation takes less than 25 ms.
14. Transaction verification also takes less than 25 ms.
15. No trusted setup, no hidden parameters.
16. Pruning chain/history for immense scalability [while still secured using merkle proofs].
17. Example disk requirements for 1 billion accounts (assuming the node does not keep transaction history, but keeps proofs to prove that it is in sync with all other nodes):
```
Requirement of 1 account             = 66 bytes
Assumed storage overhead per account = 128 bytes (constant)
Total for 1 billion accounts         = (66 + 128) bytes * 10^9 ≈ 200 GB
Assuming we are off by a factor of 4 = 800 GB
```
18. Note that even after 1 trillion transactions, 1 billion accounts will consume only 800 GB if history is not maintained, and everything will still be in a proven state using merkle roots. Thus even a Raspberry Pi can host the entire chain.
19. Senders can prove to the receiver what amount they have sent (without revealing themselves).
20. The entire chain is rsyncable while in operation.
21. Testnet released with source code.
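The disk estimate in item 17 is simple arithmetic; a minimal Go sketch using the byte counts assumed above (the text rounds the 194 GB result up to ~200 GB, and 4× that to 800 GB):

```go
package main

import "fmt"

// Back-of-the-envelope disk estimate: each account needs 66 bytes of
// balance data plus an assumed constant 128 bytes of storage overhead.
func StorageGB(accounts, perAccountBytes, overheadBytes int64) float64 {
	return float64(accounts*(perAccountBytes+overheadBytes)) / 1e9
}

func main() {
	base := StorageGB(1_000_000_000, 66, 128)
	fmt.Printf("%.0f GB (x4 safety margin: %.0f GB)\n", base, 4*base)
	// prints: 194 GB (x4 safety margin: 776 GB)
}
```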
#### DERO HE TX Sizes
| Ring Size | DEROHE TX Size |
| -------- | -------- |
| 2 | 1553 bytes |
| 4 | 2013 bytes |
| 8 | 2605 bytes |
| 16 | 3461 bytes |
| 32 | 4825 bytes |
| 64 | 7285 bytes |
| 128 | 11839 bytes |
| 512 | ~35000 bytes |
**NB:** There are plans to reduce TX sizes further.
#### DERO Crypto
&nbsp; &nbsp; &nbsp; &nbsp; Secure and fast crypto is a basic necessity of this project, and an adequate amount of time has been devoted to develop/study/implement/audit it. Most of the crypto, such as ring signatures, has been studied by various researchers and is in production by a number of projects. As far as the bulletproofs are concerned, since DERO was the first to implement/deploy them, they have been given a more detailed look. First, a bare-bones bulletproof was implemented, then implementations in development were studied (Benedikt Bünz, XMR, Dalek Bulletproofs), improving our own implementation.
&nbsp; &nbsp; &nbsp; &nbsp; Some new improvements were discovered and implemented (there are a number of other improvements which are not explained here). The major improvements are in the double-base double-scalar multiplication while validating bulletproofs. A typical bulletproof takes ~15-17 ms to verify; optimised bulletproofs take ~1 to ~2 ms (simple bulletproof, no aggregation/batching). Since in the case of bulletproofs the bases are fixed, we can use a precomputed table to convert 64*2 base-scalar multiplications into doublings and additions (NOTE: we do not use Bos-Coster/Pippenger methods). This time can easily be decreased further to 0.5 ms with some more optimizations. With batching and aggregation, 5000 range proofs (~2500 TXs) can easily be verified on even a laptop. The implementation of bulletproofs is in github.com/deroproject/derosuite/crypto/ringct/bulletproof.go; the optimized version is in github.com/deroproject/derosuite/crypto/ringct/bulletproof_ultrafast.go.
&nbsp; &nbsp; &nbsp; &nbsp; There are other optimizations, such as base-scalar multiplication, which could be done in less than a microsecond. Some of these optimizations are not yet deployed and may be deployed at a later stage.
#### DERO HE PORTS
**Mainnet:**
P2P Default Port: 10101
RPC Default Port: 10102
Wallet RPC Default Port: 10103
**Testnet:**
P2P Default Port: 40401
RPC Default Port: 40402
Wallet RPC Default Port: 40403
#### Technical
&nbsp; &nbsp; &nbsp; &nbsp; For specific details of current DERO core (daemon) implementation and capabilities, see below:
1. **DAG:** No orphan blocks, no soft forks.
2. **BulletProofs:** Zero-knowledge range proofs (NIZK).
3. **AstroBWT:** A memory-bound algorithm. This provides assurance that all miners are equal (no miner has an advantage over common miners).
4. **P2P Protocol:** This layer controls the exchange of blocks, transactions, and the blockchain itself.
5. **Pedersen Commitment:** (Part of ring confidential transactions.) The Pedersen commitment algorithm is a cryptographic primitive that allows a user to commit to a chosen value while keeping it hidden from others. Pedersen commitments are used to hide all amounts without revealing the actual amount. It is a homomorphic commitment scheme.
6. **Homomorphic Encryption:** Homomorphic encryption is used to perform operations such as addition/subtraction to settle balances with the data always remaining encrypted (balances are never decrypted before/during/after operations in any form).
7. **Homomorphic Ring Confidential Transactions:** Give untraceability, privacy, and fungibility while making sure that the system is stable and secure.
8. **Core-Consensus Protocol implemented:** The consensus protocol serves 2 major purposes:
    1. Protects the system from adversaries and protects it from forking and tampering.
    2. Ensures the next block in the chain is the one and only correct version of truth (balances).
9. **Proof-of-Work (PoW) algorithm:** PoW is the part of the core consensus protocol used to cryptographically prove that X amount of work has been done to successfully find a block.
10. **Difficulty algorithm:** The difficulty algorithm controls the system so that blocks are found at roughly the same speed, irrespective of the number and amount of mining power deployed.
11. **Serialization/deserialization of blocks:** Capability to encode/decode/process blocks.
12. **Serialization/deserialization of transactions:** Capability to encode/decode/process transactions.
13. **Transaction validity and verification:** Any transactions flowing within the DERO network are validated and verified.
14. **SOCKS proxy:** A SOCKS proxy has been implemented and integrated within the daemon to decrease user identifiability and improve user anonymity.
15. **Interactive daemon:** Can print blocks, txs, even the entire blockchain from within the daemon.
16. **status, diff, print_bc, print_block, print_tx** and several other commands implemented.
17. The GO DERO daemon has both mainnet and testnet support.
18. **Enhanced reliability, privacy, security, usability, and portability assured.**
#### DERO blockchain salient features
- [DAG Based: No orphan blocks, No soft-forks.](#dero-dag)
- [51% Attack resistant.](#51-attack-resistant)
- 60 Second Block time.
- Extremely fast transactions with one-minute (one block) confirmation time.
- SSL/TLS P2P Network.
- Homomorphic: Fully Encrypted Blockchain
- [Dero Fastest Rocket BulletProofs](#dero-rocket-bulletproofs): Zero Knowledge range-proofs(NIZK).
- Ring signatures.
- Fully Auditable Supply.
- DERO blockchain is written from scratch in Golang. [See all unique blockchains from scratch.](https://twitter.com/cryptic_monk/status/999227961059528704)
- Developed and maintained by original developers.
#### DERO Innovations
&nbsp; &nbsp; &nbsp; &nbsp; The following are DERO's first and leading innovations.
#### DERO DAG
&nbsp; &nbsp; &nbsp; &nbsp; The DERO DAG implementation builds out a main chain from the DAG network of blocks, which refers to main blocks (100% reward) and side blocks (8% reward).
![DERO DAG stats.dero.io](https://raw.githubusercontent.com/deroproject/documentation/master/images/Dag1.jpeg)
*DERO DAG Screenshot* [Live](https://stats.dero.io/)
![DERO DAG network.dero.io](https://raw.githubusercontent.com/deroproject/documentation/master/images/dagx4.png)
*DERO DAG Screenshot* [Live](https://network.dero.io/)
#### Client Protocol
&nbsp; &nbsp; &nbsp; &nbsp; Traditional blockchains process blocks as a single unit of computation (if a double-spend tx occurs within the block, the entire block is rejected). However, the DERO network accepts such blocks, since the DERO blockchain considers each transaction a single unit of computation. DERO blocks may contain duplicate or double-spend transactions, which are filtered by the client protocol and ignored by the network. The DERO DAG processes transactions atomically, one transaction at a time.
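A minimal sketch of this per-transaction filtering, with hypothetical `Tx` and `Nullifier` names chosen for illustration (not DERO's actual data structures): instead of rejecting a whole block, each transaction is checked individually, and duplicates or double-spends are simply skipped.

```go
package main

import "fmt"

// Tx is a hypothetical transaction: two txs sharing a Nullifier (spend tag)
// constitute a double-spend.
type Tx struct {
	ID        string
	Nullifier string
}

// FilterTxs processes blocks one transaction at a time, keeping each tx
// unless its spend tag has already been seen; the rest of the block is
// unaffected by an ignored tx.
func FilterTxs(blocks [][]Tx) []Tx {
	seen := map[string]bool{}
	var valid []Tx
	for _, blk := range blocks {
		for _, tx := range blk {
			if seen[tx.Nullifier] {
				continue // duplicate/double-spend: ignore just this tx
			}
			seen[tx.Nullifier] = true
			valid = append(valid, tx)
		}
	}
	return valid
}

func main() {
	blocks := [][]Tx{
		{{ID: "a", Nullifier: "n1"}, {ID: "b", Nullifier: "n2"}},
		{{ID: "c", Nullifier: "n1"}, {ID: "d", Nullifier: "n3"}}, // "c" re-spends n1
	}
	for _, tx := range FilterTxs(blocks) {
		fmt.Println(tx.ID) // prints a, b, d — "c" is dropped, "d" survives
	}
}
```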
#### DERO Rocket Bulletproofs
- Dero's ultrafast bulletproof optimization techniques, in the form used, did not exist anywhere in publicly available cryptography literature at the time of implementation. Please contact us with any source/reference to include here if one exists. The ultrafast optimizations verify Dero bulletproofs 10 times faster than other/original bulletproof implementations. See: https://github.com/deroproject/derosuite/blob/master/crypto/ringct/bulletproof_ultrafast.go
- DERO rocket bulletproof implementations are hardened, which protects DERO from a certain class of attacks.
- DERO rocket bulletproof transaction structures are not compatible with other implementations.
&nbsp; &nbsp; &nbsp; &nbsp; There are also several optimizations planned for Dero rocket bulletproofs in the near future which will lead to a several-times performance boost. Presently they are under study for bugs, verification, compatibility, etc.
#### 51% Attack Resistant
&nbsp; &nbsp; &nbsp; &nbsp; The DERO DAG implementation builds out a main chain from the DAG network of blocks, which refers to main blocks (100% reward) and side blocks (8% reward). Side blocks contribute to chain PoW security, so traditional 51% attacks are not possible on the DERO network. If the DERO network finds another block at the same height, instead of choosing one, DERO includes both blocks, rendering a 51% attack futile.
#### DERO Mining
[Mining](https://github.com/deroproject/wiki/wiki/Mining)
#### DERO Installation
&nbsp; &nbsp; &nbsp; &nbsp; DERO is written in Golang and is very easy to install, both from source and binary.
#### Installation From Source
1. Install Golang; Golang version 1.12.12 is required.
1. In the go workspace: ```go get -u github.com/deroproject/derohe/...```
1. Check the go workspace bin folder for binaries.
1. For example, on a Linux machine the following binaries will be created:
1. derod-linux-amd64 -> DERO daemon.
1. dero-wallet-cli-linux-amd64 -> DERO cmdline wallet.
1. explorer-linux-amd64 -> DERO Explorer. Yes, DERO also has a prebuilt personal explorer for advanced privacy users.
#### Installation From Binary
&nbsp; &nbsp; &nbsp; &nbsp; Download [DERO binaries](https://github.com/deroproject/derosuite/releases) for ARM, INTEL, and MAC platforms and Windows, macOS, FreeBSD, OpenBSD, Linux, etc. operating systems.
Most users require the following binaries:
[Windows 7-10, Server 64bit/amd64 ](https://github.com/deroproject/derosuite/releases/download/v2.1.6-1/dero_windows_amd64_2.1.6-1.alpha.atlantis.07032019.zip)
[Windows 32bit/x86/386](https://github.com/deroproject/derosuite/releases/download/v2.1.6-1/dero_windows_x86_2.1.6-1.alpha.atlantis.07032019.zip)
[Linux 64bit/amd64](https://github.com/deroproject/derosuite/releases/download/v2.1.6-1/dero_linux_amd64_2.1.6-1.alpha.atlantis.07032019.tar.gz)
[Linux 32bit/x86](https://github.com/deroproject/derosuite/releases/download/v2.1.6-1/dero_linux_386_2.1.6-1.alpha.atlantis.07032019.tar.gz)
[FreeBSD 64bit/amd64](https://github.com/deroproject/derosuite/releases/download/v2.1.6-1/dero_freebsd_amd64_2.1.6-1.alpha.atlantis.07032019.tar.gz)
[OpenBSD 64bit/amd64](https://github.com/deroproject/derosuite/releases/download/v2.1.6-1/dero_openbsd_amd64_2.1.6-1.alpha.atlantis.07032019.tar.gz)
[Mac OS](https://github.com/deroproject/derosuite/releases/download/v2.1.6-1/dero_apple_mac_darwin_amd64_2.1.6-1.alpha.atlantis.07032019.tar.gz)
Contact us for support of other hardware and operating systems.
#### Next Step After DERO Installation
&nbsp; &nbsp; &nbsp; &nbsp; Running the DERO daemon supports the DERO network and shows your support for privacy.
#### Running DERO Daemon
&nbsp; &nbsp; &nbsp; &nbsp; Run derod.exe or derod-linux-amd64 depending on your operating system. It will start syncing.
1. DERO daemon core cryptography is highly optimized and fast.
1. Use dedicated machine and SSD for best results.
1. VPS with 2-4 Cores, 4GB RAM,15GB disk is recommended.
![DERO Daemon](https://raw.githubusercontent.com/deroproject/documentation/master/images/derod1.png)
*DERO Daemon Screenshot*
#### Running DERO Wallet
The DERO cmdline wallet is the most secure and reliable and supports all functions.
#### DERO Cmdline Wallet
&nbsp; &nbsp; &nbsp; &nbsp; DERO cmdline wallet is menu based and very easy to operate.
Use various options to create, recover, transfer balance etc.
**NOTE:** The DERO cmdline wallet by default connects to the DERO daemon running on the local machine on port 20206.
If the DERO daemon is not running, start the DERO wallet with the --remote option, like the following:
**./dero-wallet-cli-linux-amd64 --remote**
![DERO Wallet](https://raw.githubusercontent.com/deroproject/documentation/master/images/wallet-recover2.png)
*DERO Cmdline Wallet Screenshot*
#### DERO Explorer
[DERO Explorer](https://explorer.dero.io/) is used to check and confirm transactions on the DERO network.
[DERO testnet Explorer](https://testnetexplorer.dero.io/) is used to check and confirm transactions on the DERO testnet.
DERO users can run their own explorer on a local machine and [browse](http://127.0.0.1:8080) it on local port 8080.
![DERO Explorer](https://github.com/deroproject/documentation/raw/master/images/dero_explorer.png)
*DERO EXPLORER Screenshot*
#### Proving DERO Transactions
The DERO blockchain is completely private, so no one can view, confirm, or verify any other's wallet balance or transactions.
To prove a transaction you require the *TXID* and *deroproof*.
The deroproof can be obtained using the get_tx_key command in dero-wallet-cli.
Enter the *TXID* and *deroproof* in the [DERO EXPLORER](https://testnetexplorer.dero.io).
![DERO Explorer Proving Transaction](https://github.com/deroproject/documentation/raw/master/images/explorer-prove-tx.png)
*DERO Explorer Proving Transaction*


@ -1,30 +0,0 @@
1] ### DEROHE Installation, https://github.com/deroproject/derohe
DERO is written in Golang and is very easy to install, both from source and from binaries.
Installation From Source:
Install Golang; Golang 1.15 or higher is required.
In the go workspace: go get -u github.com/deroproject/derohe/...
Check the go workspace bin folder for the binaries.
For example, on a Linux machine the following binaries will be created:
derod-linux-amd64 -> DERO daemon.
dero-wallet-cli-linux-amd64 -> DERO cmdline wallet.
explorer-linux-amd64 -> DERO Explorer. Yes, DERO also has a prebuilt personal explorer for advanced privacy users.
Installation From Binaries:
Download DERO binaries for ARM, Intel, and Mac platforms and Windows, macOS, FreeBSD, OpenBSD, Linux, etc. operating systems.
https://github.com/deroproject/derohe/releases
2] ### Running DERO Daemon
./derod-linux-amd64
3] ### Running DERO Wallet (Use local or remote daemon)
./dero-wallet-cli-linux-amd64 --remote
https://wallet.dero.io [Web wallet]
4] ### DERO Mining Quickstart
Run the miner with your wallet address and the number of threads based on your CPU.
./dero-miner --mining-threads 2 --daemon-rpc-address=http://testnetexplorer.dero.io:40402 --wallet-address deto1qxsplx7vzgydacczw6vnrtfh3fxqcjevyxcvlvl82fs8uykjkmaxgfgulfha5
NOTE: Miners should keep their system clock synced with NTP.
E.g., on a Linux machine: ntpdate pool.ntp.org
For details visit http://wiki.dero.io


@ -1,26 +0,0 @@
Copyright (c) 2020 DERO Foundation. All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors
may be used to endorse or promote products derived from this software without
specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@ -1,230 +0,0 @@
package astrobwt
//import "fmt"
import "strings"
import "errors"
import "sort"
import "golang.org/x/crypto/sha3"
import "encoding/binary"
import "golang.org/x/crypto/salsa20/salsa"
// see here to improve the algorithms more https://github.com/y-256/libdivsufsort/blob/wiki/SACA_Benchmarks.md
// ErrInvalidSuffixArray means length of sa is not equal to 1+len(s)
var ErrInvalidSuffixArray = errors.New("bwt: invalid suffix array")
// Transform returns BurrowsWheeler transform of a byte slice.
// See https://en.wikipedia.org/wiki/Burrows%E2%80%93Wheeler_transform
func Transform(s []byte, es byte) ([]byte, error) {
sa := SuffixArray(s)
bwt, err := FromSuffixArray(s, sa, es)
return bwt, err
}
// InverseTransform reverses the bwt to original byte slice. Not optimized yet.
func InverseTransform(t []byte, es byte) []byte {
le := len(t)
table := make([]string, le)
for range table {
for i := 0; i < le; i++ {
table[i] = string(t[i:i+1]) + table[i]
}
sort.Strings(table)
}
for _, row := range table {
if strings.HasSuffix(row, "$") {
return []byte(row[:le-1])
}
}
return []byte("")
/*
n := len(t)
lines := make([][]byte, n)
for i := 0; i < n; i++ {
lines[i] = make([]byte, n)
}
for i := 0; i < n; i++ {
for j := 0; j < n; j++ {
lines[j][n-1-i] = t[j]
}
sort.Sort(byteutil.SliceOfByteSlice(lines))
}
s := make([]byte, n-1)
for _, line := range lines {
if line[n-1] == es {
s = line[0 : n-1]
break
}
}
return s
*/
}
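For intuition, the transform implemented above can be reproduced (much more slowly) straight from the textbook definition: append a sentinel, sort all rotations, take the last column. A minimal standalone sketch, using '$' as the sentinel as the package tests do:

```go
package main

import (
	"fmt"
	"sort"
)

// bwtNaive computes the Burrows-Wheeler transform of s by the textbook
// method: append a '$' sentinel, sort all rotations lexicographically,
// and read off the last column. O(n^2 log n), illustration only; the
// package above derives the same column from a suffix array instead.
func bwtNaive(s string) string {
	t := s + "$"
	rot := make([]string, len(t))
	for i := range t {
		rot[i] = t[i:] + t[:i]
	}
	sort.Strings(rot)
	out := make([]byte, len(t))
	for i, r := range rot {
		out[i] = r[len(r)-1] // last column
	}
	return string(out)
}

func main() {
	fmt.Println(bwtNaive("BANANA")) // ANNB$AA, matching the package tests
}
```

The suffix-array route used by Transform produces the same last column without ever materializing the rotation table.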
// SuffixArray returns the suffix array of s.
func SuffixArray(s []byte) []int {
_sa := New(s)
var sa []int = make([]int, len(s)+1)
sa[0] = len(s)
for i := 0; i < len(s); i++ {
sa[i+1] = int(_sa.sa.int32[i])
}
return sa
}
// FromSuffixArray compute BWT from sa
func FromSuffixArray(s []byte, sa []int, es byte) ([]byte, error) {
if len(s)+1 != len(sa) || sa[0] != len(s) {
return nil, ErrInvalidSuffixArray
}
bwt := make([]byte, len(sa))
bwt[0] = s[len(s)-1]
for i := 1; i < len(sa); i++ {
if sa[i] == 0 {
bwt[i] = es
} else {
bwt[i] = s[sa[i]-1]
}
}
return bwt, nil
}
func BWT(input []byte) ([]byte, int) {
if len(input) >= maxData32 {
panic("input too big to handle")
}
sa := make([]int32, len(input)+1)
text_32(input, sa[1:])
bwt := make([]byte, len(input)+1)
bwt[0] = input[len(input)-1]
emarker := 0
for i := 1; i < len(sa); i++ {
if sa[i] == 0 {
//bwt[i] = '$' //es
emarker = i
} else {
bwt[i] = input[sa[i]-1]
}
}
//bwt[emarker] = '$'
return bwt, emarker
}
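InverseTransform above rebuilds the whole rotation table, which is quadratic. The inverse can instead be done in linear time with the standard LF mapping; a self-contained sketch (not part of the package), assuming a unique '$' sentinel as in the tests:

```go
package main

import (
	"fmt"
	"strings"
)

// inverseBWT reverses a Burrows-Wheeler transform in O(n) using the LF
// mapping: row i of the sorted-rotation matrix maps to row
// c[bwt[i]] + rank[i], where c counts smaller characters and rank counts
// earlier occurrences. '$' is assumed to appear exactly once.
func inverseBWT(bwt string) string {
	n := len(bwt)
	// c[b] = number of characters in bwt strictly smaller than b.
	var c [256]int
	for i := 0; i < n; i++ {
		c[bwt[i]]++
	}
	total := 0
	for b := 0; b < 256; b++ {
		c[b], total = total, total+c[b]
	}
	// rank[i] = occurrences of bwt[i] within bwt[:i].
	rank := make([]int, n)
	var seen [256]int
	for i := 0; i < n; i++ {
		rank[i] = seen[bwt[i]]
		seen[bwt[i]]++
	}
	// Walk backward from the row holding the sentinel (the original text's rotation).
	out := make([]byte, n)
	row := strings.IndexByte(bwt, '$')
	for i := n - 1; i >= 0; i-- {
		out[i] = bwt[row]
		row = c[bwt[row]] + rank[row]
	}
	return string(out[:n-1]) // drop the trailing sentinel
}

func main() {
	fmt.Println(inverseBWT("ANNB$AA")) // BANANA
}
```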
const stage1_length int = 147253 // it is a prime
const MAX_LENGTH int = 1024*1024 + stage1_length + 1024
func POW(inputdata []byte) (outputhash [32]byte) {
var counter [16]byte
key := sha3.Sum256(inputdata)
var stage1 [stage1_length]byte // stages are taken from it
var stage2 [1024*1024 + stage1_length + 1024]byte
salsa.XORKeyStream(stage1[:stage1_length], stage1[:stage1_length], &counter, &key)
stage1_result, eos := BWT(stage1[:stage1_length])
key = sha3.Sum256(stage1_result)
stage2_length := stage1_length + int(binary.LittleEndian.Uint32(key[:])&0xfffff)
for i := range counter { // will be optimized by compiler
counter[i] = 0
}
salsa.XORKeyStream(stage2[:stage2_length], stage2[:stage2_length], &counter, &key)
stage2_result, eos := BWT(stage2[:stage2_length])
// fmt.Printf("result %x stage2_length %d \n", key, stage2_length)
copy(stage2_result[:], []byte("Broken for testnet"))
key = sha3.Sum256(stage2_result)
//fmt.Printf("result %x\n", key)
copy(outputhash[:], key[:])
_ = eos
return
}
// BWT_0alloc is an allocation-free variant of BWT.
// sa must have length len(input) + 1;
// bwt (the result buffer) must also have length len(input) + 1.
func BWT_0alloc(input []byte, sa []int32, bwt []byte) int {
//ix := &Index{data: input}
if len(input) >= maxData32 {
panic("input too big to handle")
}
if len(sa) != len(input)+1 {
panic("invalid sa array")
}
if len(bwt) != len(input)+1 {
panic("invalid bwt array")
}
//sa := make([]int32, len(input)+1)
text_32(input, sa[1:])
//bwt := make([]byte, len(input)+1)
bwt[0] = input[len(input)-1]
emarker := 0
for i := 1; i < len(sa); i++ {
if sa[i] == 0 {
//bwt[i] = '$' //es
emarker = i
} else {
bwt[i] = input[sa[i]-1]
}
}
//bwt[emarker] = '$'
return emarker
}
func POW_0alloc(inputdata []byte) (outputhash [32]byte) {
var counter [16]byte
var sa [MAX_LENGTH]int32
// var bwt [max_length]int32
var stage1 [stage1_length]byte // stages are taken from it
var stage1_result [stage1_length + 1]byte
var stage2 [1024*1024 + stage1_length + 1]byte
var stage2_result [1024*1024 + stage1_length + 1]byte
key := sha3.Sum256(inputdata)
salsa.XORKeyStream(stage1[:stage1_length], stage1[:stage1_length], &counter, &key)
eos := BWT_0alloc(stage1[:stage1_length], sa[:stage1_length+1], stage1_result[:stage1_length+1])
key = sha3.Sum256(stage1_result[:])
stage2_length := stage1_length + int(binary.LittleEndian.Uint32(key[:])&0xfffff)
for i := range counter { // will be optimized by compiler
counter[i] = 0
}
salsa.XORKeyStream(stage2[:stage2_length], stage2[:stage2_length], &counter, &key)
for i := range sa {
sa[i] = 0
}
eos = BWT_0alloc(stage2[:stage2_length], sa[:stage2_length+1], stage2_result[:stage2_length+1])
_ = eos
copy(stage2_result[:], []byte("Broken for testnet"))
key = sha3.Sum256(stage2_result[:stage2_length+1])
copy(outputhash[:], key[:])
return
}


@ -1,225 +0,0 @@
package astrobwt
//import "os"
//import "fmt"
import "sync"
import "encoding/binary"
import "golang.org/x/crypto/sha3"
import "golang.org/x/crypto/salsa20/salsa"
// see here to improve the algorithms more https://github.com/y-256/libdivsufsort/blob/wiki/SACA_Benchmarks.md
// Original implementation was in xmrig miner, however it had a flaw which has been fixed
// this optimized algorithm is used only in the miner and not in the blockchain
//const stage1_length int = 147253 // it is a prime
//const max_length int = 1024*1024 + stage1_length + 1024
type Data struct {
stage1 [stage1_length + 64]byte // stages are taken from it
stage1_result [stage1_length + 1]byte
stage2 [1024*1024 + stage1_length + 1 + 64]byte
stage2_result [1024*1024 + stage1_length + 1]byte
indices [ALLOCATION_SIZE]uint64
tmp_indices [ALLOCATION_SIZE]uint64
}
var pool = sync.Pool{New: func() interface{} { return &Data{} }}
func POW_optimized_v1(inputdata []byte, max_limit int) (outputhash [32]byte, success bool) {
data := pool.Get().(*Data)
outputhash, success = POW_optimized_v2(inputdata, max_limit, data)
pool.Put(data)
return
}
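POW_optimized_v1 avoids reallocating the multi-megabyte Data scratch area on every hash by borrowing it from a sync.Pool. The same pattern in isolation (scratch here is a stand-in for Data, not the real struct):

```go
package main

import (
	"fmt"
	"sync"
)

// scratch mimics the Data struct above: a large, reusable work area that
// would be expensive to allocate per hash.
type scratch struct{ buf [1 << 20]byte }

var scratchPool = sync.Pool{New: func() interface{} { return new(scratch) }}

// withScratch borrows a scratch buffer for the duration of fn, exactly as
// POW_optimized_v1 wraps POW_optimized_v2, and returns it afterwards so
// concurrent hashers amortize allocations.
func withScratch(fn func(*scratch) int) int {
	s := scratchPool.Get().(*scratch)
	defer scratchPool.Put(s)
	return fn(s)
}

func main() {
	n := withScratch(func(s *scratch) int { return len(s.buf) })
	fmt.Println(n) // 1048576
}
```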
func POW_optimized_v2(inputdata []byte, max_limit int, data *Data) (outputhash [32]byte, success bool) {
var counter [16]byte
for i := range data.stage1 {
data.stage1[i] = 0
}
/* for i := range data.stage1_result{
data.stage1_result[i] =0
}*/
key := sha3.Sum256(inputdata)
salsa.XORKeyStream(data.stage1[1:stage1_length+1], data.stage1[1:stage1_length+1], &counter, &key)
sort_indices(stage1_length+1, data.stage1[:], data.stage1_result[:], data)
key = sha3.Sum256(data.stage1_result[:])
stage2_length := stage1_length + int(binary.LittleEndian.Uint32(key[:])&0xfffff)
if stage2_length > max_limit {
for i := range outputhash { // will be optimized by compiler
outputhash[i] = 0xff
}
success = false
return
}
for i := range counter { // will be optimized by compiler
counter[i] = 0
}
salsa.XORKeyStream(data.stage2[1:stage2_length+1], data.stage2[1:stage2_length+1], &counter, &key)
sort_indices(stage2_length+1, data.stage2[:], data.stage2_result[:], data)
copy(data.stage2_result[:], []byte("Broken for testnet"))
key = sha3.Sum256(data.stage2_result[:stage2_length+1])
for i := range data.stage2 {
data.stage2[i] = 0
}
copy(outputhash[:], key[:])
success = true
return
}
const COUNTING_SORT_BITS uint64 = 10
const COUNTING_SORT_SIZE uint64 = 1 << COUNTING_SORT_BITS
const ALLOCATION_SIZE = MAX_LENGTH
func BigEndian_Uint64(b []byte) uint64 {
_ = b[7] // bounds check hint to compiler; see golang.org/issue/14808
return uint64(b[7]) | uint64(b[6])<<8 | uint64(b[5])<<16 | uint64(b[4])<<24 |
uint64(b[3])<<32 | uint64(b[2])<<40 | uint64(b[1])<<48 | uint64(b[0])<<56
}
func smaller(input []uint8, a, b uint64) bool {
value_a := a >> 21
value_b := b >> 21
if value_a < value_b {
return true
}
if value_a > value_b {
return false
}
data_a := BigEndian_Uint64(input[(a%(1<<21))+5:])
data_b := BigEndian_Uint64(input[(b%(1<<21))+5:])
return data_a < data_b
}
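smaller works because each entry in indices packs the top 43 bits of a key together with a 21-bit text position (MAX_LENGTH is below 2^21), so most comparisons resolve on the packed value without touching the text. A sketch of that packing, with hypothetical values:

```go
package main

import "fmt"

const posBits = 21
const posMask = (uint64(1) << posBits) - 1

// pack combines a 64-bit key with a text position the way sort_indices
// does: the low 21 bits of the key are discarded and replaced by the
// position, matching the 0xFFFFFFFFFFE00000 mask in the code above.
func pack(key uint64, pos int) uint64 {
	return (key &^ posMask) | uint64(pos)
}

func main() {
	p := pack(0xDEADBEEFCAFEBABE, 12345) // hypothetical key and position
	// The key prefix survives (p >> 21) and the position is recoverable.
	fmt.Println(p>>posBits == 0xDEADBEEFCAFEBABE>>posBits, p&posMask) // true 12345
}
```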
// sort_indices computes the BWT output via a two-pass counting sort on
// 64-bit big-endian key prefixes, followed by an insertion-sort pass to
// resolve ties.
func sort_indices(N int, input_extra []byte, output []byte, d *Data) {
var counters [2][COUNTING_SORT_SIZE]uint32
indices := d.indices[:]
tmp_indices := d.tmp_indices[:]
input := input_extra[1:]
loop3 := N / 3 * 3
for i := 0; i < loop3; i += 3 {
k0 := BigEndian_Uint64(input[i:])
counters[0][(k0>>(64-COUNTING_SORT_BITS*2))&(COUNTING_SORT_SIZE-1)]++
counters[1][k0>>(64-COUNTING_SORT_BITS)]++
k1 := k0 << 8
counters[0][(k1>>(64-COUNTING_SORT_BITS*2))&(COUNTING_SORT_SIZE-1)]++
counters[1][k1>>(64-COUNTING_SORT_BITS)]++
k2 := k0 << 16
counters[0][(k2>>(64-COUNTING_SORT_BITS*2))&(COUNTING_SORT_SIZE-1)]++
counters[1][k2>>(64-COUNTING_SORT_BITS)]++
}
if N%3 != 0 {
for i := loop3; i < N; i++ {
k := BigEndian_Uint64(input[i:])
counters[0][(k>>(64-COUNTING_SORT_BITS*2))&(COUNTING_SORT_SIZE-1)]++
counters[1][k>>(64-COUNTING_SORT_BITS)]++
}
}
/*
for i := 0; i < N ; i++{
k := BigEndian_Uint64(input[i:])
counters[0][(k >> (64 - COUNTING_SORT_BITS * 2)) & (COUNTING_SORT_SIZE - 1)]++
counters[1][k >> (64 - COUNTING_SORT_BITS)]++
}
*/
prev := [2]uint32{counters[0][0], counters[1][0]}
counters[0][0] = prev[0] - 1
counters[1][0] = prev[1] - 1
var cur [2]uint32
for i := uint64(1); i < COUNTING_SORT_SIZE; i++ {
cur[0], cur[1] = counters[0][i]+prev[0], counters[1][i]+prev[1]
counters[0][i] = cur[0] - 1
counters[1][i] = cur[1] - 1
prev[0] = cur[0]
prev[1] = cur[1]
}
for i := N - 1; i >= 0; i-- {
k := BigEndian_Uint64(input[i:])
// FFFFFFFFFFE00000 = (0xFFFFFFFFFFFFFFF<< 21) // to clear bottom 21 bits
tmp := counters[0][(k>>(64-COUNTING_SORT_BITS*2))&(COUNTING_SORT_SIZE-1)]
counters[0][(k>>(64-COUNTING_SORT_BITS*2))&(COUNTING_SORT_SIZE-1)]--
tmp_indices[tmp] = (k & 0xFFFFFFFFFFE00000) | uint64(i)
}
for i := N - 1; i >= 0; i-- {
data := tmp_indices[i]
tmp := counters[1][data>>(64-COUNTING_SORT_BITS)]
counters[1][data>>(64-COUNTING_SORT_BITS)]--
indices[tmp] = data
}
prev_t := indices[0]
for i := 1; i < N; i++ {
t := indices[i]
if smaller(input, t, prev_t) {
t2 := prev_t
j := i - 1
for {
indices[j+1] = prev_t
j--
if j < 0 {
break
}
prev_t = indices[j]
if !smaller(input, t, prev_t) {
break
}
}
indices[j+1] = t
t = t2
}
prev_t = t
}
// optimized unrolled code below this comment
/*for i := 0; i < N;i++{
output[i] = input_extra[indices[i] & ((1 << 21) - 1) ]
}*/
loop4 := ((N + 1) / 4) * 4
for i := 0; i < loop4; i += 4 {
output[i+0] = input_extra[indices[i+0]&((1<<21)-1)]
output[i+1] = input_extra[indices[i+1]&((1<<21)-1)]
output[i+2] = input_extra[indices[i+2]&((1<<21)-1)]
output[i+3] = input_extra[indices[i+3]&((1<<21)-1)]
}
for i := loop4; i < N; i++ {
output[i] = input_extra[indices[i]&((1<<21)-1)]
}
// there is an issue above, if the last byte of input is 0x00, initialbytes are wrong, this fix may not be complete
if N > 3 && input[N-2] == 0 {
backup_byte := output[0]
output[0] = 0
for i := 1; i < N; i++ {
if output[i] != 0 {
output[i-1] = backup_byte
break
}
}
}
}
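The routine above is, at its core, a stable two-pass counting sort on the top 20 bits of each key (10 bits per pass, low digit first), with insertion sort fixing ties afterwards. A minimal standalone sketch of just the two-pass idea (not the package's exact routine):

```go
package main

import (
	"fmt"
	"sort"
)

const digitBits = 10
const digitSize = 1 << digitBits

// radixTop20 sorts keys by their top 20 bits using two stable
// counting-sort passes, the same split sort_indices uses with
// COUNTING_SORT_BITS = 10.
func radixTop20(keys []uint64) []uint64 {
	n := len(keys)
	tmp := make([]uint64, n)
	out := make([]uint64, n)
	var cnt [2][digitSize]uint32
	for _, k := range keys { // histogram both digits in one scan
		cnt[0][(k>>(64-2*digitBits))&(digitSize-1)]++
		cnt[1][k>>(64-digitBits)]++
	}
	for d := 0; d < 2; d++ { // exclusive prefix sums -> bucket starts
		sum := uint32(0)
		for i := range cnt[d] {
			cnt[d][i], sum = sum, sum+cnt[d][i]
		}
	}
	for _, k := range keys { // pass 1: low digit (bits 44..53)
		d := (k >> (64 - 2*digitBits)) & (digitSize - 1)
		tmp[cnt[0][d]] = k
		cnt[0][d]++
	}
	for _, k := range tmp { // pass 2: high digit (bits 54..63)
		d := k >> (64 - digitBits)
		out[cnt[1][d]] = k
		cnt[1][d]++
	}
	return out
}

func main() {
	keys := []uint64{7 << 44, 3 << 44, 999 << 44, 0, 1 << 44}
	got := radixTop20(keys)
	want := append([]uint64(nil), keys...)
	sort.Slice(want, func(i, j int) bool { return want[i] < want[j] })
	fmt.Println(got[0] == want[0] && got[4] == want[4]) // true
}
```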


@ -1,105 +0,0 @@
package astrobwt
import "crypto/rand"
import "strings"
import "testing"
import "encoding/hex"
// see https://www.geeksforgeeks.org/burrows-wheeler-data-transform-algorithm/
func TestBWTTransform(t *testing.T) {
tests := []struct {
input string
bwt string
}{
{"BANANA", "ANNB$AA"}, // from https://www.geeksforgeeks.org/burrows-wheeler-data-transform-algorithm/
{"abracadabra", "ard$rcaaaabb"},
{"appellee", "e$elplepa"},
{"GATGCGAGAGATG", "GGGGGGTCAA$TAA"},
}
for _, test := range tests {
input := "\x00" + test.input + "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
var output = make([]byte, 64, 64)
sort_indices(len(test.input)+1, []byte(input), output, &Data{})
output = output[:len(test.input)+1]
output_s := strings.Replace(string(output), "\x00", "$", -1)
if output_s != test.bwt {
t.Errorf("Test failed: Transform %s %s %x", output_s, test.bwt, output)
}
}
}
func TestPOW_optimized_v1(t *testing.T) {
p := POW([]byte{0, 0, 0, 0})
p0 := POW_0alloc([]byte{0, 0, 0, 0})
p_optimized, _ := POW_optimized_v1([]byte{0, 0, 0, 0}, MAX_LENGTH)
if string(p[:]) != string(p0[:]) {
t.Error("Test failed: POW and POW_0alloc returns different ")
}
if string(p[:]) != string(p_optimized[:]) {
t.Error("Test failed: POW and POW_rewrite returns different ")
}
for i := 20; i < 200; i++ {
buf := make([]byte, 20, 20)
rand.Read(buf)
p := POW(buf)
p0 := POW_0alloc(buf)
p_optimized, _ := POW_optimized_v1(buf, MAX_LENGTH)
if string(p[:]) != string(p0[:]) {
t.Errorf("Test failed: POW and POW_0alloc returns different for i=%d buf %x", i, buf)
}
if string(p[:]) != string(p_optimized[:]) {
t.Errorf("Test failed: POW and POW_rewrite returns different for i=%d buf %x", i, buf)
}
}
}
func TestPOW_optimized_v1Tests(t *testing.T) {
tests := []struct {
input string
}{
{"57b84420d2028aef8a05b42ea893a8c4f2219f73"},
{"67d49bd1c53645ec96c50230083e55b120b5005ffbaf4b2f"},
{"67d49bd1c53645ec96c50230183e55b120b5005ffbaf4b2f"},
{"3fe7baa520edb5d0b43b7a6999c146262b2c7f26e030fd7c256611262db727833a40a79f9988"},
{"ed32ea6f1c0ee2514eb73f6a0f9f00d1e2c2392a8896963eefddfd6600c105d52db6e93ba98f8454433894293eaa9f31973658c49f67f3361af70fac27bac6f8f5c69f52be9d7c86a5fea3e5b5d99d8f73888b4d4a7dbb28169035583632ef26604e472eb26a5da9da4e95f80460dfc7b788e8ee75194a7d2f1190a788f92e98cb83fd4c63d9976ec06c2df005d321baed360599af58ff45aa63b00261ea60b5adf623f256bfbc75da961c5960db68e8"},
{"7cf76f0d4072574bae246c4f7184000af5ce818943605151a73a49d7b704c127891e6e7008c331fa41776540b0db3b2ea2c187e119191adde6b0f5438fb48cc242c02420f44d070ef4c87a00952560f2ffcc5ac5932c5a0f40df9029ddc10d29b23ff4150fbe0dda5b14a73eadd90a3b6eaf049075b89c1c16da33f049c3235f158c"},
{"fbaf4f7ebec36c97f8994e67e74b281960846e6b5ce30e4fd95ce68d8875e19ab3ebf716e5887adb6eefbc3c5ca6096f936643f4bf22a9f61a1e35b019cfaabfe331ad2897a3b70bd6846c5003a999719d26246796a1d60b18bf89bdf4f5fea3b976ad7739e00089f7f11a5833351515e330d8580f918ea694a438f384946cdae0d9d3ccda33bc6de1a64d6c25c0b3f7d905172956"},
{"ff7f99a16b3e2c0f9daa2a44c9a364b212d836ba57f8d9b0e050490d1e74"},
{"f299d507d916e67f93345a42042e859170eb755262355826fcb7ed0d2e9c999bb21662275d1b99a53b397bf77f4e2af38a41358c41e9ecd750f3cc2859a3fef8a9ef9c189b7489fb0048903cfe78f5171f2476f86aae2346e5390740b09bb185268af16146ccab9876d8931f670f9ba93805f0277a3cba0fc9671cc78ac53ce60f538c7aa616660e3ca1e1eabf8938c095baeacb4ce11889c52ce63b9511d2f176d563a75a34418fddcb4e712a5936e4f72a2269b423954dadcf"},
}
for i, test := range tests {
buf, err := hex.DecodeString(test.input)
if err != nil {
t.Error(err)
}
p := POW(buf)
p0 := POW_0alloc(buf)
p_optimized, _ := POW_optimized_v1(buf, MAX_LENGTH)
if string(p[:]) != string(p0[:]) {
t.Errorf("Test failed: POW and POW_0alloc returns different for i=%d buf %x", i, buf)
}
if string(p[:]) != string(p_optimized[:]) {
t.Errorf("Test failed: POW and POW_optimized returns different for i=%d buf %x", i, buf)
}
}
}
func BenchmarkPOW_optimized_v1(t *testing.B) {
for i := 0; i < t.N; i++ {
rand.Read(cases[0][:])
_, _ = POW_optimized_v1(cases[0][:], MAX_LENGTH)
}
}


@ -1,121 +0,0 @@
package astrobwt
import "math/rand"
import "testing"
// see https://www.geeksforgeeks.org/burrows-wheeler-data-transform-algorithm/
func TestBWTAndInverseTransform(t *testing.T) {
tests := []struct {
input string
bwt string
}{
{"BANANA", "ANNB$AA"}, // from https://www.geeksforgeeks.org/burrows-wheeler-data-transform-algorithm/
{"abracadabra", "ard$rcaaaabb"},
{"appellee", "e$elplepa"},
{"GATGCGAGAGATG", "GGGGGGTCAA$TAA"},
{"abcdefg", "g$abcdef"},
}
for _, test := range tests {
trans2, eos := BWT([]byte(test.input))
trans2[eos] = '$'
if string(trans2) != test.bwt {
t.Errorf("Test failed: Transform %s", test.input)
}
if string(InverseTransform([]byte(trans2), '$')) != test.input {
t.Errorf("Test failed: InverseTransform expected '%s' actual '%s`", test.input, string(InverseTransform([]byte(trans2), '$')))
}
p := POW([]byte(test.input))
p0 := POW_0alloc([]byte(test.input))
if string(p[:]) != string(p0[:]) {
t.Error("Test failed: difference between pow and pow_0alloc")
}
}
}
func TestFromSuffixArray(t *testing.T) {
s := "GATGCGAGAGATG"
trans := "GGGGGGTCAA$TAA"
sa := SuffixArray([]byte(s))
B, err := FromSuffixArray([]byte(s), sa, '$')
if err != nil {
t.Error("Test failed: FromSuffixArray error")
}
if string(B) != trans {
t.Error("Test failed: FromSuffixArray returns wrong result")
}
}
func TestPow_Powalloc(t *testing.T) {
p := POW([]byte{0, 0, 0, 0})
p0 := POW_0alloc([]byte{0, 0, 0, 0})
if string(p[:]) != string(p0[:]) {
t.Error("Test failed: POW and POW_0alloc returns different ")
}
}
var cases [][]byte
func init() {
rand.Seed(1)
alphabet := "abcdefghjijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890"
n := len(alphabet)
_ = n
scales := []int{100000}
cases = make([][]byte, len(scales))
for i, scale := range scales {
l := scale
buf := make([]byte, int(l))
for j := 0; j < int(l); j++ {
buf[j] = byte(rand.Uint32() & 0xff) //alphabet[rand.Intn(n)]
}
cases[i] = buf
}
POW([]byte{0x99})
}
var result []byte
func BenchmarkTransform(t *testing.B) {
var r []byte
var err error
for i := 0; i < t.N; i++ {
r, err = Transform(cases[0], '$')
if err != nil {
t.Error(err)
return
}
}
result = r
}
func BenchmarkTransform_quick(t *testing.B) {
var r []byte
for i := 0; i < t.N; i++ {
//r, err = Transform(cases[0], '$')
r, _ = BWT(cases[0])
}
result = r
}
func BenchmarkPOW(t *testing.B) {
for i := 0; i < t.N; i++ {
rand.Read(cases[0][:])
_ = POW(cases[0][:])
}
}
func BenchmarkPOW_0alloc(t *testing.B) {
for i := 0; i < t.N; i++ {
rand.Read(cases[0][:])
_ = POW_0alloc(cases[0][:])
}
}


@ -1,92 +0,0 @@
// Copyright 2019 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
// Gen generates sais2.go by duplicating functions in sais.go
// using different input types.
// See the comment at the top of sais.go for details.
package main
import (
"bytes"
"io/ioutil"
"log"
"strings"
)
func main() {
log.SetPrefix("gen: ")
log.SetFlags(0)
data, err := ioutil.ReadFile("sais.go")
if err != nil {
log.Fatal(err)
}
x := bytes.Index(data, []byte("\n\n"))
if x < 0 {
log.Fatal("cannot find blank line after copyright comment")
}
var buf bytes.Buffer
buf.Write(data[:x])
buf.WriteString("\n\n// Code generated by go generate; DO NOT EDIT.\n\npackage suffixarray\n")
for {
x := bytes.Index(data, []byte("\nfunc "))
if x < 0 {
break
}
data = data[x:]
p := bytes.IndexByte(data, '(')
if p < 0 {
p = len(data)
}
name := string(data[len("\nfunc "):p])
x = bytes.Index(data, []byte("\n}\n"))
if x < 0 {
log.Fatalf("cannot find end of func %s", name)
}
fn := string(data[:x+len("\n}\n")])
data = data[x+len("\n}"):]
if strings.HasSuffix(name, "_32") {
buf.WriteString(fix32.Replace(fn))
}
if strings.HasSuffix(name, "_8_32") {
// x_8_32 -> x_8_64 done above
fn = fix8_32.Replace(stripByteOnly(fn))
buf.WriteString(fn)
buf.WriteString(fix32.Replace(fn))
}
}
if err := ioutil.WriteFile("sais2.go", buf.Bytes(), 0666); err != nil {
log.Fatal(err)
}
}
var fix32 = strings.NewReplacer(
"32", "64",
"int32", "int64",
)
var fix8_32 = strings.NewReplacer(
"_8_32", "_32",
"byte", "int32",
)
func stripByteOnly(s string) string {
lines := strings.SplitAfter(s, "\n")
w := 0
for _, line := range lines {
if !strings.Contains(line, "256") && !strings.Contains(line, "byte-only") {
lines[w] = line
w++
}
}
return strings.Join(lines[:w], "")
}


@ -1,899 +0,0 @@
// Copyright 2019 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Suffix array construction by induced sorting (SAIS).
// See Ge Nong, Sen Zhang, and Wai Hong Chen,
// "Two Efficient Algorithms for Linear Time Suffix Array Construction",
// especially section 3 (https://ieeexplore.ieee.org/document/5582081).
// See also http://zork.net/~st/jottings/sais.html.
//
// With optimizations inspired by Yuta Mori's sais-lite
// (https://sites.google.com/site/yuta256/sais).
//
// And with other new optimizations.
// Many of these functions are parameterized by the sizes of
// the types they operate on. The generator gen.go makes
// copies of these functions for use with other sizes.
// Specifically:
//
// - A function with a name ending in _8_32 takes []byte and []int32 arguments
// and is duplicated into _32_32, _8_64, and _64_64 forms.
// The _32_32 and _64_64 suffixes are shortened to plain _32 and _64.
// Any lines in the function body that contain the text "byte-only" or "256"
// are stripped when creating _32_32 and _64_64 forms.
// (Those lines are typically 8-bit-specific optimizations.)
//
// - A function with a name ending only in _32 operates on []int32
// and is duplicated into a _64 form. (Note that it may still take a []byte,
// but there is no need for a version of the function in which the []byte
// is widened to a full integer array.)
// The overall runtime of this code is linear in the input size:
// it runs a sequence of linear passes to reduce the problem to
// a subproblem at most half as big, invokes itself recursively,
// and then runs a sequence of linear passes to turn the answer
// for the subproblem into the answer for the original problem.
// This gives T(N) = O(N) + T(N/2) = O(N) + O(N/2) + O(N/4) + ... = O(N).
//
// The outline of the code, with the forward and backward scans
// through O(N)-sized arrays called out, is:
//
// sais_I_N
// placeLMS_I_B
// bucketMax_I_B
// freq_I_B
// <scan +text> (1)
// <scan +freq> (2)
// <scan -text, random bucket> (3)
// induceSubL_I_B
// bucketMin_I_B
// freq_I_B
// <scan +text, often optimized away> (4)
// <scan +freq> (5)
// <scan +sa, random text, random bucket> (6)
// induceSubS_I_B
// bucketMax_I_B
// freq_I_B
// <scan +text, often optimized away> (7)
// <scan +freq> (8)
// <scan -sa, random text, random bucket> (9)
// assignID_I_B
// <scan +sa, random text substrings> (10)
// map_B
// <scan -sa> (11)
// recurse_B
// (recursive call to sais_B_B for a subproblem of size at most 1/2 input, often much smaller)
// unmap_I_B
// <scan -text> (12)
// <scan +sa> (13)
// expand_I_B
// bucketMax_I_B
// freq_I_B
// <scan +text, often optimized away> (14)
// <scan +freq> (15)
// <scan -sa, random text, random bucket> (16)
// induceL_I_B
// bucketMin_I_B
// freq_I_B
// <scan +text, often optimized away> (17)
// <scan +freq> (18)
// <scan +sa, random text, random bucket> (19)
// induceS_I_B
// bucketMax_I_B
// freq_I_B
// <scan +text, often optimized away> (20)
// <scan +freq> (21)
// <scan -sa, random text, random bucket> (22)
//
// Here, _B indicates the suffix array size (_32 or _64) and _I the input size (_8 or _B).
//
// The outline shows there are in general 22 scans through
// O(N)-sized arrays for a given level of the recursion.
// In the top level, operating on 8-bit input text,
// the six freq scans are fixed size (256) instead of potentially
// input-sized. Also, the frequency is counted once and cached
// whenever there is room to do so (there is nearly always room in general,
// and always room at the top level), which eliminates all but
// the first freq_I_B text scans (that is, 5 of the 6).
// So the top level of the recursion only does 22 - 6 - 5 = 11
// input-sized scans and a typical level does 16 scans.
//
// The linear scans do not cost anywhere near as much as
// the random accesses to the text made during a few of
// the scans (specifically #6, #9, #16, #19, #22 marked above).
// In real texts, there is not much but some locality to
// the accesses, due to the repetitive structure of the text
// (the same reason Burrows-Wheeler compression is so effective).
// For random inputs, there is no locality, which makes those
// accesses even more expensive, especially once the text
// no longer fits in cache.
// For example, running on 50 MB of Go source code, induceSubL_8_32
// (which runs only once, at the top level of the recursion)
// takes 0.44s, while on 50 MB of random input, it takes 2.55s.
// Nearly all the relative slowdown is explained by the text access:
//
// c0, c1 := text[k-1], text[k]
//
// That line runs for 0.23s on the Go text and 2.02s on random text.
//go:generate go run gen.go
package astrobwt
// text_32 returns the suffix array for the input text.
// It requires that len(text) fit in an int32
// and that the caller zero sa.
func text_32(text []byte, sa []int32) {
if int(int32(len(text))) != len(text) || len(text) != len(sa) {
panic("suffixarray: misuse of text_32")
}
sais_8_32(text, 256, sa, make([]int32, 2*256))
}
// sais_8_32 computes the suffix array of text.
// The text must contain only values in [0, textMax).
// The suffix array is stored in sa, which the caller
// must ensure is already zeroed.
// The caller must also provide temporary space tmp
// with len(tmp) ≥ textMax. If len(tmp) ≥ 2*textMax
// then the algorithm runs a little faster.
// If sais_8_32 modifies tmp, it sets tmp[0] = -1 on return.
func sais_8_32(text []byte, textMax int, sa, tmp []int32) {
if len(sa) != len(text) || len(tmp) < int(textMax) {
panic("suffixarray: misuse of sais_8_32")
}
// Trivial base cases. Sorting 0 or 1 things is easy.
if len(text) == 0 {
return
}
if len(text) == 1 {
sa[0] = 0
return
}
// Establish slices indexed by text character
// holding character frequency and bucket-sort offsets.
// If there's only enough tmp for one slice,
// we make it the bucket offsets and recompute
// the character frequency each time we need it.
var freq, bucket []int32
if len(tmp) >= 2*textMax {
freq, bucket = tmp[:textMax], tmp[textMax:2*textMax]
freq[0] = -1 // mark as uninitialized
} else {
freq, bucket = nil, tmp[:textMax]
}
// The SAIS algorithm.
// Each of these calls makes one scan through sa.
// See the individual functions for documentation
// about each's role in the algorithm.
numLMS := placeLMS_8_32(text, sa, freq, bucket)
if numLMS <= 1 {
// 0 or 1 items are already sorted. Do nothing.
} else {
induceSubL_8_32(text, sa, freq, bucket)
induceSubS_8_32(text, sa, freq, bucket)
length_8_32(text, sa, numLMS)
maxID := assignID_8_32(text, sa, numLMS)
if maxID < numLMS {
map_32(sa, numLMS)
recurse_32(sa, tmp, numLMS, maxID)
unmap_8_32(text, sa, numLMS)
} else {
// If maxID == numLMS, then each LMS-substring
// is unique, so the relative ordering of two LMS-suffixes
// is determined by just the leading LMS-substring.
// That is, the LMS-suffix sort order matches the
// (simpler) LMS-substring sort order.
// Copy the original LMS-substring order into the
// suffix array destination.
copy(sa, sa[len(sa)-numLMS:])
}
expand_8_32(text, freq, bucket, sa, numLMS)
}
induceL_8_32(text, sa, freq, bucket)
induceS_8_32(text, sa, freq, bucket)
// Mark for caller that we overwrote tmp.
tmp[0] = -1
}
// freq_8_32 returns the character frequencies
// for text, as a slice indexed by character value.
// If freq is nil, freq_8_32 uses and returns bucket.
// If freq is non-nil, freq_8_32 assumes that freq[0] >= 0
// means the frequencies are already computed.
// If the frequency data is overwritten or uninitialized,
// the caller must set freq[0] = -1 to force recomputation
// the next time it is needed.
func freq_8_32(text []byte, freq, bucket []int32) []int32 {
if freq != nil && freq[0] >= 0 {
return freq // already computed
}
if freq == nil {
freq = bucket
}
freq = freq[:256] // eliminate bounds check for freq[c] below
for i := range freq {
freq[i] = 0
}
for _, c := range text {
freq[c]++
}
return freq
}
// bucketMin_8_32 stores into bucket[c] the minimum index
// in the bucket for character c in a bucket-sort of text.
func bucketMin_8_32(text []byte, freq, bucket []int32) {
freq = freq_8_32(text, freq, bucket)
freq = freq[:256] // establish len(freq) = 256, so 0 ≤ i < 256 below
bucket = bucket[:256] // eliminate bounds check for bucket[i] below
total := int32(0)
for i, n := range freq {
bucket[i] = total
total += n
}
}
// bucketMax_8_32 stores into bucket[c] the maximum index
// in the bucket for character c in a bucket-sort of text.
// The bucket indexes for c are [min, max).
// That is, max is one past the final index in that bucket.
func bucketMax_8_32(text []byte, freq, bucket []int32) {
freq = freq_8_32(text, freq, bucket)
freq = freq[:256] // establish len(freq) = 256, so 0 ≤ i < 256 below
bucket = bucket[:256] // eliminate bounds check for bucket[i] below
total := int32(0)
for i, n := range freq {
total += n
bucket[i] = total
}
}
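bucketMin_8_32 and bucketMax_8_32 above are the two prefix-sum conventions of one bucket sort: for each character c, its bucket occupies indexes [min, max) in the suffix array. A small standalone illustration of both bounds:

```go
package main

import "fmt"

// bucketBounds mirrors bucketMin_8_32/bucketMax_8_32: from character
// frequencies it derives, for every byte value, the first index (min)
// and one-past-last index (max) of that character's bucket.
func bucketBounds(text []byte) (min, max [256]int32) {
	var freq [256]int32
	for _, c := range text {
		freq[c]++
	}
	total := int32(0)
	for i, n := range freq {
		min[i] = total // running total before this bucket
		total += n
		max[i] = total // running total including this bucket
	}
	return
}

func main() {
	min, max := bucketBounds([]byte("banana"))
	// "banana": a occupies [0,3), b occupies [3,4), n occupies [4,6).
	fmt.Println(min['a'], max['a'], min['b'], max['b'], min['n'], max['n']) // 0 3 3 4 4 6
}
```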
}
// The SAIS algorithm proceeds in a sequence of scans through sa.
// Each of the following functions implements one scan,
// and the functions appear here in the order they execute in the algorithm.
// placeLMS_8_32 places into sa the indexes of the
// final characters of the LMS substrings of text,
// sorted into the rightmost ends of their correct buckets
// in the suffix array.
//
// The imaginary sentinel character at the end of the text
// is the final character of the final LMS substring, but there
// is no bucket for the imaginary sentinel character,
// which has a smaller value than any real character.
// The caller must therefore pretend that sa[-1] == len(text).
//
// The text indexes of LMS-substring characters are always ≥ 1
// (the first LMS-substring must be preceded by one or more L-type
// characters that are not part of any LMS-substring),
// so using 0 as a “not present” suffix array entry is safe,
// both in this function and in most later functions
// (until induceL_8_32 below).
func placeLMS_8_32(text []byte, sa, freq, bucket []int32) int {
bucketMax_8_32(text, freq, bucket)
numLMS := 0
lastB := int32(-1)
bucket = bucket[:256] // eliminate bounds check for bucket[c1] below
// The next stanza of code (until the blank line) loops backward
// over text, stopping to execute a code body at each position i
// such that text[i] is an L-character and text[i+1] is an S-character.
// That is, i+1 is the position of the start of an LMS-substring.
// These could be hoisted out into a function with a callback,
// but at a significant speed cost. Instead, we just write these
// seven lines a few times in this source file. The copies below
// refer back to the pattern established by this original as the
// "LMS-substring iterator".
//
// In every scan through the text, c0, c1 are successive characters of text.
// In this backward scan, c0 == text[i] and c1 == text[i+1].
// By scanning backward, we can keep track of whether the current
// position is type-S or type-L according to the usual definition:
//
// - position len(text) is type S with text[len(text)] == -1 (the sentinel)
// - position i is type S if text[i] < text[i+1], or if text[i] == text[i+1] && i+1 is type S.
// - position i is type L if text[i] > text[i+1], or if text[i] == text[i+1] && i+1 is type L.
//
// The backward scan lets us maintain the current type,
// update it when we see c0 != c1, and otherwise leave it alone.
// We want to identify all S positions with a preceding L.
// Position len(text) is one such position by definition, but we have
// nowhere to write it down, so we eliminate it by untruthfully
// setting isTypeS = false at the start of the loop.
c0, c1, isTypeS := byte(0), byte(0), false
for i := len(text) - 1; i >= 0; i-- {
c0, c1 = text[i], c0
if c0 < c1 {
isTypeS = true
} else if c0 > c1 && isTypeS {
isTypeS = false
// Bucket the index i+1 for the start of an LMS-substring.
b := bucket[c1] - 1
bucket[c1] = b
sa[b] = int32(i + 1)
lastB = b
numLMS++
}
}
// We recorded the LMS-substring starts but really want the ends.
// Luckily, with two differences, the start indexes and the end indexes are the same.
// The first difference is that the rightmost LMS-substring's end index is len(text),
// so the caller must pretend that sa[-1] == len(text), as noted above.
// The second difference is that the first leftmost LMS-substring start index
// does not end an earlier LMS-substring, so as an optimization we can omit
// that leftmost LMS-substring start index (the last one we wrote).
//
// Exception: if numLMS <= 1, the caller is not going to bother with
// the recursion at all and will treat the result as containing LMS-substring starts.
// In that case, we don't remove the final entry.
if numLMS > 1 {
sa[lastB] = 0
}
return numLMS
}
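The S/L definition above can be made concrete with a small standalone sketch (not from the original file; the name is illustrative) that records the type of every position instead of only tracking the current one:

```go
package main

// exampleTypes classifies each position of text as 'S' or 'L' by the
// backward scan described above. The sentinel position len(text) is
// type S by definition and is not stored; position len(text)-1 is
// always type L because every real character exceeds the sentinel.
func exampleTypes(text []byte) []byte {
	types := make([]byte, len(text))
	isTypeS := false // position len(text)-1 is type L
	for i := len(text) - 1; i >= 0; i-- {
		if i+1 < len(text) {
			if text[i] < text[i+1] {
				isTypeS = true
			} else if text[i] > text[i+1] {
				isTypeS = false
			}
			// equal characters inherit the type of position i+1
		}
		if isTypeS {
			types[i] = 'S'
		} else {
			types[i] = 'L'
		}
	}
	return types
}
```

For "banana" this gives "LSLSLL", so the LMS positions (type S preceded by type L) are 1 and 3, the indexes the iterator above buckets (plus the implicit sentinel position).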
// induceSubL_8_32 inserts the L-type text indexes of LMS-substrings
// into sa, assuming that the final characters of the LMS-substrings
// are already inserted into sa, sorted by final character, and at the
// right (not left) end of the corresponding character bucket.
// Each LMS-substring has the form (as a regexp) /S+L+S/:
// one or more S-type, one or more L-type, final S-type.
// induceSubL_8_32 leaves behind only the leftmost L-type text
// index for each LMS-substring. That is, it removes the final S-type
// indexes that are present on entry, and it inserts but then removes
// the interior L-type indexes too.
// (Only the leftmost L-type index is needed by induceSubS_8_32.)
func induceSubL_8_32(text []byte, sa, freq, bucket []int32) {
// Initialize positions for left side of character buckets.
bucketMin_8_32(text, freq, bucket)
bucket = bucket[:256] // eliminate bounds check for bucket[cB] below
// As we scan the array left-to-right, each sa[i] = j > 0 is a correctly
// sorted suffix array entry (for text[j:]) for which we know that j-1 is type L.
// Because j-1 is type L, inserting it into sa now will sort it correctly.
// But we want to distinguish a j-1 with j-2 of type L from type S.
// We can process the former but want to leave the latter for the caller.
// We record the difference by negating j-1 if it is preceded by type S.
// Either way, the insertion (into the text[j-1] bucket) is guaranteed to
// happen at sa[i´] for some i´ > i, that is, in the portion of sa we have
// yet to scan. A single pass therefore sees indexes j, j-1, j-2, j-3,
// and so on, in sorted but not necessarily adjacent order, until it finds
// one preceded by an index of type S, at which point it must stop.
//
// As we scan through the array, we clear the worked entries (sa[i] > 0) to zero,
// and we flip sa[i] < 0 to -sa[i], so that the loop finishes with sa containing
// only the indexes of the leftmost L-type indexes for each LMS-substring.
//
// The suffix array sa therefore serves simultaneously as input, output,
// and a miraculously well-tailored work queue.
// placeLMS_8_32 left out the implicit entry sa[-1] == len(text),
// corresponding to the identified type-L index len(text)-1.
// Process it before the left-to-right scan of sa proper.
// See body in loop for commentary.
k := len(text) - 1
c0, c1 := text[k-1], text[k]
if c0 < c1 {
k = -k
}
// Cache recently used bucket index:
// we're processing suffixes in sorted order
// and accessing buckets indexed by the
// byte before the sorted order, which still
// has very good locality.
// Invariant: b is cached, possibly dirty copy of bucket[cB].
cB := c1
b := bucket[cB]
sa[b] = int32(k)
b++
for i := 0; i < len(sa); i++ {
j := int(sa[i])
if j == 0 {
// Skip empty entry.
continue
}
if j < 0 {
// Leave discovered type-S index for caller.
sa[i] = int32(-j)
continue
}
sa[i] = 0
// Index j was on work queue, meaning k := j-1 is L-type,
// so we can now place k correctly into sa.
// If k-1 is L-type, queue k for processing later in this loop.
// If k-1 is S-type (text[k-1] < text[k]), queue -k to save for the caller.
k := j - 1
c0, c1 := text[k-1], text[k]
if c0 < c1 {
k = -k
}
if cB != c1 {
bucket[cB] = b
cB = c1
b = bucket[cB]
}
sa[b] = int32(k)
b++
}
}
// induceSubS_8_32 inserts the S-type text indexes of LMS-substrings
// into sa, assuming that the leftmost L-type text indexes are already
// inserted into sa, sorted by LMS-substring suffix, and at the
// left end of the corresponding character bucket.
// Each LMS-substring has the form (as a regexp) /S+L+S/:
// one or more S-type, one or more L-type, final S-type.
// induceSubS_8_32 leaves behind only the leftmost S-type text
// index for each LMS-substring, in sorted order, at the right end of sa.
// That is, it removes the L-type indexes that are present on entry,
// and it inserts but then removes the interior S-type indexes too,
// leaving the LMS-substring start indexes packed into sa[len(sa)-numLMS:].
// (Only the LMS-substring start indexes are processed by the recursion.)
func induceSubS_8_32(text []byte, sa, freq, bucket []int32) {
// Initialize positions for right side of character buckets.
bucketMax_8_32(text, freq, bucket)
bucket = bucket[:256] // eliminate bounds check for bucket[cB] below
// Analogous to induceSubL_8_32 above,
// as we scan the array right-to-left, each sa[i] = j > 0 is a correctly
// sorted suffix array entry (for text[j:]) for which we know that j-1 is type S.
// Because j-1 is type S, inserting it into sa now will sort it correctly.
// But we want to distinguish a j-1 with j-2 of type S from type L.
// We can process the former but want to leave the latter for the caller.
// We record the difference by negating j-1 if it is preceded by type L.
// Either way, the insertion (into the text[j-1] bucket) is guaranteed to
// happen at sa[i´] for some i´ < i, that is, in the portion of sa we have
// yet to scan. A single pass therefore sees indexes j, j-1, j-2, j-3,
// and so on, in sorted but not necessarily adjacent order, until it finds
// one preceded by an index of type L, at which point it must stop.
// That index (preceded by one of type L) is an LMS-substring start.
//
// As we scan through the array, we clear the worked entries (sa[i] > 0) to zero,
// and we flip sa[i] < 0 to -sa[i] and compact into the top of sa,
// so that the loop finishes with the top of sa containing exactly
// the LMS-substring start indexes, sorted by LMS-substring.
// Cache recently used bucket index:
cB := byte(0)
b := bucket[cB]
top := len(sa)
for i := len(sa) - 1; i >= 0; i-- {
j := int(sa[i])
if j == 0 {
// Skip empty entry.
continue
}
sa[i] = 0
if j < 0 {
// Leave discovered LMS-substring start index for caller.
top--
sa[top] = int32(-j)
continue
}
// Index j was on work queue, meaning k := j-1 is S-type,
// so we can now place k correctly into sa.
// If k-1 is S-type, queue k for processing later in this loop.
// If k-1 is L-type (text[k-1] > text[k]), queue -k to save for the caller.
k := j - 1
c1 := text[k]
c0 := text[k-1]
if c0 > c1 {
k = -k
}
if cB != c1 {
bucket[cB] = b
cB = c1
b = bucket[cB]
}
b--
sa[b] = int32(k)
}
}
// length_8_32 computes and records the length of each LMS-substring in text.
// The length of the LMS-substring at index j is stored at sa[j/2],
// avoiding the LMS-substring indexes already stored in the top half of sa.
// (If index j is an LMS-substring start, then index j-1 is type L and cannot be.)
// There are two exceptions, made for optimizations in name_8_32 below.
//
// First, the final LMS-substring is recorded as having length 0, which is otherwise
// impossible, instead of giving it a length that includes the implicit sentinel.
// This ensures the final LMS-substring has length unequal to all others
// and therefore can be detected as different without text comparison
// (it is unequal because it is the only one that ends in the implicit sentinel,
// and the text comparison would be problematic since the implicit sentinel
// is not actually present at text[len(text)]).
//
// Second, to avoid text comparison entirely, if an LMS-substring is very short,
// sa[j/2] records its actual text instead of its length, so that if two such
// substrings have matching “length,” the text need not be read at all.
// The definition of “very short” is that the text bytes must pack into an uint32,
// and the unsigned encoding e must be ≥ len(text), so that it can be
// distinguished from a valid length.
func length_8_32(text []byte, sa []int32, numLMS int) {
end := 0 // index of current LMS-substring end (0 indicates final LMS-substring)
// The encoding of N text bytes into a “length” word
// adds 1 to each byte, packs them into the bottom
// N*8 bits of a word, and then bitwise inverts the result.
// That is, the text sequence A B C (hex 41 42 43)
// encodes as ^uint32(0x42_43_44).
// LMS-substrings can never start or end with 0xFF.
// Adding 1 ensures the encoded byte sequence never
// starts or ends with 0x00, so that present bytes can be
// distinguished from zero-padding in the top bits,
// so the length need not be separately encoded.
// Inverting the bytes increases the chance that a
// 4-byte encoding will still be ≥ len(text).
// In particular, if the first byte is ASCII (<= 0x7E, so +1 <= 0x7F)
// then the high bit of the inversion will be set,
// making it clearly not a valid length (it would be a negative one).
//
// cx holds the pre-inverted encoding (the packed incremented bytes).
cx := uint32(0) // byte-only
// This stanza (until the blank line) is the "LMS-substring iterator",
// described in placeLMS_8_32 above, with one line added to maintain cx.
c0, c1, isTypeS := byte(0), byte(0), false
for i := len(text) - 1; i >= 0; i-- {
c0, c1 = text[i], c0
cx = cx<<8 | uint32(c1+1) // byte-only
if c0 < c1 {
isTypeS = true
} else if c0 > c1 && isTypeS {
isTypeS = false
// Index j = i+1 is the start of an LMS-substring.
// Compute length or encoded text to store in sa[j/2].
j := i + 1
var code int32
if end == 0 {
code = 0
} else {
code = int32(end - j)
if code <= 32/8 && ^cx >= uint32(len(text)) { // byte-only
code = int32(^cx) // byte-only
} // byte-only
}
sa[j>>1] = code
end = j + 1
cx = uint32(c1 + 1) // byte-only
}
}
}
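A standalone sketch of the packing step above (not from the original file; the name is illustrative). It mirrors the cx update in the loop: each incremented byte is OR'd into the low bits as the backward scan visits it, and the whole word is inverted at the end. The exact byte order inside the word follows the scan and does not matter for the equality comparisons the code uses it for; the properties that do matter are determinism, injectivity for equal lengths, and a set high bit for a word full of ASCII bytes:

```go
package main

// examplePack encodes a short byte string the way length_8_32 builds cx:
// visiting characters from right to left, shifting the word up and
// OR-ing in each byte plus one, then inverting the result.
func examplePack(sub []byte) uint32 {
	cx := uint32(0)
	for i := len(sub) - 1; i >= 0; i-- {
		cx = cx<<8 | uint32(sub[i]+1)
	}
	return ^cx
}
```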
// assignID_8_32 assigns a dense ID numbering to the
// set of LMS-substrings respecting string ordering and equality,
// returning the maximum assigned ID.
// For example given the input "ababab", the LMS-substrings
// are "aba", "aba", and "ab", renumbered as 2 2 1.
// sa[len(sa)-numLMS:] holds the LMS-substring indexes
// sorted in string order, so to assign numbers we can
// consider each in turn, removing adjacent duplicates.
// The new ID for the LMS-substring at index j is written to sa[j/2],
// overwriting the length previously stored there (by length_8_32 above).
func assignID_8_32(text []byte, sa []int32, numLMS int) int {
id := 0
lastLen := int32(-1) // impossible
lastPos := int32(0)
for _, j := range sa[len(sa)-numLMS:] {
// Is the LMS-substring at index j new, or is it the same as the last one we saw?
n := sa[j/2]
if n != lastLen {
goto New
}
if uint32(n) >= uint32(len(text)) {
// “Length” is really encoded full text, and they match.
goto Same
}
{
// Compare actual texts.
n := int(n)
this := text[j:][:n]
last := text[lastPos:][:n]
for i := 0; i < n; i++ {
if this[i] != last[i] {
goto New
}
}
goto Same
}
New:
id++
lastPos = j
lastLen = n
Same:
sa[j/2] = int32(id)
}
return id
}
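The renumbering above is an ordinary dense rank over a sorted list; a standalone sketch (not from the original file; name and types are illustrative):

```go
package main

// exampleDenseIDs walks a sorted slice and increments the ID only when
// an element differs from its predecessor, the same renumbering that
// assignID_8_32 applies to the sorted LMS-substrings.
func exampleDenseIDs(sorted []string) []int {
	ids := make([]int, len(sorted))
	id := 0
	for i, s := range sorted {
		if i == 0 || s != sorted[i-1] {
			id++
		}
		ids[i] = id
	}
	return ids
}
```

For the "ababab" example above, the sorted LMS-substrings "ab", "aba", "aba" receive IDs 1, 2, 2.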
// map_32 maps the LMS-substrings in text to their new IDs,
// producing the subproblem for the recursion.
// The mapping itself was mostly applied by assignID_8_32:
// sa[i] is either 0, the ID for the LMS-substring at index 2*i,
// or the ID for the LMS-substring at index 2*i+1.
// To produce the subproblem we need only remove the zeros
// and change ID into ID-1 (our IDs start at 1, but text chars start at 0).
//
// map_32 packs the result, which is the input to the recursion,
// into the top of sa, so that the recursion result can be stored
// in the bottom of sa, which sets up for expand_8_32 well.
func map_32(sa []int32, numLMS int) {
w := len(sa)
for i := len(sa) / 2; i >= 0; i-- {
j := sa[i]
if j > 0 {
w--
sa[w] = j - 1
}
}
}
// recurse_32 calls sais_32 recursively to solve the subproblem we've built.
// The subproblem is at the right end of sa, the suffix array result will be
// written at the left end of sa, and the middle of sa is available for use as
// temporary frequency and bucket storage.
func recurse_32(sa, oldTmp []int32, numLMS, maxID int) {
dst, saTmp, text := sa[:numLMS], sa[numLMS:len(sa)-numLMS], sa[len(sa)-numLMS:]
// Set up temporary space for recursive call.
// We must pass sais_32 a tmp buffer with at least maxID entries.
//
// The subproblem is guaranteed to have length at most len(sa)/2,
// so that sa can hold both the subproblem and its suffix array.
// Nearly all the time, however, the subproblem has length < len(sa)/3,
// in which case there is a subproblem-sized middle of sa that
// we can reuse for temporary space (saTmp).
// When recurse_32 is called from sais_8_32, oldTmp is length 512
// (from text_32), and saTmp will typically be much larger, so we'll use saTmp.
// When deeper recursions come back to recurse_32, now oldTmp is
// the saTmp from the top-most recursion, it is typically larger than
// the current saTmp (because the current sa gets smaller and smaller
// as the recursion gets deeper), and we keep reusing that top-most
// large saTmp instead of the offered smaller ones.
//
// Why is the subproblem length so often just under len(sa)/3?
// See Nong, Zhang, and Chen, section 3.6 for a plausible explanation.
// In brief, the len(sa)/2 case would correspond to an SLSLSLSLSLSL pattern
// in the input, perfect alternation of larger and smaller input bytes.
// Real text doesn't do that. If each L-type index is randomly followed
// by either an L-type or S-type index, then half the substrings will
// be of the form SLS, but the other half will be longer. Of that half,
// half (a quarter overall) will be SLLS; an eighth will be SLLLS, and so on.
// Not counting the final S in each (which overlaps the first S in the next),
// this works out to an average length 2×½ + 3×¼ + 4×⅛ + ... = 3.
// The space we need is further reduced by the fact that many of the
// short patterns like SLS will often be the same character sequences
// repeated throughout the text, reducing maxID relative to numLMS.
//
// For short inputs, the averages may not run in our favor, but then we
// can often fall back to using the length-512 tmp available in the
// top-most call. (Also a short allocation would not be a big deal.)
//
// For pathological inputs, we fall back to allocating a new tmp of length
// max(maxID, numLMS/2). This level of the recursion needs maxID,
// and all deeper levels of the recursion will need no more than numLMS/2,
// so this one allocation is guaranteed to suffice for the entire stack
// of recursive calls.
tmp := oldTmp
if len(tmp) < len(saTmp) {
tmp = saTmp
}
if len(tmp) < numLMS {
// TestSAIS/forcealloc reaches this code.
n := maxID
if n < numLMS/2 {
n = numLMS / 2
}
tmp = make([]int32, n)
}
// sais_32 requires that the caller arrange to clear dst,
// because in general the caller may know dst is
// freshly-allocated and already cleared. But this one is not.
for i := range dst {
dst[i] = 0
}
sais_32(text, maxID, dst, tmp)
}
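The quoted average, 2×½ + 3×¼ + 4×⅛ + ... = 3, is easy to confirm numerically (a throwaway sketch, not from the original file):

```go
package main

// exampleSeriesSum sums k/2^(k-1) for k = 2, 3, ..., which converges
// to 3 from below: the expected substring length under the random
// model described above.
func exampleSeriesSum(terms int) float64 {
	sum, p := 0.0, 0.5
	for k := 2; k < 2+terms; k++ {
		sum += float64(k) * p
		p /= 2
	}
	return sum
}
```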
// unmap_8_32 unmaps the subproblem back to the original.
// sa[:numLMS] is the LMS-substring numbers, which don't matter much anymore.
// sa[len(sa)-numLMS:] is the sorted list of those LMS-substring numbers.
// The key part is that if the list says K that means the K'th substring.
// We can replace sa[:numLMS] with the indexes of the LMS-substrings.
// Then if the list says K it really means sa[K].
// Having mapped the list back to LMS-substring indexes,
// we can place those into the right buckets.
func unmap_8_32(text []byte, sa []int32, numLMS int) {
unmap := sa[len(sa)-numLMS:]
j := len(unmap)
// "LMS-substring iterator" (see placeLMS_8_32 above).
c0, c1, isTypeS := byte(0), byte(0), false
for i := len(text) - 1; i >= 0; i-- {
c0, c1 = text[i], c0
if c0 < c1 {
isTypeS = true
} else if c0 > c1 && isTypeS {
isTypeS = false
// Populate inverse map.
j--
unmap[j] = int32(i + 1)
}
}
// Apply inverse map to subproblem suffix array.
sa = sa[:numLMS]
for i := 0; i < len(sa); i++ {
sa[i] = unmap[sa[i]]
}
}
// expand_8_32 distributes the compacted, sorted LMS-suffix indexes
// from sa[:numLMS] into the tops of the appropriate buckets in sa,
// preserving the sorted order and making room for the L-type indexes
// to be slotted into the sorted sequence by induceL_8_32.
func expand_8_32(text []byte, freq, bucket, sa []int32, numLMS int) {
bucketMax_8_32(text, freq, bucket)
bucket = bucket[:256] // eliminate bound check for bucket[c] below
// Loop backward through sa, always tracking
// the next index to populate from sa[:numLMS].
// When we get to one, populate it.
// Zero the rest of the slots; they have dead values in them.
x := numLMS - 1
saX := sa[x]
c := text[saX]
b := bucket[c] - 1
bucket[c] = b
for i := len(sa) - 1; i >= 0; i-- {
if i != int(b) {
sa[i] = 0
continue
}
sa[i] = saX
// Load next entry to put down (if any).
if x > 0 {
x--
saX = sa[x] // TODO bounds check
c = text[saX]
b = bucket[c] - 1
bucket[c] = b
}
}
}
// induceL_8_32 inserts L-type text indexes into sa,
// assuming that the leftmost S-type indexes are inserted
// into sa, in sorted order, in the right bucket halves.
// It leaves all the L-type indexes in sa, but the
// leftmost L-type indexes are negated, to mark them
// for processing by induceS_8_32.
func induceL_8_32(text []byte, sa, freq, bucket []int32) {
// Initialize positions for left side of character buckets.
bucketMin_8_32(text, freq, bucket)
bucket = bucket[:256] // eliminate bounds check for bucket[cB] below
// This scan is similar to the one in induceSubL_8_32 above.
// That one arranges to clear all but the leftmost L-type indexes.
// This scan leaves all the L-type indexes and the original S-type
// indexes, but it negates the positive leftmost L-type indexes
// (the ones that induceS_8_32 needs to process).
// expand_8_32 left out the implicit entry sa[-1] == len(text),
// corresponding to the identified type-L index len(text)-1.
// Process it before the left-to-right scan of sa proper.
// See body in loop for commentary.
k := len(text) - 1
c0, c1 := text[k-1], text[k]
if c0 < c1 {
k = -k
}
// Cache recently used bucket index.
cB := c1
b := bucket[cB]
sa[b] = int32(k)
b++
for i := 0; i < len(sa); i++ {
j := int(sa[i])
if j <= 0 {
// Skip empty or negated entry (including negated zero).
continue
}
// Index j was on work queue, meaning k := j-1 is L-type,
// so we can now place k correctly into sa.
// If k-1 is L-type, queue k for processing later in this loop.
// If k-1 is S-type (text[k-1] < text[k]), queue -k to save for the caller.
// If k is zero, k-1 doesn't exist, so we only need to leave it
// for the caller. The caller can't tell the difference between
// an empty slot and a non-empty zero, but there's no need
// to distinguish them anyway: the final suffix array will end up
// with one zero somewhere, and that will be a real zero.
k := j - 1
c1 := text[k]
if k > 0 {
if c0 := text[k-1]; c0 < c1 {
k = -k
}
}
if cB != c1 {
bucket[cB] = b
cB = c1
b = bucket[cB]
}
sa[b] = int32(k)
b++
}
}
// induceS_8_32 is the right-to-left counterpart of induceL_8_32:
// it inserts the S-type text indexes into sa, consuming the negated
// leftmost L-type indexes left behind by induceL_8_32 and rewriting
// them as positive entries of the final suffix array.
func induceS_8_32(text []byte, sa, freq, bucket []int32) {
// Initialize positions for right side of character buckets.
bucketMax_8_32(text, freq, bucket)
bucket = bucket[:256] // eliminate bounds check for bucket[cB] below
cB := byte(0)
b := bucket[cB]
for i := len(sa) - 1; i >= 0; i-- {
j := int(sa[i])
if j >= 0 {
// Skip non-flagged entry.
// (This loop can't see an empty entry; 0 means the real zero index.)
continue
}
// Negative j is a work queue entry; rewrite to positive j for final suffix array.
j = -j
sa[i] = int32(j)
// Index j was on work queue (encoded as -j but now decoded),
// meaning k := j-1 is S-type,
// so we can now place k correctly into sa.
// If k-1 is S-type, queue -k for processing later in this loop.
// If k-1 is L-type (text[k-1] > text[k]), queue k to save for the caller.
// If k is zero, k-1 doesn't exist, so we only need to leave it
// for the caller.
k := j - 1
c1 := text[k]
if k > 0 {
if c0 := text[k-1]; c0 <= c1 {
k = -k
}
}
if cB != c1 {
bucket[cB] = b
cB = c1
b = bucket[cB]
}
b--
sa[b] = int32(k)
}
}

// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package suffixarray implements substring search in logarithmic time using
// an in-memory suffix array.
//
// Example use:
//
// // create index for some data
// index := suffixarray.New(data)
//
// // lookup byte slice s
// offsets1 := index.Lookup(s, -1) // the list of all indices where s occurs in data
// offsets2 := index.Lookup(s, 3) // the list of at most 3 indices where s occurs in data
//
package astrobwt
import (
"bytes"
"encoding/binary"
"errors"
"io"
"math"
"regexp"
"sort"
)
// Can change for testing
var maxData32 int = realMaxData32
const realMaxData32 = math.MaxInt32
// Index implements a suffix array for fast substring search.
type Index struct {
data []byte
sa ints // suffix array for data; sa.len() == len(data)
}
// An ints is either an []int32 or an []int64.
// That is, one of them is empty, and one is the real data.
// The int64 form is used when len(data) > maxData32
type ints struct {
int32 []int32
int64 []int64
}
func (a *ints) len() int {
return len(a.int32) + len(a.int64)
}
func (a *ints) get(i int) int64 {
if a.int32 != nil {
return int64(a.int32[i])
}
return a.int64[i]
}
func (a *ints) set(i int, v int64) {
if a.int32 != nil {
a.int32[i] = int32(v)
} else {
a.int64[i] = v
}
}
func (a *ints) slice(i, j int) ints {
if a.int32 != nil {
return ints{a.int32[i:j], nil}
}
return ints{nil, a.int64[i:j]}
}
// New creates a new Index for data.
// Index creation time is O(N) for N = len(data).
func New(data []byte) *Index {
ix := &Index{data: data}
if len(data) <= maxData32 {
ix.sa.int32 = make([]int32, len(data))
text_32(data, ix.sa.int32)
} else {
ix.sa.int64 = make([]int64, len(data))
text_64(data, ix.sa.int64)
}
return ix
}
// writeInt writes an int x to w using buf to buffer the write.
func writeInt(w io.Writer, buf []byte, x int) error {
binary.PutVarint(buf, int64(x))
_, err := w.Write(buf[0:binary.MaxVarintLen64])
return err
}
// readInt reads an int x from r using buf to buffer the read and returns x.
func readInt(r io.Reader, buf []byte) (int64, error) {
_, err := io.ReadFull(r, buf[0:binary.MaxVarintLen64]) // ok to continue with error
x, _ := binary.Varint(buf)
return x, err
}
// writeSlice writes data[:n] to w and returns n.
// It uses buf to buffer the write.
func writeSlice(w io.Writer, buf []byte, data ints) (n int, err error) {
// encode as many elements as fit into buf
p := binary.MaxVarintLen64
m := data.len()
for ; n < m && p+binary.MaxVarintLen64 <= len(buf); n++ {
p += binary.PutUvarint(buf[p:], uint64(data.get(n)))
}
// update buffer size
binary.PutVarint(buf, int64(p))
// write buffer
_, err = w.Write(buf[0:p])
return
}
var errTooBig = errors.New("suffixarray: data too large")
// readSlice reads data[:n] from r and returns n.
// It uses buf to buffer the read.
func readSlice(r io.Reader, buf []byte, data ints) (n int, err error) {
// read buffer size
var size64 int64
size64, err = readInt(r, buf)
if err != nil {
return
}
if int64(int(size64)) != size64 || int(size64) < 0 {
// We never write chunks this big anyway.
return 0, errTooBig
}
size := int(size64)
// read buffer w/o the size
if _, err = io.ReadFull(r, buf[binary.MaxVarintLen64:size]); err != nil {
return
}
// decode as many elements as present in buf
for p := binary.MaxVarintLen64; p < size; n++ {
x, w := binary.Uvarint(buf[p:])
data.set(n, int64(x))
p += w
}
return
}
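The chunk layout shared by writeSlice and readSlice can be exercised in isolation (a standalone sketch, not from the original file): a fixed MaxVarintLen64-byte slot holds the chunk's total size as a varint, followed by the uvarint-encoded elements.

```go
package main

import "encoding/binary"

// exampleChunkRoundTrip encodes vals the way writeSlice lays out a
// chunk, then decodes them back the way readSlice does.
func exampleChunkRoundTrip(vals []uint64) []uint64 {
	buf := make([]byte, 1<<10)
	p := binary.MaxVarintLen64
	for _, v := range vals {
		p += binary.PutUvarint(buf[p:], v)
	}
	binary.PutVarint(buf, int64(p)) // total size in the leading fixed slot
	size, _ := binary.Varint(buf)
	var out []uint64
	for q := binary.MaxVarintLen64; q < int(size); {
		x, w := binary.Uvarint(buf[q:])
		out = append(out, x)
		q += w
	}
	return out
}
```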
const bufSize = 16 << 10 // reasonable for BenchmarkSaveRestore
// Read reads the index from r into x; x must not be nil.
func (x *Index) Read(r io.Reader) error {
// buffer for all reads
buf := make([]byte, bufSize)
// read length
n64, err := readInt(r, buf)
if err != nil {
return err
}
if int64(int(n64)) != n64 || int(n64) < 0 {
return errTooBig
}
n := int(n64)
// allocate space
if 2*n < cap(x.data) || cap(x.data) < n || x.sa.int32 != nil && n > maxData32 || x.sa.int64 != nil && n <= maxData32 {
// new data is significantly smaller or larger than
// existing buffers - allocate new ones
x.data = make([]byte, n)
x.sa.int32 = nil
x.sa.int64 = nil
if n <= maxData32 {
x.sa.int32 = make([]int32, n)
} else {
x.sa.int64 = make([]int64, n)
}
} else {
// re-use existing buffers
x.data = x.data[0:n]
x.sa = x.sa.slice(0, n)
}
// read data
if _, err := io.ReadFull(r, x.data); err != nil {
return err
}
// read index
sa := x.sa
for sa.len() > 0 {
n, err := readSlice(r, buf, sa)
if err != nil {
return err
}
sa = sa.slice(n, sa.len())
}
return nil
}
// Write writes the index x to w.
func (x *Index) Write(w io.Writer) error {
// buffer for all writes
buf := make([]byte, bufSize)
// write length
if err := writeInt(w, buf, len(x.data)); err != nil {
return err
}
// write data
if _, err := w.Write(x.data); err != nil {
return err
}
// write index
sa := x.sa
for sa.len() > 0 {
n, err := writeSlice(w, buf, sa)
if err != nil {
return err
}
sa = sa.slice(n, sa.len())
}
return nil
}
// Bytes returns the data over which the index was created.
// It must not be modified.
//
func (x *Index) Bytes() []byte {
return x.data
}
func (x *Index) at(i int) []byte {
return x.data[x.sa.get(i):]
}
// lookupAll returns a slice into the matching region of the index.
// The runtime is O(log(N)*len(s)).
func (x *Index) lookupAll(s []byte) ints {
// find matching suffix index range [i:j]
// find the first index where s would be the prefix
i := sort.Search(x.sa.len(), func(i int) bool { return bytes.Compare(x.at(i), s) >= 0 })
// starting at i, find the first index at which s is not a prefix
j := i + sort.Search(x.sa.len()-i, func(j int) bool { return !bytes.HasPrefix(x.at(j+i), s) })
return x.sa.slice(i, j)
}
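The two sort.Search calls above are the entire search; against a naively sorted suffix list they behave identically (a standalone cross-check sketch, not from the original file):

```go
package main

import (
	"bytes"
	"sort"
)

// exampleNaiveLookup builds a suffix array by plain sorting, then
// applies the same pair of binary searches as lookupAll: the first
// finds where s would sort, the second finds where suffixes stop
// having s as a prefix, giving the half-open match range [i, j).
func exampleNaiveLookup(data, s []byte) []int {
	sa := make([]int, len(data))
	for i := range sa {
		sa[i] = i
	}
	sort.Slice(sa, func(a, b int) bool {
		return bytes.Compare(data[sa[a]:], data[sa[b]:]) < 0
	})
	at := func(k int) []byte { return data[sa[k]:] }
	i := sort.Search(len(sa), func(k int) bool { return bytes.Compare(at(k), s) >= 0 })
	j := i + sort.Search(len(sa)-i, func(k int) bool { return !bytes.HasPrefix(at(k+i), s) })
	res := append([]int(nil), sa[i:j]...)
	sort.Ints(res)
	return res
}
```

For data "banana" and s "an", the range covers the suffixes "ana" and "anana", so the (sorted) offsets are 1 and 3.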
// Lookup returns an unsorted list of at most n indices where the byte string s
// occurs in the indexed data. If n < 0, all occurrences are returned.
// The result is nil if s is empty, s is not found, or n == 0.
// Lookup time is O(log(N)*len(s) + len(result)) where N is the
// size of the indexed data.
//
func (x *Index) Lookup(s []byte, n int) (result []int) {
if len(s) > 0 && n != 0 {
matches := x.lookupAll(s)
count := matches.len()
if n < 0 || count < n {
n = count
}
// 0 <= n <= count
if n > 0 {
result = make([]int, n)
if matches.int32 != nil {
for i := range result {
result[i] = int(matches.int32[i])
}
} else {
for i := range result {
result[i] = int(matches.int64[i])
}
}
}
}
return
}
// FindAllIndex returns a sorted list of non-overlapping matches of the
// regular expression r, where a match is a pair of indices specifying
// the matched slice of x.Bytes(). If n < 0, all matches are returned
// in successive order. Otherwise, at most n matches are returned and
// they may not be successive. The result is nil if there are no matches,
// or if n == 0.
//
func (x *Index) FindAllIndex(r *regexp.Regexp, n int) (result [][]int) {
// a non-empty literal prefix is used to determine possible
// match start indices with Lookup
prefix, complete := r.LiteralPrefix()
lit := []byte(prefix)
// worst-case scenario: no literal prefix
if prefix == "" {
return r.FindAllIndex(x.data, n)
}
// if regexp is a literal just use Lookup and convert its
// result into match pairs
if complete {
// Lookup returns indices that may belong to overlapping matches.
// After eliminating them, we may end up with fewer than n matches.
// If we don't have enough at the end, redo the search with an
// increased value n1, but only if Lookup returned all the requested
// indices in the first place (if it returned fewer than that then
// there cannot be more).
for n1 := n; ; n1 += 2 * (n - len(result)) /* overflow ok */ {
indices := x.Lookup(lit, n1)
if len(indices) == 0 {
return
}
sort.Ints(indices)
pairs := make([]int, 2*len(indices))
result = make([][]int, len(indices))
count := 0
prev := 0
for _, i := range indices {
if count == n {
break
}
// ignore indices leading to overlapping matches
if prev <= i {
j := 2 * count
pairs[j+0] = i
pairs[j+1] = i + len(lit)
result[count] = pairs[j : j+2]
count++
prev = i + len(lit)
}
}
result = result[0:count]
if len(result) >= n || len(indices) != n1 {
// found all matches or there's no chance to find more
// (n and n1 can be negative)
break
}
}
if len(result) == 0 {
result = nil
}
return
}
// regexp has a non-empty literal prefix; Lookup(lit) computes
// the indices of possible complete matches; use these as starting
// points for anchored searches
// (regexp "^" matches beginning of input, not beginning of line)
r = regexp.MustCompile("^" + r.String()) // compiles because r compiled
// same comment about Lookup applies here as in the loop above
for n1 := n; ; n1 += 2 * (n - len(result)) /* overflow ok */ {
indices := x.Lookup(lit, n1)
if len(indices) == 0 {
return
}
sort.Ints(indices)
result = result[0:0]
prev := 0
for _, i := range indices {
if len(result) == n {
break
}
m := r.FindIndex(x.data[i:]) // anchored search - will not run off
// ignore indices leading to overlapping matches
if m != nil && prev <= i {
m[0] = i // correct m
m[1] += i
result = append(result, m)
prev = m[1]
}
}
if len(result) >= n || len(indices) != n1 {
// found all matches or there's no chance to find more
// (n and n1 can be negative)
break
}
}
if len(result) == 0 {
result = nil
}
return
}
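The two loops above share one idea: Lookup may return indices belonging to overlapping matches, so after filtering, the search is retried with a larger n1 until enough non-overlapping matches survive. The standard library's index/suffixarray exposes the same FindAllIndex behaviour; a minimal sketch (findLiteral is an illustrative wrapper, not part of this file):

```go
package main

import (
	"fmt"
	"index/suffixarray"
	"regexp"
)

// findLiteral reports the non-overlapping match positions of a literal
// pattern using the standard library's suffix array, the same API this
// file re-implements.
func findLiteral(data, lit string) [][]int {
	idx := suffixarray.New([]byte(data))
	return idx.FindAllIndex(regexp.MustCompile(regexp.QuoteMeta(lit)), -1)
}

func main() {
	// "barbar" occurs at 0, 7 and 14; overlapping hits are dropped
	fmt.Println(findLiteral("barbarabarbarabarbara", "barbar"))
}
```

Note that, as in the code above, only leftmost non-overlapping matches are reported, and the result is sorted by position.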


@ -1,615 +0,0 @@
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package astrobwt
import (
"bytes"
"fmt"
"io/ioutil"
"math/rand"
"os"
"path/filepath"
"regexp"
"sort"
"strings"
"testing"
)
type testCase struct {
name string // name of test case
source string // source to index
patterns []string // patterns to lookup
}
var testCases = []testCase{
{
"empty string",
"",
[]string{
"",
"foo",
"(foo)",
".*",
"a*",
},
},
{
"all a's",
"aaaaaaaaaa", // 10 a's
[]string{
"",
"a",
"aa",
"aaa",
"aaaa",
"aaaaa",
"aaaaaa",
"aaaaaaa",
"aaaaaaaa",
"aaaaaaaaa",
"aaaaaaaaaa",
"aaaaaaaaaaa", // 11 a's
".",
".*",
"a+",
"aa+",
"aaaa[b]?",
"aaa*",
},
},
{
"abc",
"abc",
[]string{
"a",
"b",
"c",
"ab",
"bc",
"abc",
"a.c",
"a(b|c)",
"abc?",
},
},
{
"barbara*3",
"barbarabarbarabarbara",
[]string{
"a",
"bar",
"rab",
"arab",
"barbar",
"bara?bar",
},
},
{
"typing drill",
"Now is the time for all good men to come to the aid of their country.",
[]string{
"Now",
"the time",
"to come the aid",
"is the time for all good men to come to the aid of their",
"to (come|the)?",
},
},
{
"godoc simulation",
"package main\n\nimport(\n \"rand\"\n ",
[]string{},
},
}
// find all occurrences of s in source; report at most n occurrences
func find(src, s string, n int) []int {
var res []int
if s != "" && n != 0 {
// find at most n occurrences of s in src
for i := -1; n < 0 || len(res) < n; {
j := strings.Index(src[i+1:], s)
if j < 0 {
break
}
i += j + 1
res = append(res, i)
}
}
return res
}
func testLookup(t *testing.T, tc *testCase, x *Index, s string, n int) {
res := x.Lookup([]byte(s), n)
exp := find(tc.source, s, n)
// check that the lengths match
if len(res) != len(exp) {
t.Errorf("test %q, lookup %q (n = %d): expected %d results; got %d", tc.name, s, n, len(exp), len(res))
}
// if n >= 0 the number of results is limited --- unless n >= all results,
// we may obtain different positions from the Index and from find (because
// Index may not find the results in the same order as find) => in general
// we cannot simply check that the res and exp lists are equal
// check that each result is in fact a correct match and there are no duplicates
sort.Ints(res)
for i, r := range res {
if r < 0 || len(tc.source) <= r {
t.Errorf("test %q, lookup %q, result %d (n = %d): index %d out of range [0, %d[", tc.name, s, i, n, r, len(tc.source))
} else if !strings.HasPrefix(tc.source[r:], s) {
t.Errorf("test %q, lookup %q, result %d (n = %d): index %d not a match", tc.name, s, i, n, r)
}
if i > 0 && res[i-1] == r {
t.Errorf("test %q, lookup %q, result %d (n = %d): found duplicate index %d", tc.name, s, i, n, r)
}
}
if n < 0 {
// all results computed - sorted res and exp must be equal
for i, r := range res {
e := exp[i]
if r != e {
t.Errorf("test %q, lookup %q, result %d: expected index %d; got %d", tc.name, s, i, e, r)
}
}
}
}
func testFindAllIndex(t *testing.T, tc *testCase, x *Index, rx *regexp.Regexp, n int) {
res := x.FindAllIndex(rx, n)
exp := rx.FindAllStringIndex(tc.source, n)
// check that the lengths match
if len(res) != len(exp) {
t.Errorf("test %q, FindAllIndex %q (n = %d): expected %d results; got %d", tc.name, rx, n, len(exp), len(res))
}
// if n >= 0 the number of results is limited --- unless n >= all results,
// we may obtain different positions from the Index and from regexp (because
// Index may not find the results in the same order as regexp) => in general
// we cannot simply check that the res and exp lists are equal
// check that each result is in fact a correct match and the result is sorted
for i, r := range res {
if r[0] < 0 || r[0] > r[1] || len(tc.source) < r[1] {
t.Errorf("test %q, FindAllIndex %q, result %d (n == %d): illegal match [%d, %d]", tc.name, rx, i, n, r[0], r[1])
} else if !rx.MatchString(tc.source[r[0]:r[1]]) {
t.Errorf("test %q, FindAllIndex %q, result %d (n = %d): [%d, %d] not a match", tc.name, rx, i, n, r[0], r[1])
}
}
if n < 0 {
// all results computed - sorted res and exp must be equal
for i, r := range res {
e := exp[i]
if r[0] != e[0] || r[1] != e[1] {
t.Errorf("test %q, FindAllIndex %q, result %d: expected match [%d, %d]; got [%d, %d]",
tc.name, rx, i, e[0], e[1], r[0], r[1])
}
}
}
}
func testLookups(t *testing.T, tc *testCase, x *Index, n int) {
for _, pat := range tc.patterns {
testLookup(t, tc, x, pat, n)
if rx, err := regexp.Compile(pat); err == nil {
testFindAllIndex(t, tc, x, rx, n)
}
}
}
// index is used to hide the sort.Interface
type index Index
func (x *index) Len() int { return x.sa.len() }
func (x *index) Less(i, j int) bool { return bytes.Compare(x.at(i), x.at(j)) < 0 }
func (x *index) Swap(i, j int) {
if x.sa.int32 != nil {
x.sa.int32[i], x.sa.int32[j] = x.sa.int32[j], x.sa.int32[i]
} else {
x.sa.int64[i], x.sa.int64[j] = x.sa.int64[j], x.sa.int64[i]
}
}
func (x *index) at(i int) []byte {
return x.data[x.sa.get(i):]
}
func testConstruction(t *testing.T, tc *testCase, x *Index) {
if !sort.IsSorted((*index)(x)) {
t.Errorf("failed testConstruction %s", tc.name)
}
}
func equal(x, y *Index) bool {
if !bytes.Equal(x.data, y.data) {
return false
}
if x.sa.len() != y.sa.len() {
return false
}
n := x.sa.len()
for i := 0; i < n; i++ {
if x.sa.get(i) != y.sa.get(i) {
return false
}
}
return true
}
// returns the serialized index size
func testSaveRestore(t *testing.T, tc *testCase, x *Index) int {
var buf bytes.Buffer
if err := x.Write(&buf); err != nil {
t.Errorf("failed writing index %s (%s)", tc.name, err)
}
size := buf.Len()
var y Index
if err := y.Read(bytes.NewReader(buf.Bytes())); err != nil {
t.Errorf("failed reading index %s (%s)", tc.name, err)
}
if !equal(x, &y) {
t.Errorf("restored index doesn't match saved index %s", tc.name)
}
old := maxData32
defer func() {
maxData32 = old
}()
// Reread as forced 32.
y = Index{}
maxData32 = realMaxData32
if err := y.Read(bytes.NewReader(buf.Bytes())); err != nil {
t.Errorf("failed reading index %s (%s)", tc.name, err)
}
if !equal(x, &y) {
t.Errorf("restored index doesn't match saved index %s", tc.name)
}
// Reread as forced 64.
y = Index{}
maxData32 = -1
if err := y.Read(bytes.NewReader(buf.Bytes())); err != nil {
t.Errorf("failed reading index %s (%s)", tc.name, err)
}
if !equal(x, &y) {
t.Errorf("restored index doesn't match saved index %s", tc.name)
}
return size
}
func testIndex(t *testing.T) {
for _, tc := range testCases {
x := New([]byte(tc.source))
testConstruction(t, &tc, x)
testSaveRestore(t, &tc, x)
testLookups(t, &tc, x, 0)
testLookups(t, &tc, x, 1)
testLookups(t, &tc, x, 10)
testLookups(t, &tc, x, 2e9)
testLookups(t, &tc, x, -1)
}
}
func TestIndex32(t *testing.T) {
testIndex(t)
}
func TestIndex64(t *testing.T) {
maxData32 = -1
defer func() {
maxData32 = realMaxData32
}()
testIndex(t)
}
func TestNew32(t *testing.T) {
test(t, func(x []byte) []int {
sa := make([]int32, len(x))
text_32(x, sa)
out := make([]int, len(sa))
for i, v := range sa {
out[i] = int(v)
}
return out
})
}
func TestNew64(t *testing.T) {
test(t, func(x []byte) []int {
sa := make([]int64, len(x))
text_64(x, sa)
out := make([]int, len(sa))
for i, v := range sa {
out[i] = int(v)
}
return out
})
}
// test tests an arbitrary suffix array construction function.
// Generates many inputs, builds and checks suffix arrays.
func test(t *testing.T, build func([]byte) []int) {
t.Run("ababab...", func(t *testing.T) {
// Very repetitive input has numLMS = len(x)/2-1
// at top level, the largest it can be.
// But maxID is only two (aba and ab$).
size := 100000
if testing.Short() {
size = 10000
}
x := make([]byte, size)
for i := range x {
x[i] = "ab"[i%2]
}
testSA(t, x, build)
})
t.Run("forcealloc", func(t *testing.T) {
// Construct a pathological input that forces
// recurse_32 to allocate a new temporary buffer.
// The input must have more than N/3 LMS-substrings,
// which we arrange by repeating an SLSLSLSLSLSL pattern
// like ababab... above, but then we must also arrange
// for a large number of distinct LMS-substrings.
// We use this pattern:
// 1 255 1 254 1 253 1 ... 1 2 1 255 2 254 2 253 2 252 2 ...
// This gives approximately 2¹⁵ distinct LMS-substrings.
// We need to repeat at least one substring, though,
// or else the recursion can be bypassed entirely.
x := make([]byte, 100000, 100001)
lo := byte(1)
hi := byte(255)
for i := range x {
if i%2 == 0 {
x[i] = lo
} else {
x[i] = hi
hi--
if hi <= lo {
lo++
if lo == 0 {
lo = 1
}
hi = 255
}
}
}
x[:cap(x)][len(x)] = 0 // for sais.New
testSA(t, x, build)
})
t.Run("exhaustive2", func(t *testing.T) {
// All inputs over {0,1} up to length 21.
// Runs in about 10 seconds on my laptop.
x := make([]byte, 30)
numFail := 0
for n := 0; n <= 21; n++ {
if n > 12 && testing.Short() {
break
}
x[n] = 0 // for sais.New
testRec(t, x[:n], 0, 2, &numFail, build)
}
})
t.Run("exhaustive3", func(t *testing.T) {
// All inputs over {0,1,2} up to length 14.
// Runs in about 10 seconds on my laptop.
x := make([]byte, 30)
numFail := 0
for n := 0; n <= 14; n++ {
if n > 8 && testing.Short() {
break
}
x[n] = 0 // for sais.New
testRec(t, x[:n], 0, 3, &numFail, build)
}
})
}
// testRec fills x[i:] with all possible combinations of values in [1,max]
// and then calls testSA(t, x, build) for each one.
func testRec(t *testing.T, x []byte, i, max int, numFail *int, build func([]byte) []int) {
if i < len(x) {
for x[i] = 1; x[i] <= byte(max); x[i]++ {
testRec(t, x, i+1, max, numFail, build)
}
return
}
if !testSA(t, x, build) {
*numFail++
if *numFail >= 10 {
t.Errorf("stopping after %d failures", *numFail)
t.FailNow()
}
}
}
// testSA tests the suffix array build function on the input x.
// It constructs the suffix array and then checks that it is correct.
func testSA(t *testing.T, x []byte, build func([]byte) []int) bool {
defer func() {
if e := recover(); e != nil {
t.Logf("build %v", x)
panic(e)
}
}()
sa := build(x)
if len(sa) != len(x) {
t.Errorf("build %v: len(sa) = %d, want %d", x, len(sa), len(x))
return false
}
for i := 0; i+1 < len(sa); i++ {
if sa[i] < 0 || sa[i] >= len(x) || sa[i+1] < 0 || sa[i+1] >= len(x) {
t.Errorf("build %s: sa out of range: %v\n", x, sa)
return false
}
if bytes.Compare(x[sa[i]:], x[sa[i+1]:]) >= 0 {
t.Errorf("build %v -> %v\nsa[%d:] = %d,%d out of order", x, sa, i, sa[i], sa[i+1])
return false
}
}
return true
}
var (
benchdata = make([]byte, 1e6)
benchrand = make([]byte, 1e6)
)
// Of all possible inputs, the random bytes have the least amount of substring
// repetition, and the repeated bytes have the most. For most algorithms,
// the running time of every input will be between these two.
func benchmarkNew(b *testing.B, random bool) {
b.ReportAllocs()
b.StopTimer()
data := benchdata
if random {
data = benchrand
if data[0] == 0 {
for i := range data {
data[i] = byte(rand.Intn(256))
}
}
}
b.StartTimer()
b.SetBytes(int64(len(data)))
for i := 0; i < b.N; i++ {
New(data)
}
}
func makeText(name string) ([]byte, error) {
var data []byte
switch name {
case "opticks":
var err error
data, err = ioutil.ReadFile("../../testdata/Isaac.Newton-Opticks.txt")
if err != nil {
return nil, err
}
case "go":
err := filepath.Walk("../..", func(path string, info os.FileInfo, err error) error {
if err == nil && strings.HasSuffix(path, ".go") && !info.IsDir() {
file, err := ioutil.ReadFile(path)
if err != nil {
return err
}
data = append(data, file...)
}
return nil
})
if err != nil {
return nil, err
}
case "zero":
data = make([]byte, 50e6)
case "rand":
data = make([]byte, 50e6)
for i := range data {
data[i] = byte(rand.Intn(256))
}
}
return data, nil
}
func setBits(bits int) (cleanup func()) {
if bits == 32 {
maxData32 = realMaxData32
} else {
maxData32 = -1 // force use of 64-bit code
}
return func() {
maxData32 = realMaxData32
}
}
func BenchmarkNew(b *testing.B) {
for _, text := range []string{"opticks", "go", "zero", "rand"} {
b.Run("text="+text, func(b *testing.B) {
data, err := makeText(text)
if err != nil {
b.Fatal(err)
}
if testing.Short() && len(data) > 5e6 {
data = data[:5e6]
}
for _, size := range []int{100e3, 500e3, 1e6, 5e6, 10e6, 50e6} {
if len(data) < size {
continue
}
data := data[:size]
name := fmt.Sprintf("%dK", size/1e3)
if size >= 1e6 {
name = fmt.Sprintf("%dM", size/1e6)
}
b.Run("size="+name, func(b *testing.B) {
for _, bits := range []int{32, 64} {
if ^uint(0) == 0xffffffff && bits == 64 {
continue
}
b.Run(fmt.Sprintf("bits=%d", bits), func(b *testing.B) {
cleanup := setBits(bits)
defer cleanup()
b.SetBytes(int64(len(data)))
b.ReportAllocs()
for i := 0; i < b.N; i++ {
New(data)
}
})
}
})
}
})
}
}
func BenchmarkSaveRestore(b *testing.B) {
r := rand.New(rand.NewSource(0x5a77a1)) // guarantee always same sequence
data := make([]byte, 1<<20) // 1MB of data to index
for i := range data {
data[i] = byte(r.Intn(256))
}
for _, bits := range []int{32, 64} {
if ^uint(0) == 0xffffffff && bits == 64 {
continue
}
b.Run(fmt.Sprintf("bits=%d", bits), func(b *testing.B) {
cleanup := setBits(bits)
defer cleanup()
b.StopTimer()
x := New(data)
size := testSaveRestore(nil, nil, x) // verify correctness
buf := bytes.NewBuffer(make([]byte, size)) // avoid growing
b.SetBytes(int64(size))
b.StartTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
buf.Reset()
if err := x.Write(buf); err != nil {
b.Fatal(err)
}
var y Index
if err := y.Read(buf); err != nil {
b.Fatal(err)
}
}
})
}
}


@ -1,90 +0,0 @@
RESEARCH LICENSE
Version 1.1.2
I. DEFINITIONS.
"Licensee" means You and any other party that has entered into and has in effect a version of this License.
"Licensor" means DERO PROJECT (GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8) and its successors and assignees.
"Modifications" means any (a) change or addition to the Technology or (b) new source or object code implementing any portion of the Technology.
"Research Use" means research, evaluation, or development for the purpose of advancing knowledge, teaching, learning, or customizing the Technology for personal use. Research Use expressly excludes use or distribution for direct or indirect commercial (including strategic) gain or advantage.
"Technology" means the source code, object code and specifications of the technology made available by Licensor pursuant to this License.
"Technology Site" means the website designated by Licensor for accessing the Technology.
"You" means the individual executing this License or the legal entity or entities represented by the individual executing this License.
II. PURPOSE.
Licensor is licensing the Technology under this Research License (the "License") to promote research, education, innovation, and development using the Technology.
COMMERCIAL USE AND DISTRIBUTION OF TECHNOLOGY AND MODIFICATIONS IS PERMITTED ONLY UNDER AN APPROPRIATE COMMERCIAL USE LICENSE AVAILABLE FROM LICENSOR AT <url>.
III. RESEARCH USE RIGHTS.
A. Subject to the conditions contained herein, Licensor grants to You a non-exclusive, non-transferable, worldwide, and royalty-free license to do the following for Your Research Use only:
1. reproduce, create Modifications of, and use the Technology alone, or with Modifications;
2. share source code of the Technology alone, or with Modifications, with other Licensees;
3. distribute object code of the Technology, alone, or with Modifications, to any third parties for Research Use only, under a license of Your choice that is consistent with this License; and
4. publish papers and books discussing the Technology which may include relevant excerpts that do not in the aggregate constitute a significant portion of the Technology.
B. Residual Rights. You may use any information in intangible form that you remember after accessing the Technology, except when such use violates Licensor's copyrights or patent rights.
C. No Implied Licenses. Other than the rights granted herein, Licensor retains all rights, title, and interest in the Technology, and You retain all rights, title, and interest in Your Modifications and associated specifications, subject to the terms of this License.
D. Open Source Licenses. Portions of the Technology may be provided with notices and open source licenses from open source communities and third parties that govern the use of those portions, and any licenses granted hereunder do not alter any rights and obligations you may have under such open source licenses, however, the disclaimer of warranty and limitation of liability provisions in this License will apply to all Technology in this distribution.
IV. INTELLECTUAL PROPERTY REQUIREMENTS
As a condition to Your License, You agree to comply with the following restrictions and responsibilities:
A. License and Copyright Notices. You must include a copy of this License in a Readme file for any Technology or Modifications you distribute. You must also include the following statement, "Use and distribution of this technology is subject to the Java Research License included herein", (a) once prominently in the source code tree and/or specifications for Your source code distributions, and (b) once in the same file as Your copyright or proprietary notices for Your binary code distributions. You must cause any files containing Your Modification to carry prominent notice stating that You changed the files. You must not remove or alter any copyright or other proprietary notices in the Technology.
B. Licensee Exchanges. Any Technology and Modifications You receive from any Licensee are governed by this License.
V. GENERAL TERMS.
A. Disclaimer Of Warranties.
TECHNOLOGY IS PROVIDED "AS IS", WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, WARRANTIES THAT ANY SUCH TECHNOLOGY IS FREE OF DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR PURPOSE, OR NON-INFRINGING OF THIRD PARTY RIGHTS. YOU AGREE THAT YOU BEAR THE ENTIRE RISK IN CONNECTION WITH YOUR USE AND DISTRIBUTION OF ANY AND ALL TECHNOLOGY UNDER THIS LICENSE.
B. Infringement; Limitation Of Liability.
1. If any portion of, or functionality implemented by, the Technology becomes the subject of a claim or threatened claim of infringement ("Affected Materials"), Licensor may, in its unrestricted discretion, suspend Your rights to use and distribute the Affected Materials under this License. Such suspension of rights will be effective immediately upon Licensor's posting of notice of suspension on the Technology Site.
2. IN NO EVENT WILL LICENSOR BE LIABLE FOR ANY DIRECT, INDIRECT, PUNITIVE, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH OR ARISING OUT OF THIS LICENSE (INCLUDING, WITHOUT LIMITATION, LOSS OF PROFITS, USE, DATA, OR ECONOMIC ADVANTAGE OF ANY SORT), HOWEVER IT ARISES AND ON ANY THEORY OF LIABILITY (including negligence), WHETHER OR NOT LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. LIABILITY UNDER THIS SECTION V.B.2 SHALL BE SO LIMITED AND EXCLUDED, NOTWITHSTANDING FAILURE OF THE ESSENTIAL PURPOSE OF ANY REMEDY.
C. Termination.
1. You may terminate this License at any time by notifying Licensor in writing.
2. All Your rights will terminate under this License if You fail to comply with any of its material terms or conditions and do not cure such failure within thirty (30) days after becoming aware of such noncompliance.
3. Upon termination, You must discontinue all uses and distribution of the Technology, and all provisions of this Section V shall survive termination.
D. Miscellaneous.
1. Trademark. You agree to comply with Licensor's Trademark & Logo Usage Requirements, if any and as modified from time to time, available at the Technology Site. Except as expressly provided in this License, You are granted no rights in or to any Licensor's trademarks now or hereafter used or licensed by Licensor.
2. Integration. This License represents the complete agreement of the parties concerning the subject matter hereof.
3. Severability. If any provision of this License is held unenforceable, such provision shall be reformed to the extent necessary to make it enforceable unless to do so would defeat the intent of the parties, in which case, this License shall terminate.
4. Governing Law. This License is governed by the laws of the United States and the State of California, as applied to contracts entered into and performed in California between California residents. In no event shall this License be construed against the drafter.
5. Export Control. You agree to comply with the U.S. export controls and trade laws of other countries that apply to Technology and Modifications.
READ ALL THE TERMS OF THIS LICENSE CAREFULLY BEFORE ACCEPTING.
BY CLICKING ON THE YES BUTTON BELOW OR USING THE TECHNOLOGY, YOU ARE ACCEPTING AND AGREEING TO ABIDE BY THE TERMS AND CONDITIONS OF THIS LICENSE. YOU MUST BE AT LEAST 18 YEARS OF AGE AND OTHERWISE COMPETENT TO ENTER INTO CONTRACTS.
IF YOU DO NOT MEET THESE CRITERIA, OR YOU DO NOT AGREE TO ANY OF THE TERMS OF THIS LICENSE, DO NOT USE THIS SOFTWARE IN ANY FORM.


@ -1,411 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package block
import "fmt"
//import "sort"
import "bytes"
import "runtime/debug"
import "encoding/hex"
import "encoding/binary"
import "golang.org/x/crypto/sha3"
import "github.com/romana/rlog"
import "github.com/deroproject/derohe/cryptography/crypto"
//import "github.com/deroproject/derosuite/config"
import "github.com/deroproject/derohe/astrobwt"
import "github.com/deroproject/derohe/transaction"
// these are defined in file
//https://github.com/monero-project/monero/src/cryptonote_basic/cryptonote_basic.h
type Block_Header struct {
Major_Version uint64 `json:"major_version"`
Minor_Version uint64 `json:"minor_version"`
Timestamp uint64 `json:"timestamp"`
Height uint64 `json:"height"`
Nonce uint32 `json:"nonce"` // TODO make nonce 32 byte array for infinite work capacity
ExtraNonce [32]byte `json:"-"`
Miner_TX transaction.Transaction `json:"miner_tx"`
}
type Block struct {
Block_Header
Proof [32]byte `json:"-"` // proof is being used to record balance root hash
Tips []crypto.Hash `json:"tips"`
Tx_hashes []crypto.Hash `json:"tx_hashes"`
}
// we process incoming blocks in this format
type Complete_Block struct {
Bl *Block
Txs []*transaction.Transaction
}
// see spec here https://cryptonote.org/cns/cns003.txt
// this function gets the block identifier hash
// this has been simplified and varint length has been removed
func (bl *Block) GetHash() (hash crypto.Hash) {
long_header := bl.GetBlockWork()
// keccak hash of this above blob, gives the block id
return sha3.Sum256(long_header)
}
// converts a block into a getwork-style work, ready for either submitting the block
// or doing PoW calculations
func (bl *Block) GetBlockWork() []byte {
var buf []byte // bitcoin/litecoin getworks are 80 bytes
var scratch [8]byte
buf = append(buf, []byte{byte(bl.Major_Version), byte(bl.Minor_Version), 0, 0, 0, 0, 0}...) // offset 0: first 7 bytes are version in little endian format
binary.LittleEndian.PutUint32(buf[2:6], uint32(bl.Timestamp))
header_hash := sha3.Sum256(bl.getserializedheaderforwork()) // 0 + 7
buf = append(buf, header_hash[:]...) // 0 + 7 + 32 = 39
binary.LittleEndian.PutUint32(scratch[0:4], bl.Nonce) // check whether it needs to be big endian
buf = append(buf, scratch[:4]...) // 0 + 7 + 32 + 4 = 43
// next place the ExtraNonce
buf = append(buf, bl.ExtraNonce[:]...) // total 7 + 32 + 4 + 32
buf = append(buf, 0) // total 7 + 32 + 4 + 32 + 1 = 76
if len(buf) != 76 {
panic(fmt.Sprintf("Getwork not equal to 76 bytes actual %d", len(buf)))
}
return buf
}
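The offsets spelled out in the comments above (7-byte version prefix, 32-byte header hash, 4-byte nonce, 32-byte extra nonce, one trailing byte) can be cross-checked with a small sketch; the constant names here are illustrative, not from the DERO codebase:

```go
package main

import "fmt"

// Field lengths of the 76-byte getwork blob, as described by the
// comments in GetBlockWork (names are illustrative).
const (
	versionLen    = 7  // major/minor version bytes plus padding
	headerHashLen = 32 // sha3-256 of the serialized header
	nonceLen      = 4  // little-endian uint32 nonce
	extraNonceLen = 32 // miner-controlled extra nonce
	trailerLen    = 1  // final zero byte
)

func main() {
	total := versionLen + headerHashLen + nonceLen + extraNonceLen + trailerLen
	fmt.Println(total) // matches the panic check in GetBlockWork
}
```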
// copy the nonce and the extra nonce from the getwork to the block
func (bl *Block) CopyNonceFromBlockWork(work []byte) (err error) {
if len(work) < 75 { // the ExtraNonce copy below reads work[43:75]
return fmt.Errorf("work buffer is invalid")
}
bl.Timestamp = uint64(binary.LittleEndian.Uint32(work[2:]))
bl.Nonce = binary.LittleEndian.Uint32(work[7+32:])
copy(bl.ExtraNonce[:], work[7+32+4:75])
return
}
// set the extra nonce; input longer than 32 bytes is truncated
func (bl *Block) SetExtraNonce(extranonce []byte) (err error) {
if len(extranonce) == 0 {
return fmt.Errorf("extra nonce is invalid")
}
max := len(extranonce)
if max > 32 {
max = 32
}
copy(bl.ExtraNonce[:], extranonce[0:max])
return
}
// clear extra nonce
func (bl *Block) ClearExtraNonce() {
for i := range bl.ExtraNonce {
bl.ExtraNonce[i] = 0
}
}
// clear nonce
func (bl *Block) ClearNonce() {
bl.Nonce = 0
}
// Get PoW hash; this is a very slow function
func (bl *Block) GetPoWHash() (hash crypto.Hash) {
long_header := bl.GetBlockWork()
rlog.Tracef(9, "longheader %x\n", long_header)
tmphash := astrobwt.POW_0alloc(long_header) // new astrobwt algorithm
copy(hash[:], tmphash[:32])
return
}
// serialize block header for calculating PoW
func (bl *Block) getserializedheaderforwork() []byte {
var serialised bytes.Buffer
buf := make([]byte, binary.MaxVarintLen64)
n := binary.PutUvarint(buf, uint64(bl.Major_Version))
serialised.Write(buf[:n])
n = binary.PutUvarint(buf, uint64(bl.Minor_Version))
serialised.Write(buf[:n])
// timestamp is skipped here; it is placed in the pow (getwork) blob
//n = binary.PutUvarint(buf, bl.Timestamp)
//serialised.Write(buf[:n])
// write miner tx
serialised.Write(bl.Miner_TX.Serialize())
// write tips; merkle tree should be replaced with something faster
tips_treehash := bl.GetTipsHash()
n = binary.PutUvarint(buf, uint64(len(tips_treehash)))
serialised.Write(buf[:n])
serialised.Write(tips_treehash[:]) // actual tips hash
tx_treehash := bl.GetTXSHash() // hash of all transactions
n = binary.PutUvarint(buf, uint64(len(bl.Tx_hashes))) // count of all transactions
serialised.Write(buf[:n])
serialised.Write(tx_treehash[:]) // actual transactions hash
return serialised.Bytes()
}
// serialize block header
func (bl *Block) SerializeHeader() []byte {
var serialised bytes.Buffer
buf := make([]byte, binary.MaxVarintLen64)
n := binary.PutUvarint(buf, uint64(bl.Major_Version))
serialised.Write(buf[:n])
n = binary.PutUvarint(buf, uint64(bl.Minor_Version))
serialised.Write(buf[:n])
n = binary.PutUvarint(buf, bl.Timestamp)
serialised.Write(buf[:n])
n = binary.PutUvarint(buf, bl.Height)
serialised.Write(buf[:n])
binary.LittleEndian.PutUint32(buf[0:8], bl.Nonce) // check whether it needs to be big endian
serialised.Write(buf[:4])
serialised.Write(bl.ExtraNonce[:])
// write miner address
serialised.Write(bl.Miner_TX.Serialize())
return serialised.Bytes()
}
// serialize entire block ( block_header + miner_tx + tx_list )
func (bl *Block) Serialize() []byte {
var serialized bytes.Buffer
buf := make([]byte, binary.MaxVarintLen64)
header := bl.SerializeHeader()
serialized.Write(header)
serialized.Write(bl.Proof[:]) // write proof NOT implemented
// miner tx should always be coinbase
//minex_tx := bl.Miner_tx.Serialize()
//serialized.Write(minex_tx)
n := binary.PutUvarint(buf, uint64(len(bl.Tips)))
serialized.Write(buf[:n])
for _, hash := range bl.Tips {
serialized.Write(hash[:])
}
//fmt.Printf("serializing tx hashes %d\n", len(bl.Tx_hashes))
n = binary.PutUvarint(buf, uint64(len(bl.Tx_hashes)))
serialized.Write(buf[:n])
for _, hash := range bl.Tx_hashes {
serialized.Write(hash[:])
}
return serialized.Bytes()
}
// get block tips hash
func (bl *Block) GetTipsHash() (result crypto.Hash) {
/*if len(bl.Tips) == 0 { // case for genesis block
panic("Block does NOT refer any tips")
}*/
// add all the remaining hashes
h := sha3.New256()
for i := range bl.Tips {
h.Write(bl.Tips[i][:])
}
r := h.Sum(nil)
copy(result[:], r)
return
}
// get hash of all block transactions
// we have discarded the merkle tree and have shifted to a plain version
func (bl *Block) GetTXSHash() (result crypto.Hash) {
h := sha3.New256()
for i := range bl.Tx_hashes {
h.Write(bl.Tx_hashes[i][:])
}
r := h.Sum(nil)
copy(result[:], r)
return
}
// only parses header
func (bl *Block) DeserializeHeader(buf []byte) (err error) {
done := 0
bl.Major_Version, done = binary.Uvarint(buf)
if done <= 0 {
return fmt.Errorf("Invalid Major Version in Block\n")
}
buf = buf[done:]
bl.Minor_Version, done = binary.Uvarint(buf)
if done <= 0 {
return fmt.Errorf("Invalid Minor Version in Block\n")
}
buf = buf[done:]
bl.Timestamp, done = binary.Uvarint(buf)
if done <= 0 {
return fmt.Errorf("Invalid Timestamp in Block\n")
}
buf = buf[done:]
bl.Height, done = binary.Uvarint(buf)
if done <= 0 {
return fmt.Errorf("Invalid Height in Block\n")
}
buf = buf[done:]
//copy(bl.Prev_Hash[:], buf[:32]) // hash is always 32 byte
//buf = buf[32:]
bl.Nonce = binary.LittleEndian.Uint32(buf)
buf = buf[4:]
copy(bl.ExtraNonce[:], buf[0:32])
buf = buf[32:]
// parse miner tx
err = bl.Miner_TX.DeserializeHeader(buf)
if err != nil {
return err
}
return
}
// parse entire block completely
func (bl *Block) Deserialize(buf []byte) (err error) {
done := 0
defer func() {
if r := recover(); r != nil {
fmt.Printf("Panic while deserialising block, block hex_dump below to make a testcase/debug\n")
fmt.Printf("%s\n", hex.EncodeToString(buf))
fmt.Printf("Recovered while parsing block, Stack trace below block_hash ")
fmt.Printf("Stack trace \n%s", debug.Stack())
err = fmt.Errorf("Invalid Block")
return
}
}()
err = bl.DeserializeHeader(buf)
if err != nil {
return fmt.Errorf("Block Header could not be parsed %s\n", err)
}
buf = buf[len(bl.SerializeHeader()):] // skip number of bytes processed
// read 32 byte proof
copy(bl.Proof[:], buf[0:32])
buf = buf[32:]
// header finished here
// read and parse transaction
/*err = bl.Miner_tx.DeserializeHeader(buf)
if err != nil {
return fmt.Errorf("Cannot parse miner TX %x", buf)
}
// if tx was parse, make sure it's coin base
if len(bl.Miner_tx.Vin) != 1 || bl.Miner_tx.Vin[0].(transaction.Txin_gen).Height > config.MAX_CHAIN_HEIGHT {
// serialize transaction again to get the tx size, so as parsing could continue
return fmt.Errorf("Invalid Miner TX")
}
miner_tx_serialized_size := bl.Miner_tx.Serialize()
buf = buf[len(miner_tx_serialized_size):]
*/
tips_count, done := binary.Uvarint(buf)
if done <= 0 || done > 1 {
return fmt.Errorf("Invalid Tips count in Block\n")
}
buf = buf[done:]
// read the tips hashes
for i := uint64(0); i < tips_count; i++ {
//fmt.Printf("Parsing transaction hash %d tx_count %d\n", i, tx_count)
var h crypto.Hash
copy(h[:], buf[:32])
buf = buf[32:]
bl.Tips = append(bl.Tips, h)
}
//fmt.Printf("miner tx %x\n", miner_tx_serialized_size)
// read number of transactions
tx_count, done := binary.Uvarint(buf)
if done <= 0 {
return fmt.Errorf("Invalid Tx count in Block\n")
}
buf = buf[done:]
// read the transaction hashes
for i := uint64(0); i < tx_count; i++ {
//fmt.Printf("Parsing transaction hash %d tx_count %d\n", i, tx_count)
var h crypto.Hash
copy(h[:], buf[:32])
buf = buf[32:]
bl.Tx_hashes = append(bl.Tx_hashes, h)
}
//fmt.Printf("%d member in tx hashes \n",len(bl.Tx_hashes))
return
}
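Both the Tips and Tx_hashes lists use the same sub-encoding: a uvarint count followed by raw 32-byte hashes. A self-contained round trip of just that sub-encoding (helper names are hypothetical, not from this package):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

type Hash [32]byte

// encodeHashList writes a uvarint count followed by the raw 32-byte
// hashes, mirroring how Serialize emits Tips and Tx_hashes.
func encodeHashList(hashes []Hash) []byte {
	buf := make([]byte, binary.MaxVarintLen64)
	n := binary.PutUvarint(buf, uint64(len(hashes)))
	out := append([]byte{}, buf[:n]...)
	for _, h := range hashes {
		out = append(out, h[:]...)
	}
	return out
}

// decodeHashList is the inverse, mirroring the read loops in Deserialize,
// but returning an error instead of relying on a recover().
func decodeHashList(b []byte) ([]Hash, error) {
	count, n := binary.Uvarint(b)
	if n <= 0 {
		return nil, fmt.Errorf("invalid count")
	}
	b = b[n:]
	var hashes []Hash
	for i := uint64(0); i < count; i++ {
		if len(b) < 32 {
			return nil, fmt.Errorf("short buffer")
		}
		var h Hash
		copy(h[:], b[:32])
		b = b[32:]
		hashes = append(hashes, h)
	}
	return hashes, nil
}

func main() {
	in := []Hash{{1}, {2}}
	out, err := decodeHashList(encodeHashList(in))
	fmt.Println(err == nil, len(out))
}
```

Checking buffer bounds before each copy, as here, avoids the panic/recover path that Deserialize falls back on for truncated input.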


@ -1,385 +0,0 @@
// Copyright 2017-2018 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package block
//import "bytes"
import "testing"
import "encoding/hex"
import "github.com/deroproject/derohe/config"
//import "github.com/deroproject/derohe/crypto"
func Test_Generic_block_serdes(t *testing.T) {
var bl, bldecoded Block
genesis_tx_bytes, _ := hex.DecodeString(config.Mainnet.Genesis_Tx)
err := bl.Miner_TX.DeserializeHeader(genesis_tx_bytes)
if err != nil {
t.Errorf("Deserialization test failed for Genesis TX err %s\n", err)
}
serialized := bl.Serialize()
err = bldecoded.Deserialize(serialized)
if err != nil {
t.Errorf("Deserialization test failed for NULL block err %s\n", err)
}
}
// this tests whether the PoW depends on everything in the BLOCK except Proof
func Test_PoW_Dependency(t *testing.T) {
var bl Block
genesis_tx_bytes, _ := hex.DecodeString(config.Mainnet.Genesis_Tx)
err := bl.Miner_TX.DeserializeHeader(genesis_tx_bytes)
if err != nil {
t.Errorf("Deserialization test failed for Genesis TX err %s\n", err)
}
Original_PoW := bl.GetPoWHash()
{
temp_bl := bl
temp_bl.Major_Version++
if Original_PoW == temp_bl.GetPoWHash() {
t.Fatalf("POW Skipping Major Version")
}
}
{
temp_bl := bl
temp_bl.Minor_Version++
if Original_PoW == temp_bl.GetPoWHash() {
t.Fatalf("POW Skipping Minor Version")
}
}
{
temp_bl := bl
temp_bl.Timestamp++
if Original_PoW == temp_bl.GetPoWHash() {
t.Fatalf("POW Skipping Timestamp")
}
}
{
temp_bl := bl
temp_bl.Miner_TX.Version++
if Original_PoW == temp_bl.GetPoWHash() {
t.Fatalf("POW Skipping Miner_TX")
}
}
{
temp_bl := bl
temp_bl.Nonce++
if Original_PoW == temp_bl.GetPoWHash() {
t.Fatalf("POW Skipping Nonce")
}
}
{
temp_bl := bl
temp_bl.ExtraNonce[31] = 1
if Original_PoW == temp_bl.GetPoWHash() {
t.Fatalf("POW Skipping Extra Nonce")
}
}
{
temp_bl := bl
temp_bl.Tips = append(temp_bl.Tips, Original_PoW)
if Original_PoW == temp_bl.GetPoWHash() {
t.Fatalf("POW Skipping Tips")
}
}
{
temp_bl := bl
temp_bl.Tx_hashes = append(temp_bl.Tx_hashes, Original_PoW)
if Original_PoW == temp_bl.GetPoWHash() {
t.Fatalf("POW Skipping TXs")
}
}
{
temp_bl := bl
temp_bl.Proof[31] = 1
if Original_PoW != temp_bl.GetPoWHash() {
t.Fatalf("POW Including Proof")
}
}
}
/*
func Test_testnet_Genesis_block_serdes(t *testing.T) {
testnet_genesis_block_hex := "010100112700000000000000000000000000000000000000000000000000000000000000000000023c01ff0001ffffffffffff07020bf6522f9152fa26cd1fc5c022b1a9e13dab697f3acf4b4d0ca6950a867a194321011d92826d0656958865a035264725799f39f6988faa97d532f972895de849496d0000000000000000000000000000000000000000000000000000000000000000000000"
testnet_genesis_block, _ := hex.DecodeString(testnet_genesis_block_hex)
var bl Block
err := bl.Deserialize(testnet_genesis_block)
if err != nil {
t.Errorf("Deserialization test failed for NULL block err %s\n", err)
}
// test the block serializer and deserializer whether it gives the same
serialized := bl.Serialize()
if !bytes.Equal(serialized, testnet_genesis_block) {
t.Errorf("Serialization test failed for Genesis block %X\n", serialized)
}
// test block id
if bl.GetHash() != config.Testnet.Genesis_Block_Hash {
t.Error("genesis block ID failed \n")
}
hash := bl.GetHash()
bl.SetExtraNonce(hash[:])
for i := range hash {
if hash[i] != bl.ExtraNonce[i] {
t.Fatalf("Extra nonce test failed")
}
}
if bl.SetExtraNonce(hash[:0]) == nil { // this should fail
t.Fatalf("Extra nonce test failed")
}
if bl.SetExtraNonce(append([]byte{0}, hash[:]...)) != nil { // this should pass
t.Fatalf("Extra nonce test failed")
}
bl.ClearExtraNonce()
for i := range hash {
if 0 != bl.ExtraNonce[i] {
t.Fatalf("Extra nonce clearing test failed")
}
}
bl.Nonce = 99
bl.ClearNonce()
if bl.Nonce != 0 {
t.Fatalf("Nonce clearing failed")
}
bl.Deserialize(testnet_genesis_block)
block_work := bl.GetBlockWork()
bl.SetExtraNonce(hash[:])
bl.Nonce = 99
bl.CopyNonceFromBlockWork(block_work)
if bl.GetHash() != config.Testnet.Genesis_Block_Hash {
t.Fatalf("Copynonce failed")
}
if nil == bl.CopyNonceFromBlockWork(hash[:]) { // this should give an error
t.Fatalf("Copynonce test failed")
}
//if bl.GetReward() != 35184372088831 {
// t.Error("genesis block reward failed \n")
//}
}
*/
/*
func Test_Genesis_block_serdes(t *testing.T) {
mainnet_genesis_block_hex := "010000000000000000000000000000000000000000000000000000000000000000000010270000023c01ff0001ffffffffffff07020bf6522f9152fa26cd1fc5c022b1a9e13dab697f3acf4b4d0ca6950a867a194321011d92826d0656958865a035264725799f39f6988faa97d532f972895de849496d0000"
mainnet_genesis_block, _ := hex.DecodeString(mainnet_genesis_block_hex)
var bl Block
err := bl.Deserialize(mainnet_genesis_block)
if err != nil {
t.Errorf("Deserialization test failed for NULL block err %s\n", err)
}
// test the block serializer and deserializer whether it gives the same
serialized := bl.Serialize()
if !bytes.Equal(serialized, mainnet_genesis_block) {
t.Errorf("Serialization test failed for Genesis block %X\n", serialized)
}
// calculate POW hash
powhash := bl.GetPoWHash()
if powhash != crypto.Hash([32]byte{0xa7, 0x3b, 0xd3, 0x7a, 0xba, 0x34, 0x54, 0x77, 0x6b, 0x40, 0x73, 0x38, 0x54, 0xa8, 0x34, 0x9f, 0xe6, 0x35, 0x9e, 0xb2, 0xc9, 0x1d, 0x93, 0xbc, 0x72, 0x7c, 0x69, 0x43, 0x1c, 0x1d, 0x1f, 0x95}) {
t.Errorf("genesis block POW failed %x\n", powhash[:])
}
// test block id
if bl.GetHash() != config.Mainnet.Genesis_Block_Hash {
t.Error("genesis block ID failed \n")
}
if bl.GetReward() != 35184372088831 {
t.Error("genesis block reward failed \n")
}
}
func Test_Block_38373_serdes(t *testing.T) {
block_hex := "0606f0cac5d405b325cd7b2cb9f7d9500f37b5faf8dacd1506a73a6261b476d1f8aea4d59e54d93989000002a1ac0201ffe5ab0201e6fcee8183d1060288195982ed85017ba561f276f17986c54be81001057d3d696be5ec49d99648192b010877b53197c749557b97aad154d4a85dff4498158ec8e16cb9034562676b091d0208000000009699cce20001d0e1a493c61ba77865f17b27223474bf93115267a596258cb291fbc18ac9cd20"
block, _ := hex.DecodeString(block_hex)
var bl Block
err := bl.Deserialize(block)
if err != nil {
t.Errorf("Deserialization test failed for NULL block err %s\n", err)
}
// test the block serializer and deserializer whether it gives the same
serialized := bl.Serialize()
if !bytes.Equal(serialized, block) {
t.Errorf("Serialization test failed for block %X\n", serialized)
}
// test block hash
if bl.GetHash().String() != "02727780cade8a026c01dc0e0b9a908bf6f82ca1fe3ca61f83377a276c42c8b1" {
t.Errorf("block hash failed \n")
}
powhash := bl.GetPoWHash()
if powhash != crypto.HashHexToHash("e918f3452df59edaeed6dfec1524adc4a191498e9aa02a709a20f97303000000") {
t.Errorf("block POW failed %x\n", powhash[:])
}
}
func Test_Block_38374_serdes(t *testing.T) {
block_hex := "0606b7ccc5d40502727780cade8a026c01dc0e0b9a908bf6f82ca1fe3ca61f83377a276c42c8b11700800002a2ac0201ffe6ab0201f0e588d08edc06021a61261e226bad3dace02ce380c8da5abde1567cff8bb78069dae79e3a778ac72b01b62fda03efb8860fb6dd270b177f2e7a56ef7f9dd35a4af8852512653b191136020800000000e899cce2001734a9ff779afd4d1fce7a30815402fd7f7ce694be95ed69ff42e498e40af25c5511b52aff7b16df5e0712a3a5b59913f59658cb7e44201b1a78631bd87d7c3e1dc66c9fc7d5260f6f0fa99914f7a1f35d3ddb3e2a04af516f8135f6f607acfe5314f2c9c0dbe58bc981527ff90fa236b2663ee6d295ff1b5ad4add1c30cb0393d5e287c7ca687f04701485174bd0c7b2ac4a1b8982dd6e1ef8df569f7bf03668d12c99b329118c00d30e341808e94ff8ec31374104b2a37b785def153216d8bab52c3bd48d408e6d96e344344b243a3911e058909e1f26aac10482a4af1fcc86d23116a483e45d705256261ee6233ebc9b4f8993580ed2d9d7c598ae58a445134af1a325d26222418f518df2c8997c29f4495237ba6271fb2d517dd6f66c9c1ced406e5823594deb5f3952a9e80eae87ce2df8a8290cc1c3f12e474bfa38ab12845088a1f543790d8a7b7cf77e757e5299d28d7e206520b259bc55a63c3a8c987c6215f6fb186f6d87ccf299965de42a004ca38aa0d4dd16144adf2d7ce31fa74bc3bce1708e03ed2396b678c85d8d3f4d7ee76f5c13b53312c80a4240d9e6495508159aeab1f330bc331e3b81310f5ba749063677c62ee5bde87129b15241ee895e0437a808b9b03c77b86b8b641dc6bfbc70206415c2b2e497a6bc0e4dcb2ea24b75f1caec50cd2ab6426e91f41f11545d02c01f0530f23d667c4e04f16989b11ee6ade7b7a210e744fdbff45aab0e09bc718e847a5acd68c6da306e0ae9c9c5a97eae96a11968dfcdd414f8f4957ea45fd21f49fef889b86d3298224c3a2d21c4ccc9ff0fff6f04cb4a3998e5cc935afbfbfc79af766227a60a32275ad8480448b06fe78cbdaaa03acab10a6265154bf92bcec87a055e770f8c69581319a5db766b16050ddf8b448d6c784d1ec48072c702c41a864e5965c12eb450c36466f481a577fbeef6d89e8cfcdaea42e3c0dc8066b4681868b57270917c5b192d3a1457fb56bd85f2a58af0979dc1b1e6279c08e2a5013cb5643d21b17495d778dd8"
block, _ := hex.DecodeString(block_hex)
var bl, bl2 Block
err := bl.Deserialize(block)
if err != nil {
t.Errorf("Deserialization test failed for NULL block err %s\n", err)
}
// test the block serializer and deserializer whether it gives the same
serialized := bl.Serialize()
if !bytes.Equal(serialized, block) {
t.Errorf("Serialization test failed for block %X\n", serialized)
}
// test block hash
if bl.GetHash().String() != "d76d83e03c1d5d223c666c2cbcaa781fb74e53f8eb183a927aba81f44108bf13" {
t.Errorf("block hash failed \n")
}
powhash := bl.GetPoWHash()
if powhash != crypto.HashHexToHash("7457a3d344b4c3bb57f505b79c8c915ab0364657f9577a858137f39d02000000") {
t.Errorf(" block POW failed %x\n", powhash[:])
}
err = bl2.DeserializeHeader(block)
if bl.Major_Version != bl2.Major_Version ||
bl.Minor_Version != bl2.Minor_Version ||
bl.Timestamp != bl2.Timestamp ||
bl.Prev_Hash.String() != bl2.Prev_Hash.String() ||
bl.Nonce != bl2.Nonce {
t.Errorf(" block Deserialize header failed %x\n", powhash[:])
}
}
func Test_Treehash_Panic(t *testing.T) {
defer func() {
if r := recover(); r == nil {
t.Errorf("Treehash did not panic on 0 inputs")
}
}()
// The following is the code under test
var hashes []crypto.Hash
TreeHash(hashes)
}
*/
// test all invalid edge cases, which will return error
func Test_Block_Edge_Cases(t *testing.T) {
tests := []struct {
name string
blockhex string
}{
{
name: "Invalid Major Version",
blockhex: "80808080808080808080", // Major_Version is taking more than 9 bytes, trigger error
},
{
name: "Invalid Minor Version",
blockhex: "0280808080808080808080", // Minor_Version is taking more than 9 bytes, trigger error
},
{
name: "Invalid timestamp",
blockhex: "020280808080808080808080", // timestamp is taking more than 9 bytes, trigger error
},
{
name: "Incomplete header",
blockhex: "020255", // prev hash is not provided, controlled panic
},
}
for _, test := range tests {
block, err := hex.DecodeString(test.blockhex)
if err != nil {
t.Fatalf("Block hex could not be hex decoded")
}
//t.Logf("%s failed", test.name)
var bl Block
err = bl.Deserialize(block)
if err == nil {
t.Fatalf("%s failed", test.name)
}
}
}
/*
// this edge case occurred in monero and affected all CryptoNote coins
// bug occured when > 512 transactions were present, causing monero network to split and halt
// test case from monero block 202612 bbd604d2ba11ba27935e006ed39c9bfdd99b76bf4a50654bc1e1e61217962698
// the test is empty because we do NOT support v1 transactions
// however, this test needs to be added for future attacks
func Test_Treehash_Egde_Case(t *testing.T) {
}
*/

File diff suppressed because it is too large


@ -1,58 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package blockchain
//import "fmt"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/rpc"
// this function is only used by the RPC and is not used by the core and should be moved to RPC interface
/* fill up the above structure from the blockchain */
func (chain *Blockchain) GetBlockHeader(hash crypto.Hash) (result rpc.BlockHeader_Print, err error) {
bl, err := chain.Load_BL_FROM_ID(hash)
if err != nil {
return
}
result.TopoHeight = -1
if chain.Is_Block_Topological_order(hash) {
result.TopoHeight = chain.Load_Block_Topological_order(hash)
}
result.Height = chain.Load_Height_for_BL_ID(hash)
result.Depth = chain.Get_Height() - result.Height
result.Difficulty = chain.Load_Block_Difficulty(hash).String()
result.Hash = hash.String()
result.Major_Version = uint64(bl.Major_Version)
result.Minor_Version = uint64(bl.Minor_Version)
result.Nonce = uint64(bl.Nonce)
result.Orphan_Status = chain.Is_Block_Orphan(hash)
if result.TopoHeight >= chain.LocatePruneTopo()+10 { // this result may/may not be valid at just above prune heights
result.SyncBlock = chain.IsBlockSyncBlockHeight(hash)
}
result.SideBlock = chain.Isblock_SideBlock(hash)
//result.Reward = chain.Load_Block_Total_Reward(dbtx, hash)
result.TXCount = int64(len(bl.Tx_hashes))
for i := range bl.Tips {
result.Tips = append(result.Tips, bl.Tips[i].String())
}
//result.Prev_Hash = bl.Prev_Hash.String()
result.Timestamp = bl.Timestamp
return
}


@ -1,375 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package blockchain
//import "fmt"
import "math/big"
//import "github.com/romana/rlog"
import "github.com/deroproject/derohe/block"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/globals"
var (
// bigZero is 0 represented as a big.Int. It is defined here to avoid
// the overhead of creating it multiple times.
bigZero = big.NewInt(0)
// bigOne is 1 represented as a big.Int. It is defined here to avoid
// the overhead of creating it multiple times.
bigOne = big.NewInt(1)
// oneLsh256 is 1 shifted left 256 bits. It is defined here to avoid
// the overhead of creating it multiple times.
oneLsh256 = new(big.Int).Lsh(bigOne, 256)
// enabling this will enable simulation mode with hard coded difficulty set to 1
// the variable is knowingly not exported, so no one can tinker with it
//simulation = false // simulation mode is disabled
)
// HashToBig converts a PoW hash into a big.Int that can be used to
// perform math comparisons.
func HashToBig(buf crypto.Hash) *big.Int {
// A Hash is in little-endian, but the big package wants the bytes in
// big-endian, so reverse them.
blen := len(buf) // the hash is a fixed 32 bytes, but use len() for clarity
for i := 0; i < blen/2; i++ {
buf[i], buf[blen-1-i] = buf[blen-1-i], buf[i]
}
return new(big.Int).SetBytes(buf[:])
}
// this function calculates the difficulty in big num form
func ConvertDifficultyToBig(difficultyi uint64) *big.Int {
if difficultyi == 0 {
panic("difficulty can never be zero")
}
// (1 << 256) / (difficultyNum )
difficulty := new(big.Int).SetUint64(difficultyi)
return new(big.Int).Div(oneLsh256, difficulty)
}
func ConvertIntegerDifficultyToBig(difficultyi *big.Int) *big.Int {
if difficultyi.Cmp(bigZero) == 0 { // zero difficulty is invalid
panic("difficulty can never be zero")
}
return new(big.Int).Div(oneLsh256, difficultyi)
}
// this function checks whether the pow hash meets the difficulty criteria
func CheckPowHash(pow_hash crypto.Hash, difficulty uint64) bool {
big_difficulty := ConvertDifficultyToBig(difficulty)
big_pow_hash := HashToBig(pow_hash)
return big_pow_hash.Cmp(big_difficulty) <= 0 // valid when hash value is at or below the target
}
// this function checks whether the pow hash meets the difficulty criteria
// however, it takes the difficulty in big.Int format
func CheckPowHashBig(pow_hash crypto.Hash, big_difficulty_integer *big.Int) bool {
big_pow_hash := HashToBig(pow_hash)
big_difficulty := ConvertIntegerDifficultyToBig(big_difficulty_integer)
return big_pow_hash.Cmp(big_difficulty) <= 0 // valid when hash value is at or below the target
}
// this function finds a common base which can be used to compare tips based on cumulative difficulty
func (chain *Blockchain) find_best_tip_cumulative_difficulty(tips []crypto.Hash) (best crypto.Hash) {
tips_scores := make([]BlockScore, len(tips), len(tips))
for i := range tips {
tips_scores[i].BLID = tips[i] // we should choose the lowest weight
tips_scores[i].Cumulative_Difficulty = chain.Load_Block_Cumulative_Difficulty(tips[i])
}
sort_descending_by_cumulative_difficulty(tips_scores)
best = tips_scores[0].BLID
// base_height = scores[0].Weight
return best
}
// confirms whether the actual tip difficulty is within 9% deviation of the reference
// actual tip cannot be less than 91% of main tip
// if yes tip is okay, else tip should be declared stale
// both the tips should be in the store
func (chain *Blockchain) validate_tips(reference, actual crypto.Hash) (result bool) {
reference_diff := chain.Load_Block_Difficulty(reference)
actual_diff := chain.Load_Block_Difficulty(actual)
// multiply by 91
reference91 := new(big.Int).Mul(reference_diff, new(big.Int).SetUint64(91))
// divide by 100
reference91.Div(reference91, new(big.Int).SetUint64(100))
return reference91.Cmp(actual_diff) < 0
}
// when creating a new block, current_time in utc + chain_block_time must be added
// while verifying the block, expected time stamp should be replaced from what is in blocks header
// in DERO atlantis difficulty is based on previous tips
// get difficulty at specific tips,
// the algorithm is as follows: choose the biggest difficulty tip (// division is integer and not floating point)
// diff = (parent_diff + (parent_diff / 100 * max(1 - (parent_timestamp - parent_parent_timestamp) // (chain_block_time*2//3), -1))
// this should be more thoroughly evaluated
// NOTE: we need to evaluate if the mining adversary gains something if they set the time diff to 1
// we need to do more simulations and evaluations
func (chain *Blockchain) Get_Difficulty_At_Tips(tips []crypto.Hash) *big.Int {
var MinimumDifficulty *big.Int
if globals.IsMainnet() {
MinimumDifficulty = new(big.Int).SetUint64(config.MAINNET_MINIMUM_DIFFICULTY) // this must be controllable parameter
} else {
MinimumDifficulty = new(big.Int).SetUint64(config.TESTNET_MINIMUM_DIFFICULTY) // this must be controllable parameter
}
//MinimumDifficulty := new(big.Int).SetUint64(131072) // TODO this must be a controllable parameter
GenesisDifficulty := new(big.Int).SetUint64(1)
if chain.simulator {
return GenesisDifficulty
}
if len(tips) == 0 { // genesis block difficulty is 1
return GenesisDifficulty // it should be configurable via params
}
height := chain.Calculate_Height_At_Tips(tips)
// hard fork version 1 has difficulty set to 1
/*if 1 == chain.Get_Current_Version_at_Height(height) {
return new(big.Int).SetUint64(1)
}*/
/*
// if we are hardforking from 1 to 2
// we can start from high difficulty to find the right point
if height >= 1 && chain.Get_Current_Version_at_Height(height-1) == 1 && chain.Get_Current_Version_at_Height(height) == 2 {
if globals.IsMainnet() {
bootstrap_difficulty := new(big.Int).SetUint64(config.MAINNET_BOOTSTRAP_DIFFICULTY) // return bootstrap mainnet difficulty
rlog.Infof("Returning bootstrap difficulty %s at height %d", bootstrap_difficulty.String(), height)
return bootstrap_difficulty
} else {
bootstrap_difficulty := new(big.Int).SetUint64(config.TESTNET_BOOTSTRAP_DIFFICULTY)
rlog.Infof("Returning bootstrap difficulty %s at height %d", bootstrap_difficulty.String(), height)
return bootstrap_difficulty // return bootstrap difficulty for testnet
}
}
// if we are hardforking from 3 to 4
// we can start from high difficulty to find the right point
if height >= 1 && chain.Get_Current_Version_at_Height(height-1) <= 3 && chain.Get_Current_Version_at_Height(height) == 4 {
if globals.IsMainnet() {
bootstrap_difficulty := new(big.Int).SetUint64(config.MAINNET_BOOTSTRAP_DIFFICULTY_hf4) // return bootstrap mainnet difficulty
rlog.Infof("Returning bootstrap difficulty %s at height %d", bootstrap_difficulty.String(), height)
return bootstrap_difficulty
} else {
bootstrap_difficulty := new(big.Int).SetUint64(config.TESTNET_BOOTSTRAP_DIFFICULTY)
rlog.Infof("Returning bootstrap difficulty %s at height %d", bootstrap_difficulty.String(), height)
return bootstrap_difficulty // return bootstrap difficulty for testnet
}
}
*/
// for testing purposes, not possible on mainchain
if height < 3 && chain.Get_Current_Version_at_Height(height) <= 1 {
return MinimumDifficulty
}
/*
// build all blocks which are reachable
// process only which are close to the chain
reachable_blocks := chain.BuildReachableBlocks(dbtx,tips)
var difficulty_sum big.Int // used to calculate average difficulty
var average_difficulty big.Int
var lowest_average_difficulty big.Int
var block_count int64
for k,_ := range reachable_blocks{
height_of_k := chain.Load_Height_for_BL_ID(dbtx,k)
if (height - height_of_k) <= ((config.STABLE_LIMIT*3)/4) {
block_count++
difficulty_of_k := chain.Load_Block_Difficulty(dbtx, k)
difficulty_sum.Add(&difficulty_sum, difficulty_of_k)
}
}
// used to rate limit maximum drop over a certain number of blocks
average_difficulty.Div(&difficulty_sum,new(big.Int).SetInt64(block_count))
average_difficulty.Mul(&average_difficulty,new(big.Int).SetUint64(92)) //max 10 % drop
average_difficulty.Div(&average_difficulty,new(big.Int).SetUint64(100))
lowest_average_difficulty.Set(&average_difficulty) // difficulty can never drop less than this
*/
biggest_tip := chain.find_best_tip_cumulative_difficulty(tips)
biggest_difficulty := chain.Load_Block_Difficulty(biggest_tip)
// take the time from the most heavy block
parent_highest_time := chain.Load_Block_Timestamp(biggest_tip)
// find the parent's past tips and pick the one with the highest cumulative difficulty
parent_past := chain.Get_Block_Past(biggest_tip)
past_biggest_tip := chain.find_best_tip_cumulative_difficulty(parent_past)
parent_parent_highest_time := chain.Load_Block_Timestamp(past_biggest_tip)
if biggest_difficulty.Cmp(MinimumDifficulty) < 0 {
biggest_difficulty.Set(MinimumDifficulty)
}
// create 3 ranges, used for physical verification
/*
switch {
case (parent_highest_time - parent_parent_highest_time) <= 6: // increase diff
logger.Infof(" increase diff")
case (parent_highest_time - parent_parent_highest_time) >= 12: // decrease diff
logger.Infof(" decrease diff")
default :// between 6 to 12, 7,8,9,10,11 do nothing, return previous difficulty
logger.Infof("stable diff diff")
}*/
bigTime := new(big.Int).SetUint64(parent_highest_time)
bigParentTime := new(big.Int).SetUint64(parent_parent_highest_time)
// holds intermediate values to make the algo easier to read & audit
x := new(big.Int)
y := new(big.Int)
// 1 - (block_timestamp - parent_timestamp) // ((config.BLOCK_TIME*2)/3)
// the above creates the following ranges: 0-5 increase diff, 6-11 keep it constant, 12 and above decrease
big1 := new(big.Int).SetUint64(1)
block_time := config.BLOCK_TIME
// if chain.Get_Current_Version_at_Height(height) >= 4 {
// block_time = config.BLOCK_TIME_hf4
//}
big_block_chain_time_range := new(big.Int).SetUint64((block_time * 2) / 3)
DifficultyBoundDivisor := new(big.Int).SetUint64(100) // granularity of 100 steps to increase or decrease difficulty
bigmaxdifficulydrop := new(big.Int).SetInt64(-2) // this should ideally be .05% of the difficulty bound divisor, but currently it's 0.5 %
x.Sub(bigTime, bigParentTime)
x.Div(x, big_block_chain_time_range)
//logger.Infof(" block time - parent time %d %s / 6",parent_highest_time - parent_parent_highest_time, x.String())
x.Sub(big1, x)
//logger.Infof("x %s biggest %s lowest average %s ", x.String(), biggest_difficulty, lowest_average_difficulty.String())
// max(1 - (block_timestamp - parent_timestamp) // chain_block_time, -99)
if x.Cmp(bigmaxdifficulydrop) < 0 {
x.Set(bigmaxdifficulydrop)
}
// logger.Infof("x %s biggest %s ", x.String(), biggest_difficulty)
// (parent_diff + parent_diff // 2048 * max(1 - (block_timestamp - parent_timestamp) // 10, -99))
y.Div(biggest_difficulty, DifficultyBoundDivisor)
// decreases are 1/2 of increases
// this will cause the network to adjust slower to big difficulty drops
// but has more benefits
/*if x.Sign() < 0 {
logger.Infof("decrease will be 1//2 ")
y.Div(y, new(big.Int).SetUint64(2))
}*/
//logger.Infof("max increase/decrease %s x %s", y.String(), x.String())
x.Mul(y, x)
x.Add(biggest_difficulty, x)
/*
// if difficulty drop is more than X% than the average, limit it here
if x.Cmp(&lowest_average_difficulty) < 0{
x.Set(&lowest_average_difficulty)
}
*/
//
// minimum difficulty can ever be
if x.Cmp(MinimumDifficulty) < 0 {
x.Set(MinimumDifficulty)
}
// logger.Infof("Final diff %s biggest %s lowest average %s ", x.String(), biggest_difficulty, lowest_average_difficulty.String())
return x
}
func (chain *Blockchain) VerifyPoW(bl *block.Block) (verified bool) {
verified = false
//block_work := bl.GetBlockWork()
//PoW := crypto.Scrypt_1024_1_1_256(block_work)
//PoW := crypto.Keccak256(block_work)
PoW := bl.GetPoWHash()
block_difficulty := chain.Get_Difficulty_At_Tips(bl.Tips)
// test new difficulty checks whether they are equivalent to integer math
/*if CheckPowHash(PoW, block_difficulty.Uint64()) != CheckPowHashBig(PoW, block_difficulty) {
logger.Panicf("Difficulty mismatch between big and uint64 diff ")
}*/
if CheckPowHashBig(PoW, block_difficulty) {
return true
}
/* *
if CheckPowHash(PoW, block_difficulty.Uint64()) == true {
return true
}*/
return false
}
// this function calculates difficulty on the basis of previous difficulty and number of blocks
// THIS is the ideal algorithm for us as it will be optimal based on the number of orphan blocks
// we may deploy it when the block reward becomes insignificant in comparison to fees
// basically tail emission kicks in or we need to optimally increase number of blocks
// the algorithm does NOT work if the network has a single miner !!!
// this algorithm will work without the concept of time


@ -1,90 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package blockchain
import "fmt"
import "encoding/hex"
import "github.com/romana/rlog"
//import "github.com/deroproject/derosuite/address"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/block"
import "github.com/deroproject/derohe/globals"
//import "github.com/deroproject/derosuite/config"
/*
func Create_Miner_Transaction(height uint64, median_size uint64, already_generated_coins uint64,
current_block_size uint64, fee uint64,
miner_address address.Address, nonce []byte,
max_outs uint64, hard_fork uint64) (tx *transaction.Transaction, err error) {
return nil, nil
}
*/
// genesis transaction hash 5a18d9489bcd353aeaf4a19323d04e90353f98f0d7cc2a030cfd76e19495547d
// genesis amount 35184372088831
func Generate_Genesis_Block() (bl block.Block) {
genesis_tx_blob, err := hex.DecodeString(globals.Config.Genesis_Tx)
if err != nil {
panic("Failed to hex decode genesis Tx " + err.Error())
}
err = bl.Miner_TX.DeserializeHeader(genesis_tx_blob)
if err != nil {
panic(fmt.Sprintf("Failed to parse genesis tx err %s hex %s ", err, globals.Config.Genesis_Tx))
}
if !bl.Miner_TX.IsPremine() {
panic("miner tx not premine")
}
//rlog.Tracef(2, "Hash of Genesis Tx %x\n", bl.Miner_tx.GetHash())
// verify whether tx is coinbase and valid
// setup genesis block header
bl.Major_Version = 1
bl.Minor_Version = 1
bl.Timestamp = 0 // first block timestamp
var zerohash crypto.Hash
_ = zerohash
//bl.Tips = append(bl.Tips,zerohash)
//bl.Prev_hash is automatic zero
bl.Nonce = globals.Config.Genesis_Nonce
rlog.Tracef(2, "Hash of genesis block is %x", bl.GetHash())
serialized := bl.Serialize()
var bl2 block.Block
err = bl2.Deserialize(serialized)
if err != nil {
panic(fmt.Sprintf("error while serdes genesis block err %s", err))
}
if bl.GetHash() != bl2.GetHash() {
panic("hash mismatch serdes genesis block")
}
//rlog.Tracef(2, "Genesis Block PoW %x\n", bl.GetPoWHash())
return
}


@ -1,43 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package blockchain
import "bytes"
import "testing"
//import "github.com/deroproject/derohe/block"
func Test_Genesis_block(t *testing.T) {
bl := Generate_Genesis_Block()
//var bl block.Block
serialized := bl.Serialize()
err := bl.Deserialize(serialized)
if err != nil {
t.Error("Deserialization test failed for genesis block\n")
}
serialized2 := bl.Serialize()
if !bytes.Equal(serialized, serialized2) {
t.Errorf("serdes test failed for genesis block \n%x\n%x\n", serialized, serialized2)
return
}
}

// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package blockchain
import "github.com/deroproject/derohe/block"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/globals"
// the voting for hard fork works as follows
// block major version remains constant, while block minor version contains the next hard fork number,
// at trigger height, the last window_size blocks are counted as follows
// if minor_version == major_version, it is a negative vote
// if minor_version > major_version, it is a positive vote
// if threshold votes are positive, the next hard fork triggers
// this is work in progress
// hard forking is integrated deep within the blockchain as almost anything can be replaced in DERO without disruption
const default_voting_window_size = 6000 // this many votes will be counted
const default_vote_percent = 62         // 62 percent votes means the hard fork is locked in
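The voting rule described above can be sketched as a standalone helper (an illustrative sketch only; `countVotes` is a hypothetical name, and the daemon's actual counting lives in the commented-out `Recount_Votes`):

```go
package main

import "fmt"

// countVotes applies the rule above to a window of blocks:
// minor_version > major_version is a positive vote for the next fork,
// minor_version == major_version is a negative vote.
// The fork locks in when positive votes reach the threshold percentage.
func countVotes(majors, minors []uint64, windowSize, threshold int64) (votes int64, locked bool) {
	for i := range majors {
		if minors[i] > majors[i] {
			votes++
		}
	}
	locked = (votes*100)/windowSize >= threshold
	return
}

func main() {
	// 4 of 6 blocks vote for the next version: 66% >= 62% threshold, so locked
	majors := []uint64{1, 1, 1, 1, 1, 1}
	minors := []uint64{2, 2, 2, 2, 1, 1}
	votes, locked := countVotes(majors, minors, 6, 62)
	fmt.Println(votes, locked)
}
```

Note the integer division mirrors the `(Votes * 100) / Window_size` check used in `Recount_Votes` below.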
type Hard_fork struct {
Version int64 // which version will trigger
Height int64 // at what height hard fork will come into effect, trigger block
Window_size int64 // how many votes to count (x number of votes)
Threshold int64 // between 0 and 99 // percent number of votes required to lock in hardfork, 0 = mandatory
Votes int64 // number of votes in favor
Voted bool // whether voting resulted in hardfork
}
// current mainnet_hard_forks
var mainnet_hard_forks = []Hard_fork{
// {1, 0,0,0,0,true}, // dummy entry so as we can directly use the fork index into this entry
{1, 0, 0, 0, 0, true}, // version 1 hard fork where genesis block landed and chain migration occurs
// version 1 has difficulty hardcoded to 1
//{2, 95551, 0, 0, 0, true}, // version 2 hard fork where Atlantis bootstraps , it's mandatory
// {3, 721000, 0, 0, 0, true}, // version 3 hard fork emission fix, it's mandatory
}
// current testnet_hard_forks
var testnet_hard_forks = []Hard_fork{
{1, 0, 0, 0, 0, true}, // version 1 hard fork where genesis block landed
//{3, 0, 0, 0, 0, true}, // version 3 hard fork where we started , it's mandatory
//{4, 3, 0, 0, 0, true}, // version 4 hard fork where we change mining algorithm it's mandatory
}
// current simulation_hard_forks
// these can be tampered with for testing and other purposes
// this variable is exported so that simulations can play/test hard fork code
var Simulation_hard_forks = []Hard_fork{
{1, 0, 0, 0, 0, true}, // version 1 hard fork where genesis block landed
{2, 1, 0, 0, 0, true}, // version 2 hard fork where we started , it's mandatory
}
// at init time, suitable versions are selected
var current_hard_forks []Hard_fork
// init suitable structure based on mainnet/testnet selection at runtime
func init_hard_forks(params map[string]interface{}) {
// if simulation , load simulation features
if params["--simulator"] == true {
current_hard_forks = Simulation_hard_forks // enable simulator mode hard forks
logger.Debugf("simulator hardforks are online")
} else {
if globals.IsMainnet() {
current_hard_forks = mainnet_hard_forks
logger.Debugf("mainnet hardforks are online")
} else {
current_hard_forks = testnet_hard_forks
logger.Debugf("testnet hardforks are online")
}
}
// if voting is in progress, load all votes from db; since we do not store votes on disk,
// we will load all necessary blocks, counting votes
}
// check block version validity at specific height according to current network
func (chain *Blockchain) Check_Block_Version(bl *block.Block) (result bool) {
height := chain.Calculate_Height_At_Tips(bl.Tips)
if height == 0 && bl.Major_Version == 1 { // handle genesis block as exception
return true
}
// all blocks except genesis block land here
if bl.Major_Version == uint64(chain.Get_Current_Version_at_Height(height)) {
return true
}
return
}
// this func will recount votes, set whether the version is voted in
// only the main chain blocks are counted in
// this func must be called with chain in lock state
/*
func (chain *Blockchain) Recount_Votes() {
height := chain.Load_Height_for_BL_ID(chain.Get_Top_ID())
for i := len(current_hard_forks) - 1; i > 0; i-- {
// count votes only if voting is in progress
if 0 != current_hard_forks[i].Window_size && // if window_size > 0
height <= current_hard_forks[i].Height &&
height >= (current_hard_forks[i].Height-current_hard_forks[i].Window_size) { // start voting when required
hard_fork_locked := false
current_hard_forks[i].Votes = 0 // make votes zero, before counting
for j := height; j >= (current_hard_forks[i].Height - current_hard_forks[i].Window_size); j-- {
// load each block, and count the votes
hash, err := chain.Load_BL_ID_at_Height(j)
if err == nil {
bl, err := chain.Load_BL_FROM_ID(hash)
if err == nil {
if bl.Minor_Version == uint64(current_hard_forks[i].Version) {
current_hard_forks[i].Votes++
}
} else {
logger.Warnf("err loading block (%s) at height %d, chain height %d err %s", hash, j, height, err)
}
} else {
logger.Warnf("err loading block at height %d, chain height %d err %s", j, height, err)
}
}
// if necessary votes have been accumulated , lock in the hard fork
if ((current_hard_forks[i].Votes * 100) / current_hard_forks[i].Window_size) >= current_hard_forks[i].Threshold {
hard_fork_locked = true
}
current_hard_forks[i].Voted = hard_fork_locked // keep it as per status
}
}
}
*/
// this function returns information on whether the hf is proceeding as scheduled, whether everything is okay, etc.
func (chain *Blockchain) Get_HF_info() (state int, enabled bool, earliest_height, threshold, version, votes, window int64) {
state = 2 // default is everything is okay
enabled = true
topoheight := chain.Load_TOPO_HEIGHT()
block_id, err := chain.Load_Block_Topological_order_at_index(topoheight)
if err != nil {
return
}
bl, err := chain.Load_BL_FROM_ID(block_id)
if err != nil {
logger.Warnf("err loading block (%s) at topo height %d err %s", block_id, topoheight, err)
}
height := chain.Load_Height_for_BL_ID(block_id)
version = chain.Get_Current_Version_at_Height(height)
// check top block to see if the network is going through a hard fork
if bl.Major_Version != bl.Minor_Version { // network is going through voting
state = 0
enabled = false
}
if bl.Minor_Version != uint64(chain.Get_Ideal_Version_at_Height(height)) {
// we are NOT voting for the hard fork (or we are already broken); warn the user that we need an upgrade NOW
state = 1
enabled = false
version = int64(bl.Minor_Version)
}
if state == 0 { // we know our state is good, report back, good info
for i := range current_hard_forks {
if version == current_hard_forks[i].Version {
earliest_height = current_hard_forks[i].Height
threshold = current_hard_forks[i].Threshold
version = current_hard_forks[i].Version
votes = current_hard_forks[i].Votes
window = current_hard_forks[i].Window_size
}
}
}
return
}
// current hard fork version , block major version
// we may be at genesis block height
func (chain *Blockchain) Get_Current_Version() int64 { // it is last version voted or mandatory update
return chain.Get_Current_Version_at_Height(chain.Get_Height())
}
func (chain *Blockchain) Get_Current_BlockTime() uint64 { // it is last version voted or mandatory update
block_time := config.BLOCK_TIME
//if chain.Get_Current_Version() >= 4 {
// block_time= config.BLOCK_TIME_hf4
// }
return block_time
}
func (chain *Blockchain) Get_Current_Version_at_Height(height int64) int64 {
for i := len(current_hard_forks) - 1; i >= 0; i-- {
//logger.Infof("i %d height %d hf height %d",i, height,current_hard_forks[i].Height )
if height >= current_hard_forks[i].Height {
// if it was a mandatory fork handle it directly
if current_hard_forks[i].Threshold == 0 {
return current_hard_forks[i].Version
}
if current_hard_forks[i].Voted { // if the version was voted in, select it, otherwise try lower
return current_hard_forks[i].Version
}
}
}
return 0
}
// if we are voting, return the next expected version
func (chain *Blockchain) Get_Ideal_Version() int64 {
return chain.Get_Ideal_Version_at_Height(chain.Get_Height())
}
// used to cast vote
func (chain *Blockchain) Get_Ideal_Version_at_Height(height int64) int64 {
for i := len(current_hard_forks) - 1; i > 0; i-- {
// only voted during the period required
if height <= current_hard_forks[i].Height &&
height >= (current_hard_forks[i].Height-current_hard_forks[i].Window_size) { // start voting when required
return current_hard_forks[i].Version
}
}
// if we are not voting, return current version
return chain.Get_Current_Version_at_Height(height)
}
/*
// if the block major version is more than what we have in our index, display warning to user
func (chain *Blockchain) Display_Warning_If_Blocks_are_New(bl *block.Block) {
// check the biggest fork
if current_hard_forks[len(current_hard_forks )-1].version < bl.Major_Version {
logger.Warnf("We have seen new blocks floating with version number bigger than ours, please update the software")
}
return
}
*/

RESEARCH LICENSE
Version 1.1.2
I. DEFINITIONS.
"Licensee " means You and any other party that has entered into and has in effect a version of this License.
“Licensor” means DERO PROJECT(GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8) and its successors and assignees.
"Modifications" means any (a) change or addition to the Technology or (b) new source or object code implementing any portion of the Technology.
"Research Use" means research, evaluation, or development for the purpose of advancing knowledge, teaching, learning, or customizing the Technology for personal use. Research Use expressly excludes use or distribution for direct or indirect commercial (including strategic) gain or advantage.
"Technology" means the source code, object code and specifications of the technology made available by Licensor pursuant to this License.
"Technology Site" means the website designated by Licensor for accessing the Technology.
"You" means the individual executing this License or the legal entity or entities represented by the individual executing this License.
II. PURPOSE.
Licensor is licensing the Technology under this Research License (the "License") to promote research, education, innovation, and development using the Technology.
COMMERCIAL USE AND DISTRIBUTION OF TECHNOLOGY AND MODIFICATIONS IS PERMITTED ONLY UNDER AN APPROPRIATE COMMERCIAL USE LICENSE AVAILABLE FROM LICENSOR AT <url>.
III. RESEARCH USE RIGHTS.
A. Subject to the conditions contained herein, Licensor grants to You a non-exclusive, non-transferable, worldwide, and royalty-free license to do the following for Your Research Use only:
1. reproduce, create Modifications of, and use the Technology alone, or with Modifications;
2. share source code of the Technology alone, or with Modifications, with other Licensees;
3. distribute object code of the Technology, alone, or with Modifications, to any third parties for Research Use only, under a license of Your choice that is consistent with this License; and
4. publish papers and books discussing the Technology which may include relevant excerpts that do not in the aggregate constitute a significant portion of the Technology.
B. Residual Rights. You may use any information in intangible form that you remember after accessing the Technology, except when such use violates Licensor's copyrights or patent rights.
C. No Implied Licenses. Other than the rights granted herein, Licensor retains all rights, title, and interest in Technology , and You retain all rights, title, and interest in Your Modifications and associated specifications, subject to the terms of this License.
D. Open Source Licenses. Portions of the Technology may be provided with notices and open source licenses from open source communities and third parties that govern the use of those portions, and any licenses granted hereunder do not alter any rights and obligations you may have under such open source licenses, however, the disclaimer of warranty and limitation of liability provisions in this License will apply to all Technology in this distribution.
IV. INTELLECTUAL PROPERTY REQUIREMENTS
As a condition to Your License, You agree to comply with the following restrictions and responsibilities:
A. License and Copyright Notices. You must include a copy of this License in a Readme file for any Technology or Modifications you distribute. You must also include the following statement, "Use and distribution of this technology is subject to the Java Research License included herein", (a) once prominently in the source code tree and/or specifications for Your source code distributions, and (b) once in the same file as Your copyright or proprietary notices for Your binary code distributions. You must cause any files containing Your Modification to carry prominent notice stating that You changed the files. You must not remove or alter any copyright or other proprietary notices in the Technology.
B. Licensee Exchanges. Any Technology and Modifications You receive from any Licensee are governed by this License.
V. GENERAL TERMS.
A. Disclaimer Of Warranties.
TECHNOLOGY IS PROVIDED "AS IS", WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, WARRANTIES THAT ANY SUCH TECHNOLOGY IS FREE OF DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR PURPOSE, OR NON-INFRINGING OF THIRD PARTY RIGHTS. YOU AGREE THAT YOU BEAR THE ENTIRE RISK IN CONNECTION WITH YOUR USE AND DISTRIBUTION OF ANY AND ALL TECHNOLOGY UNDER THIS LICENSE.
B. Infringement; Limitation Of Liability.
1. If any portion of, or functionality implemented by, the Technology becomes the subject of a claim or threatened claim of infringement ("Affected Materials"), Licensor may, in its unrestricted discretion, suspend Your rights to use and distribute the Affected Materials under this License. Such suspension of rights will be effective immediately upon Licensor's posting of notice of suspension on the Technology Site.
2. IN NO EVENT WILL LICENSOR BE LIABLE FOR ANY DIRECT, INDIRECT, PUNITIVE, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH OR ARISING OUT OF THIS LICENSE (INCLUDING, WITHOUT LIMITATION, LOSS OF PROFITS, USE, DATA, OR ECONOMIC ADVANTAGE OF ANY SORT), HOWEVER IT ARISES AND ON ANY THEORY OF LIABILITY (including negligence), WHETHER OR NOT LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. LIABILITY UNDER THIS SECTION V.B.2 SHALL BE SO LIMITED AND EXCLUDED, NOTWITHSTANDING FAILURE OF THE ESSENTIAL PURPOSE OF ANY REMEDY.
C. Termination.
1. You may terminate this License at any time by notifying Licensor in writing.
2. All Your rights will terminate under this License if You fail to comply with any of its material terms or conditions and do not cure such failure within thirty (30) days after becoming aware of such noncompliance.
3. Upon termination, You must discontinue all uses and distribution of the Technology , and all provisions of this Section V shall survive termination.
D. Miscellaneous.
1. Trademark. You agree to comply with Licensor's Trademark & Logo Usage Requirements, if any and as modified from time to time, available at the Technology Site. Except as expressly provided in this License, You are granted no rights in or to any Licensor's trademarks now or hereafter used or licensed by Licensor.
2. Integration. This License represents the complete agreement of the parties concerning the subject matter hereof.
3. Severability. If any provision of this License is held unenforceable, such provision shall be reformed to the extent necessary to make it enforceable unless to do so would defeat the intent of the parties, in which case, this License shall terminate.
4. Governing Law. This License is governed by the laws of the United States and the State of California, as applied to contracts entered into and performed in California between California residents. In no event shall this License be construed against the drafter.
5. Export Control. You agree to comply with the U.S. export controls and trade laws of other countries that apply to Technology and Modifications.
READ ALL THE TERMS OF THIS LICENSE CAREFULLY BEFORE ACCEPTING.
BY CLICKING ON THE YES BUTTON BELOW OR USING THE TECHNOLOGY, YOU ARE ACCEPTING AND AGREEING TO ABIDE BY THE TERMS AND CONDITIONS OF THIS LICENSE. YOU MUST BE AT LEAST 18 YEARS OF AGE AND OTHERWISE COMPETENT TO ENTER INTO CONTRACTS.
IF YOU DO NOT MEET THESE CRITERIA, OR YOU DO NOT AGREE TO ANY OF THE TERMS OF THIS LICENSE, DO NOT USE THIS SOFTWARE IN ANY FORM.

// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package mempool
import "os"
import "fmt"
import "sync"
import "sort"
import "time"
import "sync/atomic"
import "path/filepath"
import "encoding/hex"
import "encoding/json"
import "github.com/romana/rlog"
import log "github.com/sirupsen/logrus"
import "github.com/deroproject/derohe/transaction"
import "github.com/deroproject/derohe/globals"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/cryptography/crypto"
// this is only used for sorting and nothing else
type TX_Sorting_struct struct {
FeesPerByte uint64 // this is fees per byte
Hash crypto.Hash // transaction hash
Size uint64 // transaction size
}
// NOTE: do NOT consider this code as useless, as it is used to avoid double spending attacks within the block and within the pool
// let me explain, since we are a state machine, we add blocks to our blockchain
// so, if a double spending attack comes, 2 transactions with same inputs, we reject one of them
// the algo is documented somewhere else which explains the entire process
// at this point in time, this is an ultrafast, hastily written mempool,
// it will not scale for more than 10000 transactions but is good enough for now
// we can always come back and rewrite it
// NOTE: the pool is now persistent
type Mempool struct {
txs sync.Map //map[crypto.Hash]*mempool_object
key_images sync.Map //map[crypto.Hash]bool // contains key images of all txs
sorted_by_fee []crypto.Hash // contains txids sorted by fees
sorted []TX_Sorting_struct // contains TX sorting information, so as new block can be forged easily
modified bool // used to monitor whether mempool contents have changed
height uint64 // track blockchain height
P2P_TX_Relayer p2p_TX_Relayer // actual pointer, setup by the dero daemon during runtime
relayer chan crypto.Hash // used for immediate relay
// global variable, but no use is seen here except for tx verification
//chain *Blockchain
Exit_Mutex chan bool
sync.Mutex
}
// this object is serialized and deserialized
type mempool_object struct {
Tx *transaction.Transaction
Added uint64 // time in epoch format
Height uint64 // at which height the tx unlocks in the mempool
Relayed int // relayed count
RelayedAt int64 // when was tx last relayed
Size uint64 // size in bytes of the TX
FEEperBYTE uint64 // fee per byte
}
var loggerpool *log.Entry
// marshal object as json
func (obj *mempool_object) MarshalJSON() ([]byte, error) {
return json.Marshal(&struct {
Tx string `json:"tx"` // hex encoding
Added uint64 `json:"added"`
Height uint64 `json:"height"`
Relayed int `json:"relayed"`
RelayedAt int64 `json:"relayedat"`
}{
Tx: hex.EncodeToString(obj.Tx.Serialize()),
Added: obj.Added,
Height: obj.Height,
Relayed: obj.Relayed,
RelayedAt: obj.RelayedAt,
})
}
// unmarshal object from json encoding
func (obj *mempool_object) UnmarshalJSON(data []byte) error {
aux := &struct {
Tx string `json:"tx"`
Added uint64 `json:"added"`
Height uint64 `json:"height"`
Relayed int `json:"relayed"`
RelayedAt int64 `json:"relayedat"`
}{}
if err := json.Unmarshal(data, &aux); err != nil {
return err
}
obj.Added = aux.Added
obj.Height = aux.Height
obj.Relayed = aux.Relayed
obj.RelayedAt = aux.RelayedAt
tx_bytes, err := hex.DecodeString(aux.Tx)
if err != nil {
return err
}
obj.Size = uint64(len(tx_bytes))
obj.Tx = &transaction.Transaction{}
err = obj.Tx.DeserializeHeader(tx_bytes)
if err == nil {
obj.FEEperBYTE = obj.Tx.Fees() / obj.Size
}
return err
}
func Init_Mempool(params map[string]interface{}) (*Mempool, error) {
var mempool Mempool
//mempool.chain = params["chain"].(*Blockchain)
loggerpool = globals.Logger.WithFields(log.Fields{"com": "POOL"}) // all components must use this logger
loggerpool.Infof("Mempool started")
atomic.AddUint32(&globals.Subsystem_Active, 1) // increment subsystem
mempool.relayer = make(chan crypto.Hash, 1024*10)
mempool.Exit_Mutex = make(chan bool)
// initialize maps
//mempool.txs = map[crypto.Hash]*mempool_object{}
//mempool.key_images = map[crypto.Hash]bool{}
//TODO load any transactions saved at previous exit
mempool_file := filepath.Join(globals.GetDataDirectory(), "mempool.json")
file, err := os.Open(mempool_file)
if err != nil {
loggerpool.Warnf("Error opening mempool data file %s err %s", mempool_file, err)
} else {
defer file.Close()
var objects []mempool_object
decoder := json.NewDecoder(file)
err = decoder.Decode(&objects)
if err != nil {
loggerpool.Warnf("Error unmarshalling mempool data err %s", err)
} else { // successfully unmarshalled data, add it to mempool
loggerpool.Debugf("Will try to load %d txs from mempool file", (len(objects)))
for i := range objects {
result := mempool.Mempool_Add_TX(objects[i].Tx, 0)
if result { // setup time
//mempool.txs[objects[i].Tx.GetHash()] = &objects[i] // setup time and other artifacts
mempool.txs.Store(objects[i].Tx.GetHash(), &objects[i])
}
}
}
}
go mempool.Relayer_and_Cleaner()
return &mempool, nil
}
// this is created per incoming block and then discarded
// This does not require shutting down and will be garbage collected automatically
/*
func Init_Block_Mempool(params map[string]interface{}) (*Mempool, error) {
var mempool Mempool
// initialize maps
//mempool.txs = map[crypto.Hash]*mempool_object{}
//mempool.key_images = map[crypto.Hash]bool{}
return &mempool, nil
}
*/
func (pool *Mempool) HouseKeeping(height uint64) {
pool.height = height
// this code is executed in conditions which are as follows
// we have to purge old txs which can no longer be mined
var delete_list []crypto.Hash
pool.txs.Range(func(k, value interface{}) bool {
txhash := k.(crypto.Hash)
v := value.(*mempool_object)
if height >= (v.Tx.Height + 3*config.BLOCK_BATCH_SIZE + 1) { // once we have moved this many heights, chances of a reorg are almost nil
delete_list = append(delete_list, txhash)
}
return true
})
for i := range delete_list {
pool.Mempool_Delete_TX(delete_list[i])
}
}
func (pool *Mempool) Shutdown() {
//TODO save mempool tx somewhere
close(pool.Exit_Mutex) // stop relaying
pool.Lock()
defer pool.Unlock()
mempool_file := filepath.Join(globals.GetDataDirectory(), "mempool.json")
// collect all txs in pool and serialize them and store them
var objects []mempool_object
pool.txs.Range(func(k, value interface{}) bool {
v := value.(*mempool_object)
objects = append(objects, *v)
return true
})
/*for _, v := range pool.txs {
objects = append(objects, *v)
}*/
var file, err = os.Create(mempool_file)
if err == nil {
defer file.Close()
encoder := json.NewEncoder(file)
encoder.SetIndent("", "\t")
err = encoder.Encode(objects)
if err != nil {
loggerpool.Warnf("Error marshaling mempool data err %s", err)
}
} else {
loggerpool.Warnf("Error creating new file to store mempool data file %s err %s", mempool_file, err)
}
loggerpool.Infof("Successfully saved %d txs to file", len(objects))
loggerpool.Infof("Mempool stopped")
atomic.AddUint32(&globals.Subsystem_Active, ^uint32(0)) // this decrement 1 fom subsystem
}
// start pool monitoring for changes for some specific time
// this is required so as we can add or discard transactions while selecting work for mining
func (pool *Mempool) Monitor() {
pool.Lock()
pool.modified = false
pool.Unlock()
}
// return whether pool contents have changed
func (pool *Mempool) HasChanged() (result bool) {
pool.Lock()
result = pool.modified
pool.Unlock()
return
}
// a tx should only be added to pool after verification is complete
func (pool *Mempool) Mempool_Add_TX(tx *transaction.Transaction, Height uint64) (result bool) {
result = false
pool.Lock()
defer pool.Unlock()
var object mempool_object
tx_hash := crypto.Hash(tx.GetHash())
if pool.Mempool_Keyimage_Spent(tx.Payloads[0].Proof.Nonce1()) {
rlog.Debugf("Rejecting TX, since nonce already seen %x", tx_hash)
return false
}
if pool.Mempool_Keyimage_Spent(tx.Payloads[0].Proof.Nonce2()) {
rlog.Debugf("Rejecting TX, since nonce already seen %x", tx_hash)
return false
}
// check if tx already exists, skip it
if _, ok := pool.txs.Load(tx_hash); ok {
//rlog.Debugf("Pool already contains %s, skipping", tx_hash)
return false
}
// add all the key images to check double spend attack within the pool
//TODO
// for i := 0; i < len(tx.Vin); i++ {
// pool.key_images.Store(tx.Vin[i].(transaction.Txin_to_key).K_image,true) // add element to map for next check
// }
pool.key_images.Store(tx.Payloads[0].Proof.Nonce1(), true)
pool.key_images.Store(tx.Payloads[0].Proof.Nonce2(), true)
// we are here means we can add it to pool
object.Tx = tx
object.Height = Height
object.Added = uint64(time.Now().UTC().Unix())
object.Size = uint64(len(tx.Serialize()))
object.FEEperBYTE = tx.Fees() / object.Size
pool.txs.Store(tx_hash, &object)
pool.relayer <- tx_hash
pool.modified = true // pool has been modified
//pool.sort_list() // sort and update pool list
return true
}
// check whether a tx exists in the pool
func (pool *Mempool) Mempool_TX_Exist(txid crypto.Hash) (result bool) {
//pool.Lock()
//defer pool.Unlock()
if _, ok := pool.txs.Load(txid); ok {
return true
}
return false
}
// check whether a keyimage exists in the pool
func (pool *Mempool) Mempool_Keyimage_Spent(ki crypto.Hash) (result bool) {
//pool.Lock()
//defer pool.Unlock()
if _, ok := pool.key_images.Load(ki); ok {
return true
}
return false
}
// delete specific tx from pool and return it
// if nil is returned Tx was not found in pool
func (pool *Mempool) Mempool_Delete_TX(txid crypto.Hash) (tx *transaction.Transaction) {
//pool.Lock()
//defer pool.Unlock()
var ok bool
var objecti interface{}
// check if tx already exists, skip it
if objecti, ok = pool.txs.Load(txid); !ok {
rlog.Warnf("Pool does NOT contain %s, returning nil", txid)
return nil
}
// we reached here means we have the tx; remove it from our list, do maintenance cleanup and discard it
object := objecti.(*mempool_object)
tx = object.Tx
pool.txs.Delete(txid)
// remove all the key images
//TODO
// for i := 0; i < len(object.Tx.Vin); i++ {
// pool.key_images.Delete(object.Tx.Vin[i].(transaction.Txin_to_key).K_image)
// }
pool.key_images.Delete(tx.Payloads[0].Proof.Nonce1())
pool.key_images.Delete(tx.Payloads[0].Proof.Nonce2())
//pool.sort_list() // sort and update pool list
pool.modified = true // pool has been modified
return object.Tx // return the tx
}
// get specific tx from mem pool without removing it
func (pool *Mempool) Mempool_Get_TX(txid crypto.Hash) (tx *transaction.Transaction) {
// pool.Lock()
// defer pool.Unlock()
var ok bool
var objecti interface{}
if objecti, ok = pool.txs.Load(txid); !ok {
//loggerpool.Warnf("Pool does NOT contain %s, returning nil", txid)
return nil
}
// we reached here means, we have the tx, return the pointer back
//object := pool.txs[txid]
object := objecti.(*mempool_object)
return object.Tx
}
// return list of all txs in pool
func (pool *Mempool) Mempool_List_TX() []crypto.Hash {
// pool.Lock()
// defer pool.Unlock()
var list []crypto.Hash
pool.txs.Range(func(k, value interface{}) bool {
txhash := k.(crypto.Hash)
//v := value.(*mempool_object)
//objects = append(objects, *v)
list = append(list, txhash)
return true
})
//pool.sort_list() // sort and update pool list
// list should be as big as source list
//list := make([]crypto.Hash, len(pool.sorted_by_fee), len(pool.sorted_by_fee))
//copy(list, pool.sorted_by_fee) // return list sorted by fees
return list
}
// passes back sorting information and length information for easier new block forging
func (pool *Mempool) Mempool_List_TX_SortedInfo() []TX_Sorting_struct {
// pool.Lock()
// defer pool.Unlock()
_, data := pool.sort_list() // sort and update pool list
return data
/* // list should be as big as source list
list := make([]TX_Sorting_struct, len(pool.sorted), len(pool.sorted))
copy(list, pool.sorted) // return list sorted by fees
return list
*/
}
// print current mempool txs
// TODO add sorting
func (pool *Mempool) Mempool_Print() {
pool.Lock()
defer pool.Unlock()
var klist []crypto.Hash
var vlist []*mempool_object
pool.txs.Range(func(k, value interface{}) bool {
txhash := k.(crypto.Hash)
v := value.(*mempool_object)
//objects = append(objects, *v)
klist = append(klist, txhash)
vlist = append(vlist, v)
return true
})
fmt.Printf("Total TX in mempool = %d\n", len(klist))
fmt.Printf("%20s %14s %7s %7s %6s %32s\n", "Added", "Last Relayed", "Relayed", "Size", "Height", "TXID")
for i := range klist {
k := klist[i]
v := vlist[i]
fmt.Printf("%20s %14s %7d %7d %6d %32s\n", time.Unix(int64(v.Added), 0).UTC().Format(time.RFC3339), time.Duration(v.RelayedAt)*time.Second, v.Relayed,
len(v.Tx.Serialize()), v.Tx.Height, k)
}
}
// flush mempool
func (pool *Mempool) Mempool_flush() {
var list []crypto.Hash
pool.txs.Range(func(k, value interface{}) bool {
txhash := k.(crypto.Hash)
//v := value.(*mempool_object)
//objects = append(objects, *v)
list = append(list, txhash)
return true
})
fmt.Printf("Total TX in mempool = %d \n", len(list))
fmt.Printf("Flushing mempool \n")
for i := range list {
pool.Mempool_Delete_TX(list[i])
}
}
// sorts the pool internally
// this function assumes lock is already taken
// ??? if we are selecting transactions randomly, why keep them sorted
func (pool *Mempool) sort_list() ([]crypto.Hash, []TX_Sorting_struct) {
data := make([]TX_Sorting_struct, 0, 512) // we are rarely expecting more than this many entries in mempool
// collect data from pool for sorting
pool.txs.Range(func(k, value interface{}) bool {
txhash := k.(crypto.Hash)
v := value.(*mempool_object)
if v.Height <= pool.height {
data = append(data, TX_Sorting_struct{Hash: txhash, FeesPerByte: v.FEEperBYTE, Size: v.Size})
}
return true
})
// inverted comparison to get a descending sort
sort.SliceStable(data, func(i, j int) bool { return data[i].FeesPerByte > data[j].FeesPerByte })
sorted_list := make([]crypto.Hash, 0, len(data))
//pool.sorted_by_fee = pool.sorted_by_fee[:0] // empty old slice
for i := range data {
sorted_list = append(sorted_list, data[i].Hash)
}
//pool.sorted = data
return sorted_list, data
}
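The descending fee-per-byte ordering that sort_list produces can be sketched in isolation. Here txInfo and sortByFees are simplified stand-ins for TX_Sorting_struct and the loop above, not part of this package:

```go
package main

import (
	"fmt"
	"sort"
)

// txInfo is a simplified stand-in for TX_Sorting_struct.
type txInfo struct {
	Hash        string
	FeesPerByte uint64
}

// sortByFees returns hashes ordered by fees-per-byte, highest first,
// using the same inverted-comparison stable sort as sort_list.
func sortByFees(data []txInfo) []string {
	sort.SliceStable(data, func(i, j int) bool { return data[i].FeesPerByte > data[j].FeesPerByte })
	out := make([]string, 0, len(data))
	for i := range data {
		out = append(out, data[i].Hash)
	}
	return out
}

func main() {
	fmt.Println(sortByFees([]txInfo{{"a", 10}, {"b", 30}, {"c", 20}})) // [b c a]
}
```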
type p2p_TX_Relayer func(*transaction.Transaction, uint64) int // function type, exported in p2p but cannot use due to cyclic dependency
// this tx relayer keeps relaying txs and cleaning the mempool
// if a tx has been relayed to fewer than 10 peers, relaying is aggressive
// otherwise the tx is relayed every 30 minutes, until it has been relayed to 20 peers
// then the tx is relayed every 3 hours, just in case
func (pool *Mempool) Relayer_and_Cleaner() {
for {
select {
case txid := <-pool.relayer:
if objecti, ok := pool.txs.Load(txid); !ok {
break
} else {
// reaching here means we have the tx, so take the pointer back
object := objecti.(*mempool_object)
if pool.P2P_TX_Relayer != nil {
relayed_count := pool.P2P_TX_Relayer(object.Tx, 0)
//relayed_count := 0
if relayed_count > 0 {
object.Relayed += relayed_count
rlog.Tracef(1, "Relayed %s to %d peers (%d %d)", txid, relayed_count, object.Relayed, (time.Now().Unix() - object.RelayedAt))
object.RelayedAt = time.Now().Unix()
}
}
}
case <-pool.Exit_Mutex:
return
case <-time.After(400 * time.Millisecond):
}
sent_count := 0
//pool.Lock()
//loggerpool.Warnf("send Pool lock taken")
pool.txs.Range(func(ktmp, value interface{}) bool {
k := ktmp.(crypto.Hash)
v := value.(*mempool_object)
select { // exit fast if possible
case <-pool.Exit_Mutex:
return false
default:
}
if sent_count > 200 { // send a burst of 200 txs max in 1 go
return false
}
if v.Height <= pool.height { // only carry out activities for valid txs
if v.Relayed < 10 || // relay it now
(v.Relayed >= 4 && v.Relayed <= 20 && (time.Now().Unix()-v.RelayedAt) > 5) || // relay it now
(time.Now().Unix()-v.RelayedAt) > 4 {
if pool.P2P_TX_Relayer != nil {
relayed_count := pool.P2P_TX_Relayer(v.Tx, 0)
//relayed_count := 0
if relayed_count > 0 {
v.Relayed += relayed_count
sent_count++
//loggerpool.Debugf("%d %d\n",time.Now().Unix(), v.RelayedAt)
rlog.Tracef(1, "Relayed %s to %d peers (%d %d)", k, relayed_count, v.Relayed, (time.Now().Unix() - v.RelayedAt))
v.RelayedAt = time.Now().Unix()
//loggerpool.Debugf("%d %d",time.Now().Unix(), v.RelayedAt)
}
}
}
}
return true
})
// loggerpool.Warnf("send Pool lock released")
//pool.Unlock()
}
}

// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package blockchain
import "fmt"
import "time"
import "sync"
import "strings"
import "math/rand"
import "runtime/debug"
import "golang.org/x/xerrors"
import "golang.org/x/time/rate"
import "github.com/romana/rlog"
// this file creates the blobs which can be used to mine new blocks
import "github.com/deroproject/derohe/block"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/globals"
import "github.com/deroproject/derohe/rpc"
//import "github.com/deroproject/derohe/emission"
import "github.com/deroproject/derohe/transaction"
import "github.com/deroproject/graviton"
//NOTE: this function is quite naughty due to chicken and egg problem
// we cannot calculate the reward until we know the block size (exact to the byte)
// also note that we cannot calculate the block size until we know the reward
// how we do it:
// the reward field is a varint (can be up to 8 bytes)
// so the max deviation can be 8 bytes, due to the reward
// so we brute force until the reward is obtained, but let's try to KISS
// the top hash over which to do mining now ( it should already be in the chain)
// this is work in progress
// TODO we need to rework fees algorithm, to improve its quality and lower fees
func (chain *Blockchain) Create_new_miner_block(miner_address rpc.Address, tx *transaction.Transaction) (cbl *block.Complete_Block, bl block.Block) {
//chain.Lock()
//defer chain.Unlock()
var err error
cbl = &block.Complete_Block{}
if tx != nil { // make sure tx is registration and it valid
if tx.IsRegistration() {
if tx.IsRegistrationValid() {
} else {
err = fmt.Errorf("Registration TX is invalid")
}
} else {
err = fmt.Errorf("TX is not registration")
}
if err != nil {
panic(err)
}
}
topoheight := chain.Load_TOPO_HEIGHT()
toporecord, err := chain.Store.Topo_store.Read(topoheight)
if err != nil {
panic(err)
}
ss, err := chain.Store.Balance_store.LoadSnapshot(toporecord.State_Version)
if err != nil {
panic(err)
}
balance_tree, err := ss.GetTree(config.BALANCE_TREE)
if err != nil {
panic(err)
}
// use best 3 tips
//tips := chain.SortTips( chain.Load_TIPS_ATOMIC())
tips := chain.SortTips(chain.Get_TIPS())
for i := range tips {
if len(bl.Tips) >= 3 {
break
}
if !chain.verifyNonReachabilitytips(append([]crypto.Hash{tips[i]}, bl.Tips...)) { // avoid any tips which fail reachability test
continue
}
if len(bl.Tips) == 0 || (len(bl.Tips) >= 1 && chain.Load_Height_for_BL_ID(bl.Tips[0]) >= chain.Load_Height_for_BL_ID(tips[i]) && chain.Load_Height_for_BL_ID(bl.Tips[0])-chain.Load_Height_for_BL_ID(tips[i]) <= config.STABLE_LIMIT/2) {
bl.Tips = append(bl.Tips, tips[i])
}
}
//fmt.Printf("miner block placing tips %+v\n", bl.Tips)
height := chain.Calculate_Height_At_Tips(bl.Tips) // we are 1 higher than previous highest tip
var tx_hash_list_included []crypto.Hash // these tx will be included ( due to block size limit )
sizeoftxs := uint64(0) // size of all non coinbase tx included within this block
//fees_collected := uint64(0)
nonce_map, err := chain.BuildNonces(bl.Tips)
if err != nil {
panic(err)
}
local_nonce_map := map[crypto.Hash]bool{}
_ = sizeoftxs
// add up to 100 registration txs; each registration tx is 99 bytes, so 100 txs take 9900 bytes, or ~10KB
{
tx_hash_list_sorted := chain.Regpool.Regpool_List_TX() // hash of all tx expected to be included within this block , sorted by fees
for i := range tx_hash_list_sorted {
tx := chain.Regpool.Regpool_Get_TX(tx_hash_list_sorted[i])
if tx != nil {
_, err = balance_tree.Get(tx.MinerAddress[:])
if err != nil {
if xerrors.Is(err, graviton.ErrNotFound) { // address needs registration
cbl.Txs = append(cbl.Txs, tx)
tx_hash_list_included = append(tx_hash_list_included, tx_hash_list_sorted[i])
} else {
panic(err)
}
}
}
}
}
//rlog.Infof("Total tx in pool %d", len(tx_hash_list_sorted))
//reachable_key_images := chain.BuildReachabilityKeyImages(dbtx, &bl) // this requires only bl.Tips
// select 10% tx based on fees
// select 90% tx randomly
// random selection helps us to easily reach 80 TPS
// first, let's find the tx fees collected by consuming txs from the mempool
tx_hash_list_sorted := chain.Mempool.Mempool_List_TX_SortedInfo() // hash of all tx expected to be included within this block , sorted by fees
i := 0
for ; i < len(tx_hash_list_sorted); i++ {
if (sizeoftxs + tx_hash_list_sorted[i].Size) > (10*config.STARGATE_HE_MAX_BLOCK_SIZE)/100 { // limit block to max possible
break
}
tx := chain.Mempool.Mempool_Get_TX(tx_hash_list_sorted[i].Hash)
if tx != nil && Verify_Transaction_NonCoinbase_Height(tx, uint64(height)) {
/*
// skip and delete any mempool tx
if chain.Verify_Transaction_NonCoinbase_DoubleSpend_Check( tx) == false {
chain.Mempool.Mempool_Delete_TX(tx_hash_list_sorted[i].Hash)
continue
}
failed := false
for j := 0; j < len(tx.Vin); j++ {
if _, ok := reachable_key_images[tx.Vin[j].(transaction.Txin_to_key).K_image]; ok {
rlog.Warnf("TX already in history, but tx %s is still in mempool HOW ?? skipping it", tx_hash_list_sorted[i].Hash)
failed = true
break
}
}
if failed {
continue
}
*/
if nonce_map[tx.Payloads[0].Proof.Nonce1()] || nonce_map[tx.Payloads[0].Proof.Nonce2()] ||
	local_nonce_map[tx.Payloads[0].Proof.Nonce1()] || local_nonce_map[tx.Payloads[0].Proof.Nonce2()] {
continue // skip this tx
}
cbl.Txs = append(cbl.Txs, tx)
tx_hash_list_included = append(tx_hash_list_included, tx_hash_list_sorted[i].Hash)
local_nonce_map[tx.Payloads[0].Proof.Nonce1()] = true
local_nonce_map[tx.Payloads[0].Proof.Nonce2()] = true
rlog.Tracef(1, "Adding Top Sorted tx %s to Complete_Block current size %.2f KB max possible %.2f KB\n", tx_hash_list_sorted[i].Hash, float32(sizeoftxs+tx_hash_list_sorted[i].Size)/1024.0, float32(config.STARGATE_HE_MAX_BLOCK_SIZE)/1024.0)
sizeoftxs += tx_hash_list_sorted[i].Size
}
}
// any left over transactions, should be randomly selected
tx_hash_list_sorted = tx_hash_list_sorted[i:]
// do random shuffling, can we get away with len/2 random shuffling
rand.Shuffle(len(tx_hash_list_sorted), func(i, j int) {
tx_hash_list_sorted[i], tx_hash_list_sorted[j] = tx_hash_list_sorted[j], tx_hash_list_sorted[i]
})
// if we were crossing the limit, transactions will be randomly selected
// otherwise they will be sorted by fees
// now select as much as possible
for i := range tx_hash_list_sorted {
if (sizeoftxs + tx_hash_list_sorted[i].Size) > (config.STARGATE_HE_MAX_BLOCK_SIZE) { // limit block to max possible
break
}
tx := chain.Mempool.Mempool_Get_TX(tx_hash_list_sorted[i].Hash)
if tx != nil && Verify_Transaction_NonCoinbase_Height(tx, uint64(height)){
if nonce_map[tx.Payloads[0].Proof.Nonce1()] || nonce_map[tx.Payloads[0].Proof.Nonce2()] ||
	local_nonce_map[tx.Payloads[0].Proof.Nonce1()] || local_nonce_map[tx.Payloads[0].Proof.Nonce2()] {
continue // skip this tx
}
cbl.Txs = append(cbl.Txs, tx)
tx_hash_list_included = append(tx_hash_list_included, tx_hash_list_sorted[i].Hash)
local_nonce_map[tx.Payloads[0].Proof.Nonce1()] = true
local_nonce_map[tx.Payloads[0].Proof.Nonce2()] = true
rlog.Tracef(1, "Adding Random tx %s to Complete_Block current size %.2f KB max possible %.2f KB\n", tx_hash_list_sorted[i].Hash, float32(sizeoftxs+tx_hash_list_sorted[i].Size)/1024.0, float32(config.STARGATE_HE_MAX_BLOCK_SIZE)/1024.0)
sizeoftxs += tx_hash_list_sorted[i].Size
}
}
// collect tx list + their fees
// now we have all major parts of block, assemble the block
bl.Major_Version = uint64(chain.Get_Current_Version_at_Height(height))
bl.Minor_Version = uint64(chain.Get_Ideal_Version_at_Height(height)) // This is used for hard fork voting,
bl.Height = uint64(height)
bl.Timestamp = uint64(time.Now().UTC().Unix())
//bl.Miner_TX, err = Create_Miner_TX2(int64(bl.Major_Version), height, miner_address)
//if err != nil {
// logger.Warnf("Error while creating miner block, err %s", err)
//}
bl.Miner_TX.Version = 1
bl.Miner_TX.TransactionType = transaction.COINBASE // what about unregistered users
copy(bl.Miner_TX.MinerAddress[:], miner_address.Compressed())
// check whether the miner address is already registered
_, err = balance_tree.Get(bl.Miner_TX.MinerAddress[:])
if err != nil {
if xerrors.Is(err, graviton.ErrNotFound) { // address needs registration
bl.Miner_TX.TransactionType = transaction.REGISTRATION
if tx == nil {
err = fmt.Errorf("miner address is not registered and no registration tx was provided")
return
}
copy(bl.Miner_TX.C[:], tx.C[:32])
copy(bl.Miner_TX.S[:], tx.S[:32])
} else {
panic(err)
}
}
//bl.Prev_Hash = top_hash
bl.Nonce = rand.New(globals.NewCryptoRandSource()).Uint32() // nonce can be changed by the template header
for i := range tx_hash_list_included {
bl.Tx_hashes = append(bl.Tx_hashes, tx_hash_list_included[i])
}
cbl.Bl = &bl
//logger.Infof("miner block %+v address %X", bl, miner_address.Compressed())
return
}
// returns a new block template ready for mining
// block template has the following format
// miner block header in hex +
// miner tx in hex +
// 2 bytes (4 bytes in hex) for the number of txs
// tx hashes that follow
var cache_block block.Block
var cache_block_mutex sync.Mutex
func (chain *Blockchain) Create_new_block_template_mining(top_hash crypto.Hash, miner_address rpc.Address, reserve_space int) (bl block.Block, blockhashing_blob string, block_template_blob string, reserved_pos int) {
rlog.Debugf("Mining block will give reward to %s", miner_address)
cache_block_mutex.Lock()
defer cache_block_mutex.Unlock()
if (cache_block.Timestamp+1) < uint64(time.Now().UTC().Unix()) || (cache_block.Timestamp > 0 && int64(cache_block.Height) != chain.Get_Height()+1) {
_, bl = chain.Create_new_miner_block(miner_address, nil)
cache_block = bl // setup cache for 1 sec
} else {
bl = cache_block
copy(bl.Miner_TX.MinerAddress[:], miner_address.Compressed())
}
blockhashing_blob = fmt.Sprintf("%x", bl.GetBlockWork())
// block template is all the parts of the block in dismantled form
// first is the block header
// then comes the miner tx
// then comes all the tx headers
block_template_blob = fmt.Sprintf("%x", bl.Serialize())
// lets locate extra nonce
pos := strings.Index(blockhashing_blob, "0000000000000000000000000000000000000000000000000000000000000000")
pos = pos / 2 // we searched in hexadecimal form but we need to give position in byte form
reserved_pos = pos
return
}
// rate limiter is deployed, in case RPC is exposed over internet
// someone should not be able to just feed fake inputs and delay chain syncing
var accept_limiter = rate.NewLimiter(1.0, 4) // 1 block per sec, burst of 4 blocks is okay
var accept_lock = sync.Mutex{}
var duplicate_height_check = map[uint64]bool{}
// accept work given by us
// we should verify that the transaction list supplied back by the miner exists in the mempool
// otherwise the miner is trying to attack the network
func (chain *Blockchain) Accept_new_block(block_template []byte, blockhashing_blob []byte) (blid crypto.Hash, result bool, err error) {
if globals.Arguments["--sync-node"].(bool) {
globals.Logger.Warnf("Mining is deactivated since daemon is running with --sync-node, please check program options.")
return blid, false, fmt.Errorf("Please deactivate --sync-node option before mining")
}
accept_lock.Lock()
defer accept_lock.Unlock()
cbl := &block.Complete_Block{}
bl := block.Block{}
//logger.Infof("Incoming block for accepting %x", block_template)
// safety so if anything wrong happens, verification fails
defer func() {
if r := recover(); r != nil {
logger.Warnf("Recovered while accepting new block, Stack trace below ")
logger.Warnf("Stack trace \n%s", debug.Stack())
err = fmt.Errorf("Error while parsing block")
}
}()
err = bl.Deserialize(block_template)
if err != nil {
logger.Warnf("Error parsing submitted work block template err %s", err)
return
}
length_of_block_header := len(bl.Serialize())
template_data := block_template[length_of_block_header:]
if len(blockhashing_blob) >= 2 {
err = bl.CopyNonceFromBlockWork(blockhashing_blob)
if err != nil {
logger.Warnf("Submitted block has been rejected, since blockhashing_blob is invalid")
return
}
}
if len(template_data) != 0 {
logger.Warnf("Extra bytes (%d) left over while accepting block from mining pool %x", len(template_data), template_data)
}
// if we reach here, everything looks ok
// try to craft a complete block by grabbing entire tx from the mempool
//logger.Debugf("Block parsed successfully")
blid = bl.GetHash()
// if a duplicate block is being sent, reject the block
if _, ok := duplicate_height_check[bl.Height]; ok {
logger.Warnf("Block %s rejected by chain due to duplicate work.", bl.GetHash())
err = fmt.Errorf("Error duplicate work")
return
}
// lets build up the complete block
// collect tx list + their fees
var tx *transaction.Transaction
for i := range bl.Tx_hashes {
tx = chain.Mempool.Mempool_Get_TX(bl.Tx_hashes[i])
if tx != nil {
cbl.Txs = append(cbl.Txs, tx)
continue
} else {
tx = chain.Regpool.Regpool_Get_TX(bl.Tx_hashes[i])
if tx != nil {
cbl.Txs = append(cbl.Txs, tx)
continue
}
}
var tx_bytes []byte
if tx_bytes, err = chain.Store.Block_tx_store.ReadTX(bl.Tx_hashes[i]); err != nil {
	logger.Warnf("Tx %s not found in pool or DB, rejecting submitted block", bl.Tx_hashes[i])
	return
}
tx = &transaction.Transaction{}
if err = tx.DeserializeHeader(tx_bytes); err != nil {
	logger.Warnf("Tx %s could not be deserialized, rejecting submitted block", bl.Tx_hashes[i])
	return
}
cbl.Txs = append(cbl.Txs, tx)
}
cbl.Bl = &bl // the block is now complete, lets try to add it to chain
if !accept_limiter.Allow() { // reject the block if the rate limiter does not allow it
logger.Warnf("Block %s rejected by chain.", bl.GetHash())
return
}
err, result = chain.Add_Complete_Block(cbl)
if result {
duplicate_height_check[bl.Height] = true
logger.Infof("Block %s successfully accepted at height %d, Notifying Network", bl.GetHash(), bl.Height)
cache_block_mutex.Lock()
defer cache_block_mutex.Unlock()
cache_block.Timestamp = 0 // expire cache block
if !chain.simulator { // if not in simulator mode, relay block to the chain
chain.P2P_Block_Relayer(cbl, 0) // lets relay the block to network
}
} else {
logger.Warnf("Block Rejected %s error %s", bl.GetHash(), err)
}
return
}

// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package blockchain
// this file will prune the history of the blockchain, making it lightweight
// the pruner works like this
// identify a point in history before which all history is discarded
// the entire thing works cryptographically and thus everything is cryptographically verified
// this function is the only one which does not work in append-only mode
import "os"
import "fmt"
import "path/filepath"
import "github.com/romana/rlog"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/graviton"
import "github.com/deroproject/derohe/block"
import "github.com/deroproject/derohe/globals"
const CHUNK_SIZE = 100000 // write chunks of 100,000 accounts; actually we should be writing at least 100,000 accounts
func ByteCountIEC(b int64) string {
const unit = 1024
if b < unit {
return fmt.Sprintf("%d B", b)
}
div, exp := int64(unit), 0
for n := b / unit; n >= unit; n /= unit {
div *= unit
exp++
}
return fmt.Sprintf("%.1f %ciB",
float64(b)/float64(div), "KMGTPE"[exp])
}
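ByteCountIEC above is self-contained and can be spot-checked directly. It is copied verbatim here so the sketch compiles on its own:

```go
package main

import "fmt"

// ByteCountIEC formats a byte count using IEC (1024-based) units,
// copied from the pruner file above.
func ByteCountIEC(b int64) string {
	const unit = 1024
	if b < unit {
		return fmt.Sprintf("%d B", b)
	}
	div, exp := int64(unit), 0
	for n := b / unit; n >= unit; n /= unit {
		div *= unit
		exp++
	}
	return fmt.Sprintf("%.1f %ciB",
		float64(b)/float64(div), "KMGTPE"[exp])
}

func main() {
	fmt.Println(ByteCountIEC(512))      // 512 B
	fmt.Println(ByteCountIEC(1536))     // 1.5 KiB
	fmt.Println(ByteCountIEC(10 << 20)) // 10.0 MiB
}
```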
func DirSize(path string) int64 {
var size int64
err := filepath.Walk(path, func(_ string, info os.FileInfo, err error) error {
if err != nil {
return nil
}
if !info.IsDir() {
size += info.Size()
}
return err
})
_ = err
return size
}
func Prune_Blockchain(prune_topo int64) (err error) {
var store storage
// initialize store
current_path := filepath.Join(globals.GetDataDirectory())
if store.Balance_store, err = graviton.NewDiskStore(filepath.Join(current_path, "balances")); err == nil {
if err = store.Topo_store.Open(current_path); err == nil {
store.Block_tx_store.basedir = current_path
} else {
return err
}
}
max_topoheight := store.Topo_store.Count()
for ; max_topoheight >= 0; max_topoheight-- {
if toporecord, err := store.Topo_store.Read(max_topoheight); err == nil {
if !toporecord.IsClean() {
break
}
}
}
//prune_topoheight := max_topoheight - 97
prune_topoheight := prune_topo
if max_topoheight-prune_topoheight < 50 {
return fmt.Errorf("We need at least 50 blocks to prune")
}
err = rewrite_graviton_store(&store, prune_topoheight, max_topoheight)
discard_blocks_and_transactions(&store, prune_topoheight)
// close original store and move new store in the same place
store.Balance_store.Close()
old_path := filepath.Join(current_path, "balances")
new_path := filepath.Join(current_path, "balances_new")
globals.Logger.Infof("Old balance tree size %s", ByteCountIEC(DirSize(old_path)))
globals.Logger.Infof("balance tree after pruning history size %s", ByteCountIEC(DirSize(new_path)))
os.RemoveAll(old_path)
return os.Rename(new_path, old_path)
}
// first lets free space by discarding blocks, and txs before the historical point
// any error while deleting should be considered non fatal
func discard_blocks_and_transactions(store *storage, topoheight int64) {
globals.Logger.Infof("Block store before pruning %s", ByteCountIEC(DirSize(filepath.Join(store.Block_tx_store.basedir, "bltx_store"))))
for i := int64(0); i < topoheight-20; i++ { // keep the last 20 blocks before the prune point, for sanity
if toporecord, err := store.Topo_store.Read(i); err == nil {
blid := toporecord.BLOCK_ID
var bl block.Block
if block_data, err := store.Block_tx_store.ReadBlock(blid); err == nil {
if err = bl.Deserialize(block_data); err == nil { // we should deserialize the block here
for _, txhash := range bl.Tx_hashes { // we also have to purge the tx hashes
_ = store.Block_tx_store.DeleteTX(txhash) // delete tx hashes
//fmt.Printf("DeleteTX %x\n", txhash)
}
}
// lets delete the block data also
_ = store.Block_tx_store.DeleteBlock(blid)
//fmt.Printf("DeleteBlock %x\n", blid)
}
}
}
globals.Logger.Infof("Block store after pruning size %s", ByteCountIEC(DirSize(filepath.Join(store.Block_tx_store.basedir, "bltx_store"))))
}
// this will rewrite the graviton store
func rewrite_graviton_store(store *storage, prune_topoheight int64, max_topoheight int64) (err error) {
var write_store *graviton.Store
writebalancestorepath := filepath.Join(store.Block_tx_store.basedir, "balances_new")
if write_store, err = graviton.NewDiskStore(writebalancestorepath); err != nil {
return err
}
toporecord, err := store.Topo_store.Read(prune_topoheight)
if err != nil {
return err
}
var major_copy uint64
{ // do the heavy lifting, merge all changes before this topoheight
var old_ss, write_ss *graviton.Snapshot
var old_balance_tree, write_balance_tree *graviton.Tree
if old_ss, err = store.Balance_store.LoadSnapshot(toporecord.State_Version); err == nil {
if old_balance_tree, err = old_ss.GetTree(config.BALANCE_TREE); err == nil {
if write_ss, err = write_store.LoadSnapshot(0); err == nil {
if write_balance_tree, err = write_ss.GetTree(config.BALANCE_TREE); err == nil {
var latest_commit_version uint64
latest_commit_version, err = clone_entire_tree(old_balance_tree, write_balance_tree)
//fmt.Printf("cloned entire tree version %d err '%s'\n", latest_commit_version, err)
major_copy = latest_commit_version
}
}
}
}
if err != nil {
return err
}
}
// now we must do block to block changes till the top block
{
var new_entries []int64
var commit_versions []uint64
for i := prune_topoheight; i < max_topoheight; i++ {
var old_toporecord, new_toporecord TopoRecord
var old_ss, new_ss, write_ss *graviton.Snapshot
var old_balance_tree, new_balance_tree, write_tree *graviton.Tree
// fetch old tree data
old_topo := i
new_topo := i + 1
err = nil
if old_toporecord, err = store.Topo_store.Read(old_topo); err == nil {
if old_ss, err = store.Balance_store.LoadSnapshot(old_toporecord.State_Version); err == nil {
if old_balance_tree, err = old_ss.GetTree(config.BALANCE_TREE); err == nil {
// fetch new tree data
if new_toporecord, err = store.Topo_store.Read(new_topo); err == nil {
if new_ss, err = store.Balance_store.LoadSnapshot(new_toporecord.State_Version); err == nil {
if new_balance_tree, err = new_ss.GetTree(config.BALANCE_TREE); err == nil {
// fetch tree where to write it
if write_ss, err = write_store.LoadSnapshot(0); err == nil {
if write_tree, err = write_ss.GetTree(config.BALANCE_TREE); err == nil {
// new_balance_tree.Graph("/tmp/original.dot")
// write_tree.Graph("/tmp/writable.dot")
// fmt.Printf("writing new graph\n")
latest_commit_version, err := clone_tree_changes(old_balance_tree, new_balance_tree, write_tree)
rlog.Infof("cloned tree changes from %d(%d) to %d(%d) , wrote version %d err '%s'", old_topo, old_toporecord.State_Version, new_topo, new_toporecord.State_Version, latest_commit_version, err)
new_entries = append(new_entries, new_topo)
commit_versions = append(commit_versions, latest_commit_version)
if write_hash, err := write_tree.Hash(); err == nil {
if new_hash, err := new_balance_tree.Hash(); err == nil {
// if this ever fails, it means we have something nasty going on
// maybe graviton or some disk corruption
if new_hash != write_hash {
fmt.Printf("wrt %x \nnew %x \n", write_hash, new_hash)
panic("corruption")
}
}
}
} else {
//fmt.Printf("err from graviton internal %s\n", err)
return err // this is irreparable damage
}
}
}
}
}
}
}
}
if err != nil {
//fmt.Printf("err from gravitonnnnnnnnnn %s\n", err)
return err
}
}
// now lets store all the commit versions in 1 go
for i, topo := range new_entries {
if old_toporecord, err := store.Topo_store.Read(topo); err == nil {
//fmt.Printf("writing toporecord %d version %d\n",topo, commit_versions[i])
store.Topo_store.Write(topo, old_toporecord.BLOCK_ID, commit_versions[i], old_toporecord.Height)
} else {
fmt.Printf("err reading/writing toporecord %d %s\n", topo, err)
}
}
}
// now overwrite the starting topo mapping
for i := int64(0); i <= prune_topoheight; i++ { // overwrite the entries in the topomap
if toporecord, err := store.Topo_store.Read(i); err == nil {
store.Topo_store.Write(i, toporecord.BLOCK_ID, major_copy, toporecord.Height)
} else {
fmt.Printf("err writing toporecord %s\n", err)
return err // this is irreparable damage
}
}
// now lets remove the old graviton db
write_store.Close()
return
}
// clone tree changes between 2 versions (old_tree, new_tree and then commit them to write_tree)
func clone_tree_changes(old_tree, new_tree, write_tree *graviton.Tree) (latest_commit_version uint64, err error) {
if old_tree.IsDirty() || new_tree.IsDirty() || write_tree.IsDirty() {
panic("trees cannot be dirty")
}
insert_count := 0
modify_count := 0
insert_handler := func(k, v []byte) {
insert_count++
//fmt.Printf("insert %x %x\n",k,v)
write_tree.Put(k, v)
}
modify_handler := func(k, v []byte) { // modification receives old value
modify_count++
new_value, _ := new_tree.Get(k)
write_tree.Put(k, new_value)
}
graviton.Diff(old_tree, new_tree, nil, modify_handler, insert_handler)
//fmt.Printf("insert count %d modify_count %d\n", insert_count, modify_count)
if write_tree.IsDirty() {
return graviton.Commit(write_tree)
} else {
return write_tree.GetVersion(), nil
}
}
// clone entire tree in chunks
func clone_entire_tree(old_tree, new_tree *graviton.Tree) (latest_commit_version uint64, err error) {
c := old_tree.Cursor()
object_counter := int64(0)
for k, v, err := c.First(); err == nil; k, v, err = c.Next() {
if object_counter != 0 && object_counter%CHUNK_SIZE == 0 {
if latest_commit_version, err = graviton.Commit(new_tree); err != nil {
fmt.Printf("err while cloning %s\n", err)
return 0, err
}
}
new_tree.Put(k, v)
object_counter++
}
//if new_tree.IsDirty() {
if latest_commit_version, err = graviton.Commit(new_tree); err != nil {
fmt.Printf("err while cloning (final commit) %s\n", err)
return 0, err
}
//}
/*old_hash,erro := old_tree.Hash()
new_hash,errn := new_tree.Hash()
fmt.Printf("old %x err %x\nmew %x err %s \n", old_hash,erro,new_hash,errn )
*/
return latest_commit_version, nil
}

RESEARCH LICENSE
Version 1.1.2
I. DEFINITIONS.
"Licensee " means You and any other party that has entered into and has in effect a version of this License.
“Licensor” means DERO PROJECT(GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8) and its successors and assignees.
"Modifications" means any (a) change or addition to the Technology or (b) new source or object code implementing any portion of the Technology.
"Research Use" means research, evaluation, or development for the purpose of advancing knowledge, teaching, learning, or customizing the Technology for personal use. Research Use expressly excludes use or distribution for direct or indirect commercial (including strategic) gain or advantage.
"Technology" means the source code, object code and specifications of the technology made available by Licensor pursuant to this License.
"Technology Site" means the website designated by Licensor for accessing the Technology.
"You" means the individual executing this License or the legal entity or entities represented by the individual executing this License.
II. PURPOSE.
Licensor is licensing the Technology under this Research License (the "License") to promote research, education, innovation, and development using the Technology.
COMMERCIAL USE AND DISTRIBUTION OF TECHNOLOGY AND MODIFICATIONS IS PERMITTED ONLY UNDER AN APPROPRIATE COMMERCIAL USE LICENSE AVAILABLE FROM LICENSOR AT <url>.
III. RESEARCH USE RIGHTS.
A. Subject to the conditions contained herein, Licensor grants to You a non-exclusive, non-transferable, worldwide, and royalty-free license to do the following for Your Research Use only:
1. reproduce, create Modifications of, and use the Technology alone, or with Modifications;
2. share source code of the Technology alone, or with Modifications, with other Licensees;
3. distribute object code of the Technology, alone, or with Modifications, to any third parties for Research Use only, under a license of Your choice that is consistent with this License; and
4. publish papers and books discussing the Technology which may include relevant excerpts that do not in the aggregate constitute a significant portion of the Technology.
B. Residual Rights. You may use any information in intangible form that you remember after accessing the Technology, except when such use violates Licensor's copyrights or patent rights.
C. No Implied Licenses. Other than the rights granted herein, Licensor retains all rights, title, and interest in Technology , and You retain all rights, title, and interest in Your Modifications and associated specifications, subject to the terms of this License.
D. Open Source Licenses. Portions of the Technology may be provided with notices and open source licenses from open source communities and third parties that govern the use of those portions, and any licenses granted hereunder do not alter any rights and obligations you may have under such open source licenses, however, the disclaimer of warranty and limitation of liability provisions in this License will apply to all Technology in this distribution.
IV. INTELLECTUAL PROPERTY REQUIREMENTS
As a condition to Your License, You agree to comply with the following restrictions and responsibilities:
A. License and Copyright Notices. You must include a copy of this License in a Readme file for any Technology or Modifications you distribute. You must also include the following statement, "Use and distribution of this technology is subject to the Java Research License included herein", (a) once prominently in the source code tree and/or specifications for Your source code distributions, and (b) once in the same file as Your copyright or proprietary notices for Your binary code distributions. You must cause any files containing Your Modification to carry prominent notice stating that You changed the files. You must not remove or alter any copyright or other proprietary notices in the Technology.
B. Licensee Exchanges. Any Technology and Modifications You receive from any Licensee are governed by this License.
V. GENERAL TERMS.
A. Disclaimer Of Warranties.
TECHNOLOGY IS PROVIDED "AS IS", WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, WARRANTIES THAT ANY SUCH TECHNOLOGY IS FREE OF DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR PURPOSE, OR NON-INFRINGING OF THIRD PARTY RIGHTS. YOU AGREE THAT YOU BEAR THE ENTIRE RISK IN CONNECTION WITH YOUR USE AND DISTRIBUTION OF ANY AND ALL TECHNOLOGY UNDER THIS LICENSE.
B. Infringement; Limitation Of Liability.
1. If any portion of, or functionality implemented by, the Technology becomes the subject of a claim or threatened claim of infringement ("Affected Materials"), Licensor may, in its unrestricted discretion, suspend Your rights to use and distribute the Affected Materials under this License. Such suspension of rights will be effective immediately upon Licensor's posting of notice of suspension on the Technology Site.
2. IN NO EVENT WILL LICENSOR BE LIABLE FOR ANY DIRECT, INDIRECT, PUNITIVE, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH OR ARISING OUT OF THIS LICENSE (INCLUDING, WITHOUT LIMITATION, LOSS OF PROFITS, USE, DATA, OR ECONOMIC ADVANTAGE OF ANY SORT), HOWEVER IT ARISES AND ON ANY THEORY OF LIABILITY (including negligence), WHETHER OR NOT LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. LIABILITY UNDER THIS SECTION V.B.2 SHALL BE SO LIMITED AND EXCLUDED, NOTWITHSTANDING FAILURE OF THE ESSENTIAL PURPOSE OF ANY REMEDY.
C. Termination.
1. You may terminate this License at any time by notifying Licensor in writing.
2. All Your rights will terminate under this License if You fail to comply with any of its material terms or conditions and do not cure such failure within thirty (30) days after becoming aware of such noncompliance.
3. Upon termination, You must discontinue all uses and distribution of the Technology, and all provisions of this Section V shall survive termination.
D. Miscellaneous.
1. Trademark. You agree to comply with Licensor's Trademark & Logo Usage Requirements, if any and as modified from time to time, available at the Technology Site. Except as expressly provided in this License, You are granted no rights in or to any of Licensor's trademarks now or hereafter used or licensed by Licensor.
2. Integration. This License represents the complete agreement of the parties concerning the subject matter hereof.
3. Severability. If any provision of this License is held unenforceable, such provision shall be reformed to the extent necessary to make it enforceable unless to do so would defeat the intent of the parties, in which case, this License shall terminate.
4. Governing Law. This License is governed by the laws of the United States and the State of California, as applied to contracts entered into and performed in California between California residents. In no event shall this License be construed against the drafter.
5. Export Control. You agree to comply with the U.S. export controls and trade laws of other countries that apply to Technology and Modifications.
READ ALL THE TERMS OF THIS LICENSE CAREFULLY BEFORE ACCEPTING.
BY CLICKING ON THE YES BUTTON BELOW OR USING THE TECHNOLOGY, YOU ARE ACCEPTING AND AGREEING TO ABIDE BY THE TERMS AND CONDITIONS OF THIS LICENSE. YOU MUST BE AT LEAST 18 YEARS OF AGE AND OTHERWISE COMPETENT TO ENTER INTO CONTRACTS.
IF YOU DO NOT MEET THESE CRITERIA, OR YOU DO NOT AGREE TO ANY OF THE TERMS OF THIS LICENSE, DO NOT USE THIS SOFTWARE IN ANY FORM.


@@ -1,537 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package regpool
import "os"
import "fmt"
import "sync"
import "time"
import "sync/atomic"
import "path/filepath"
import "encoding/hex"
import "encoding/json"
import "github.com/romana/rlog"
import log "github.com/sirupsen/logrus"
import "github.com/deroproject/derohe/transaction"
import "github.com/deroproject/derohe/globals"
import "github.com/deroproject/derohe/cryptography/crypto"
// this is only used for sorting and nothing else
type TX_Sorting_struct struct {
FeesPerByte uint64 // this is fees per byte
Hash crypto.Hash // transaction hash
Size uint64 // transaction size
}
// NOTE: do NOT consider this code as useless, as it is used to avoid double spending attacks within the block and within the pool
// let me explain, since we are a state machine, we add block to our blockchain
// so, if a double spending attack comes, 2 transactions with same inputs, we reject one of them
// the algo is documented somewhere else which explains the entire process
// at this point in time, this is a quickly written regpool,
// it will not scale for more than 10000 transactions but is good enough for now
// we can always come back and rewrite it
// NOTE: the pool is now persistent
type Regpool struct {
txs sync.Map //map[crypto.Hash]*regpool_object
address_map sync.Map //map[crypto.Hash]bool // contains key images of all txs
sorted_by_fee []crypto.Hash // contains txids sorted by fees
sorted []TX_Sorting_struct // contains TX sorting information, so as new block can be forged easily
modified bool // used to monitor whether mem pool contents have changed,
height uint64 // track blockchain height
relayer chan crypto.Hash // used for immediate relay
P2P_TX_Relayer p2p_TX_Relayer // actual pointer, setup by the dero daemon during runtime
// global variable, but its utilisation is not seen here except for tx verification
//chain *Blockchain
Exit_Mutex chan bool
sync.Mutex
}
// this object is serialized and deserialized
type regpool_object struct {
Tx *transaction.Transaction
Added uint64 // time in epoch format
Height uint64 // at which height the tx unlocks in the regpool
Relayed int // relayed count
RelayedAt int64 // when was tx last relayed
Size uint64 // size in bytes of the TX
FEEperBYTE uint64 // fee per byte
}
var loggerpool *log.Entry
// marshal object as json
func (obj *regpool_object) MarshalJSON() ([]byte, error) {
return json.Marshal(&struct {
Tx string `json:"tx"` // hex encoding
Added uint64 `json:"added"`
Height uint64 `json:"height"`
Relayed int `json:"relayed"`
RelayedAt int64 `json:"relayedat"`
}{
Tx: hex.EncodeToString(obj.Tx.Serialize()),
Added: obj.Added,
Height: obj.Height,
Relayed: obj.Relayed,
RelayedAt: obj.RelayedAt,
})
}
// unmarshal object from json encoding
func (obj *regpool_object) UnmarshalJSON(data []byte) error {
aux := &struct {
Tx string `json:"tx"`
Added uint64 `json:"added"`
Height uint64 `json:"height"`
Relayed int `json:"relayed"`
RelayedAt int64 `json:"relayedat"`
}{}
if err := json.Unmarshal(data, &aux); err != nil {
return err
}
obj.Added = aux.Added
obj.Height = aux.Height
obj.Relayed = aux.Relayed
obj.RelayedAt = aux.RelayedAt
tx_bytes, err := hex.DecodeString(aux.Tx)
if err != nil {
return err
}
obj.Size = uint64(len(tx_bytes))
obj.Tx = &transaction.Transaction{}
err = obj.Tx.DeserializeHeader(tx_bytes)
if err == nil {
obj.FEEperBYTE = 0
}
return err
}
func Init_Regpool(params map[string]interface{}) (*Regpool, error) {
var regpool Regpool
//regpool.chain = params["chain"].(*Blockchain)
loggerpool = globals.Logger.WithFields(log.Fields{"com": "REGPOOL"}) // all components must use this logger
loggerpool.Infof("Regpool started")
atomic.AddUint32(&globals.Subsystem_Active, 1) // increment subsystem
regpool.relayer = make(chan crypto.Hash, 1024*10)
regpool.Exit_Mutex = make(chan bool)
// initialize maps
//regpool.txs = map[crypto.Hash]*regpool_object{}
//regpool.address_map = map[crypto.Hash]bool{}
//TODO load any transactions saved at previous exit
regpool_file := filepath.Join(globals.GetDataDirectory(), "regpool.json")
file, err := os.Open(regpool_file)
if err != nil {
loggerpool.Warnf("Error opening regpool data file %s err %s", regpool_file, err)
} else {
defer file.Close()
var objects []regpool_object
decoder := json.NewDecoder(file)
err = decoder.Decode(&objects)
if err != nil {
loggerpool.Warnf("Error unmarshalling regpool data err %s", err)
} else { // successfully unmarshalled data, add it to regpool
loggerpool.Debugf("Will try to load %d txs from regpool file", (len(objects)))
for i := range objects {
result := regpool.Regpool_Add_TX(objects[i].Tx, 0)
if result { // setup time
//regpool.txs[objects[i].Tx.GetHash()] = &objects[i] // setup time and other artifacts
regpool.txs.Store(objects[i].Tx.GetHash(), &objects[i])
}
}
}
}
go regpool.Relayer_and_Cleaner()
return &regpool, nil
}
// this is created per incoming block and then discarded
// This does not require shutting down and will be garbage collected automatically
//func Init_Block_Regpool(params map[string]interface{}) (*Regpool, error) {
// var regpool Regpool
// return &regpool, nil
//}
func (pool *Regpool) HouseKeeping(height uint64, Verifier func(*transaction.Transaction) bool) {
pool.height = height
// this code is executed in conditions where a registered user tries to register again
var delete_list []crypto.Hash
pool.txs.Range(func(k, value interface{}) bool {
txhash := k.(crypto.Hash)
v := value.(*regpool_object)
if !Verifier(v.Tx) { // this tx user has already registered
delete_list = append(delete_list, txhash)
}
return true
})
for i := range delete_list {
pool.Regpool_Delete_TX(delete_list[i])
}
}
func (pool *Regpool) Shutdown() {
//TODO save regpool tx somewhere
close(pool.Exit_Mutex) // stop relaying
pool.Lock()
defer pool.Unlock()
regpool_file := filepath.Join(globals.GetDataDirectory(), "regpool.json")
// collect all txs in pool and serialize them and store them
var objects []regpool_object
pool.txs.Range(func(k, value interface{}) bool {
v := value.(*regpool_object)
objects = append(objects, *v)
return true
})
/*for _, v := range pool.txs {
objects = append(objects, *v)
}*/
var file, err = os.Create(regpool_file)
if err == nil {
defer file.Close()
encoder := json.NewEncoder(file)
encoder.SetIndent("", "\t")
err = encoder.Encode(objects)
if err != nil {
loggerpool.Warnf("Error marshaling regpool data err %s", err)
}
} else {
loggerpool.Warnf("Error creating new file to store regpool data file %s err %s", regpool_file, err)
}
loggerpool.Infof("Successfully saved %d txs to file", (len(objects)))
loggerpool.Infof("Regpool stopped")
atomic.AddUint32(&globals.Subsystem_Active, ^uint32(0)) // this decrements 1 from subsystem
}
// start pool monitoring for changes for some specific time
// this is required so that we can add or discard transactions while selecting work for mining
func (pool *Regpool) Monitor() {
pool.Lock()
pool.modified = false
pool.Unlock()
}
// return whether pool contents have changed
func (pool *Regpool) HasChanged() (result bool) {
pool.Lock()
result = pool.modified
pool.Unlock()
return
}
// a tx should only be added to pool after verification is complete
func (pool *Regpool) Regpool_Add_TX(tx *transaction.Transaction, Height uint64) (result bool) {
result = false
pool.Lock()
defer pool.Unlock()
if !tx.IsRegistration() {
return false
}
var object regpool_object
if pool.Regpool_Address_Present(tx.MinerAddress) {
// loggerpool.Infof("Rejecting TX, since address already has registration information")
return false
}
tx_hash := crypto.Hash(tx.GetHash())
// check if tx already exists, skip it
if _, ok := pool.txs.Load(tx_hash); ok {
//rlog.Debugf("Pool already contains %s, skipping", tx_hash)
return false
}
if !tx.IsRegistrationValid() {
return false
}
// add all the key images to check double spend attack within the pool
//TODO
// for i := 0; i < len(tx.Vin); i++ {
// pool.address_map.Store(tx.Vin[i].(transaction.Txin_to_key).K_image,true) // add element to map for next check
// }
pool.address_map.Store(tx.MinerAddress, true)
// reaching here means we can add it to the pool
object.Tx = tx
object.Height = Height
object.Added = uint64(time.Now().UTC().Unix())
object.Size = uint64(len(tx.Serialize()))
pool.txs.Store(tx_hash, &object)
pool.relayer <- tx_hash
pool.modified = true // pool has been modified
//pool.sort_list() // sort and update pool list
return true
}
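Regpool_Add_TX enforces two guards before storing: at most one registration per miner address, and no duplicate tx hashes. The guard logic, reduced to a standalone sketch (the pool type, string ids, and addOnce are illustrative stand-ins, not the real API):

```go
package main

import (
	"fmt"
	"sync"
)

// pool mirrors Regpool_Add_TX's duplicate guards: one entry per
// sender address and one per tx id, backed by two sync.Maps.
type pool struct {
	txs       sync.Map // txid -> struct{}
	addresses sync.Map // address -> bool
}

func (p *pool) addOnce(txid, addr string) bool {
	if _, seen := p.addresses.Load(addr); seen {
		return false // address already has registration information
	}
	if _, dup := p.txs.Load(txid); dup {
		return false // tx already pooled
	}
	p.addresses.Store(addr, true)
	p.txs.Store(txid, struct{}{})
	return true
}

func main() {
	var p pool
	fmt.Println(p.addOnce("tx1", "addrA")) // true
	fmt.Println(p.addOnce("tx2", "addrA")) // false: address reused
	fmt.Println(p.addOnce("tx1", "addrB")) // false: duplicate txid
}
```

Note that, as in the original, the two Load/Store pairs are not atomic as a unit; the real Regpool_Add_TX holds pool.Lock() around the whole sequence.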
// check whether a tx exists in the pool
func (pool *Regpool) Regpool_TX_Exist(txid crypto.Hash) (result bool) {
//pool.Lock()
//defer pool.Unlock()
if _, ok := pool.txs.Load(txid); ok {
return true
}
return false
}
// check whether an address is already present in the pool
func (pool *Regpool) Regpool_Address_Present(ki [33]byte) (result bool) {
//pool.Lock()
//defer pool.Unlock()
if _, ok := pool.address_map.Load(ki); ok {
return true
}
return false
}
// delete specific tx from pool and return it
// if nil is returned Tx was not found in pool
func (pool *Regpool) Regpool_Delete_TX(txid crypto.Hash) (tx *transaction.Transaction) {
//pool.Lock()
//defer pool.Unlock()
var ok bool
var objecti interface{}
// check if tx already exists, skip it
if objecti, ok = pool.txs.Load(txid); !ok {
rlog.Warnf("Pool does NOT contain %s, returning nil", txid)
return nil
}
// reaching here means we have the tx; remove it from our list, do maintenance cleanup and discard it
object := objecti.(*regpool_object)
tx = object.Tx
pool.txs.Delete(txid)
// remove all the key images
//TODO
// for i := 0; i < len(object.Tx.Vin); i++ {
// pool.address_map.Delete(object.Tx.Vin[i].(transaction.Txin_to_key).K_image)
// }
pool.address_map.Delete(tx.MinerAddress)
//pool.sort_list() // sort and update pool list
pool.modified = true // pool has been modified
return object.Tx // return the tx
}
// get specific tx from mem pool without removing it
func (pool *Regpool) Regpool_Get_TX(txid crypto.Hash) (tx *transaction.Transaction) {
// pool.Lock()
// defer pool.Unlock()
var ok bool
var objecti interface{}
if objecti, ok = pool.txs.Load(txid); !ok {
//loggerpool.Warnf("Pool does NOT contain %s, returning nil", txid)
return nil
}
// reaching here means we have the tx; return the pointer back
//object := pool.txs[txid]
object := objecti.(*regpool_object)
return object.Tx
}
// return list of all txs in pool
func (pool *Regpool) Regpool_List_TX() []crypto.Hash {
// pool.Lock()
// defer pool.Unlock()
var list []crypto.Hash
pool.txs.Range(func(k, value interface{}) bool {
txhash := k.(crypto.Hash)
//v := value.(*regpool_object)
//objects = append(objects, *v)
list = append(list, txhash)
return true
})
//pool.sort_list() // sort and update pool list
// list should be as big as source list
//list := make([]crypto.Hash, len(pool.sorted_by_fee), len(pool.sorted_by_fee))
//copy(list, pool.sorted_by_fee) // return list sorted by fees
return list
}
// print current regpool txs
// TODO add sorting
func (pool *Regpool) Regpool_Print() {
pool.Lock()
defer pool.Unlock()
var klist []crypto.Hash
var vlist []*regpool_object
pool.txs.Range(func(k, value interface{}) bool {
txhash := k.(crypto.Hash)
v := value.(*regpool_object)
//objects = append(objects, *v)
klist = append(klist, txhash)
vlist = append(vlist, v)
return true
})
fmt.Printf("Total TX in regpool = %d\n", len(klist))
fmt.Printf("%20s %14s %7s %7s %6s %32s\n", "Added", "Last Relayed", "Relayed", "Size", "Height", "TXID")
for i := range klist {
k := klist[i]
v := vlist[i]
fmt.Printf("%20s %14s %7d %7d %6d %32s\n", time.Unix(int64(v.Added), 0).UTC().Format(time.RFC3339), time.Duration(v.RelayedAt)*time.Second, v.Relayed,
len(v.Tx.Serialize()), v.Height, k)
}
}
// flush regpool
func (pool *Regpool) Regpool_flush() {
var list []crypto.Hash
pool.txs.Range(func(k, value interface{}) bool {
txhash := k.(crypto.Hash)
//v := value.(*regpool_object)
//objects = append(objects, *v)
list = append(list, txhash)
return true
})
fmt.Printf("Total TX in regpool = %d \n", len(list))
fmt.Printf("Flushing regpool \n")
for i := range list {
pool.Regpool_Delete_TX(list[i])
}
}
type p2p_TX_Relayer func(*transaction.Transaction, uint64) int // function type, exported in p2p but cannot use due to cyclic dependency
// this tx relayer keeps on relaying tx and cleaning regpool
// if a tx has been relayed to fewer than 10 peers, tx relaying is aggressive
// otherwise the txs are relayed every 30 minutes, till they have been relayed to 20 peers
// then the tx is relayed every 3 hours, just in case
func (pool *Regpool) Relayer_and_Cleaner() {
for {
select {
case txid := <-pool.relayer:
if objecti, ok := pool.txs.Load(txid); !ok {
break
} else {
// reaching here means we have the tx; return the pointer back
object := objecti.(*regpool_object)
if pool.P2P_TX_Relayer != nil {
relayed_count := pool.P2P_TX_Relayer(object.Tx, 0)
//relayed_count := 0
if relayed_count > 0 {
object.Relayed += relayed_count
rlog.Tracef(1, "Relayed %s to %d peers (%d %d)", txid, relayed_count, object.Relayed, (time.Now().Unix() - object.RelayedAt))
object.RelayedAt = time.Now().Unix()
}
}
}
case <-pool.Exit_Mutex:
return
case <-time.After(400 * time.Millisecond):
}
pool.txs.Range(func(ktmp, value interface{}) bool {
k := ktmp.(crypto.Hash)
v := value.(*regpool_object)
select { // exit fast if possible
case <-pool.Exit_Mutex:
return false
default:
}
if v.Relayed < 10 || // relay it now
(v.Relayed >= 4 && v.Relayed <= 20 && (time.Now().Unix()-v.RelayedAt) > 5) || // relay it now
(time.Now().Unix()-v.RelayedAt) > 4 {
if pool.P2P_TX_Relayer != nil {
relayed_count := pool.P2P_TX_Relayer(v.Tx, 0)
//relayed_count := 0
if relayed_count > 0 {
v.Relayed += relayed_count
//loggerpool.Debugf("%d %d\n",time.Now().Unix(), v.RelayedAt)
rlog.Tracef(1, "Relayed %s to %d peers (%d %d)", k, relayed_count, v.Relayed, (time.Now().Unix() - v.RelayedAt))
v.RelayedAt = time.Now().Unix()
//loggerpool.Debugf("%d %d",time.Now().Unix(), v.RelayedAt)
}
}
}
return true
})
// loggerpool.Warnf("send Pool lock released")
//pool.Unlock()
}
}
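The relay decision buried in the Range callback above can be isolated as a pure function. The thresholds below are copied from the code; note they are far smaller than the 30-minute/3-hour schedule the comment before the function describes, which suggests debug-time constants were left in:

```go
package main

import "fmt"

// shouldRelay mirrors the decision in Relayer_and_Cleaner's Range callback:
// relay aggressively below 10 successful relays, then only after a time gap
// since the last relay (constants copied from the code above, in seconds).
func shouldRelay(relayed int, now, relayedAt int64) bool {
	elapsed := now - relayedAt
	return relayed < 10 ||
		(relayed >= 4 && relayed <= 20 && elapsed > 5) ||
		elapsed > 4
}

func main() {
	fmt.Println(shouldRelay(3, 100, 100)) // true: under 10 relays, always relay
	fmt.Println(shouldRelay(25, 100, 98)) // false: well-relayed, only 2s elapsed
	fmt.Println(shouldRelay(25, 100, 90)) // true: 10s since last relay
}
```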

File diff suppressed because one or more lines are too long


@@ -1,306 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package blockchain
// this file implements necessary structure to SC handling
import "fmt"
import "runtime/debug"
import "encoding/binary"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/dvm"
//import "github.com/deroproject/graviton"
import "github.com/deroproject/derohe/rpc"
import "github.com/deroproject/derohe/transaction"
import "github.com/romana/rlog"
// currently we have 2 contract types
// 1 OPEN
// 2 PRIVATE
type SC_META_DATA struct {
Type byte // 0 Open, 1 Private
Balance uint64
DataHash crypto.Hash // hash of SC data tree is here, so as the meta tree verifies all SC DATA
}
// serialize the structure
func (meta SC_META_DATA) MarshalBinary() (buf []byte) {
buf = make([]byte, 41, 41)
buf[0] = meta.Type
binary.LittleEndian.PutUint64(buf[1:], meta.Balance)
copy(buf[1+8:], meta.DataHash[:]) // hash occupies buf[9:41]
return
}
func (meta *SC_META_DATA) UnmarshalBinary(buf []byte) (err error) {
if len(buf) != 1+8+32 {
return fmt.Errorf("input buffer should be of 41 bytes in length")
}
meta.Type = buf[0]
meta.Balance = binary.LittleEndian.Uint64(buf[1:])
copy(meta.DataHash[:], buf[1+8:])
return nil
}
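The 41-byte SC_META_DATA layout is 1 type byte, an 8-byte little-endian balance, then the 32-byte data hash at offset 9. A standalone round-trip sketch of that layout (packMeta/unpackMeta are illustrative helpers, not the real methods):

```go
package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
)

// packMeta writes the SC_META_DATA layout: 1 type byte,
// 8-byte little-endian balance, 32-byte hash at offset 9.
func packMeta(typ byte, balance uint64, hash [32]byte) []byte {
	buf := make([]byte, 41)
	buf[0] = typ
	binary.LittleEndian.PutUint64(buf[1:], balance)
	copy(buf[1+8:], hash[:])
	return buf
}

// unpackMeta is the inverse, with the same length check as UnmarshalBinary.
func unpackMeta(buf []byte) (typ byte, balance uint64, hash [32]byte, err error) {
	if len(buf) != 1+8+32 {
		return 0, 0, hash, fmt.Errorf("input buffer should be of 41 bytes in length")
	}
	typ = buf[0]
	balance = binary.LittleEndian.Uint64(buf[1:])
	copy(hash[:], buf[1+8:])
	return typ, balance, hash, nil
}

func main() {
	var h [32]byte
	h[0], h[31] = 0xaa, 0xbb
	blob := packMeta(1, 5000, h)
	typ, bal, got, err := unpackMeta(blob)
	fmt.Println(err == nil, typ == 1, bal == 5000, bytes.Equal(got[:], h[:])) // true true true true
}
```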
func SC_Meta_Key(scid crypto.Hash) []byte {
return scid[:]
}
func SC_Code_Key(scid crypto.Hash) []byte {
return dvm.Variable{Type: dvm.String, Value: "C"}.MarshalBinaryPanic()
}
// this will process the SC transaction
// the tx should only be processed if it has not already been processed
func (chain *Blockchain) execute_sc_function(w_sc_tree *Tree_Wrapper, data_tree *Tree_Wrapper, scid crypto.Hash, bl_height, bl_topoheight uint64, bl_hash crypto.Hash, tx transaction.Transaction, entrypoint string, hard_fork_version_current int64) (gas uint64, err error) {
defer func() {
// safety so if anything wrong happens, verification fails
if r := recover(); r != nil {
if err == nil {
err = fmt.Errorf("Stack trace \n%s", debug.Stack())
}
logger.Warnf("Recovered while executing SC, stack trace below")
logger.Warnf("Stack trace \n%s", debug.Stack())
}
}()
//if !tx.Verify_SC_Signature() { // if tx is not SC TX, or Signature could not be verified skip it
// return
//}
tx_hash := tx.GetHash()
tx_store := dvm.Initialize_TX_store()
// used as value loader from disk
// this function is used to load any data required by the SC
diskloader := func(key dvm.DataKey, found *uint64) (result dvm.Variable) {
var exists bool
if result, exists = chain.LoadSCValue(data_tree, key.SCID, key.MarshalBinaryPanic()); exists {
*found = uint64(1)
}
//fmt.Printf("Loading from disk %+v result %+v found status %+v \n", key, result, exists)
return
}
diskloader_raw := func(key []byte) (value []byte, found bool) {
var err error
value, err = data_tree.Get(key[:])
if err != nil {
return value, false
}
if len(value) == 0 {
return value, false
}
//fmt.Printf("Loading from disk %+v result %+v found status %+v \n", key, result, exists)
return value, true
}
balance, sc_parsed, found := chain.ReadSC(w_sc_tree, data_tree, scid)
if !found {
rlog.Warnf("SC not found %s", scid)
err = fmt.Errorf("SC not found %s", scid)
return
}
// if we found the SC in parsed form, check whether entrypoint is found
function, ok := sc_parsed.Functions[entrypoint]
if !ok {
rlog.Warnf("stored SC does not contain entrypoint '%s' scid %s \n", entrypoint, scid)
err = fmt.Errorf("stored SC does not contain entrypoint '%s' scid %s \n", entrypoint, scid)
return
}
_ = function
//fmt.Printf("entrypoint found '%s' scid %s\n", entrypoint, scid)
//if len(sc_tx.Params) == 0 { // initialize params if not initialized earlier
// sc_tx.Params = map[string]string{}
//}
//sc_tx.Params["value"] = fmt.Sprintf("%d", sc_tx.Value) // override value
tx_store.DiskLoader = diskloader // hook up loading from chain
tx_store.DiskLoaderRaw = diskloader_raw
tx_store.BalanceAtStart = balance
tx_store.SCID = scid
//fmt.Printf("tx store %v\n", tx_store)
// setup block hash, height, topoheight correctly
state := &dvm.Shared_State{
Store: tx_store,
Chain_inputs: &dvm.Blockchain_Input{
BL_HEIGHT: bl_height,
BL_TOPOHEIGHT: uint64(bl_topoheight),
SCID: scid,
BLID: bl_hash,
TXID: tx_hash,
Signer: string(tx.MinerAddress[:]),
},
}
for p := range tx.Payloads {
if tx.Payloads[p].SCID.IsZero() {
state.DERO_Received += tx.Payloads[p].BurnValue
}
if tx.Payloads[p].SCID == scid {
state.Token_Received += tx.Payloads[p].BurnValue
}
}
// setup balance correctly
tx_store.ReceiveInternal(scid, state.DERO_Received)
// we have an entrypoint, now we must setup parameters and dvm
// all parameters are in string form to bypass translation issues in middle layers
params := map[string]interface{}{}
for _, p := range function.Params {
switch {
case p.Type == dvm.Uint64 && p.Name == "value":
params[p.Name] = fmt.Sprintf("%d", state.DERO_Received) // override value
case p.Type == dvm.Uint64 && tx.SCDATA.Has(p.Name, rpc.DataUint64):
params[p.Name] = fmt.Sprintf("%d", tx.SCDATA.Value(p.Name, rpc.DataUint64).(uint64))
case p.Type == dvm.String && tx.SCDATA.Has(p.Name, rpc.DataString):
params[p.Name] = tx.SCDATA.Value(p.Name, rpc.DataString).(string)
default:
err = fmt.Errorf("entrypoint '%s' parameter type missing or not yet supported (%+v)", entrypoint, p)
return
}
}
result, err := dvm.RunSmartContract(&sc_parsed, entrypoint, state, params)
//fmt.Printf("result value %+v\n", result)
if err != nil {
rlog.Warnf("entrypoint '%s' scid %s err execution '%s' \n", entrypoint, scid, err)
return
}
if err == nil && result.Type == dvm.Uint64 && result.Value.(uint64) == 0 { // confirm the changes
for k, v := range tx_store.Keys {
chain.StoreSCValue(data_tree, scid, k.MarshalBinaryPanic(), v.MarshalBinaryPanic())
}
for k, v := range tx_store.RawKeys {
chain.StoreSCValue(data_tree, scid, []byte(k), v)
}
data_tree.leftover_balance = tx_store.Balance(scid)
data_tree.transfere = append(data_tree.transfere, tx_store.Transfers[scid].TransferE...)
} else { // discard all changes, since we never write to store immediately, they are purged, however we need to return any value associated
err = fmt.Errorf("Discarded knowingly")
return
}
//fmt.Printf("SC execution finished amount value %d\n", tx.Value)
return
}
// reads SC, balance
func (chain *Blockchain) ReadSC(w_sc_tree *Tree_Wrapper, data_tree *Tree_Wrapper, scid crypto.Hash) (balance uint64, sc dvm.SmartContract, found bool) {
meta_bytes, err := w_sc_tree.Get(SC_Meta_Key(scid))
if err != nil {
return
}
var meta SC_META_DATA // the meta contains the link to the SC bytes
if err := meta.UnmarshalBinary(meta_bytes); err != nil {
return
}
balance = meta.Balance
sc_bytes, err := data_tree.Get(SC_Code_Key(scid))
if err != nil {
return
}
var v dvm.Variable
if err = v.UnmarshalBinary(sc_bytes); err != nil {
return
}
sc, pos, err := dvm.ParseSmartContract(v.Value.(string))
if err != nil {
return
}
_ = pos
found = true
return
}
func (chain *Blockchain) LoadSCValue(data_tree *Tree_Wrapper, scid crypto.Hash, key []byte) (v dvm.Variable, found bool) {
//fmt.Printf("loading fromdb %s %s \n", scid, key)
object_data, err := data_tree.Get(key[:])
if err != nil {
return v, false
}
if len(object_data) == 0 {
return v, false
}
if err = v.UnmarshalBinary(object_data); err != nil {
return v, false
}
return v, true
}
// reads a value from SC storage
func (chain *Blockchain) ReadSCValue(data_tree *Tree_Wrapper, scid crypto.Hash, key interface{}) (value interface{}) {
var keybytes []byte
if key == nil {
return
}
switch k := key.(type) {
case uint64:
keybytes = dvm.DataKey{Key: dvm.Variable{Type: dvm.Uint64, Value: k}}.MarshalBinaryPanic()
case string:
keybytes = dvm.DataKey{Key: dvm.Variable{Type: dvm.String, Value: k}}.MarshalBinaryPanic()
case int64:
keybytes = dvm.DataKey{Key: dvm.Variable{Type: dvm.String, Value: k}}.MarshalBinaryPanic()
default:
return
}
value_var, found := chain.LoadSCValue(data_tree, scid, keybytes)
//fmt.Printf("read value %+v", value_var)
if found && value_var.Type != dvm.Invalid {
value = value_var.Value
}
return
}
// store the value in the chain
func (chain *Blockchain) StoreSCValue(data_tree *Tree_Wrapper, scid crypto.Hash, key, value []byte) {
data_tree.Put(key, value)
return
}


@@ -1,393 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package blockchain
import "fmt"
import "sync"
import "math/big"
import "io/ioutil"
import "crypto/rand"
import "path/filepath"
import log "github.com/sirupsen/logrus"
import "github.com/deroproject/derohe/globals"
import "github.com/deroproject/derohe/block"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/graviton"
import "github.com/golang/groupcache/lru"
// though these can be done within a single DB, they are separated purely for clarity
type storage struct {
Balance_store *graviton.Store // stores most critical data, only history can be purged, its merkle tree is stored in the block
Block_tx_store storefs // stores blocks which can be discarded at any time (only past, but keep recent history for rollback)
Topo_store storetopofs // stores topomapping which can only be discarded by punching holes in the start of the file
}
var store_logger *log.Entry
func (s *storage) Initialize(params map[string]interface{}) (err error) {
store_logger = globals.Logger.WithFields(log.Fields{"com": "STORE"})
current_path := filepath.Join(globals.GetDataDirectory())
if params["--simulator"] == true {
if s.Balance_store, err = graviton.NewMemStore(); err == nil {
current_path, err := ioutil.TempDir("", "dero_simulation")
if err != nil {
return err
}
if err = s.Topo_store.Open(current_path); err == nil {
s.Block_tx_store.basedir = current_path
}
}
} else {
if s.Balance_store, err = graviton.NewDiskStore(filepath.Join(current_path, "balances")); err == nil {
if err = s.Topo_store.Open(current_path); err == nil {
s.Block_tx_store.basedir = current_path
}
}
}
if err != nil {
store_logger.Fatalf("Cannot open store err %s", err)
}
store_logger.Infof("Initialized store at path %s", current_path)
return nil
}
func (s *storage) IsBalancesIntialized() bool {
var err error
var buf [64]byte
var balancehash, random_hash [32]byte
balance_ss, _ := s.Balance_store.LoadSnapshot(0) // load most recent snapshot
balancetree, _ := balance_ss.GetTree(config.BALANCE_TREE)
// avoid hardcoding any hash
if balancehash, err = balancetree.Hash(); err == nil {
if _, err = rand.Read(buf[:]); err == nil {
random_tree, _ := balance_ss.GetTree(string(buf[:]))
if random_hash, err = random_tree.Hash(); err == nil {
if random_hash == balancehash {
return false
}
}
}
}
if err != nil {
panic("database issues")
}
return true
}
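IsBalancesIntialized relies on the fact that every tree absent from the store hashes to the same empty root: the balance tree is "initialized" exactly when its root differs from that of a freshly named random tree. A toy model of that idea (the map-backed store stands in for graviton, whose API is not reproduced here):

```go
package main

import "fmt"

// every absent tree shares the same empty root, mirroring the trick
// in IsBalancesIntialized: if the balance tree's root equals a random
// nonexistent tree's root, nothing was ever written to it.
const emptyRoot = "<empty>"

type store map[string]string // tree name -> merkle root

func (s store) rootOf(name string) string {
	if r, ok := s[name]; ok {
		return r
	}
	return emptyRoot
}

func (s store) isInitialized(balanceTree, randomName string) bool {
	return s.rootOf(balanceTree) != s.rootOf(randomName)
}

func main() {
	fresh := store{}
	fmt.Println(fresh.isInitialized("balance", "zzz-random")) // false

	used := store{"balance": "0xabc123"}
	fmt.Println(used.isInitialized("balance", "zzz-random")) // true
}
```

The random tree name plays the role of the 64 random bytes in the original: a name that is essentially guaranteed never to have been written, hence always carrying the empty root.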
func (chain *Blockchain) StoreBlock(bl *block.Block) {
hash := bl.GetHash()
serialized_bytes := bl.Serialize() // we are storing the miner transactions within
// calculate cumulative difficulty at last block
if len(bl.Tips) == 0 { // genesis block has no parent
difficulty_of_current_block := new(big.Int).SetUint64(1) // this is never used, as genesis block is a sync block, only its cumulative difficulty is used
cumulative_difficulty := new(big.Int).SetUint64(1) // genesis block cumulative difficulty is 1
err := chain.Store.Block_tx_store.WriteBlock(hash, serialized_bytes, difficulty_of_current_block, cumulative_difficulty)
if err != nil {
panic("error while writing block")
}
} else {
difficulty_of_current_block := chain.Get_Difficulty_At_Tips(bl.Tips)
// NOTE: difficulty must be stored before cumulative difficulty calculation, since it is used while calculating Cdiff
base, base_height := chain.find_common_base(bl.Tips)
// this function requires block and difficulty to be pre-saved
//work_map, cumulative_difficulty := chain.FindTipWorkScore( hash, base, base_height)
//this function is a copy of above function, however, it uses memory copies
work_map, cumulative_difficulty := chain.FindTipWorkScore_duringsave(bl, difficulty_of_current_block, base, base_height)
_ = work_map
err := chain.Store.Block_tx_store.WriteBlock(hash, serialized_bytes, difficulty_of_current_block, cumulative_difficulty)
if err != nil {
panic("error while writing block")
}
}
}
// loads a block from disk, deserializes it
func (chain *Blockchain) Load_BL_FROM_ID(hash [32]byte) (*block.Block, error) {
var bl block.Block
if block_data, err := chain.Store.Block_tx_store.ReadBlock(hash); err == nil {
if err = bl.Deserialize(block_data); err != nil { // we should deserialize the block here
logger.Warnf("Error deserializing block, block id %x len(data) %d data %x err %s", hash[:], len(block_data), block_data, err)
return nil, err
}
return &bl, nil
} else {
return nil, err
}
}
// confirm whether the block exists in the database
// this only confirms whether the block has been downloaded
// a separate check is required, whether the block is valid ( satisfies PoW and other conditions)
// we will not add a block to store, until it satisfies PoW
func (chain *Blockchain) Block_Exists(h crypto.Hash) bool {
_, err := chain.Load_BL_FROM_ID(h)
if err == nil {
return true
}
return false
}
// This will get the biggest height among the tips, for hardfork version and other calculations
// get the biggest height of the parents, add 1
func (chain *Blockchain) Calculate_Height_At_Tips(tips []crypto.Hash) int64 {
height := int64(0)
if len(tips) == 0 { // genesis block has no parent
} else { // find the best height of past
for i := range tips {
bl, err := chain.Load_BL_FROM_ID(tips[i])
if err != nil {
panic(err)
}
past_height := int64(bl.Height)
if height <= past_height {
height = past_height
}
}
height++
}
return height
}
func (chain *Blockchain) Load_Block_Timestamp(h crypto.Hash) uint64 {
bl, err := chain.Load_BL_FROM_ID(h)
if err != nil {
panic(err)
}
return bl.Timestamp
}
func (chain *Blockchain) Load_Block_Height(h crypto.Hash) (height int64) {
defer func() {
if r := recover(); r != nil {
height = -1
}
}()
bl, err := chain.Load_BL_FROM_ID(h)
if err != nil {
panic(err)
}
height = int64(bl.Height)
return
}
func (chain *Blockchain) Load_Height_for_BL_ID(h crypto.Hash) int64 {
return chain.Load_Block_Height(h)
}
var past_cache = lru.New(10240)
var past_cache_lock sync.Mutex
// all the immediate past of a block
func (chain *Blockchain) Get_Block_Past(hash crypto.Hash) (blocks []crypto.Hash) {
//fmt.Printf("loading tips for block %x\n", hash)
past_cache_lock.Lock()
defer past_cache_lock.Unlock()
if keysi, ok := past_cache.Get(hash); ok {
keys := keysi.([]crypto.Hash)
blocks = make([]crypto.Hash, len(keys))
for i := range keys {
copy(blocks[i][:], keys[i][:])
}
return
}
bl, err := chain.Load_BL_FROM_ID(hash)
if err != nil {
panic(err)
}
blocks = make([]crypto.Hash, 0, len(bl.Tips))
for i := range bl.Tips {
blocks = append(blocks, bl.Tips[i])
}
cache_copy := make([]crypto.Hash, len(blocks))
copy(cache_copy, blocks)
//set in cache
past_cache.Add(hash, cache_copy)
return
}
func (chain *Blockchain) Load_Block_Difficulty(h crypto.Hash) *big.Int {
if diff, err := chain.Store.Block_tx_store.ReadBlockDifficulty(h); err != nil {
panic(err)
} else {
return diff
}
}
func (chain *Blockchain) Load_Block_Cumulative_Difficulty(h crypto.Hash) *big.Int {
if cdiff, err := chain.Store.Block_tx_store.ReadBlockCDifficulty(h); err != nil {
panic(err)
} else {
return cdiff
}
}
func (chain *Blockchain) Get_Top_ID() crypto.Hash {
var h crypto.Hash
topo_count := chain.Store.Topo_store.Count()
if topo_count == 0 {
return h
}
cindex := topo_count - 1
for {
r, err := chain.Store.Topo_store.Read(cindex)
if err != nil {
panic(err)
}
if !r.IsClean() {
return r.BLOCK_ID
}
if cindex == 0 {
return h
}
cindex--
}
}
// faster bootstrap
func (chain *Blockchain) Load_TOP_HEIGHT() int64 {
return chain.Load_Block_Height(chain.Get_Top_ID())
}
func (chain *Blockchain) Load_TOPO_HEIGHT() int64 {
topo_count := chain.Store.Topo_store.Count()
if topo_count == 0 {
return 0
}
cindex := topo_count - 1
for {
r, err := chain.Store.Topo_store.Read(cindex)
if err != nil {
panic(err)
}
if !r.IsClean() {
return cindex
}
if cindex == 0 {
return 0
}
cindex--
}
}
func (chain *Blockchain) Load_Block_Topological_order_at_index(index_pos int64) (hash crypto.Hash, err error) {
r, err := chain.Store.Topo_store.Read(index_pos)
if err != nil {
return hash, err
}
if !r.IsClean() {
return r.BLOCK_ID, nil
} else {
panic("cannot query clean block id")
}
}
// load the state merkle hash, combining the balance tree and the SC meta tree
func (chain *Blockchain) Load_Merkle_Hash(index_pos int64) (hash crypto.Hash, err error) {
toporecord, err := chain.Store.Topo_store.Read(index_pos)
if err != nil {
return hash, err
}
if toporecord.IsClean() {
err = fmt.Errorf("cannot query clean block")
return
}
ss, err := chain.Store.Balance_store.LoadSnapshot(toporecord.State_Version)
if err != nil {
return
}
balance_tree, err := ss.GetTree(config.BALANCE_TREE)
if err != nil {
return
}
sc_meta_tree, err := ss.GetTree(config.SC_META)
if err != nil {
return
}
balance_merkle_hash, err := balance_tree.Hash()
if err != nil {
return
}
meta_merkle_hash, err := sc_meta_tree.Hash()
if err != nil {
return
}
for i := range balance_merkle_hash {
hash[i] = balance_merkle_hash[i] ^ meta_merkle_hash[i]
}
return hash, nil
}
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package blockchain
// this file implements a filesystem store which is used to store blocks/transactions directly in the file system
import "os"
import "fmt"
import "strings"
import "io/ioutil"
import "math/big"
import "path/filepath"
type storefs struct {
basedir string
}
// the filename encodes the following information
// hex block id (64 chars) + ".block" + "_" + difficulty (decimal) + "_" + cumulative difficulty (decimal)
func (s *storefs) ReadBlock(h [32]byte) ([]byte, error) {
var dummy [32]byte
if h == dummy {
return nil, fmt.Errorf("empty block")
}
dir := filepath.Join(filepath.Join(s.basedir, "bltx_store"), fmt.Sprintf("%02x", h[0]), fmt.Sprintf("%02x", h[1]), fmt.Sprintf("%02x", h[2]))
files, err := ioutil.ReadDir(dir)
if err != nil {
return nil, err
}
filename_start := fmt.Sprintf("%x.block", h[:])
for _, file := range files {
if strings.HasPrefix(file.Name(), filename_start) {
//fmt.Printf("Reading block with filename %s\n", file.Name())
file := filepath.Join(filepath.Join(s.basedir, "bltx_store"), fmt.Sprintf("%02x", h[0]), fmt.Sprintf("%02x", h[1]), fmt.Sprintf("%02x", h[2]), file.Name())
return ioutil.ReadFile(file)
}
}
return nil, os.ErrNotExist
}
func (s *storefs) DeleteBlock(h [32]byte) error {
dir := filepath.Join(filepath.Join(s.basedir, "bltx_store"), fmt.Sprintf("%02x", h[0]), fmt.Sprintf("%02x", h[1]), fmt.Sprintf("%02x", h[2]))
files, err := ioutil.ReadDir(dir)
if err != nil {
return err
}
filename_start := fmt.Sprintf("%x.block", h[:])
for _, file := range files {
if strings.HasPrefix(file.Name(), filename_start) {
file := filepath.Join(filepath.Join(s.basedir, "bltx_store"), fmt.Sprintf("%02x", h[0]), fmt.Sprintf("%02x", h[1]), fmt.Sprintf("%02x", h[2]), file.Name())
return os.Remove(file)
}
}
return os.ErrNotExist
}
func (s *storefs) ReadBlockDifficulty(h [32]byte) (*big.Int, error) {
dir := filepath.Join(filepath.Join(s.basedir, "bltx_store"), fmt.Sprintf("%02x", h[0]), fmt.Sprintf("%02x", h[1]), fmt.Sprintf("%02x", h[2]))
files, err := ioutil.ReadDir(dir)
if err != nil {
return nil, err
}
filename_start := fmt.Sprintf("%x.block", h[:])
for _, file := range files {
if strings.HasPrefix(file.Name(), filename_start) {
diff := new(big.Int)
parts := strings.Split(file.Name(), "_")
if len(parts) != 3 {
panic("such filename cannot occur")
}
_, err := fmt.Sscan(parts[1], diff)
if err != nil {
return nil, err
}
return diff, nil
}
}
return nil, os.ErrNotExist
}
func (s *storefs) ReadBlockCDifficulty(h [32]byte) (*big.Int, error) {
dir := filepath.Join(filepath.Join(s.basedir, "bltx_store"), fmt.Sprintf("%02x", h[0]), fmt.Sprintf("%02x", h[1]), fmt.Sprintf("%02x", h[2]))
files, err := ioutil.ReadDir(dir)
if err != nil {
return nil, err
}
filename_start := fmt.Sprintf("%x.block", h[:])
for _, file := range files {
if strings.HasPrefix(file.Name(), filename_start) {
diff := new(big.Int)
parts := strings.Split(file.Name(), "_")
if len(parts) != 3 {
panic("such filename cannot occur")
}
_, err := fmt.Sscan(parts[2], diff)
if err != nil {
return nil, err
}
return diff, nil
}
}
return nil, os.ErrNotExist
}
func (s *storefs) WriteBlock(h [32]byte, data []byte, difficulty *big.Int, cdiff *big.Int) (err error) {
dir := filepath.Join(filepath.Join(s.basedir, "bltx_store"), fmt.Sprintf("%02x", h[0]), fmt.Sprintf("%02x", h[1]), fmt.Sprintf("%02x", h[2]))
file := filepath.Join(dir, fmt.Sprintf("%x.block_%s_%s", h[:], difficulty.String(), cdiff.String()))
if err = os.MkdirAll(dir, 0700); err != nil {
return err
}
return ioutil.WriteFile(file, data, 0600)
}
func (s *storefs) ReadTX(h [32]byte) ([]byte, error) {
file := filepath.Join(filepath.Join(s.basedir, "bltx_store"), fmt.Sprintf("%02x", h[0]), fmt.Sprintf("%02x", h[1]), fmt.Sprintf("%02x", h[2]), fmt.Sprintf("%x.tx", h[:]))
return ioutil.ReadFile(file)
}
func (s *storefs) WriteTX(h [32]byte, data []byte) (err error) {
dir := filepath.Join(filepath.Join(s.basedir, "bltx_store"), fmt.Sprintf("%02x", h[0]), fmt.Sprintf("%02x", h[1]), fmt.Sprintf("%02x", h[2]))
file := filepath.Join(dir, fmt.Sprintf("%x.tx", h[:]))
if err = os.MkdirAll(dir, 0700); err != nil {
return err
}
return ioutil.WriteFile(file, data, 0600)
}
func (s *storefs) DeleteTX(h [32]byte) (err error) {
dir := filepath.Join(filepath.Join(s.basedir, "bltx_store"), fmt.Sprintf("%02x", h[0]), fmt.Sprintf("%02x", h[1]), fmt.Sprintf("%02x", h[2]))
file := filepath.Join(dir, fmt.Sprintf("%x.tx", h[:]))
return os.Remove(file)
}
package blockchain
import "os"
import "fmt"
import "math"
import "path/filepath"
import "encoding/binary"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/cryptography/crypto"
type TopoRecord struct {
BLOCK_ID [32]byte
State_Version uint64
Height int64
}
const TOPORECORD_SIZE int64 = 48
// this file implements a filesystem store which maps topoheight to block id, with the state version and height tied to each record
type storetopofs struct {
topomapping *os.File
}
func (s TopoRecord) String() string {
return fmt.Sprintf("blid %x state version %d height %d", s.BLOCK_ID[:], s.State_Version, s.Height)
}
func (s *storetopofs) Open(basedir string) (err error) {
s.topomapping, err = os.OpenFile(filepath.Join(basedir, "topo.map"), os.O_RDWR|os.O_CREATE, 0700)
return err
}
func (s *storetopofs) Count() int64 {
fstat, err := s.topomapping.Stat()
if err != nil {
panic(fmt.Sprintf("cannot stat topofile. err %s", err))
}
count := int64(fstat.Size() / int64(TOPORECORD_SIZE))
for ; count >= 1; count-- {
if record, err := s.Read(count - 1); err == nil && !record.IsClean() {
break
} else if err != nil {
panic(fmt.Sprintf("cannot read topofile. err %s", err))
}
}
return count
}
// it basically represents Load_Block_Topological_order_at_index
// reads an entry at specific location
func (s *storetopofs) Read(index int64) (TopoRecord, error) {
var buf [TOPORECORD_SIZE]byte
var record TopoRecord
if n, err := s.topomapping.ReadAt(buf[:], index*TOPORECORD_SIZE); int64(n) != TOPORECORD_SIZE {
return record, err
}
copy(record.BLOCK_ID[:], buf[:])
record.State_Version = binary.LittleEndian.Uint64(buf[len(record.BLOCK_ID):])
record.Height = int64(binary.LittleEndian.Uint64(buf[len(record.BLOCK_ID)+8:]))
return record, nil
}
func (s *storetopofs) Write(index int64, blid [32]byte, state_version uint64, height int64) (err error) {
var buf [TOPORECORD_SIZE]byte
var record TopoRecord
copy(buf[:], blid[:])
binary.LittleEndian.PutUint64(buf[len(record.BLOCK_ID):], state_version)
//height := chain.Load_Height_for_BL_ID(blid)
binary.LittleEndian.PutUint64(buf[len(record.BLOCK_ID)+8:], uint64(height))
_, err = s.topomapping.WriteAt(buf[:], index*TOPORECORD_SIZE)
return err
}
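Each record is a fixed 48-byte slot: a 32-byte block id followed by two little-endian 8-byte integers, so record N lives at byte offset N*48. A self-contained sketch of that layout (mirroring the Read/Write pair above, minus the file I/O):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

const recordSize = 48 // 32-byte block id + 8-byte state version + 8-byte height

// encodeRecord / decodeRecord mirror the fixed 48-byte layout that
// storetopofs.Write and storetopofs.Read use (little-endian integers).
func encodeRecord(blid [32]byte, version uint64, height int64) [recordSize]byte {
	var buf [recordSize]byte
	copy(buf[:], blid[:])
	binary.LittleEndian.PutUint64(buf[32:], version)
	binary.LittleEndian.PutUint64(buf[40:], uint64(height))
	return buf
}

func decodeRecord(buf [recordSize]byte) (blid [32]byte, version uint64, height int64) {
	copy(blid[:], buf[:32])
	version = binary.LittleEndian.Uint64(buf[32:])
	height = int64(binary.LittleEndian.Uint64(buf[40:]))
	return
}

func main() {
	var blid [32]byte
	blid[0] = 0x01
	buf := encodeRecord(blid, 5, 1000)
	b, v, h := decodeRecord(buf)
	fmt.Println(b[0], v, h)
}
```

The fixed size is what makes Count() trivial (file size / 48) and lets Read seek directly with ReadAt.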
func (s *storetopofs) Clean(index int64) (err error) {
var state_version uint64
var blid [32]byte
return s.Write(index, blid, state_version, 0)
}
// whether record is clean
func (r *TopoRecord) IsClean() bool {
if r.State_Version != 0 {
return false
}
for _, x := range r.BLOCK_ID {
if x != 0 {
return false
}
}
return true
}
var pruned_till int64 = -1
// locates the prune topoheight, i.e. up to where the history has been pruned
// this is not used anywhere in the consensus and can be modified any way possible
// this is for the wallet
func (s *storetopofs) LocatePruneTopo() int64 {
if pruned_till >= 0 { // return cached result
return pruned_till
}
count := s.Count()
if count < 10 {
return 0
}
zero_block, err := s.Read(0)
if err != nil || zero_block.IsClean() {
return 0
}
fifth_block, err := s.Read(5)
if err != nil || fifth_block.IsClean() {
return 0
}
// we are assuming at least 5 blocks are pruned
if zero_block.State_Version != fifth_block.State_Version {
return 0
}
// now we must find the point where version number = zero_block.State_Version + 1
low := int64(0) // in case of purging DB, this should start from N
high := int64(count)
prune_topo := int64(math.MaxInt64)
for low <= high {
median := (low + high) / 2
median_block, _ := s.Read(median)
if median_block.State_Version >= (zero_block.State_Version + 1) {
if prune_topo > median {
prune_topo = median
}
high = median - 1
} else {
low = median + 1
}
}
prune_topo--
pruned_till = prune_topo
return prune_topo
}
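The loop above is a lower-bound binary search: it finds the smallest topoheight whose state version is at least `zero_block.State_Version + 1`, then steps back one to get the prune point. A minimal sketch over a plain slice of versions (names here are illustrative; versions must be non-decreasing, as they are in the topo file):

```go
package main

import "fmt"

// lowestIndexWithVersion is the lower-bound search LocatePruneTopo
// performs: the smallest index whose state version is >= target.
// The prune point is then one index before it.
func lowestIndexWithVersion(versions []uint64, target uint64) int {
	low, high := 0, len(versions)-1
	ans := len(versions)
	for low <= high {
		mid := (low + high) / 2
		if versions[mid] >= target {
			if mid < ans {
				ans = mid
			}
			high = mid - 1
		} else {
			low = mid + 1
		}
	}
	return ans
}

func main() {
	// pruned chain: topo 0..5 share state version 4, history resumes at 6
	versions := []uint64{4, 4, 4, 4, 4, 4, 5, 6, 7}
	fmt.Println(lowestIndexWithVersion(versions, 5) - 1) // prune point
}
```

This is why the function first checks that record 0 and record 5 share a version: identical versions at the start are the signature of pruned history.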
// exported from chain
func (chain *Blockchain) LocatePruneTopo() int64 {
return chain.Store.Topo_store.LocatePruneTopo()
}
func (s *storetopofs) binarySearchHeight(targetheight int64) (blids []crypto.Hash, topos []int64) {
startIndex := int64(0)
total_records := int64(s.Count())
endIndex := total_records
midIndex := total_records / 2
if total_records == 0 { // no record
return
}
for startIndex <= endIndex {
record, _ := s.Read(midIndex)
if record.Height >= targetheight-((config.STABLE_LIMIT*4)/2) && record.Height <= targetheight+((config.STABLE_LIMIT*4)/2) {
break
}
if record.Height >= targetheight {
endIndex = midIndex - 1
midIndex = (startIndex + endIndex) / 2
continue
}
startIndex = midIndex + 1
midIndex = (startIndex + endIndex) / 2
}
for i, count := midIndex, 0; i >= 0 && count < 100; i, count = i-1, count+1 {
record, _ := s.Read(i)
if record.Height == targetheight {
blids = append(blids, record.BLOCK_ID)
topos = append(topos, i)
}
}
for i, count := midIndex, 0; i < total_records && count < 100; i, count = i+1, count+1 {
record, _ := s.Read(i)
if record.Height == targetheight {
blids = append(blids, record.BLOCK_ID)
topos = append(topos, i)
}
}
blids, topos = SliceUniqTopoRecord(blids, topos) // unique the record
return
}
// SliceUniq removes duplicate values in given slice
func SliceUniqTopoRecord(s []crypto.Hash, h []int64) ([]crypto.Hash, []int64) {
for i := 0; i < len(s); i++ {
for i2 := i + 1; i2 < len(s); i2++ {
if s[i] == s[i2] {
// delete
s = append(s[:i2], s[i2+1:]...)
h = append(h[:i2], h[i2+1:]...)
i2--
}
}
}
return s, h
}
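SliceUniqTopoRecord deletes duplicates in lock-step so the hash slice and the topo-index slice stay aligned. A compact sketch of that paired dedup (using [32]byte in place of crypto.Hash):

```go
package main

import "fmt"

// dedup removes duplicates pair-wise, in the style of
// SliceUniqTopoRecord: when a hash repeats, the later copy and its
// paired topo index are both dropped, keeping the slices aligned.
func dedup(s [][32]byte, h []int64) ([][32]byte, []int64) {
	for i := 0; i < len(s); i++ {
		for j := i + 1; j < len(s); j++ {
			if s[i] == s[j] {
				s = append(s[:j], s[j+1:]...)
				h = append(h[:j], h[j+1:]...)
				j--
			}
		}
	}
	return s, h
}

func main() {
	var a, b [32]byte
	b[0] = 1
	s, h := dedup([][32]byte{a, b, a}, []int64{10, 11, 12})
	fmt.Println(len(s), h)
}
```

Duplicates arise here because binarySearchHeight scans 100 records in each direction from the midpoint, so the midpoint's neighborhood can be visited twice.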
func (chain *Blockchain) Get_Blocks_At_Height(height int64) []crypto.Hash {
blids, _ := chain.Store.Topo_store.binarySearchHeight(height)
return blids
}
// since topological order might mutate, instead of doing cleanup, we double check the pointers
// we first locate the block and its height, then we locate that height, then we traverse 50 blocks up and 50 blocks down
func (chain *Blockchain) Is_Block_Topological_order(blid crypto.Hash) bool {
bl_height := chain.Load_Height_for_BL_ID(blid)
blids, _ := chain.Store.Topo_store.binarySearchHeight(bl_height)
for i := range blids {
if blids[i] == blid {
return true
}
}
return false
}
func (chain *Blockchain) Load_Block_Topological_order(blid crypto.Hash) int64 {
bl_height := chain.Load_Height_for_BL_ID(blid)
blids, topos := chain.Store.Topo_store.binarySearchHeight(bl_height)
for i := range blids {
if blids[i] == blid {
return topos[i]
}
}
return -1
}

View File

@ -1,377 +0,0 @@
package blockchain
// this file implements core execution of all changes to the blockchain homomorphically
import "fmt"
import "math/big"
import "golang.org/x/xerrors"
import "github.com/romana/rlog"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/cryptography/bn256"
import "github.com/deroproject/derohe/transaction"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/rpc"
import "github.com/deroproject/derohe/dvm"
import "github.com/deroproject/graviton"
// convert the bitcoin model to ours, but skip the initial 4 years of supply, so our total supply comes to around 10.5 million
const RewardReductionInterval = 210000 * 600 / config.BLOCK_TIME // 210000 comes from bitcoin
const BaseReward = 50 * 100000 * config.BLOCK_TIME / 600 // convert bitcoin reward system to our block
// CalcBlockSubsidy returns the subsidy amount a block at the provided height
// should have. This is mainly used for determining how much the coinbase for
// newly generated blocks awards as well as validating the coinbase for blocks
// has the expected value.
//
// The subsidy is halved every SubsidyReductionInterval blocks. Mathematically
// this is: baseSubsidy / 2^(height/SubsidyReductionInterval)
//
// At the target block generation rate for the main network, this is
// approximately every 4 years.
//
// basically, out of the bitcoin supply, we have wiped off the initial interval (this wipes off 10.5 million, so the total remaining is around 10.5 million)
func CalcBlockReward(height uint64) uint64 {
return BaseReward >> ((height + RewardReductionInterval) / RewardReductionInterval)
}
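Because the initial interval is skipped, `(height + interval) / interval` is already 1 at height 0, so the chain starts directly in the second bitcoin epoch. A worked sketch of the schedule, with an assumed placeholder value for config.BLOCK_TIME (18 here; the real value comes from the config package):

```go
package main

import "fmt"

// Assumed block time standing in for config.BLOCK_TIME.
const blockTime = 18

// Same construction as RewardReductionInterval / BaseReward above.
const rewardReductionInterval = 210000 * 600 / blockTime
const baseReward = 50 * 100000 * blockTime / 600

// calcBlockReward mirrors CalcBlockReward: the "+ interval" term
// skips the first bitcoin epoch entirely.
func calcBlockReward(height uint64) uint64 {
	return baseReward >> ((height + rewardReductionInterval) / rewardReductionInterval)
}

func main() {
	for _, h := range []uint64{0, rewardReductionInterval - 1, rewardReductionInterval, 3 * rewardReductionInterval} {
		fmt.Printf("height %d reward %d\n", h, calcBlockReward(h))
	}
}
```

So height 0 pays baseReward >> 1, and each further interval boundary shifts the reward right by one more bit, halving it.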
// process the miner tx, giving fees, miner reward etc
func (chain *Blockchain) process_miner_transaction(tx transaction.Transaction, genesis bool, balance_tree *graviton.Tree, fees uint64, height uint64, sideblock bool) {
var acckey crypto.Point
if err := acckey.DecodeCompressed(tx.MinerAddress[:]); err != nil {
panic(err)
}
if genesis { // process premine, register genesis block, dev key
balance := crypto.ConstructElGamal(acckey.G1(), crypto.ElGamal_BASE_G) // init zero balance
balance = balance.Plus(new(big.Int).SetUint64(tx.Value)) // add premine to users balance homomorphically
balance_tree.Put(tx.MinerAddress[:], balance.Serialize()) // reserialize and store
return
}
// general coin base transaction
base_reward := CalcBlockReward(uint64(height))
full_reward := base_reward + fees
if sideblock { // give devs reward
balance_serialized, err := balance_tree.Get(chain.Dev_Address_Bytes[:])
if err != nil {
panic(err)
}
balance := new(crypto.ElGamal).Deserialize(balance_serialized)
balance = balance.Plus(new(big.Int).SetUint64(full_reward)) // add devs reward to devs balance homomorphically
balance_tree.Put(chain.Dev_Address_Bytes[:], balance.Serialize()) // reserialize and store
} else { // give miner reward
balance_serialized, err := balance_tree.Get(tx.MinerAddress[:])
if err != nil {
panic(err)
}
balance := new(crypto.ElGamal).Deserialize(balance_serialized)
balance = balance.Plus(new(big.Int).SetUint64(full_reward)) // add miners reward to miners balance homomorphically
balance_tree.Put(tx.MinerAddress[:], balance.Serialize()) // reserialize and store
}
return
}
// process the tx, giving fees, miner reward etc
// this should be atomic, either all should be done or none at all
func (chain *Blockchain) process_transaction(changed map[crypto.Hash]*graviton.Tree, tx transaction.Transaction, balance_tree *graviton.Tree) uint64 {
//fmt.Printf("Processing/Executing transaction %s %s\n", tx.GetHash(), tx.TransactionType.String())
switch tx.TransactionType {
case transaction.REGISTRATION:
if _, err := balance_tree.Get(tx.MinerAddress[:]); err == nil {
return 0
}
if _, err := balance_tree.Get(tx.MinerAddress[:]); err != nil {
if !xerrors.Is(err, graviton.ErrNotFound) { // any other err except not found panic
panic(err)
}
} // address needs registration
var acckey crypto.Point
if err := acckey.DecodeCompressed(tx.MinerAddress[:]); err != nil {
panic(err)
}
zerobalance := crypto.ConstructElGamal(acckey.G1(), crypto.ElGamal_BASE_G)
zerobalance = zerobalance.Plus(new(big.Int).SetUint64(800000)) // add a fixed amount to every new wallet for more testing
balance_tree.Put(tx.MinerAddress[:], zerobalance.Serialize())
return 0 // registration doesn't give any fees
case transaction.BURN_TX, transaction.NORMAL, transaction.SC_TX: // burned amount is not added anywhere and thus lost forever
for t := range tx.Payloads {
var tree *graviton.Tree
if tx.Payloads[t].SCID.IsZero() {
tree = balance_tree
} else {
tree = changed[tx.Payloads[t].SCID]
}
for i := 0; i < int(tx.Payloads[t].Statement.RingSize); i++ {
key_pointer := tx.Payloads[t].Statement.Publickeylist_pointers[i*int(tx.Payloads[t].Statement.Bytes_per_publickey) : (i+1)*int(tx.Payloads[t].Statement.Bytes_per_publickey)]
_, key_compressed, balance_serialized, err := tree.GetKeyValueFromHash(key_pointer)
if err != nil && !tx.Payloads[t].SCID.IsZero() {
if xerrors.Is(err, graviton.ErrNotFound) { // if the address is not found, lookup in main tree
_, key_compressed, _, err = balance_tree.GetKeyValueFromHash(key_pointer)
if err == nil {
var p bn256.G1
if err = p.DecodeCompressed(key_compressed[:]); err != nil {
panic(fmt.Errorf("key %d could not be decompressed", i))
}
balance := crypto.ConstructElGamal(&p, crypto.ElGamal_BASE_G) // init zero balance
balance_serialized = balance.Serialize()
}
}
}
if err != nil {
panic(fmt.Errorf("balance not obtained err %s", err))
}
balance := new(crypto.ElGamal).Deserialize(balance_serialized)
echanges := crypto.ConstructElGamal(tx.Payloads[t].Statement.C[i], tx.Payloads[t].Statement.D)
balance = balance.Add(echanges) // homomorphic addition of changes
tree.Put(key_compressed, balance.Serialize()) // reserialize and store
}
}
return tx.Fees()
default:
panic("unknown transaction type, do not know how to process it")
}
}
type Tree_Wrapper struct {
tree *graviton.Tree
entries map[string][]byte
leftover_balance uint64
transfere []dvm.TransferExternal
}
func (t *Tree_Wrapper) Get(key []byte) ([]byte, error) {
if value, ok := t.entries[string(key)]; ok {
return value, nil
} else {
return t.tree.Get(key)
}
}
func (t *Tree_Wrapper) Put(key []byte, value []byte) error {
t.entries[string(key)] = append([]byte{}, value...)
return nil
}
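Tree_Wrapper is a copy-on-write overlay: Put buffers every write in the entries map, and Get checks that buffer before falling through to the underlying tree, so nothing touches the real tree until the caller decides to commit. A minimal sketch of that pattern (a plain map stands in for the graviton tree):

```go
package main

import (
	"errors"
	"fmt"
)

// overlay sketches Tree_Wrapper: writes are buffered in entries and
// reads consult the buffer before the backing store.
type overlay struct {
	backing map[string][]byte
	entries map[string][]byte
}

var errNotFound = errors.New("not found")

func (o *overlay) Get(key []byte) ([]byte, error) {
	if v, ok := o.entries[string(key)]; ok {
		return v, nil
	}
	if v, ok := o.backing[string(key)]; ok {
		return v, nil
	}
	return nil, errNotFound
}

func (o *overlay) Put(key, value []byte) {
	// copy the value, as Tree_Wrapper.Put does with append([]byte{}, ...)
	o.entries[string(key)] = append([]byte{}, value...)
}

func main() {
	o := &overlay{backing: map[string][]byte{"a": []byte("old")}, entries: map[string][]byte{}}
	o.Put([]byte("a"), []byte("new"))
	v, _ := o.Get([]byte("a"))
	fmt.Println(string(v), string(o.backing["a"])) // overlay wins; backing untouched
}
```

This is what lets process_transaction_sc discard all SC changes on error: the entries maps are simply dropped instead of being flushed to the trees.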
// does additional processing for SC
func (chain *Blockchain) process_transaction_sc(cache map[crypto.Hash]*graviton.Tree, ss *graviton.Snapshot, bl_height, bl_topoheight uint64, blid crypto.Hash, tx transaction.Transaction, balance_tree *graviton.Tree, sc_tree *graviton.Tree) (gas uint64, err error) {
if len(tx.SCDATA) == 0 {
return tx.Fees(), nil
}
success := false
w_balance_tree := &Tree_Wrapper{tree: balance_tree, entries: map[string][]byte{}}
w_sc_tree := &Tree_Wrapper{tree: sc_tree, entries: map[string][]byte{}}
_ = w_balance_tree
var sc_data_tree *graviton.Tree // SC data tree
var w_sc_data_tree *Tree_Wrapper
txhash := tx.GetHash()
scid := txhash
defer func() {
if success { // merge the trees
}
}()
if !tx.SCDATA.Has(rpc.SCACTION, rpc.DataUint64) { // tx doesn't have sc action
//err = fmt.Errorf("no scid provided")
return tx.Fees(), nil
}
action_code := rpc.SC_ACTION(tx.SCDATA.Value(rpc.SCACTION, rpc.DataUint64).(uint64))
switch action_code {
case rpc.SC_INSTALL: // request to install an SC
if !tx.SCDATA.Has(rpc.SCCODE, rpc.DataString) { // proceed only if code is present
break
}
sc_code := tx.SCDATA.Value(rpc.SCCODE, rpc.DataString).(string)
if sc_code == "" { // no code provided nothing to do
err = fmt.Errorf("no code provided")
break
}
// check whether sc can be parsed
//var sc_parsed dvm.SmartContract
pos := ""
var sc dvm.SmartContract
sc, pos, err = dvm.ParseSmartContract(sc_code)
if err != nil {
rlog.Warnf("error Parsing sc txid %s err %s pos %s\n", txhash, err, pos)
break
}
meta := SC_META_DATA{Balance: tx.Value}
if _, ok := sc.Functions["InitializePrivate"]; ok {
meta.Type = 1
}
if sc_data_tree, err = ss.GetTree(string(scid[:])); err != nil {
break
} else {
w_sc_data_tree = &Tree_Wrapper{tree: sc_data_tree, entries: map[string][]byte{}}
}
// install the SC; sanity checks could be performed here
w_sc_data_tree.Put(SC_Code_Key(scid), dvm.Variable{Type: dvm.String, Value: sc_code}.MarshalBinaryPanic())
w_sc_tree.Put(SC_Meta_Key(scid), meta.MarshalBinary())
// at this point we must trigger the initialize call in the DVM
//fmt.Printf("We must call the SC initialize function\n")
if meta.Type == 1 { // if it is a private SC
gas, err = chain.execute_sc_function(w_sc_tree, w_sc_data_tree, scid, bl_height, bl_topoheight, blid, tx, "InitializePrivate", 1)
} else {
gas, err = chain.execute_sc_function(w_sc_tree, w_sc_data_tree, scid, bl_height, bl_topoheight, blid, tx, "Initialize", 1)
}
case rpc.SC_CALL: // trigger a CALL
if !tx.SCDATA.Has(rpc.SCID, rpc.DataHash) { // proceed only if an scid is present
err = fmt.Errorf("no scid provided")
break
}
if !tx.SCDATA.Has("entrypoint", rpc.DataString) { // proceed only if an entrypoint is present
err = fmt.Errorf("no entrypoint provided")
break
}
scid = tx.SCDATA.Value(rpc.SCID, rpc.DataHash).(crypto.Hash)
if _, err = w_sc_tree.Get(SC_Meta_Key(scid)); err != nil {
err = fmt.Errorf("scid %s not installed", scid)
return
}
if sc_data_tree, err = ss.GetTree(string(scid[:])); err != nil {
return
} else {
w_sc_data_tree = &Tree_Wrapper{tree: sc_data_tree, entries: map[string][]byte{}}
}
entrypoint := tx.SCDATA.Value("entrypoint", rpc.DataString).(string)
//fmt.Printf("We must call the SC %s function\n", entrypoint)
gas, err = chain.execute_sc_function(w_sc_tree, w_sc_data_tree, scid, bl_height, bl_topoheight, blid, tx, entrypoint, 1)
default: // unknown what to do
err = fmt.Errorf("unknown action code %d", action_code)
return
}
if err == nil { // we must commit the changes
var data_tree *graviton.Tree
var ok bool
if data_tree, ok = cache[scid]; !ok {
data_tree = w_sc_data_tree.tree
cache[scid] = w_sc_data_tree.tree
}
// commit entire data to tree
for k, v := range w_sc_data_tree.entries {
//fmt.Printf("persisting %x %x\n", k, v)
if err = data_tree.Put([]byte(k), v); err != nil {
return
}
}
for k, v := range w_sc_tree.entries { // these entries are only partial
if err = sc_tree.Put([]byte(k), v); err != nil {
return
}
}
// at this point, settle the balances, how ??
var meta_bytes []byte
meta_bytes, err = w_sc_tree.Get(SC_Meta_Key(scid))
if err != nil {
return
}
var meta SC_META_DATA // the meta contains the link to the SC bytes
if err = meta.UnmarshalBinary(meta_bytes); err != nil {
return
}
meta.Balance = w_sc_data_tree.leftover_balance
//fmt.Printf("SC %s balance %d\n", scid, w_sc_data_tree.leftover_balance)
sc_tree.Put(SC_Meta_Key(scid), meta.MarshalBinary())
for i, transfer := range w_sc_data_tree.transfere { // settle external transfers from the SC
var balance_serialized []byte
addr_bytes := []byte(transfer.Address)
balance_serialized, err = balance_tree.Get(addr_bytes)
if err != nil {
fmt.Printf("%s %d could not transfer %d %+v\n", scid, i, transfer.Amount, addr_bytes)
return
}
balance := new(crypto.ElGamal).Deserialize(balance_serialized)
balance = balance.Plus(new(big.Int).SetUint64(transfer.Amount)) // add the transferred amount to the balance homomorphically
balance_tree.Put(addr_bytes, balance.Serialize()) // reserialize and store
//fmt.Printf("%s paid back %d\n", scid, transfer.Amount)
}
/*
c := data_tree.Cursor()
for k, v, err := c.First(); err == nil; k, v, err = c.Next() {
fmt.Printf("key=%s (%x), value=%s\n", k, k, v)
}
fmt.Printf("cursor complete\n")
*/
//h, err := data_tree.Hash()
//fmt.Printf("%s successfully executed sc_call data_tree hash %x %s\n", scid, h, err)
}
return tx.Fees(), nil
}
package blockchain
import "fmt"
import "time"
/*import "bytes"
import "encoding/binary"
import "github.com/romana/rlog"
*/
import "sync"
import "runtime/debug"
import "golang.org/x/xerrors"
import "github.com/deroproject/graviton"
//import "github.com/romana/rlog"
import log "github.com/sirupsen/logrus"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/block"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/transaction"
import "github.com/deroproject/derohe/cryptography/bn256"
//import "github.com/deroproject/derosuite/emission"
// cache of transaction validity
// it is always atomic
// the cache is not a txhash -> validity mapping
// instead the key is a special hash of txhash + expanded ring members
// if the entry exists, the tx is valid
// it stores this special hash and the first-seen time
// this can only be used on expanded transactions
var transaction_valid_cache sync.Map
// this go routine continuously scans and cleans up the cache for expired entries
func clean_up_valid_cache() {
for {
time.Sleep(3600 * time.Second)
current_time := time.Now()
// entries older than 1 hour are purged
transaction_valid_cache.Range(func(k, value interface{}) bool {
first_seen := value.(time.Time)
if current_time.Sub(first_seen).Round(time.Second).Seconds() > 3600 {
transaction_valid_cache.Delete(k)
}
return true
})
}
}
/* Coinbase transactions need to verify registration */
func (chain *Blockchain) Verify_Transaction_Coinbase(cbl *block.Complete_Block, minertx *transaction.Transaction) (err error) {
if !minertx.IsCoinbase() { // transaction is not coinbase, return failed
return fmt.Errorf("tx is not coinbase")
}
// make sure miner address is registered
_, topos := chain.Store.Topo_store.binarySearchHeight(int64(cbl.Bl.Height - 1))
// load all db versions one by one and check whether the miner address is registered
if len(topos) < 1 {
return fmt.Errorf("could not find previous height blocks %d", cbl.Bl.Height-1)
}
var balance_tree *graviton.Tree
for i := range topos {
toporecord, err := chain.Store.Topo_store.Read(topos[i])
if err != nil {
return fmt.Errorf("could not read block at height %d due to error while obtaining toporecord topos %+v processing %d err:%s\n", cbl.Bl.Height-1, topos, i, err)
}
ss, err := chain.Store.Balance_store.LoadSnapshot(toporecord.State_Version)
if err != nil {
return err
}
if balance_tree, err = ss.GetTree(config.BALANCE_TREE); err != nil {
return err
}
if _, err := balance_tree.Get(minertx.MinerAddress[:]); err != nil {
return fmt.Errorf("balance not obtained err %s\n", err)
//return false
}
}
return nil // success comes last
}
// verifies only whether all height checks pass
func Verify_Transaction_NonCoinbase_Height(tx *transaction.Transaction, chain_height uint64) bool {
return Verify_Transaction_Height(tx.Height, chain_height)
}
func Verify_Transaction_Height(tx_height, chain_height uint64) bool {
if tx_height%config.BLOCK_BATCH_SIZE != 0 {
return false
}
if tx_height >= chain_height {
return false
}
if chain_height-tx_height <= 5 { // we should be at least 5 blocks below the top
return false
}
comp := (chain_height / config.BLOCK_BATCH_SIZE) - (tx_height / config.BLOCK_BATCH_SIZE)
return comp == 0 || comp == 1
}
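The batch-boundary rule above can be exercised standalone; here `batchSize` is a hypothetical stand-in for `config.BLOCK_BATCH_SIZE`:

```go
package main

import "fmt"

const batchSize = 16 // hypothetical stand-in for config.BLOCK_BATCH_SIZE

func verifyHeight(txHeight, chainHeight uint64) bool {
	if txHeight%batchSize != 0 {
		return false // tx heights must sit on batch boundaries
	}
	if txHeight >= chainHeight {
		return false // tx cannot reference a future height
	}
	if chainHeight-txHeight <= 5 {
		return false // must be at least 5 blocks below the tip
	}
	comp := chainHeight/batchSize - txHeight/batchSize
	return comp == 0 || comp == 1 // current or previous batch only
}

func main() {
	fmt.Println(verifyHeight(16, 40)) // true: previous batch, well below tip
	fmt.Println(verifyHeight(17, 40)) // false: not on a batch boundary
	fmt.Println(verifyHeight(16, 20)) // false: only 4 blocks from the tip
	fmt.Println(verifyHeight(0, 100)) // false: batches too far apart
}
```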
// all non miner tx must be non-coinbase tx
// each check is placed in a separate block of code, to avoid ambiguous code or faulty checks
// all checks are placed inline and not within individual functions ( so that no check can be skipped )
// This function verifies a tx fully, meaning all checks,
// if the transaction has passed the checks it can be added to mempool, relayed or added to blockchain
// the transaction has already been deserialized, that's all
// It also expands the transactions, using the respective state trie
func (chain *Blockchain) Verify_Transaction_NonCoinbase(hf_version int64, tx *transaction.Transaction) (err error) {
var tx_hash crypto.Hash
defer func() { // safety so if anything wrong happens, verification fails
if r := recover(); r != nil {
logger.WithFields(log.Fields{"txid": tx_hash}).Warnf("Recovered while Verifying transaction, failed verification, Stack trace below")
logger.Warnf("Stack trace \n%s", debug.Stack())
err = fmt.Errorf("Stack Trace %s", debug.Stack())
}
}()
if tx.Version != 1 {
return fmt.Errorf("TX should be version 1")
}
tx_hash = tx.GetHash()
if tx.TransactionType == transaction.REGISTRATION {
if _, ok := transaction_valid_cache.Load(tx_hash); ok {
return nil //logger.Infof("Found in cache %s ",tx_hash)
} else {
//logger.Infof("TX not found in cache %s len %d ",tx_hash, len(tmp_buffer))
}
if tx.IsRegistrationValid() {
transaction_valid_cache.Store(tx_hash, time.Now()) // signature got verified, cache it
return nil
}
return fmt.Errorf("Registration has invalid signature")
}
// currently we allow following types of transaction
if !(tx.TransactionType == transaction.NORMAL || tx.TransactionType == transaction.SC_TX || tx.TransactionType == transaction.BURN_TX) {
return fmt.Errorf("Unknown transaction type")
}
if tx.TransactionType == transaction.BURN_TX {
if tx.Value == 0 {
return fmt.Errorf("Burn Value cannot be zero")
}
}
// avoid some bugs lurking elsewhere
if tx.Height != uint64(int64(tx.Height)) {
return fmt.Errorf("invalid tx height")
}
for t := range tx.Payloads {
// check sanity
if tx.Payloads[t].Statement.RingSize != uint64(len(tx.Payloads[t].Statement.Publickeylist_pointers)/int(tx.Payloads[t].Statement.Bytes_per_publickey)) {
return fmt.Errorf("corrupted key pointers ringsize")
}
if tx.Payloads[t].Statement.RingSize < 2 { // ring size minimum 2
return fmt.Errorf("RingSize cannot be less than 2")
}
if tx.Payloads[t].Statement.RingSize > 128 { // ring size currently limited to 128
return fmt.Errorf("RingSize cannot be more than 128")
}
if !crypto.IsPowerOf2(len(tx.Payloads[t].Statement.Publickeylist_pointers) / int(tx.Payloads[t].Statement.Bytes_per_publickey)) {
return fmt.Errorf("corrupted key pointers")
}
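The power-of-two check presumably relies on the usual bit trick (a positive n is a power of two iff it has a single set bit); a minimal sketch, not necessarily `crypto.IsPowerOf2`'s exact code:

```go
package main

import "fmt"

// isPowerOfTwo: n & (n-1) clears the lowest set bit, so the result is
// zero exactly when n had a single set bit (and n > 0 excludes zero).
func isPowerOfTwo(n int) bool {
	return n > 0 && n&(n-1) == 0
}

func main() {
	fmt.Println(isPowerOfTwo(128), isPowerOfTwo(96), isPowerOfTwo(0)) // true false false
}
```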
// check duplicate ring members within the tx
{
key_map := map[string]bool{}
for i := 0; i < int(tx.Payloads[t].Statement.RingSize); i++ {
key_map[string(tx.Payloads[t].Statement.Publickeylist_pointers[i*int(tx.Payloads[t].Statement.Bytes_per_publickey):(i+1)*int(tx.Payloads[t].Statement.Bytes_per_publickey)])] = true
}
if len(key_map) != int(tx.Payloads[t].Statement.RingSize) {
return fmt.Errorf("Duplicated ring members")
}
}
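The duplicate-member check above slices a packed key-pointer buffer into fixed-size members and counts distinct ones with a map; a self-contained sketch with illustrative names:

```go
package main

import "fmt"

// hasDuplicates reports whether any fixed-size member repeats in the
// packed buffer, by comparing member count against distinct-key count.
func hasDuplicates(pointers []byte, bytesPerKey int) bool {
	seen := map[string]bool{}
	n := len(pointers) / bytesPerKey
	for i := 0; i < n; i++ {
		seen[string(pointers[i*bytesPerKey:(i+1)*bytesPerKey])] = true
	}
	return len(seen) != n // fewer distinct keys than members => duplicate
}

func main() {
	fmt.Println(hasDuplicates([]byte{1, 2, 3, 4}, 2)) // false: {1,2} and {3,4} differ
	fmt.Println(hasDuplicates([]byte{1, 2, 1, 2}, 2)) // true: {1,2} appears twice
}
```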
tx.Payloads[t].Statement.CLn = tx.Payloads[t].Statement.CLn[:0]
tx.Payloads[t].Statement.CRn = tx.Payloads[t].Statement.CRn[:0]
}
match_topo := int64(-1) // -1 means no matching state version found yet
// transaction needs to be expanded. this expansion needs balance state
_, topos := chain.Store.Topo_store.binarySearchHeight(int64(tx.Height))
// load all db versions one by one and check whether the root hash matches the one mentioned in the tx
if len(topos) < 1 {
return fmt.Errorf("TX could NOT be expanded")
}
for i := range topos {
hash, err := chain.Load_Merkle_Hash(topos[i])
if err != nil {
continue
}
if hash == tx.Payloads[0].Statement.Roothash {
match_topo = topos[i]
break // we have found the balance tree with which it was built now lets verify
}
}
if match_topo < 0 {
return fmt.Errorf("mentioned balance tree not found, cannot verify TX")
}
var balance_tree *graviton.Tree
toporecord, err := chain.Store.Topo_store.Read(match_topo)
if err != nil {
return err
}
ss, err := chain.Store.Balance_store.LoadSnapshot(toporecord.State_Version)
if err != nil {
return err
}
if balance_tree, err = ss.GetTree(config.BALANCE_TREE); err != nil {
return err
}
if balance_tree == nil {
return fmt.Errorf("mentioned balance tree not found, cannot verify TX")
}
if _, ok := transaction_valid_cache.Load(tx_hash); ok {
return nil //logger.Infof("Found in cache %s ",tx_hash)
} else {
//logger.Infof("TX not found in cache %s len %d ",tx_hash, len(tmp_buffer))
}
//logger.Infof("dTX state tree has been found")
trees := map[crypto.Hash]*graviton.Tree{}
var zerohash crypto.Hash
trees[zerohash] = balance_tree // initialize main tree by default
for t := range tx.Payloads {
tx.Payloads[t].Statement.Publickeylist_compressed = tx.Payloads[t].Statement.Publickeylist_compressed[:0]
tx.Payloads[t].Statement.Publickeylist = tx.Payloads[t].Statement.Publickeylist[:0]
var tree *graviton.Tree
if _, ok := trees[tx.Payloads[t].SCID]; ok {
tree = trees[tx.Payloads[t].SCID]
} else {
// fmt.Printf("SCID loading %s tree\n", tx.Payloads[t].SCID)
tree, _ = ss.GetTree(string(tx.Payloads[t].SCID[:]))
trees[tx.Payloads[t].SCID] = tree
}
// now lets calculate CLn and CRn
for i := 0; i < int(tx.Payloads[t].Statement.RingSize); i++ {
key_pointer := tx.Payloads[t].Statement.Publickeylist_pointers[i*int(tx.Payloads[t].Statement.Bytes_per_publickey) : (i+1)*int(tx.Payloads[t].Statement.Bytes_per_publickey)]
_, key_compressed, balance_serialized, err := tree.GetKeyValueFromHash(key_pointer)
// if the destination address is not in this token's tree, fall back to the main balance tree and assume a zero balance
needs_init := false
if err != nil && !tx.Payloads[t].SCID.IsZero() {
if xerrors.Is(err, graviton.ErrNotFound) { // if the address is not found, lookup in main tree
_, key_compressed, _, err = balance_tree.GetKeyValueFromHash(key_pointer)
if err != nil {
return fmt.Errorf("balance not obtained err %s\n", err)
}
needs_init = true
}
}
if err != nil {
return fmt.Errorf("balance not obtained err %s\n", err)
}
// decode public key and expand
{
var p bn256.G1
var pcopy [33]byte
copy(pcopy[:], key_compressed)
if err = p.DecodeCompressed(key_compressed[:]); err != nil {
return fmt.Errorf("key %d could not be decompressed", i)
}
tx.Payloads[t].Statement.Publickeylist_compressed = append(tx.Payloads[t].Statement.Publickeylist_compressed, pcopy)
tx.Payloads[t].Statement.Publickeylist = append(tx.Payloads[t].Statement.Publickeylist, &p)
if needs_init {
balance := crypto.ConstructElGamal(&p, crypto.ElGamal_BASE_G) // init zero balance
balance_serialized = balance.Serialize()
}
}
var ll, rr bn256.G1
ebalance := new(crypto.ElGamal).Deserialize(balance_serialized)
ll.Add(ebalance.Left, tx.Payloads[t].Statement.C[i])
tx.Payloads[t].Statement.CLn = append(tx.Payloads[t].Statement.CLn, &ll)
rr.Add(ebalance.Right, tx.Payloads[t].Statement.D)
tx.Payloads[t].Statement.CRn = append(tx.Payloads[t].Statement.CRn, &rr)
// prepare for another sub transaction
echanges := crypto.ConstructElGamal(tx.Payloads[t].Statement.C[i], tx.Payloads[t].Statement.D)
ebalance = new(crypto.ElGamal).Deserialize(balance_serialized).Add(echanges) // homomorphic addition of changes
tree.Put(key_compressed, ebalance.Serialize()) // reserialize and store temporarily, tree will be discarded after verification
}
}
// at this point the tx has been completely expanded, verify the tx statement
for t := range tx.Payloads {
if !tx.Payloads[t].Proof.Verify(&tx.Payloads[t].Statement, tx.GetHash(), tx.Height, tx.Payloads[t].BurnValue) {
fmt.Printf("Statement %+v\n", tx.Payloads[t].Statement)
fmt.Printf("Proof %+v\n", tx.Payloads[t].Proof)
return fmt.Errorf("transaction statement %d verification failed", t)
}
}
// these transactions are done
if tx.TransactionType == transaction.NORMAL || tx.TransactionType == transaction.BURN_TX {
transaction_valid_cache.Store(tx_hash, time.Now()) // signature got verified, cache it
return nil
}
// we reach here if tx proofs are valid
if tx.TransactionType != transaction.SC_TX {
return fmt.Errorf("non sc transaction should never reach here")
}
if !tx.IsRegistrationValid() {
return fmt.Errorf("SC has invalid signature")
}
return nil
/*
var tx_hash crypto.Hash
var tx_serialized []byte // serialized tx
defer func() { // safety so if anything wrong happens, verification fails
if r := recover(); r != nil {
logger.WithFields(log.Fields{"txid": tx_hash}).Warnf("Recovered while Verifying transaction, failed verification, Stack trace below")
logger.Warnf("Stack trace \n%s", debug.Stack())
result = false
}
}()
tx_hash = tx.GetHash()
if tx.Version != 2 {
return false
}
// make sure at least 1 vin and 1 vout are there
if len(tx.Vin) < 1 || len(tx.Vout) < 1 {
logger.WithFields(log.Fields{"txid": tx_hash}).Warnf("Incoming TX does NOT have at least 1 vin and 1 vout")
return false
}
// this means some other checks have failed somewhere else
if tx.IsCoinbase() { // transaction coinbase must never come here
logger.WithFields(log.Fields{"txid": tx_hash}).Warnf("Coinbase tx in non coinbase path, Please investigate")
return false
}
// Vin can be only specific type rest all make the fail case
for i := 0; i < len(tx.Vin); i++ {
switch tx.Vin[i].(type) {
case transaction.Txin_gen:
return false // this is for coinbase so fail it
case transaction.Txin_to_key: // pass
default:
return false
}
}
if hf_version >= 2 {
if len(tx.Vout) >= config.MAX_VOUT {
rlog.Warnf("Tx %s has more Vouts than allowed limit %d actual %d", tx_hash, config.MAX_VOUT, len(tx.Vout))
return
}
}
// Vout can be only specific type, rest all make the fail case
for i := 0; i < len(tx.Vout); i++ {
switch tx.Vout[i].Target.(type) {
case transaction.Txout_to_key: // pass
public_key := tx.Vout[i].Target.(transaction.Txout_to_key).Key
if !public_key.Public_Key_Valid() { // if public_key is not valid ( not a point on the curve reject the TX)
logger.WithFields(log.Fields{"txid": tx_hash}).Warnf("TX public is INVALID %s ", public_key)
return false
}
default:
return false
}
}
// Vout should have amount 0
for i := 0; i < len(tx.Vout); i++ {
if tx.Vout[i].Amount != 0 {
logger.WithFields(log.Fields{"txid": tx_hash, "Amount": tx.Vout[i].Amount}).Warnf("Amount must be zero in ringCT world")
return false
}
}
// check the mixin , it should be at least 4 and should be same throughout the tx ( all other inputs)
// someone did send a mixin of 3 in 12006 block height
// atlantis has minimum mixin of 5
if hf_version >= 2 {
mixin := len(tx.Vin[0].(transaction.Txin_to_key).Key_offsets)
if mixin < config.MIN_MIXIN {
logger.WithFields(log.Fields{"txid": tx_hash, "Mixin": mixin}).Warnf("Mixin cannot be less than %d.", config.MIN_MIXIN)
return false
}
if mixin >= config.MAX_MIXIN {
logger.WithFields(log.Fields{"txid": tx_hash, "Mixin": mixin}).Warnf("Mixin cannot be more than %d.", config.MAX_MIXIN)
return false
}
for i := 0; i < len(tx.Vin); i++ {
if mixin != len(tx.Vin[i].(transaction.Txin_to_key).Key_offsets) {
logger.WithFields(log.Fields{"txid": tx_hash, "Mixin": mixin}).Warnf("Mixin must be same for entire TX in ringCT world")
return false
}
}
}
// duplicate ringmembers are not allowed, check them here
// just in case protect ourselves as much as we can
for i := 0; i < len(tx.Vin); i++ {
ring_members := map[uint64]bool{} // create a separate map for each input
ring_member := uint64(0)
for j := 0; j < len(tx.Vin[i].(transaction.Txin_to_key).Key_offsets); j++ {
ring_member += tx.Vin[i].(transaction.Txin_to_key).Key_offsets[j]
if _, ok := ring_members[ring_member]; ok {
logger.WithFields(log.Fields{"txid": tx_hash, "input_index": i}).Warnf("Duplicate ring member within the TX")
return false
}
ring_members[ring_member] = true // add member to ring member
}
// rlog.Debugf("Ring members for %d %+v", i, ring_members )
}
// check whether the key image is duplicate within the inputs
// NOTE: a block wide key_image duplication is done during block testing but we are still keeping it
{
kimages := map[crypto.Hash]bool{}
for i := 0; i < len(tx.Vin); i++ {
if _, ok := kimages[tx.Vin[i].(transaction.Txin_to_key).K_image]; ok {
logger.WithFields(log.Fields{
"txid": tx_hash,
"kimage": tx.Vin[i].(transaction.Txin_to_key).K_image,
}).Warnf("TX using duplicate inputs within the TX")
return false
}
kimages[tx.Vin[i].(transaction.Txin_to_key).K_image] = true // add element to map for next check
}
}
// check whether the key image is low order attack, if yes reject it right now
for i := 0; i < len(tx.Vin); i++ {
k_image := crypto.Key(tx.Vin[i].(transaction.Txin_to_key).K_image)
curve_order := crypto.CurveOrder()
mult_result := crypto.ScalarMultKey(&k_image, &curve_order)
if *mult_result != crypto.Identity {
logger.WithFields(log.Fields{
"txid": tx_hash,
"kimage": tx.Vin[i].(transaction.Txin_to_key).K_image,
"curve_order": curve_order,
"mult_result": *mult_result,
"identity": crypto.Identity,
}).Warnf("TX contains a low order key image attack, but we are already safeguarded")
return false
}
}
// disallow old transactions with borromean signatures
if hf_version >= 2 {
switch tx.RctSignature.Get_Sig_Type() {
case ringct.RCTTypeSimple, ringct.RCTTypeFull:
return false
}
}
// check whether the TX contains a signature or NOT
switch tx.RctSignature.Get_Sig_Type() {
case ringct.RCTTypeSimpleBulletproof, ringct.RCTTypeSimple, ringct.RCTTypeFull: // default case, pass through
default:
logger.WithFields(log.Fields{"txid": tx_hash}).Warnf("TX does NOT contain a ringct signature. It is NOT possible")
return false
}
// check tx size for validity
if hf_version >= 2 {
tx_serialized = tx.Serialize()
if len(tx_serialized) >= config.CRYPTONOTE_MAX_TX_SIZE {
rlog.Warnf("tx %s rejected Size(%d) is more than allowed(%d)", tx_hash, len(tx.Serialize()), config.CRYPTONOTE_MAX_TX_SIZE)
return false
}
}
// expand the signature first
// whether the inputs are mature and can be used at time is verified while expanding the inputs
//rlog.Debugf("txverify tx %s hf_version %d", tx_hash, hf_version )
if !chain.Expand_Transaction_v2(dbtx, hf_version, tx) {
rlog.Warnf("TX %s inputs could not be expanded or inputs are NOT mature", tx_hash)
return false
}
//logger.Infof("Expanded tx %+v", tx.RctSignature)
// create a temporary hash out of expanded transaction
// this feature is very critical and helps the daemon by spreading out the compute load
// over the entire time between 2 blocks
// this tremendously helps in block propagation times
// and makes them easy to process just like small 50 KB blocks
// each ring member is 64 bytes
tmp_buffer := make([]byte, 0, len(tx.Vin)*32+len(tx.Vin)*len(tx.Vin[0].(transaction.Txin_to_key).Key_offsets)*64)
// build the buffer for special hash
// DO NOT skip anything, use full serialized tx, it is used while building keccak hash
// use everything from tx expansion etc
for i := 0; i < len(tx.Vin); i++ { // append all mlsag sigs
tmp_buffer = append(tmp_buffer, tx.RctSignature.MlsagSigs[i].II[0][:]...)
}
for i := 0; i < len(tx.RctSignature.MixRing); i++ {
for j := 0; j < len(tx.RctSignature.MixRing[i]); j++ {
tmp_buffer = append(tmp_buffer, tx.RctSignature.MixRing[i][j].Destination[:]...)
tmp_buffer = append(tmp_buffer, tx.RctSignature.MixRing[i][j].Mask[:]...)
}
}
// 1 less allocation this way
special_hash := crypto.Keccak256(tx_serialized, tmp_buffer)
if _, ok := transaction_valid_cache.Load(special_hash); ok {
//logger.Infof("Found in cache %s ",tx_hash)
return true
} else {
//logger.Infof("TX not found in cache %s len %d ",tx_hash, len(tmp_buffer))
}
// check the ring signature
if !tx.RctSignature.Verify() {
//logger.Infof("tx expanded %+v\n", tx.RctSignature.MixRing)
logger.WithFields(log.Fields{"txid": tx_hash}).Warnf("TX RCT Signature failed")
return false
}
// signature got verified, cache it
transaction_valid_cache.Store(special_hash, time.Now())
//logger.Infof("TX validity marked in cache %s ",tx_hash)
//logger.WithFields(log.Fields{"txid": tx_hash}).Debugf("TX successfully verified")
*/
}

@@ -1,45 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package blockchain
//import "math/big"
import "github.com/deroproject/derohe/config"
//import "github.com/deroproject/derosuite/emission"
// this file implements the logic to calculate fees dynamically
// get maximum size of TX
func Get_Transaction_Maximum_Size() uint64 {
return config.STARGATE_HE_MAX_TX_SIZE
}
// get the tx fee
// this function assumes that fees are per KB
// for every part of 1KB multiply by fee_per_kb
func (chain *Blockchain) Calculate_TX_fee(hard_fork_version int64, tx_size uint64) uint64 {
size_in_kb := tx_size / 1024
if (tx_size % 1024) != 0 { // for any part there of, use a full KB fee
size_in_kb += 1
}
needed_fee := size_in_kb * config.FEE_PER_KB
return needed_fee
}
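The ceil-to-KB fee rule above can be sketched standalone; `feePerKB` is a stand-in for `config.FEE_PER_KB`:

```go
package main

import "fmt"

// calculateFee charges the full per-KB fee for every started KB:
// size is divided by 1024 and any remainder rounds up to a whole KB.
func calculateFee(txSize, feePerKB uint64) uint64 {
	sizeInKB := txSize / 1024
	if txSize%1024 != 0 {
		sizeInKB++ // any partial KB is charged as a whole KB
	}
	return sizeInKB * feePerKB
}

func main() {
	fmt.Println(calculateFee(1024, 100)) // 100: exactly one KB
	fmt.Println(calculateFee(1025, 100)) // 200: second KB started
	fmt.Println(calculateFee(0, 100))    // 0: empty tx pays nothing
}
```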

@@ -1,38 +0,0 @@
#!/usr/bin/env bash
CURDIR=`/bin/pwd`
BASEDIR=$(dirname $0)
ABSPATH=$(readlink -f $0)
ABSDIR=$(dirname $ABSPATH)
unset GOPATH
version=`cat ./config/version.go | grep -i version |cut -d\" -f 2`
cd $CURDIR
bash $ABSDIR/build_package.sh "github.com/deroproject/derohe/cmd/derod"
bash $ABSDIR/build_package.sh "github.com/deroproject/derohe/cmd/explorer"
bash $ABSDIR/build_package.sh "github.com/deroproject/derohe/cmd/dero-wallet-cli"
bash $ABSDIR/build_package.sh "github.com/deroproject/derohe/cmd/dero-miner"
bash $ABSDIR/build_package.sh "github.com/deroproject/derohe/cmd/rpc_examples/pong_server"
for d in build/*; do cp Start.md "$d"; done
cd "${ABSDIR}/build"
#windows users require zip files
#zip -r dero_windows_amd64_$version.zip dero_windows_amd64
zip -r dero_windows_amd64.zip dero_windows_amd64
zip -r dero_windows_x86.zip dero_windows_386
zip -r dero_windows_386.zip dero_windows_386
zip -r dero_windows_amd64_$version.zip dero_windows_amd64
zip -r dero_windows_x86_$version.zip dero_windows_386
zip -r dero_windows_386_$version.zip dero_windows_386
#all other platforms are okay with tar.gz
find . -mindepth 1 -type d -not -name '*windows*' -exec tar -cvzf {}.tar.gz {} \;
find . -mindepth 1 -type d -not -name '*windows*' -exec tar -cvzf {}_$version.tar.gz {} \;
cd $CURDIR

@@ -1,96 +0,0 @@
#!/usr/bin/env bash
package=$1
package_split=(${package//\// })
package_name=${package_split[-1]}
CURDIR=`/bin/pwd`
BASEDIR=$(dirname $0)
ABSPATH=$(readlink -f $0)
ABSDIR=$(dirname $ABSPATH)
PLATFORMS="darwin/amd64" # amd64 only as of go1.5
PLATFORMS="$PLATFORMS windows/amd64 windows/386" # arm compilation not available for Windows
PLATFORMS="$PLATFORMS linux/amd64 linux/386"
#PLATFORMS="$PLATFORMS linux/ppc64le" is it common enough ??
#PLATFORMS="$PLATFORMS linux/mips64le" # experimental in go1.6 is it common enough ??
PLATFORMS="$PLATFORMS freebsd/amd64 freebsd/386"
PLATFORMS="$PLATFORMS netbsd/amd64" # amd64 only as of go1.6
#PLATFORMS="$PLATFORMS openbsd/amd64" # amd64 only as of go1.6
PLATFORMS="$PLATFORMS dragonfly/amd64" # amd64 only as of go1.5
#PLATFORMS="$PLATFORMS plan9/amd64 plan9/386" # as of go1.4, is it common enough ??
# solaris disabled due to badger error below
#vendor/github.com/dgraph-io/badger/y/mmap_unix.go:57:30: undefined: syscall.SYS_MADVISE
#PLATFORMS="$PLATFORMS solaris/amd64" # as of go1.3
#PLATFORMS_ARM="linux freebsd netbsd"
PLATFORMS_ARM="linux freebsd"
#PLATFORMS="linux/amd64"
#PLATFORMS_ARM=""
type setopt >/dev/null 2>&1
SCRIPT_NAME=`basename "$0"`
FAILURES=""
CURRENT_DIRECTORY=${PWD##*/}
OUTPUT="$package_name" # if no src file given, use current dir name
GCFLAGS=""
if [[ "${OUTPUT}" == "dero-miner" ]]; then GCFLAGS="github.com/deroproject/derohe/astrobwt=-B"; fi
for PLATFORM in $PLATFORMS; do
GOOS=${PLATFORM%/*}
GOARCH=${PLATFORM#*/}
OUTPUT_DIR="${ABSDIR}/build/dero_${GOOS}_${GOARCH}"
BIN_FILENAME="${OUTPUT}-${GOOS}-${GOARCH}"
mkdir -p $OUTPUT_DIR
if [[ "${GOOS}" == "windows" ]]; then BIN_FILENAME="${BIN_FILENAME}.exe"; fi
CMD="GOOS=${GOOS} GOARCH=${GOARCH} go build -gcflags=${GCFLAGS} -o $OUTPUT_DIR/${BIN_FILENAME} $package"
echo "${CMD}"
eval $CMD || FAILURES="${FAILURES} ${PLATFORM}"
# build docker image for linux amd64, completely static
if [[ "${GOOS}" == "linux" && "${GOARCH}" == "amd64" && "${OUTPUT}" != "explorer" && "${OUTPUT}" != "dero-miner" ]] ; then
BIN_FILENAME="docker-${OUTPUT}-${GOOS}-${GOARCH}"
CMD="GOOS=${GOOS} GOARCH=${GOARCH} CGO_ENABLED=0 go build -o $OUTPUT_DIR/${BIN_FILENAME} $package"
echo "${CMD}"
eval $CMD || FAILURES="${FAILURES} ${PLATFORM}"
fi
done
# ARM64 builds only for linux
if [[ $PLATFORMS_ARM == *"linux"* ]]; then
GOOS="linux"
GOARCH="arm64"
OUTPUT_DIR="${ABSDIR}/build/dero_${GOOS}_${GOARCH}"
CMD="GOOS=linux GOARCH=arm64 go build -gcflags=${GCFLAGS} -o $OUTPUT_DIR/${OUTPUT}-linux-arm64 $package"
echo "${CMD}"
eval $CMD || FAILURES="${FAILURES} ${PLATFORM}"
fi
for GOOS in $PLATFORMS_ARM; do
GOARCH="arm"
# build for each ARM version
for GOARM in 7 6 5; do
OUTPUT_DIR="${ABSDIR}/build/dero_${GOOS}_${GOARCH}${GOARM}"
BIN_FILENAME="${OUTPUT}-${GOOS}-${GOARCH}${GOARM}"
CMD="GOARM=${GOARM} GOOS=${GOOS} GOARCH=${GOARCH} go build -gcflags=${GCFLAGS} -o $OUTPUT_DIR/${BIN_FILENAME} $package"
echo "${CMD}"
eval "${CMD}" || FAILURES="${FAILURES} ${GOOS}/${GOARCH}${GOARM}"
done
done
# eval errors
if [[ "${FAILURES}" != "" ]]; then
echo ""
echo "${SCRIPT_NAME} failed on: ${FAILURES}"
exit 1
fi

@@ -1,84 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
// copied from the blockchain folder
import "math/big"
import "github.com/deroproject/derohe/cryptography/crypto"
var (
// bigZero is 0 represented as a big.Int. It is defined here to avoid
// the overhead of creating it multiple times.
bigZero = big.NewInt(0)
// bigOne is 1 represented as a big.Int. It is defined here to avoid
// the overhead of creating it multiple times.
bigOne = big.NewInt(1)
// oneLsh256 is 1 shifted left 256 bits. It is defined here to avoid
// the overhead of creating it multiple times.
oneLsh256 = new(big.Int).Lsh(bigOne, 256)
// enabling this will enable simulation mode with hard coded difficulty set to 1
// the variable is knowingly not exported, so no one can tinker with it
//simulation = false // simulation mode is disabled
)
// HashToBig converts a PoW hash into a big.Int that can be used to
// perform math comparisons.
func HashToBig(buf crypto.Hash) *big.Int {
// A Hash is in little-endian, but the big package wants the bytes in
// big-endian, so reverse them.
blen := len(buf) // it is hardcoded to 32 bytes, but compute the length anyway
for i := 0; i < blen/2; i++ {
buf[i], buf[blen-1-i] = buf[blen-1-i], buf[i]
}
return new(big.Int).SetBytes(buf[:])
}
// this function calculates the difficulty in big num form
func ConvertDifficultyToBig(difficultyi uint64) *big.Int {
if difficultyi == 0 {
panic("difficulty can never be zero")
}
// (1 << 256) / (difficultyNum )
difficulty := new(big.Int).SetUint64(difficultyi)
denominator := new(big.Int).Add(difficulty, bigZero) // above 2 lines can be merged
return new(big.Int).Div(oneLsh256, denominator)
}
func ConvertIntegerDifficultyToBig(difficultyi *big.Int) *big.Int {
if difficultyi.Cmp(bigZero) == 0 { // difficulty can never be zero
panic("difficulty can never be zero")
}
return new(big.Int).Div(oneLsh256, difficultyi)
}
// this function checks whether the pow hash meets the difficulty criteria
// however, it takes the difficulty in big-int format
func CheckPowHashBig(pow_hash crypto.Hash, big_difficulty_integer *big.Int) bool {
big_pow_hash := HashToBig(pow_hash)
big_difficulty := ConvertIntegerDifficultyToBig(big_difficulty_integer)
return big_pow_hash.Cmp(big_difficulty) <= 0 // pow passes if the hash does not exceed the target
}

@@ -1,572 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
import "io"
import "os"
import "fmt"
import "time"
import "crypto/rand"
import "sync"
import "runtime"
import "net"
import "net/http"
import "math/big"
import "path/filepath"
import "encoding/hex"
import "encoding/binary"
import "os/signal"
import "sync/atomic"
import "strings"
import "strconv"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/globals"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/astrobwt"
import "github.com/deroproject/derohe/rpc"
import log "github.com/sirupsen/logrus"
import "github.com/ybbus/jsonrpc"
import "github.com/romana/rlog"
import "github.com/chzyer/readline"
import "github.com/docopt/docopt-go"
var rpcClient jsonrpc.RPCClient
var netClient *http.Client
var mutex sync.RWMutex
var job rpc.GetBlockTemplate_Result
var maxdelay int = 10000
var threads int
var iterations int = 100
var max_pow_size int = 819200 //astrobwt.MAX_LENGTH
var wallet_address string
var daemon_rpc_address string
var counter uint64
var hash_rate uint64
var Difficulty uint64
var our_height int64
var block_counter int
var command_line string = `dero-miner
DERO CPU Miner for AstroBWT.
ONE CPU, ONE VOTE.
http://wiki.dero.io
Usage:
dero-miner --wallet-address=<wallet_address> [--daemon-rpc-address=<http://127.0.0.1:10102>] [--mining-threads=<threads>] [--max-pow-size=1120] [--testnet] [--debug]
dero-miner --bench [--max-pow-size=1120]
dero-miner -h | --help
dero-miner --version
Options:
-h --help Show this screen.
--version Show version.
--bench Run benchmark mode.
--daemon-rpc-address=<http://127.0.0.1:10102> Miner will connect to daemon RPC on this port.
--wallet-address=<wallet_address>    This address is rewarded when a block is mined successfully.
--mining-threads=<threads> Number of CPU threads for mining [default: ` + fmt.Sprintf("%d", runtime.GOMAXPROCS(0)) + `]
--max-pow-size=1120    Max PoW size in KiB to mine; some older/newer CPUs can increase their work.
Example Mainnet: ./dero-miner-linux-amd64 --wallet-address dero1qxsplx7vzgydacczw6vnrtfh3fxqcjevyxcvlvl82fs8uykjkmaxgfgxx40cu --daemon-rpc-address=http://explorer.dero.io:10102
Example Testnet: ./dero-miner-linux-amd64 --wallet-address deto1qxsplx7vzgydacczw6vnrtfh3fxqcjevyxcvlvl82fs8uykjkmaxgfgulfha5 --daemon-rpc-address=http://127.0.0.1:40402
If the daemon is running on the local machine, the '--daemon-rpc-address' argument is not required.
`
var Exit_In_Progress = make(chan bool)
func main() {
var err error
globals.Init_rlog()
globals.Arguments, err = docopt.Parse(command_line, nil, true, config.Version.String(), false)
if err != nil {
log.Fatalf("Error while parsing options err: %s\n", err)
}
// We need to initialize readline first, so it changes stderr to ansi processor on windows
l, err := readline.NewEx(&readline.Config{
//Prompt: "\033[92mDERO:\033[32m»\033[0m",
Prompt: "\033[92mDERO Miner:\033[32m>>>\033[0m ",
HistoryFile: filepath.Join(os.TempDir(), "dero_miner_readline.tmp"),
AutoComplete: completer,
InterruptPrompt: "^C",
EOFPrompt: "exit",
HistorySearchFold: true,
FuncFilterInputRune: filterInput,
})
if err != nil {
panic(err)
}
defer l.Close()
// parse arguments and setup testnet mainnet
globals.Initialize() // setup network and proxy
globals.Logger.Infof("") // a dummy write is required to fully activate logrus
// all screen output must go through the readline
globals.Logger.Out = l.Stdout()
os.Setenv("RLOG_LOG_LEVEL", "INFO")
rlog.UpdateEnv()
rlog.Infof("Arguments %+v", globals.Arguments)
globals.Logger.Infof("DERO Stargate HE AstroBWT miner: This is an alpha version; use it for testing/evaluation purposes only.")
globals.Logger.Infof("Copyright 2017-2021 DERO Project. All rights reserved.")
globals.Logger.Infof("OS:%s ARCH:%s GOMAXPROCS:%d", runtime.GOOS, runtime.GOARCH, runtime.GOMAXPROCS(0))
globals.Logger.Infof("Version v%s", config.Version.String())
if globals.Arguments["--wallet-address"] != nil {
addr, err := globals.ParseValidateAddress(globals.Arguments["--wallet-address"].(string))
if err != nil {
globals.Logger.Fatalf("Wallet address is invalid: err %s", err)
}
wallet_address = addr.String()
}
if !globals.Arguments["--testnet"].(bool) {
daemon_rpc_address = "http://127.0.0.1:10102"
} else {
daemon_rpc_address = "http://127.0.0.1:40402"
}
if globals.Arguments["--daemon-rpc-address"] != nil {
daemon_rpc_address = globals.Arguments["--daemon-rpc-address"].(string)
}
threads = runtime.GOMAXPROCS(0)
if globals.Arguments["--mining-threads"] != nil {
if s, err := strconv.Atoi(globals.Arguments["--mining-threads"].(string)); err == nil {
threads = s
} else {
globals.Logger.Fatalf("Mining threads argument cannot be parsed: err %s", err)
}
if threads > runtime.GOMAXPROCS(0) {
globals.Logger.Fatalf("Mining threads (%d) exceed available CPUs (%d). This is NOT optimal", threads, runtime.GOMAXPROCS(0))
}
}
if globals.Arguments["--max-pow-size"] != nil {
if s, err := strconv.Atoi(globals.Arguments["--max-pow-size"].(string)); err == nil && s > 200 && s < 100000 {
max_pow_size = s * 1024
} else {
globals.Logger.Fatalf("max-pow-size argument cannot be parsed: err %s", err)
}
}
globals.Logger.Infof("max-pow-size limited to %d bytes. Good Luck!!", max_pow_size)
if globals.Arguments["--bench"].(bool) {
var wg sync.WaitGroup
fmt.Printf("%20s %20s %20s %20s %20s \n", "Threads", "Total Time", "Total Iterations", "Time/PoW ", "Hash Rate/Sec")
iterations = 500
for bench := 1; bench <= threads; bench++ {
processor = 0
now := time.Now()
for i := 0; i < bench; i++ {
wg.Add(1)
go random_execution(&wg, iterations)
}
wg.Wait()
duration := time.Since(now)
fmt.Printf("%20s %20s %20s %20s %20s \n", fmt.Sprintf("%d", bench), fmt.Sprintf("%s", duration), fmt.Sprintf("%d", bench*iterations),
fmt.Sprintf("%s", duration/time.Duration(bench*iterations)), fmt.Sprintf("%.1f", float32(time.Second)/(float32(duration/time.Duration(bench*iterations)))))
}
os.Exit(0)
}
globals.Logger.Infof("System will mine to \"%s\" with %d threads. Good Luck!!", wallet_address, threads)
//threads_ptr := flag.Int("threads", runtime.NumCPU(), "No. Of threads")
//iterations_ptr := flag.Int("iterations", 20, "No. Of DERO Stereo POW calculated/thread")
/*bench_ptr := flag.Bool("bench", false, "run bench with params")
daemon_ptr := flag.String("rpc-server-address", "127.0.0.1:18091", "DERO daemon RPC address to get work and submit mined blocks")
delay_ptr := flag.Int("delay", 1, "Fetch job every this many seconds")
wallet_address := flag.String("wallet-address", "", "Owner of this wallet will receive mining rewards")
_ = daemon_ptr
_ = delay_ptr
_ = wallet_address
*/
if threads < 1 || iterations < 1 || threads > 2048 {
globals.Logger.Fatalf("Invalid parameters\n")
return
}
// This tiny goroutine continuously updates status as required
go func() {
last_our_height := int64(0)
last_best_height := int64(0)
last_peer_count := uint64(0)
last_topo_height := int64(0)
last_mempool_tx_count := 0
last_counter := uint64(0)
last_counter_time := time.Now()
last_mining_state := false
_ = last_mining_state
_ = last_peer_count
_ = last_topo_height
_ = last_mempool_tx_count
mining := true
for {
select {
case <-Exit_In_Progress:
return
default:
}
best_height, best_topo_height := int64(0), int64(0)
peer_count := uint64(0)
mempool_tx_count := 0
// only update prompt if needed
if last_our_height != our_height || last_best_height != best_height || last_counter != counter {
// choose color based on urgency
color := "\033[33m" // default is yellow
/*if our_height < best_height {
color = "\033[33m" // make prompt yellow
} else if our_height > best_height {
color = "\033[31m" // make prompt red
}*/
pcolor := "\033[32m" // default is green color
/*if peer_count < 1 {
pcolor = "\033[31m" // make prompt red
} else if peer_count <= 8 {
pcolor = "\033[33m" // make prompt yellow
}*/
mining_string := ""
if mining {
mining_speed := float64(counter-last_counter) / time.Since(last_counter_time).Seconds()
last_counter = counter
last_counter_time = time.Now()
switch {
case mining_speed > 1000000:
mining_string = fmt.Sprintf("MINING @ %.1f MH/s", float32(mining_speed)/1000000.0)
case mining_speed > 1000:
mining_string = fmt.Sprintf("MINING @ %.1f KH/s", float32(mining_speed)/1000.0)
case mining_speed > 0:
mining_string = fmt.Sprintf("MINING @ %.0f H/s", mining_speed)
}
}
last_mining_state = mining
hash_rate_string := ""
switch {
case hash_rate > 1000000000000:
hash_rate_string = fmt.Sprintf("%.1f TH/s", float64(hash_rate)/1000000000000.0)
case hash_rate > 1000000000:
hash_rate_string = fmt.Sprintf("%.1f GH/s", float64(hash_rate)/1000000000.0)
case hash_rate > 1000000:
hash_rate_string = fmt.Sprintf("%.1f MH/s", float64(hash_rate)/1000000.0)
case hash_rate > 1000:
hash_rate_string = fmt.Sprintf("%.1f KH/s", float64(hash_rate)/1000.0)
case hash_rate > 0:
hash_rate_string = fmt.Sprintf("%d H/s", hash_rate)
}
testnet_string := ""
if !globals.IsMainnet() {
testnet_string = "\033[31m TESTNET"
}
l.SetPrompt(fmt.Sprintf("\033[1m\033[32mDERO Miner: \033[0m"+color+"Height %d "+pcolor+" FOUND_BLOCKS %d \033[32mNW %s %s>%s>>\033[0m ", our_height, block_counter, hash_rate_string, mining_string, testnet_string))
l.Refresh()
last_our_height = our_height
last_best_height = best_height
last_peer_count = peer_count
last_mempool_tx_count = mempool_tx_count
last_topo_height = best_topo_height
}
time.Sleep(1 * time.Second)
}
}()
l.Refresh() // refresh the prompt
go func() {
var gracefulStop = make(chan os.Signal, 1) // signal.Notify requires a buffered channel
signal.Notify(gracefulStop, os.Interrupt)  // listen for interrupt (Ctrl-C)
for {
sig := <-gracefulStop
fmt.Printf("received signal %s\n", sig)
if sig == os.Interrupt {
close(Exit_In_Progress)
}
}
}()
go increase_delay()
for i := 0; i < threads; i++ {
go mineblock()
}
go getwork()
for {
line, err := l.Readline()
if err == readline.ErrInterrupt {
if len(line) == 0 {
fmt.Print("Ctrl-C received, Exit in progress\n")
close(Exit_In_Progress)
os.Exit(0)
break
} else {
continue
}
} else if err == io.EOF {
<-Exit_In_Progress
break
}
line = strings.TrimSpace(line)
line_parts := strings.Fields(line)
command := ""
if len(line_parts) >= 1 {
command = strings.ToLower(line_parts[0])
}
switch {
case line == "help":
usage(l.Stderr())
case strings.HasPrefix(line, "say"):
line := strings.TrimSpace(line[3:])
if len(line) == 0 {
log.Println("say what?")
break
}
case command == "version":
fmt.Printf("Version %s OS:%s ARCH:%s \n", config.Version.String(), runtime.GOOS, runtime.GOARCH)
case strings.ToLower(line) == "bye":
fallthrough
case strings.ToLower(line) == "exit":
fallthrough
case strings.ToLower(line) == "quit":
close(Exit_In_Progress)
os.Exit(0)
case line == "":
default:
log.Println("you said:", strconv.Quote(line))
}
}
<-Exit_In_Progress
return
}
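The status goroutine above rescales the network hash rate every second through an H/s-to-TH/s unit switch. A minimal, self-contained sketch of that scaling logic (the helper name `formatHashRate` is ours; the miner inlines this switch):

```go
package main

import "fmt"

// formatHashRate mirrors the unit-scaling switch used by the status
// goroutine: pick the largest unit the rate exceeds, one decimal place.
func formatHashRate(hr uint64) string {
	switch {
	case hr > 1000000000000:
		return fmt.Sprintf("%.1f TH/s", float64(hr)/1000000000000.0)
	case hr > 1000000000:
		return fmt.Sprintf("%.1f GH/s", float64(hr)/1000000000.0)
	case hr > 1000000:
		return fmt.Sprintf("%.1f MH/s", float64(hr)/1000000.0)
	case hr > 1000:
		return fmt.Sprintf("%.1f KH/s", float64(hr)/1000.0)
	case hr > 0:
		return fmt.Sprintf("%d H/s", hr)
	}
	return ""
}

func main() {
	fmt.Println(formatHashRate(500))     // 500 H/s
	fmt.Println(formatHashRate(2500))    // 2.5 KH/s
	fmt.Println(formatHashRate(3200000)) // 3.2 MH/s
}
```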
func random_execution(wg *sync.WaitGroup, iterations int) {
var workbuf [255]byte
runtime.LockOSThread()
//threadaffinity()
for i := 0; i < iterations; i++ {
rand.Read(workbuf[:])
//astrobwt.POW(workbuf[:])
//astrobwt.POW_0alloc(workbuf[:])
_, success := astrobwt.POW_optimized_v1(workbuf[:], max_pow_size)
if !success {
i--
}
}
wg.Done()
runtime.UnlockOSThread()
}
func increase_delay() {
for {
time.Sleep(time.Second)
maxdelay++
}
}
// continuously get work
func getwork() {
// create client
rpcClient = jsonrpc.NewClient(daemon_rpc_address + "/json_rpc")
var netTransport = &http.Transport{
Dial: (&net.Dialer{
Timeout: 5 * time.Second,
}).Dial,
TLSHandshakeTimeout: 5 * time.Second,
}
netClient = &http.Client{
Timeout: time.Second * 5,
Transport: netTransport,
}
// execute rpc to service
response, err := rpcClient.Call("get_info")
if err == nil {
globals.Logger.Infof("Connection to RPC server successful \"%s\"", daemon_rpc_address)
} else {
//log.Fatalf("Connection to RPC server Failed err %s", err)
globals.Logger.Infof("Connection to RPC server \"%s\" failed, err %s", daemon_rpc_address, err)
return
}
for {
response, err = rpcClient.Call("getblocktemplate", map[string]interface{}{"wallet_address": fmt.Sprintf("%s", wallet_address), "reserve_size": 10})
if err == nil {
var block_template rpc.GetBlockTemplate_Result
err = response.GetObject(&block_template)
if err == nil {
mutex.Lock()
job = block_template
maxdelay = 0
mutex.Unlock()
hash_rate = job.Difficulty / config.BLOCK_TIME
our_height = int64(job.Height)
Difficulty = job.Difficulty
//fmt.Printf("block_template %+v\n", block_template)
}
} else {
globals.Logger.Errorf("Error receiving block template, err %s", err)
}
time.Sleep(300 * time.Millisecond)
}
}
func mineblock() {
var diff big.Int
var powhash crypto.Hash
var work [76]byte
var extra_nonce [16]byte
nonce_buf := work[39 : 39+4] // slices share the backing array, so writes here modify work
runtime.LockOSThread()
threadaffinity()
iterations_per_loop := uint32(31.0 * float32(astrobwt.MAX_LENGTH) / float32(max_pow_size))
var data astrobwt.Data
for {
mutex.RLock()
myjob := job
mutex.RUnlock()
if maxdelay > 10 {
time.Sleep(time.Second)
continue
}
n, err := hex.Decode(work[:], []byte(myjob.Blockhashing_blob))
if err != nil || n != 76 {
time.Sleep(time.Second)
globals.Logger.Errorf("Blockwork could not be decoded successfully (%s), err:%s n:%d %+v", myjob.Blockhashing_blob, err, n, myjob)
continue
}
rand.Read(extra_nonce[:]) // fill extra nonce with random buffer
copy(work[7+32+4:75], extra_nonce[:])
diff.SetUint64(myjob.Difficulty)
if work[0] >= 1 { // check major version
for i := uint32(0); i < iterations_per_loop; i++ {
binary.BigEndian.PutUint32(nonce_buf, i)
//pow := astrobwt.POW_0alloc(work[:])
pow, success := astrobwt.POW_optimized_v2(work[:], max_pow_size, &data)
if !success {
continue
}
atomic.AddUint64(&counter, 1)
copy(powhash[:], pow[:])
if CheckPowHashBig(powhash, &diff) {
globals.Logger.Infof("Successfully found DERO astroblock at difficulty %d at height %d", myjob.Difficulty, myjob.Height)
maxdelay = 200
block_counter++
response, err := rpcClient.Call("submitblock", myjob.Blocktemplate_blob, fmt.Sprintf("%x", work[:]))
_ = response
_ = err
/*fmt.Printf("submitting %+v\n", []string{myjob.Blocktemplate_blob, fmt.Sprintf("%x", work[:])})
fmt.Printf("submit err %s\n", err)
fmt.Printf("submit response %s\n", response)
*/
break
}
}
}
}
}
func usage(w io.Writer) {
io.WriteString(w, "commands:\n")
//io.WriteString(w, completer.Tree(" "))
io.WriteString(w, "\t\033[1mhelp\033[0m\t\tthis help\n")
io.WriteString(w, "\t\033[1mstatus\033[0m\t\tShow general information\n")
io.WriteString(w, "\t\033[1mbye\033[0m\t\tQuit the miner\n")
io.WriteString(w, "\t\033[1mversion\033[0m\t\tShow version\n")
io.WriteString(w, "\t\033[1mexit\033[0m\t\tQuit the miner\n")
io.WriteString(w, "\t\033[1mquit\033[0m\t\tQuit the miner\n")
}
var completer = readline.NewPrefixCompleter(
readline.PcItem("help"),
readline.PcItem("status"),
readline.PcItem("version"),
readline.PcItem("bye"),
readline.PcItem("exit"),
readline.PcItem("quit"),
)
func filterInput(r rune) (rune, bool) {
switch r {
// block CtrlZ feature
case readline.CharCtrlZ:
return r, false
}
return r, true
}

//+build !linux,!windows
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
var processor int32
// TODO
func threadaffinity() {
}

// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
import "runtime"
import "sync/atomic"
import "golang.org/x/sys/unix"
var processor int32
// sets thread affinity to avoid cache collision and thread migration
func threadaffinity() {
var cpuset unix.CPUSet
lock_on_cpu := atomic.AddInt32(&processor, 1)
if lock_on_cpu >= int32(runtime.GOMAXPROCS(0)) { // threads are more than cpu, we do not know what to do
return
}
cpuset.Zero()
cpuset.Set(int(avoidHT(int(lock_on_cpu))))
unix.SchedSetaffinity(0, &cpuset)
}
func avoidHT(i int) int {
count := runtime.GOMAXPROCS(0)
if i < count/2 {
return i * 2
} else {
return (i-count/2)*2 + 1
}
}
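avoidHT above spreads the first half of worker indices across even logical CPUs and the second half across odd ones, so two workers do not land on sibling hyperthreads. A fixed-size illustration (`avoidHT8` is our name; the real function reads `runtime.GOMAXPROCS(0)` instead of a constant):

```go
package main

import "fmt"

// avoidHT8 reproduces the mapping with GOMAXPROCS pinned at 8:
// workers 0..3 go to even CPUs 0,2,4,6 and workers 4..7 to odd CPUs 1,3,5,7.
func avoidHT8(i int) int {
	count := 8
	if i < count/2 {
		return i * 2
	}
	return (i-count/2)*2 + 1
}

func main() {
	for i := 0; i < 8; i++ {
		fmt.Printf("worker %d -> cpu %d\n", i, avoidHT8(i))
	}
}
```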

// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
import "runtime"
import "sync/atomic"
import "syscall"
import "unsafe"
import "math/bits"
var libkernel32 uintptr
var setThreadAffinityMask uintptr
func doLoadLibrary(name string) uintptr {
lib, _ := syscall.LoadLibrary(name)
return uintptr(lib)
}
func doGetProcAddress(lib uintptr, name string) uintptr {
addr, _ := syscall.GetProcAddress(syscall.Handle(lib), name)
return uintptr(addr)
}
func syscall3(trap, nargs, a1, a2, a3 uintptr) uintptr {
ret, _, _ := syscall.Syscall(trap, nargs, a1, a2, a3)
return ret
}
func init() {
libkernel32 = doLoadLibrary("kernel32.dll")
setThreadAffinityMask = doGetProcAddress(libkernel32, "SetThreadAffinityMask")
}
var processor int32
// currently we support up to 64 cores
func SetThreadAffinityMask(hThread syscall.Handle, dwThreadAffinityMask uint) *uint32 {
ret1 := syscall3(setThreadAffinityMask, 2,
uintptr(hThread),
uintptr(dwThreadAffinityMask),
0)
return (*uint32)(unsafe.Pointer(ret1))
}
// CurrentThread returns the handle for the current thread.
// It is a pseudo handle that does not need to be closed.
func CurrentThread() syscall.Handle { return syscall.Handle(^uintptr(2 - 1)) }
// sets thread affinity to avoid cache collision and thread migration
func threadaffinity() {
lock_on_cpu := atomic.AddInt32(&processor, 1)
if lock_on_cpu >= int32(runtime.GOMAXPROCS(0)) { // threads are more than cpu, we do not know what to do
return
}
if lock_on_cpu >= bits.UintSize {
return
}
var cpuset uint
cpuset = 1 << uint(avoidHT(int(lock_on_cpu)))
SetThreadAffinityMask(CurrentThread(), cpuset)
}
func avoidHT(i int) int {
count := runtime.GOMAXPROCS(0)
if i < count/2 {
return i * 2
} else {
return (i-count/2)*2 + 1
}
}

RESEARCH LICENSE
Version 1.1.2
I. DEFINITIONS.
"Licensee" means You and any other party that has entered into and has in effect a version of this License.
"Licensor" means DERO PROJECT (GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8) and its successors and assignees.
"Modifications" means any (a) change or addition to the Technology or (b) new source or object code implementing any portion of the Technology.
"Research Use" means research, evaluation, or development for the purpose of advancing knowledge, teaching, learning, or customizing the Technology for personal use. Research Use expressly excludes use or distribution for direct or indirect commercial (including strategic) gain or advantage.
"Technology" means the source code, object code and specifications of the technology made available by Licensor pursuant to this License.
"Technology Site" means the website designated by Licensor for accessing the Technology.
"You" means the individual executing this License or the legal entity or entities represented by the individual executing this License.
II. PURPOSE.
Licensor is licensing the Technology under this Research License (the "License") to promote research, education, innovation, and development using the Technology.
COMMERCIAL USE AND DISTRIBUTION OF TECHNOLOGY AND MODIFICATIONS IS PERMITTED ONLY UNDER AN APPROPRIATE COMMERCIAL USE LICENSE AVAILABLE FROM LICENSOR AT <url>.
III. RESEARCH USE RIGHTS.
A. Subject to the conditions contained herein, Licensor grants to You a non-exclusive, non-transferable, worldwide, and royalty-free license to do the following for Your Research Use only:
1. reproduce, create Modifications of, and use the Technology alone, or with Modifications;
2. share source code of the Technology alone, or with Modifications, with other Licensees;
3. distribute object code of the Technology, alone, or with Modifications, to any third parties for Research Use only, under a license of Your choice that is consistent with this License; and
4. publish papers and books discussing the Technology which may include relevant excerpts that do not in the aggregate constitute a significant portion of the Technology.
B. Residual Rights. You may use any information in intangible form that you remember after accessing the Technology, except when such use violates Licensor's copyrights or patent rights.
C. No Implied Licenses. Other than the rights granted herein, Licensor retains all rights, title, and interest in Technology , and You retain all rights, title, and interest in Your Modifications and associated specifications, subject to the terms of this License.
D. Open Source Licenses. Portions of the Technology may be provided with notices and open source licenses from open source communities and third parties that govern the use of those portions, and any licenses granted hereunder do not alter any rights and obligations you may have under such open source licenses, however, the disclaimer of warranty and limitation of liability provisions in this License will apply to all Technology in this distribution.
IV. INTELLECTUAL PROPERTY REQUIREMENTS
As a condition to Your License, You agree to comply with the following restrictions and responsibilities:
A. License and Copyright Notices. You must include a copy of this License in a Readme file for any Technology or Modifications you distribute. You must also include the following statement, "Use and distribution of this technology is subject to the Java Research License included herein", (a) once prominently in the source code tree and/or specifications for Your source code distributions, and (b) once in the same file as Your copyright or proprietary notices for Your binary code distributions. You must cause any files containing Your Modification to carry prominent notice stating that You changed the files. You must not remove or alter any copyright or other proprietary notices in the Technology.
B. Licensee Exchanges. Any Technology and Modifications You receive from any Licensee are governed by this License.
V. GENERAL TERMS.
A. Disclaimer Of Warranties.
TECHNOLOGY IS PROVIDED "AS IS", WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, WARRANTIES THAT ANY SUCH TECHNOLOGY IS FREE OF DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR PURPOSE, OR NON-INFRINGING OF THIRD PARTY RIGHTS. YOU AGREE THAT YOU BEAR THE ENTIRE RISK IN CONNECTION WITH YOUR USE AND DISTRIBUTION OF ANY AND ALL TECHNOLOGY UNDER THIS LICENSE.
B. Infringement; Limitation Of Liability.
1. If any portion of, or functionality implemented by, the Technology becomes the subject of a claim or threatened claim of infringement ("Affected Materials"), Licensor may, in its unrestricted discretion, suspend Your rights to use and distribute the Affected Materials under this License. Such suspension of rights will be effective immediately upon Licensor's posting of notice of suspension on the Technology Site.
2. IN NO EVENT WILL LICENSOR BE LIABLE FOR ANY DIRECT, INDIRECT, PUNITIVE, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH OR ARISING OUT OF THIS LICENSE (INCLUDING, WITHOUT LIMITATION, LOSS OF PROFITS, USE, DATA, OR ECONOMIC ADVANTAGE OF ANY SORT), HOWEVER IT ARISES AND ON ANY THEORY OF LIABILITY (including negligence), WHETHER OR NOT LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. LIABILITY UNDER THIS SECTION V.B.2 SHALL BE SO LIMITED AND EXCLUDED, NOTWITHSTANDING FAILURE OF THE ESSENTIAL PURPOSE OF ANY REMEDY.
C. Termination.
1. You may terminate this License at any time by notifying Licensor in writing.
2. All Your rights will terminate under this License if You fail to comply with any of its material terms or conditions and do not cure such failure within thirty (30) days after becoming aware of such noncompliance.
3. Upon termination, You must discontinue all uses and distribution of the Technology , and all provisions of this Section V shall survive termination.
D. Miscellaneous.
1. Trademark. You agree to comply with Licensor's Trademark & Logo Usage Requirements, if any and as modified from time to time, available at the Technology Site. Except as expressly provided in this License, You are granted no rights in or to any Licensor's trademarks now or hereafter used or licensed by Licensor.
2. Integration. This License represents the complete agreement of the parties concerning the subject matter hereof.
3. Severability. If any provision of this License is held unenforceable, such provision shall be reformed to the extent necessary to make it enforceable unless to do so would defeat the intent of the parties, in which case, this License shall terminate.
4. Governing Law. This License is governed by the laws of the United States and the State of California, as applied to contracts entered into and performed in California between California residents. In no event shall this License be construed against the drafter.
5. Export Control. You agree to comply with the U.S. export controls and trade laws of other countries that apply to Technology and Modifications.
READ ALL THE TERMS OF THIS LICENSE CAREFULLY BEFORE ACCEPTING.
BY CLICKING ON THE YES BUTTON BELOW OR USING THE TECHNOLOGY, YOU ARE ACCEPTING AND AGREEING TO ABIDE BY THE TERMS AND CONDITIONS OF THIS LICENSE. YOU MUST BE AT LEAST 18 YEARS OF AGE AND OTHERWISE COMPETENT TO ENTER INTO CONTRACTS.
IF YOU DO NOT MEET THESE CRITERIA, OR YOU DO NOT AGREE TO ANY OF THE TERMS OF THIS LICENSE, DO NOT USE THIS SOFTWARE IN ANY FORM.

// Copyright 2017-2018 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
package main
import "testing"
// Needs to expand test to cover failure conditions
func Test_Part1(t *testing.T) {
}

// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
import "io"
import "time"
import "fmt"
//import "io/ioutil"
import "strings"
//import "path/filepath"
//import "encoding/hex"
import "github.com/chzyer/readline"
import "github.com/deroproject/derohe/rpc"
import "github.com/deroproject/derohe/globals"
//import "github.com/deroproject/derohe/address"
//import "github.com/deroproject/derohe/walletapi"
import "github.com/deroproject/derohe/transaction"
// handle menu if a wallet is currently opened
func display_easymenu_post_open_command(l *readline.Instance) {
w := l.Stderr()
io.WriteString(w, "Menu:\n")
io.WriteString(w, "\t\033[1m1\033[0m\tDisplay account Address \n")
io.WriteString(w, "\t\033[1m2\033[0m\tDisplay Seed "+color_red+"(Please save seed in safe location)\n\033[0m")
io.WriteString(w, "\t\033[1m3\033[0m\tDisplay Keys (hex)\n")
if !wallet.IsRegistered() {
io.WriteString(w, "\t\033[1m4\033[0m\tAccount registration to blockchain (registration has no fee and is a precondition to using the account)\n")
io.WriteString(w, "\n")
io.WriteString(w, "\n")
} else { // hide some commands, if view only wallet
io.WriteString(w, "\t\033[1m4\033[0m\tDisplay wallet pool\n")
io.WriteString(w, "\t\033[1m5\033[0m\tTransfer (send DERO) To Another Wallet\n")
//io.WriteString(w, "\t\033[1m6\033[0m\tCreate Transaction in offline mode\n")
io.WriteString(w, "\n")
}
io.WriteString(w, "\t\033[1m7\033[0m\tChange wallet password\n")
io.WriteString(w, "\t\033[1m8\033[0m\tClose Wallet\n")
if wallet.IsRegistered() {
io.WriteString(w, "\t\033[1m12\033[0m\tTransfer all balance (send DERO) To Another Wallet\n")
io.WriteString(w, "\t\033[1m13\033[0m\tShow transaction history\n")
io.WriteString(w, "\t\033[1m14\033[0m\tRescan transaction history\n")
}
io.WriteString(w, "\n\t\033[1m9\033[0m\tExit menu and start prompt\n")
io.WriteString(w, "\t\033[1m0\033[0m\tExit Wallet\n")
}
// this handles all the commands if wallet in menu mode and a wallet is opened
func handle_easymenu_post_open_command(l *readline.Instance, line string) (processed bool) {
var err error
_ = err
line = strings.TrimSpace(line)
line_parts := strings.Fields(line)
processed = true
if len(line_parts) < 1 { // if no command return
return
}
command := ""
if len(line_parts) >= 1 {
command = strings.ToLower(line_parts[0])
}
offline_tx := false
_ = offline_tx
switch command {
case "1":
fmt.Fprintf(l.Stderr(), "Wallet address : "+color_green+"%s"+color_white+"\n", wallet.GetAddress())
if !wallet.IsRegistered() {
reg_tx := wallet.GetRegistrationTX()
fmt.Fprintf(l.Stderr(), "Registration TX : "+color_green+"%x"+color_white+"\n", reg_tx.Serialize())
}
PressAnyKey(l, wallet)
case "2": // give user his seed
if !ValidateCurrentPassword(l, wallet) {
globals.Logger.Warnf("Invalid password")
PressAnyKey(l, wallet)
break
}
display_seed(l, wallet) // seed should be given only to authenticated users
PressAnyKey(l, wallet)
case "3": // give user his keys in hex form
if !ValidateCurrentPassword(l, wallet) {
globals.Logger.Warnf("Invalid password")
PressAnyKey(l, wallet)
break
}
display_spend_key(l, wallet)
PressAnyKey(l, wallet)
case "4": // Registration
if !wallet.IsRegistered() {
fmt.Fprintf(l.Stderr(), "Wallet address : "+color_green+"%s"+color_white+" is going to be registered. This is a precondition for using the online chain. It will take a few seconds to register.\n", wallet.GetAddress())
reg_tx := wallet.GetRegistrationTX()
// at this point we must send the registration transaction
fmt.Fprintf(l.Stderr(), "Wallet address : "+color_green+"%s"+color_white+" is going to be registered. Please wait until the account is registered.\n", wallet.GetAddress())
fmt.Printf("sending registration tx err %s\n", wallet.SendTransaction(reg_tx))
} else {
pool := wallet.GetPool()
fmt.Fprintf(l.Stderr(), "Wallet pool has %d pending/in-progress transactions.\n", len(pool))
fmt.Fprintf(l.Stderr(), "%5s %9s %8s %64s %s %s\n", "No.", "Amount", "TH", "TXID", "Destination", "Status")
for i := range pool {
var txid, status string
if len(pool[i].Tries) > 0 {
try := pool[i].Tries[len(pool[i].Tries)-1]
txid = try.TXID.String()
status = try.Status
} else {
status = "Will Dispatch in next block"
}
fmt.Fprintf(l.Stderr(), "%5d %9s %8d %64s %s %s\n", i, "-"+globals.FormatMoney(pool[i].Amount()), pool[i].Trigger_Height, txid, "Not implemented", status)
}
}
case "6":
offline_tx = true
fallthrough
case "5":
if !valid_registration_or_display_error(l, wallet) {
break
}
if !ValidateCurrentPassword(l, wallet) {
globals.Logger.Warnf("Invalid password")
break
}
// a , amount_to_transfer, err := collect_transfer_info(l,wallet)
a, err := ReadAddress(l)
if err != nil {
globals.Logger.Warnf("Err :%s", err)
break
}
var amount_to_transfer uint64
var arguments = rpc.Arguments{
// { rpc.RPC_DESTINATION_PORT, rpc.DataUint64,uint64(0x1234567812345678)},
// { rpc.RPC_VALUE_TRANSFER, rpc.DataUint64,uint64(12345)},
// { rpc.RPC_EXPIRY , rpc.DataTime, time.Now().Add(time.Hour).UTC()},
// { rpc.RPC_COMMENT , rpc.DataString, "Purchase XYZ"},
}
if a.IsIntegratedAddress() { // read everything from the address
if err := a.Arguments.Validate_Arguments(); err != nil {
globals.Logger.Warnf("Integrated Address arguments could not be validated, err: %s", err)
break
}
if !a.Arguments.Has(rpc.RPC_DESTINATION_PORT, rpc.DataUint64) { // the destination port is mandatory for integrated addresses
globals.Logger.Warnf("Integrated Address does not contain destination port.")
break
}
arguments = append(arguments, rpc.Argument{rpc.RPC_DESTINATION_PORT, rpc.DataUint64, a.Arguments.Value(rpc.RPC_DESTINATION_PORT, rpc.DataUint64).(uint64)})
// arguments = append(arguments, rpc.Argument{"Comment", rpc.DataString, "holygrail of all data is now working if you can see this"})
if a.Arguments.Has(rpc.RPC_EXPIRY, rpc.DataTime) { // check expiry only if it is present
if a.Arguments.Value(rpc.RPC_EXPIRY, rpc.DataTime).(time.Time).Before(time.Now().UTC()) {
globals.Logger.Warnf("This address has expired on %s", a.Arguments.Value(rpc.RPC_EXPIRY, rpc.DataTime))
break
} else {
globals.Logger.Infof("This address will expire on %s", a.Arguments.Value(rpc.RPC_EXPIRY, rpc.DataTime))
}
}
globals.Logger.Infof("Destination port is integrated in address ID:%016x", a.Arguments.Value(rpc.RPC_DESTINATION_PORT, rpc.DataUint64).(uint64))
if a.Arguments.Has(rpc.RPC_COMMENT, rpc.DataString) { // show the comment only if it is present
globals.Logger.Infof("Integrated Message:%s", a.Arguments.Value(rpc.RPC_COMMENT, rpc.DataString))
}
}
// arguments have been already validated
for _, arg := range a.Arguments {
if !(arg.Name == rpc.RPC_COMMENT || arg.Name == rpc.RPC_EXPIRY || arg.Name == rpc.RPC_DESTINATION_PORT || arg.Name == rpc.RPC_SOURCE_PORT || arg.Name == rpc.RPC_VALUE_TRANSFER) {
switch arg.DataType {
case rpc.DataString:
if v, err := ReadString(l, arg.Name, arg.Value.(string)); err == nil {
arguments = append(arguments, rpc.Argument{arg.Name, arg.DataType, v})
} else {
globals.Logger.Warnf("%s could not be parsed (type %s)", arg.Name, arg.DataType)
return
}
case rpc.DataInt64:
if v, err := ReadInt64(l, arg.Name, arg.Value.(int64)); err == nil {
arguments = append(arguments, rpc.Argument{arg.Name, arg.DataType, v})
} else {
globals.Logger.Warnf("%s could not be parsed (type %s)", arg.Name, arg.DataType)
return
}
case rpc.DataUint64:
if v, err := ReadUint64(l, arg.Name, arg.Value.(uint64)); err == nil {
arguments = append(arguments, rpc.Argument{arg.Name, arg.DataType, v})
} else {
globals.Logger.Warnf("%s could not be parsed (type %s)", arg.Name, arg.DataType)
return
}
case rpc.DataFloat64:
if v, err := ReadFloat64(l, arg.Name, arg.Value.(float64)); err == nil {
arguments = append(arguments, rpc.Argument{arg.Name, arg.DataType, v})
} else {
globals.Logger.Warnf("%s could not be parsed (type %s)", arg.Name, arg.DataType)
return
}
case rpc.DataTime:
globals.Logger.Warnf("time arguments are currently not supported.")
}
}
}
if a.Arguments.Has(rpc.RPC_VALUE_TRANSFER, rpc.DataUint64) { // use the embedded amount if it is present
globals.Logger.Infof("Transaction Value: %s", globals.FormatMoney(a.Arguments.Value(rpc.RPC_VALUE_TRANSFER, rpc.DataUint64).(uint64)))
amount_to_transfer = a.Arguments.Value(rpc.RPC_VALUE_TRANSFER, rpc.DataUint64).(uint64)
} else {
amount_str := read_line_with_prompt(l, "Enter amount to transfer in DERO (max TODO): ")
if amount_str == "" {
amount_str = ".00009"
}
amount_to_transfer, err = globals.ParseAmount(amount_str)
if err != nil {
globals.Logger.Warnf("Err :%s", err)
break // invalid amount provided, bail out
}
}
// if no arguments, use space by embedding a small comment
if len(arguments) == 0 { // allow user to enter Comment
if v, err := ReadString(l, "Comment", ""); err == nil {
arguments = append(arguments, rpc.Argument{"Comment", rpc.DataString, v})
} else {
globals.Logger.Warnf("%s could not be parsed (type %s)", "Comment", rpc.DataString)
return
}
}
if _, err := arguments.CheckPack(transaction.PAYLOAD0_LIMIT); err != nil {
globals.Logger.Warnf("Arguments packing err: %s", err)
return
}
if ConfirmYesNoDefaultNo(l, "Confirm Transaction (y/N)") {
//src_port := uint64(0xffffffffffffffff)
_, err := wallet.PoolTransfer([]rpc.Transfer{{Amount: amount_to_transfer, Destination: a.String(), Payload_RPC: arguments}}, rpc.Arguments{}) // empty SCDATA
if err != nil {
globals.Logger.Warnf("Error while building Transaction err %s\n", err)
break
}
//fmt.Printf("queued tx err %s\n")
}
case "12":
if !valid_registration_or_display_error(l, wallet) {
break
}
if !ValidateCurrentPassword(l, wallet) {
globals.Logger.Warnf("Invalid password")
break
}
globals.Logger.Warnf("Not supported")
/*
// a , amount_to_transfer, err := collect_transfer_info(l,wallet)
fmt.Printf("dest address %s\n", "deroi1qxqqkmaz8nhv4q07w3cjyt84kmrqnuw4nprpqfl9xmmvtvwa7cdykxq5dph4ufnx5ndq4ltraf (14686f5e2666a4da) dero1qxqqkmaz8nhv4q07w3cjyt84kmrqnuw4nprpqfl9xmmvtvwa7cdykxqpfpaes")
a, err := ReadAddress(l)
if err != nil {
globals.Logger.Warnf("Err :%s", err)
break
}
// if the user provided an integrated address, do not ask for a payment ID
if a.IsIntegratedAddress() {
globals.Logger.Infof("Payment ID is integrated in address ID:%x", a.PaymentID)
}
if ConfirmYesNoDefaultNo(l, "Confirm Transaction to send entire balance (y/N)") {
addr_list := []address.Address{*a}
amount_list := []uint64{0} // transfer 50 dero, 2 dero
fees_per_kb := uint64(0) // fees must be calculated by walletapi
uid, err := wallet.PoolTransfer(addr_list, amount_list, fees_per_kb, 0, true)
_ = uid
if err != nil {
globals.Logger.Warnf("Error while building Transaction err %s\n", err)
break
}
}
*/
//PressAnyKey(l, wallet) // wait for a key press
case "7": // change password
if ConfirmYesNoDefaultNo(l, "Change wallet password (y/N)") &&
ValidateCurrentPassword(l, wallet) {
new_password := ReadConfirmedPassword(l, "Enter new password", "Confirm password")
err = wallet.Set_Encrypted_Wallet_Password(new_password)
if err == nil {
globals.Logger.Infof("Wallet password successfully changed")
} else {
globals.Logger.Warnf("Wallet password could not be changed err %s", err)
}
}
case "8": // close and discard user key
wallet.Close_Encrypted_Wallet()
prompt_mutex.Lock()
wallet = nil // overwrite previous instance
prompt_mutex.Unlock()
fmt.Fprintf(l.Stderr(), color_yellow+"Wallet closed"+color_white)
case "9": // enable prompt mode
menu_mode = false
globals.Logger.Infof("Prompt mode enabled, type \"menu\" command to start menu mode")
case "0", "bye", "exit", "quit":
wallet.Close_Encrypted_Wallet() // save the wallet
prompt_mutex.Lock()
wallet = nil
globals.Exit_In_Progress = true
prompt_mutex.Unlock()
fmt.Fprintf(l.Stderr(), color_yellow+"Wallet closed"+color_white)
fmt.Fprintf(l.Stderr(), color_yellow+"Exiting"+color_white)
case "13":
show_transfers(l, wallet, 100)
case "14":
globals.Logger.Infof("Rescanning wallet history")
rescan_bc(wallet)
default:
processed = false // just loop
}
return
}

// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
import "io"
import "fmt"
import "time"
import "strings"
import "encoding/hex"
import "github.com/chzyer/readline"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/globals"
import "github.com/deroproject/derohe/walletapi"
import "github.com/deroproject/derohe/walletapi/rpcserver"
// display menu before a wallet is opened
func display_easymenu_pre_open_command(l *readline.Instance) {
w := l.Stderr()
io.WriteString(w, "Menu:\n")
io.WriteString(w, "\t\033[1m1\033[0m\tOpen existing Wallet\n")
io.WriteString(w, "\t\033[1m2\033[0m\tCreate New Wallet\n")
io.WriteString(w, "\t\033[1m3\033[0m\tRecover Wallet using recovery seed (25 words)\n")
io.WriteString(w, "\t\033[1m4\033[0m\tRecover Wallet using recovery key (64 char private spend key hex)\n")
// io.WriteString(w, "\t\033[1m5\033[0m\tCreate Watch-able Wallet (view only) using wallet view key\n")
// io.WriteString(w, "\t\033[1m6\033[0m\tRecover Non-deterministic Wallet key\n")
io.WriteString(w, "\n\t\033[1m9\033[0m\tExit menu and start prompt\n")
io.WriteString(w, "\t\033[1m0\033[0m\tExit Wallet\n")
}
// handle all commands
func handle_easymenu_pre_open_command(l *readline.Instance, line string) {
var err error
line = strings.TrimSpace(line)
line_parts := strings.Fields(line)
if len(line_parts) < 1 { // if no command return
return
}
command := ""
if len(line_parts) >= 1 {
command = strings.ToLower(line_parts[0])
}
var wallett *walletapi.Wallet_Disk
//account_state := account_valid
switch command {
case "1": // open existing wallet
filename := choose_file_name(l)
// ask user a password
for i := 0; i < 3; i++ {
wallett, err = walletapi.Open_Encrypted_Wallet(filename, ReadPassword(l, filename))
if err != nil {
globals.Logger.Warnf("Error occurred while opening wallet file %s. err %s", filename, err)
wallett = nil
} else { // user knows the password and the db is valid
break
}
}
if wallett != nil {
wallet = wallett
wallett = nil
globals.Logger.Infof("Successfully opened wallet")
common_processing(wallet)
}
case "2": // create a new random account
filename := choose_file_name(l)
password := ReadConfirmedPassword(l, "Enter password", "Confirm password")
wallett, err = walletapi.Create_Encrypted_Wallet_Random(filename, password)
if err != nil {
globals.Logger.Warnf("Error occurred while creating new wallet, err: %s", err)
wallet = nil
break
}
err = wallett.Set_Encrypted_Wallet_Password(password)
if err != nil {
globals.Logger.Warnf("Error changing password")
}
wallet = wallett
wallett = nil
seed_language := choose_seed_language(l)
wallet.SetSeedLanguage(seed_language)
globals.Logger.Debugf("Seed Language %s", seed_language)
display_seed(l, wallet)
common_processing(wallet)
case "3": // create wallet from recovery words
filename := choose_file_name(l)
password := ReadConfirmedPassword(l, "Enter password", "Confirm password")
electrum_words := read_line_with_prompt(l, "Enter seed (25 words) : ")
wallett, err = walletapi.Create_Encrypted_Wallet_From_Recovery_Words(filename, password, electrum_words)
if err != nil {
globals.Logger.Warnf("Error while recovering wallet using seed err %s\n", err)
break
}
wallet = wallett
wallett = nil
//globals.Logger.Debugf("Seed Language %s", account.SeedLanguage)
globals.Logger.Infof("Successfully recovered wallet from seed")
common_processing(wallet)
case "4": // create wallet from hex seed
filename := choose_file_name(l)
password := ReadConfirmedPassword(l, "Enter password", "Confirm password")
seed_key_string := read_line_with_prompt(l, "Please enter your seed ( hex 64 chars): ")
seed_raw, err := hex.DecodeString(seed_key_string) // hex decode
if len(seed_key_string) >= 65 || err != nil { // sanity check
globals.Logger.Warnf("Seed must be at most 64 hexadecimal characters")
break
}
wallett, err = walletapi.Create_Encrypted_Wallet(filename, password, new(crypto.BNRed).SetBytes(seed_raw))
if err != nil {
globals.Logger.Warnf("Error while recovering wallet using seed key err %s\n", err)
break
}
globals.Logger.Infof("Successfully recovered wallet from hex seed")
wallet = wallett
wallett = nil
seed_language := choose_seed_language(l)
wallet.SetSeedLanguage(seed_language)
globals.Logger.Debugf("Seed Language %s", seed_language)
display_seed(l, wallet)
common_processing(wallet)
/*
case "5": // create new view only wallet // TODO user providing wrong key is not being validated, do it ASAP
filename := choose_file_name(l)
view_key_string := read_line_with_prompt(l, "Please enter your View Only Key ( hex 128 chars): ")
password := ReadConfirmedPassword(l, "Enter password", "Confirm password")
wallet, err = walletapi.Create_Encrypted_Wallet_ViewOnly(filename, password, view_key_string)
if err != nil {
globals.Logger.Warnf("Error while reconstructing view only wallet using view key err %s\n", err)
break
}
if globals.Arguments["--offline"].(bool) == true {
//offline_mode = true
} else {
wallet.SetOnlineMode()
}
case "6": // create non deterministic wallet // TODO user providing wrong key is not being validated, do it ASAP
filename := choose_file_name(l)
spend_key_string := read_line_with_prompt(l, "Please enter your Secret spend key ( hex 64 chars): ")
view_key_string := read_line_with_prompt(l, "Please enter your Secret view key ( hex 64 chars): ")
password := ReadConfirmedPassword(l, "Enter password", "Confirm password")
wallet, err = walletapi.Create_Encrypted_Wallet_NonDeterministic(filename, password, spend_key_string,view_key_string)
if err != nil {
globals.Logger.Warnf("Error while reconstructing view only wallet using view key err %s\n", err)
break
}
if globals.Arguments["--offline"].(bool) == true {
//offline_mode = true
} else {
wallet.SetOnlineMode()
}
*/
case "9":
menu_mode = false
globals.Logger.Infof("Prompt mode enabled")
case "0", "bye", "exit", "quit":
globals.Exit_In_Progress = true
default: // just loop
}
//_ = account_state
// NOTE: if we are in online mode, it is handled automatically
// user opened or created a new account
// rescan blockchain in offline mode
//if account_state == false && account_valid && offline_mode {
// go trigger_offline_data_scan()
//}
}
// sets online mode, starts RPC server etc
func common_processing(wallet *walletapi.Wallet_Disk) {
if globals.Arguments["--offline"].(bool) == true {
//offline_mode = true
} else {
wallet.SetOnlineMode()
}
wallet.SetNetwork(!globals.Arguments["--testnet"].(bool))
// start rpc server if requested
if globals.Arguments["--rpc-server"].(bool) == true {
rpc_address := "127.0.0.1:" + fmt.Sprintf("%d", config.Mainnet.Wallet_RPC_Default_Port)
if !globals.IsMainnet() {
rpc_address = "127.0.0.1:" + fmt.Sprintf("%d", config.Testnet.Wallet_RPC_Default_Port)
}
if globals.Arguments["--rpc-bind"] != nil {
rpc_address = globals.Arguments["--rpc-bind"].(string)
}
globals.Logger.Infof("Starting RPC server at %s", rpc_address)
if _, err := rpcserver.RPCServer_Start(wallet); err != nil {
globals.Logger.Warnf("Error starting rpc server err %s", err)
}
}
time.Sleep(time.Second)
// init_script_engine(wallet) // init script engine
// init_plugins_engine(wallet) // init script engine
}
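The bind-address selection in `common_processing` above (a loopback default that depends on the network, overridden by `--rpc-bind`) can be sketched on its own. `pickRPCBind` and the port numbers are illustrative placeholders, not the real `config` values:

```go
package main

import "fmt"

// pickRPCBind mirrors how common_processing chooses the RPC listen
// address: start from a loopback default that depends on the network,
// then let an explicit --rpc-bind override win. The port numbers are
// placeholders for the config package's real defaults.
func pickRPCBind(mainnet bool, override string) string {
	addr := fmt.Sprintf("127.0.0.1:%d", 10103) // placeholder mainnet wallet RPC port
	if !mainnet {
		addr = fmt.Sprintf("127.0.0.1:%d", 40403) // placeholder testnet wallet RPC port
	}
	if override != "" { // --rpc-bind takes precedence when supplied
		addr = override
	}
	return addr
}

func main() {
	fmt.Println(pickRPCBind(true, ""))
	fmt.Println(pickRPCBind(false, "0.0.0.0:9999"))
}
```

Binding to loopback by default means the RPC server is not reachable from other hosts unless the user opts in with `--rpc-bind`.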

package main
/// this file implements the wallet and rpc wallet
import "io"
import "os"
import "fmt"
import "time"
import "sync"
import "strings"
import "strconv"
import "runtime"
import "sync/atomic"
//import "io/ioutil"
//import "bufio"
//import "bytes"
//import "net/http"
import "github.com/romana/rlog"
import "github.com/chzyer/readline"
import "github.com/docopt/docopt-go"
import log "github.com/sirupsen/logrus"
//import "github.com/vmihailenco/msgpack"
//import "github.com/deroproject/derosuite/address"
import "github.com/deroproject/derohe/config"
//import "github.com/deroproject/derohe/crypto"
import "github.com/deroproject/derohe/globals"
import "github.com/deroproject/derohe/walletapi"
import "github.com/deroproject/derohe/walletapi/mnemonics"
//import "encoding/json"
var command_line string = `dero-wallet-cli
DERO : A secure, private blockchain with smart-contracts
Usage:
dero-wallet-cli [options]
dero-wallet-cli -h | --help
dero-wallet-cli --version
Options:
-h --help Show this screen.
--version Show version.
--wallet-file=<file> Use this file to restore or create new wallet
--password=<password> Use this password to unlock the wallet
--offline Run the wallet in completely offline mode
--offline_datafile=<file>  Use this data file in offline mode (default "getoutputs.bin" in current dir)
--prompt Disable menu and display prompt
--testnet Run in testnet mode.
--debug Debug mode enabled, print log messages
--unlocked Keep wallet unlocked for cli commands (Does not confirm password before commands)
--generate-new-wallet Generate new wallet
--restore-deterministic-wallet Restore wallet from previously saved recovery seed
--electrum-seed=<recovery-seed> Seed to use while restoring wallet
--socks-proxy=<socks_ip:port> Use a proxy to connect to Daemon.
--remote use hard coded remote daemon https://rwallet.dero.live
--daemon-address=<host:port> Use daemon instance at <host>:<port> or https://domain
--rpc-server Run rpc server, so wallet is accessible using api
--rpc-bind=<127.0.0.1:20209> Wallet binds on this ip address and port
--rpc-login=<username:password> RPC server will grant access based on these credentials
`
var menu_mode bool = true // default display menu mode
//var account_valid bool = false // if an account has been opened, do not allow to create new account in this session
var offline_mode bool // whether we are in offline mode
var sync_in_progress int // whether sync is in progress with daemon
var wallet *walletapi.Wallet_Disk //= &walletapi.Account{} // all account data is available here
//var address string
var sync_time time.Time // used to update the prompt at suitable intervals
var default_offline_datafile string = "getoutputs.bin"
var color_black = "\033[30m"
var color_red = "\033[31m"
var color_green = "\033[32m"
var color_yellow = "\033[33m"
var color_blue = "\033[34m"
var color_magenta = "\033[35m"
var color_cyan = "\033[36m"
var color_white = "\033[37m"
var color_extra_white = "\033[1m"
var color_normal = "\033[0m"
var prompt_mutex sync.Mutex // prompt lock
var prompt string = "\033[92mDERO Wallet:\033[32m>>>\033[0m "
var tablock uint32
func main() {
var err error
globals.Init_rlog()
globals.Arguments, err = docopt.Parse(command_line, nil, true, "DERO atlantis wallet : work in progress", false)
//globals.Arguments, err = docopt.ParseArgs(command_line, os.Args[1:], "DERO daemon : work in progress")
if err != nil {
log.Fatalf("Error while parsing options err: %s\n", err)
}
// init the lookup table once; anyone importing walletapi should init this first. It takes around 1 sec on any recent system
walletapi.Initialize_LookupTable(1, 1<<17)
// We need to initialize readline first, so it changes stderr to ansi processor on windows
l, err := readline.NewEx(&readline.Config{
//Prompt: "\033[92mDERO:\033[32m»\033[0m",
Prompt: prompt,
HistoryFile: "", // wallet never saves any history file anywhere, to prevent any leakage
AutoComplete: completer,
InterruptPrompt: "^C",
EOFPrompt: "exit",
HistorySearchFold: true,
FuncFilterInputRune: filterInput,
})
if err != nil {
panic(err)
}
defer l.Close()
// get ready to grab passwords
setPasswordCfg := l.GenPasswordConfig()
setPasswordCfg.SetListener(func(line []rune, pos int, key rune) (newLine []rune, newPos int, ok bool) {
l.SetPrompt(fmt.Sprintf("Enter password(%v): ", len(line)))
l.Refresh()
return nil, 0, false
})
l.Refresh() // refresh the prompt
// parse arguments and setup testnet mainnet
globals.Initialize() // setup network and proxy
globals.Logger.Infof("") // a dummy write is required to fully activate logrus
// all screen output must go through the readline
globals.Logger.Out = l.Stdout()
rlog.Infof("Arguments %+v", globals.Arguments)
globals.Logger.Infof("DERO Wallet : %s This version is under heavy development, use it for testing/evaluations purpose only", config.Version.String())
globals.Logger.Infof("Copyright 2017-2018 DERO Project. All rights reserved.")
globals.Logger.Infof("OS:%s ARCH:%s GOMAXPROCS:%d", runtime.GOOS, runtime.GOARCH, runtime.GOMAXPROCS(0))
globals.Logger.Infof("Wallet in %s mode", globals.Config.Name)
// disable menu mode if requested
if globals.Arguments["--prompt"] != nil && globals.Arguments["--prompt"].(bool) {
menu_mode = false
}
wallet_file := "wallet.db" // default wallet file name
if globals.Arguments["--wallet-file"] != nil {
wallet_file = globals.Arguments["--wallet-file"].(string) // override with user specified settings
}
wallet_password := "" // default
if globals.Arguments["--password"] != nil {
wallet_password = globals.Arguments["--password"].(string) // override with user specified settings
}
// lets handle the arguments one by one
if globals.Arguments["--restore-deterministic-wallet"].(bool) {
// user wants to recover wallet, check whether seed is provided on command line, if not prompt now
seed := ""
if globals.Arguments["--electrum-seed"] != nil {
seed = globals.Arguments["--electrum-seed"].(string)
} else { // prompt user for seed
seed = read_line_with_prompt(l, "Enter your seed (25 words) : ")
}
account, err := walletapi.Generate_Account_From_Recovery_Words(seed)
if err != nil {
globals.Logger.Warnf("Error while recovering seed err %s\n", err)
return
}
// ask user a pass, if not provided on command_line
password := ""
if wallet_password == "" {
password = ReadConfirmedPassword(l, "Enter password", "Confirm password")
}
wallet, err = walletapi.Create_Encrypted_Wallet(wallet_file, password, account.Keys.Secret)
if err != nil {
globals.Logger.Warnf("Error occurred while restoring wallet. err %s", err)
return
}
globals.Logger.Debugf("Seed Language %s", account.SeedLanguage)
globals.Logger.Infof("Successfully recovered wallet from seed")
}
// generate a new random account if requested
if globals.Arguments["--generate-new-wallet"] != nil && globals.Arguments["--generate-new-wallet"].(bool) {
filename := choose_file_name(l)
// ask user a pass, if not provided on command_line
password := ""
if wallet_password == "" {
password = ReadConfirmedPassword(l, "Enter password", "Confirm password")
}
seed_language := choose_seed_language(l)
wallet, err = walletapi.Create_Encrypted_Wallet_Random(filename, password)
if err != nil {
globals.Logger.Warnf("Error occurred while creating new wallet, err: %s", err)
wallet = nil
return
}
globals.Logger.Debugf("Seed Language %s", seed_language)
display_seed(l, wallet)
}
if globals.Arguments["--rpc-login"] != nil {
userpass := globals.Arguments["--rpc-login"].(string)
parts := strings.SplitN(userpass, ":", 2)
if len(parts) != 2 {
globals.Logger.Warnf("RPC user name or password invalid")
return
}
log.Infof("RPC username \"%s\" password \"%s\" ", parts[0], parts[1])
}
// if wallet is nil, check whether the file exists, if yes, request password
if wallet == nil {
if _, err = os.Stat(wallet_file); err == nil {
// if a wallet file and password have been provided, make sure the wallet opens on the first attempt, otherwise exit
if globals.Arguments["--password"] != nil {
wallet, err = walletapi.Open_Encrypted_Wallet(wallet_file, wallet_password)
if err != nil {
globals.Logger.Warnf("Error occurred while opening wallet. err %s", err)
os.Exit(-1)
}
} else { // ask the user for the password, allowing up to 3 attempts
for i := 0; i < 3; i++ {
wallet, err = walletapi.Open_Encrypted_Wallet(wallet_file, ReadPassword(l, wallet_file))
if err != nil {
globals.Logger.Warnf("Error occurred while opening wallet. err %s", err)
} else { // user knows the password and the db is valid
break
}
}
}
//globals.Logger.Debugf("Seed Language %s", account.SeedLanguage)
//globals.Logger.Infof("Successfully recovered wallet from seed")
}
}
// check if offline mode requested
if wallet != nil {
common_processing(wallet)
}
go walletapi.Keep_Connectivity() // maintain connectivity
//pipe_reader, pipe_writer = io.Pipe() // create pipes
// reader ready to parse any data from the file
//go blockchain_data_consumer()
// update prompt when required
prompt_mutex.Lock()
go update_prompt(l)
prompt_mutex.Unlock()
// if wallet has been opened in offline mode by commands supplied at command prompt
// trigger the offline scan
// go trigger_offline_data_scan()
// start infinite loop processing user commands
for {
prompt_mutex.Lock()
if globals.Exit_In_Progress { // exit if requested so
prompt_mutex.Unlock()
break
}
prompt_mutex.Unlock()
if menu_mode { // display menu if requested
if wallet != nil { // account is opened, display post menu
display_easymenu_post_open_command(l)
} else { // account has not been opened display pre open menu
display_easymenu_pre_open_command(l)
}
}
line, err := l.Readline()
if err == readline.ErrInterrupt {
if len(line) == 0 {
globals.Logger.Infof("Ctrl-C received, Exit in progress\n")
globals.Exit_In_Progress = true
break
} else {
continue
}
} else if err == io.EOF {
// break
time.Sleep(time.Second)
}
// pass command to suitable handler
if menu_mode {
if wallet != nil {
if !handle_easymenu_post_open_command(l, line) { // if not processed , try processing as command
handle_prompt_command(l, line)
PressAnyKey(l, wallet)
}
} else {
handle_easymenu_pre_open_command(l, line)
}
} else {
handle_prompt_command(l, line)
}
}
prompt_mutex.Lock()
globals.Exit_In_Progress = true
prompt_mutex.Unlock()
}
// update prompt as and when necessary
// TODO: make this code simple, with clear direction
func update_prompt(l *readline.Instance) {
last_wallet_height := uint64(0)
last_daemon_height := uint64(0)
daemon_online := false
last_update_time := int64(0)
for {
time.Sleep(30 * time.Millisecond) // give user a smooth running number
prompt_mutex.Lock()
if globals.Exit_In_Progress {
prompt_mutex.Unlock()
return
}
prompt_mutex.Unlock()
if atomic.LoadUint32(&tablock) > 0 { // tab key has been pressed, stop delivering updates to prompt
continue
}
prompt_mutex.Lock() // do not update if we can not lock the mutex
// show first 8 bytes of address
address_trim := ""
if wallet != nil {
tmp_addr := wallet.GetAddress().String()
address_trim = tmp_addr[0:8]
} else {
address_trim = "DERO Wallet"
}
if wallet == nil {
l.SetPrompt(fmt.Sprintf("\033[1m\033[32m%s \033[0m"+color_green+"0/%d \033[32m>>>\033[0m ", address_trim, 0))
prompt_mutex.Unlock()
continue
}
// only update prompt if needed, or update at least once every second
_ = daemon_online
//fmt.Printf("checking if update is required\n")
if last_wallet_height != wallet.Get_Height() || last_daemon_height != wallet.Get_Daemon_Height() ||
/*daemon_online != wallet.IsDaemonOnlineCached() ||*/ (time.Now().Unix()-last_update_time) >= 1 {
// choose color based on urgency
color := "\033[32m" // default is green color
if wallet.Get_Height() < wallet.Get_Daemon_Height() {
color = "\033[33m" // make prompt yellow
}
dheight := wallet.Get_Daemon_Height()
/*if wallet.IsDaemonOnlineCached() == false {
color = "\033[33m" // make prompt yellow
dheight = 0
}*/
balance_string := ""
//balance_unlocked, locked_balance := wallet.Get_Balance_Rescan()// wallet.Get_Balance()
balance_unlocked, _ := wallet.Get_Balance()
balance_string = fmt.Sprintf(color_green+"%s "+color_white, globals.FormatMoney(balance_unlocked))
if wallet.Error != nil {
balance_string += fmt.Sprintf(color_red+" %s ", wallet.Error)
} else if wallet.PoolCount() > 0 {
balance_string += fmt.Sprintf(color_yellow+"(%d tx pending for -%s)", wallet.PoolCount(), globals.FormatMoney(wallet.PoolBalance()))
}
testnet_string := ""
if !globals.IsMainnet() {
testnet_string = "\033[31m TESTNET"
}
l.SetPrompt(fmt.Sprintf("\033[1m\033[32m%s \033[0m"+color+"%d/%d %s %s\033[32m>>>\033[0m ", address_trim, wallet.Get_Height(), dheight, balance_string, testnet_string))
l.Refresh()
last_wallet_height = wallet.Get_Height()
last_daemon_height = wallet.Get_Daemon_Height()
last_update_time = time.Now().Unix()
//daemon_online = wallet.IsDaemonOnlineCached()
_ = last_update_time
}
prompt_mutex.Unlock()
}
}
// create a new wallet from scratch from random numbers
func Create_New_Wallet(l *readline.Instance) (w *walletapi.Wallet_Disk, err error) {
// ask user a file name to store the data
walletpath := read_line_with_prompt(l, "Please enter wallet file name : ")
walletpassword := ""
account, _ := walletapi.Generate_Keys_From_Random()
account.SeedLanguage = choose_seed_language(l)
w, err = walletapi.Create_Encrypted_Wallet(walletpath, walletpassword, account.Keys.Secret)
if err != nil {
return
}
// set wallet seed language
// a new account has been created, append the seed to user home directory
//usr, err := user.Current()
/*if err != nil {
globals.Logger.Warnf("Cannot get current username to save recovery key and password")
}else{ // we have a user, get his home dir
}*/
return
}
/*
// create a new wallet from hex seed provided
func Create_New_Account_from_seed(l *readline.Instance) *walletapi.Account {
var account *walletapi.Account
var seedkey crypto.Key
seed := read_line_with_prompt(l, "Please enter your seed ( hex 64 chars): ")
seed = strings.TrimSpace(seed) // trim any extra space
seed_raw, err := hex.DecodeString(seed) // hex decode
if len(seed) != 64 || err != nil { //sanity check
globals.Logger.Warnf("Seed must be 64 chars hexadecimal chars")
return account
}
copy(seedkey[:], seed_raw[:32]) // copy bytes to seed
account, _ = walletapi.Generate_Account_From_Seed(seedkey) // create a new account
account.SeedLanguage = choose_seed_language(l) // ask user his seed preference and set it
account_valid = true
return account
}
// create a new wallet from viewable seed provided
// viewable seed consists of public spend key and private view key
func Create_New_Account_from_viewable_key(l *readline.Instance) *walletapi.Account {
var seedkey crypto.Key
var privateview crypto.Key
var account *walletapi.Account
seed := read_line_with_prompt(l, "Please enter your View Only Key ( hex 128 chars): ")
seed = strings.TrimSpace(seed) // trim any extra space
seed_raw, err := hex.DecodeString(seed)
if len(seed) != 128 || err != nil {
globals.Logger.Warnf("View Only key must be 128 chars hexadecimal chars")
return account
}
copy(seedkey[:], seed_raw[:32])
copy(privateview[:], seed_raw[32:64])
account, _ = walletapi.Generate_Account_View_Only(seedkey, privateview)
account_valid = true
return account
}
*/
// helper function to let the user choose a seed in a specific language
func choose_seed_language(l *readline.Instance) string {
languages := mnemonics.Language_List()
fmt.Printf("Language list for seeds, please enter a number (default English)\n")
for i := range languages {
fmt.Fprintf(l.Stderr(), "\033[1m%2d:\033[0m %s\n", i, languages[i])
}
language_number := read_line_with_prompt(l, "Please enter a choice: ")
choice := 0 // 0 for english
if s, err := strconv.Atoi(language_number); err == nil {
choice = s
}
for i := range languages { // if the user gave a wrong or out-of-range choice, fall back to English
if choice == i {
return languages[choice]
}
}
// if no match, return English
return "English"
}
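The fallback logic of `choose_seed_language` above (parse the numeric choice, use it only when it indexes the list, otherwise default to English) can be condensed into one small function. `pickLanguage` is an illustrative stand-in, not part of the wallet source:

```go
package main

import (
	"fmt"
	"strconv"
)

// pickLanguage mirrors choose_seed_language's fallback: parse the user's
// numeric input and return the matching list entry; any non-numeric or
// out-of-range choice falls back to "English". Illustrative only.
func pickLanguage(input string, languages []string) string {
	if n, err := strconv.Atoi(input); err == nil && n >= 0 && n < len(languages) {
		return languages[n]
	}
	return "English"
}

func main() {
	langs := []string{"English", "Deutsch", "Español"}
	fmt.Println(pickLanguage("2", langs))   // valid index
	fmt.Println(pickLanguage("99", langs))  // out of range: default
	fmt.Println(pickLanguage("abc", langs)) // not a number: default
}
```

The range check replaces the original's linear scan over indices; the observable behavior is the same.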
// lets the user choose a filename or use default
func choose_file_name(l *readline.Instance) (filename string) {
default_filename := "wallet.db"
if globals.Arguments["--wallet-file"] != nil {
default_filename = globals.Arguments["--wallet-file"].(string) // override with user specified settings
}
filename = read_line_with_prompt(l, fmt.Sprintf("Enter wallet filename (default %s): ", default_filename))
if len(filename) < 1 {
filename = default_filename
}
return
}
// read a line from the prompt
// since we cannot query the existing prompt, we temporarily swap in our own and restore it afterwards
func read_line_with_prompt(l *readline.Instance, prompt_temporary string) string {
prompt_mutex.Lock()
defer prompt_mutex.Unlock()
l.SetPrompt(prompt_temporary)
line, err := l.Readline()
if err == readline.ErrInterrupt {
if len(line) == 0 {
globals.Logger.Infof("Ctrl-C received, Exiting\n")
os.Exit(0)
}
} else if err == io.EOF {
os.Exit(0)
}
l.SetPrompt(prompt)
return line
}
// filter out specific inputs from input processing
// currently we only skip CtrlZ background key
func filterInput(r rune) (rune, bool) {
switch r {
// block CtrlZ feature
case readline.CharCtrlZ:
return r, false
case readline.CharTab:
atomic.StoreUint32(&tablock, 1) // lock prompt update
case readline.CharEnter:
atomic.StoreUint32(&tablock, 0) // enable prompt update
}
return r, true
}

// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
import "os"
import "io"
import "fmt"
import "bytes"
import "time"
//import "io/ioutil"
//import "path/filepath"
import "strings"
import "strconv"
import "encoding/hex"
import "github.com/chzyer/readline"
import "github.com/deroproject/derohe/rpc"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/globals"
import "github.com/deroproject/derohe/walletapi"
import "github.com/deroproject/derohe/cryptography/crypto"
var account walletapi.Account
// handle all commands while in prompt mode
func handle_prompt_command(l *readline.Instance, line string) {
var err error
line = strings.TrimSpace(line)
line_parts := strings.Fields(line)
if len(line_parts) < 1 { // if no command return
return
}
_ = err
command := ""
if len(line_parts) >= 1 {
command = strings.ToLower(line_parts[0])
}
// handled closed wallet commands
switch command {
case "address", "rescan_bc", "seed", "set", "password", "get_tx_key", "i8", "payment_id":
fallthrough
case "spendkey", "transfer", "close":
fallthrough
case "transfer_all", "sweep_all", "show_transfers", "balance", "status":
if wallet == nil {
globals.Logger.Warnf("No wallet available")
return
}
}
switch command {
case "help":
usage(l.Stderr())
case "address": // give user his account address
fmt.Fprintf(l.Stderr(), "Wallet address : "+color_green+"%s"+color_white+"\n", wallet.GetAddress())
case "status": // show synchronisation status
fmt.Fprintf(l.Stderr(), "Wallet Version : %s\n", config.Version.String())
fmt.Fprintf(l.Stderr(), "Wallet Height : %d\t Daemon Height %d \n", wallet.Get_Height(), wallet.Get_Daemon_Height())
fallthrough
case "balance": // give user his balance
balance_unlocked, locked_balance := wallet.Get_Balance_Rescan()
fmt.Fprintf(l.Stderr(), "DERO Balance : "+color_green+"%s"+color_white+"\n", globals.FormatMoney(locked_balance+balance_unlocked))
line_parts := line_parts[1:] // remove first part
switch len(line_parts) {
case 0:
//globals.Logger.Warnf("not implemented")
break
case 1: // scid balance
scid := crypto.HashHexToHash(line_parts[0])
//globals.Logger.Infof("scid1 %s line_parts %+v", scid, line_parts)
balance, err := wallet.GetDecryptedBalanceAtTopoHeight(scid, -1, wallet.GetAddress().String())
//globals.Logger.Infof("scid %s", scid)
if err != nil {
globals.Logger.Infof("error %s", err)
} else {
fmt.Fprintf(l.Stderr(), "SCID %s Balance : "+color_green+"%s"+color_white+"\n\n", line_parts[0], globals.FormatMoney(balance))
}
case 2: // scid balance at topoheight
globals.Logger.Warnf("not implemented")
break
}
case "rescan_bc", "rescan_spent": // rescan from 0
if offline_mode {
globals.Logger.Warnf("Offline wallet rescanning NOT implemented")
} else {
rescan_bc(wallet)
}
case "seed": // give user his seed, if password is valid
if !ValidateCurrentPassword(l, wallet) {
globals.Logger.Warnf("Invalid password")
PressAnyKey(l, wallet)
break
}
display_seed(l, wallet) // seed should be given only to authenticated users
case "spendkey": // give user his spend key
display_spend_key(l, wallet)
case "password": // change wallet password
if ConfirmYesNoDefaultNo(l, "Change wallet password (y/N)") &&
ValidateCurrentPassword(l, wallet) {
new_password := ReadConfirmedPassword(l, "Enter new password", "Confirm password")
err = wallet.Set_Encrypted_Wallet_Password(new_password)
if err == nil {
globals.Logger.Infof("Wallet password successfully changed")
} else {
globals.Logger.Warnf("Wallet password could not be changed err %s", err)
}
}
case "get_tx_key":
if !valid_registration_or_display_error(l, wallet) {
break
}
if len(line_parts) == 2 && len(line_parts[1]) == 64 {
_, err := hex.DecodeString(line_parts[1])
if err != nil {
globals.Logger.Warnf("Error parsing txhash")
break
}
key := wallet.GetTXKey(line_parts[1])
if key != "" {
globals.Logger.Infof("TX Proof key \"%s\"", key)
} else {
globals.Logger.Warnf("TX not found in database")
}
} else {
globals.Logger.Warnf("get_tx_key needs transaction hash as input parameter")
globals.Logger.Warnf("eg. get_tx_key ea551b02b9f1e8aebe4d7b1b7f6bf173d76ae614cb9a066800773fee9e226fd7")
}
case "sweep_all", "transfer_all": // transfer everything
//Transfer_Everything(l)
case "show_transfers":
show_transfers(l, wallet, 100)
case "set": // set/display different settings
handle_set_command(l, line)
case "close": // close the account
if !ValidateCurrentPassword(l, wallet) {
globals.Logger.Warnf("Invalid password")
break
}
wallet.Close_Encrypted_Wallet() // overwrite previous instance
case "menu": // enable menu mode
menu_mode = true
globals.Logger.Infof("Menu mode enabled")
case "i8", "integrated_address": // user wants a random integrated address 8 bytes
a := wallet.GetRandomIAddress8()
fmt.Fprintf(l.Stderr(), "Wallet integrated address : "+color_green+"%s"+color_white+"\n", a.String())
fmt.Fprintf(l.Stderr(), "Embedded Arguments : "+color_green+"%s"+color_white+"\n", a.Arguments)
case "version":
globals.Logger.Infof("Version %s\n", config.Version.String())
case "burn":
line_parts := line_parts[1:] // remove first part
if len(line_parts) < 2 {
globals.Logger.Warnf("burn needs destination address and amount as input parameter")
break
}
addr := line_parts[0]
send_amount := uint64(1)
burn_amount, err := globals.ParseAmount(line_parts[1])
if err != nil {
globals.Logger.Warnf("Error Parsing burn amount \"%s\" err %s", line_parts[1], err)
return
}
if ConfirmYesNoDefaultNo(l, "Confirm Transaction (y/N)") {
//uid, err := wallet.PoolTransferWithBurn(addr, send_amount, burn_amount, data, rpc.Arguments{})
uid, err := wallet.PoolTransfer([]rpc.Transfer{rpc.Transfer{Amount: send_amount, Burn: burn_amount, Destination: addr}}, rpc.Arguments{}) // empty SCDATA
_ = uid
if err != nil {
globals.Logger.Warnf("Error while building Transaction err %s\n", err)
break
}
//fmt.Printf("queued tx err %s\n", err)
//build_relay_transaction(l, uid, err, offline_tx, amount_list)
}
case "transfer":
// parse the address, amount pair
/*
line_parts := line_parts[1:] // remove first part
addr_list := []address.Address{}
amount_list := []uint64{}
payment_id := ""
for i := 0; i < len(line_parts); {
globals.Logger.Debugf("len %d %+v", len(line_parts), line_parts)
if len(line_parts) >= 2 { // parse address amount pair
addr, err := globals.ParseValidateAddress(line_parts[0])
if err != nil {
globals.Logger.Warnf("Error Parsing \"%s\" err %s", line_parts[0], err)
return
}
amount, err := globals.ParseAmount(line_parts[1])
if err != nil {
globals.Logger.Warnf("Error Parsing \"%s\" err %s", line_parts[1], err)
return
}
line_parts = line_parts[2:] // remove parsed
addr_list = append(addr_list, *addr)
amount_list = append(amount_list, amount)
continue
}
if len(line_parts) == 1 { // parse payment_id
if len(line_parts[0]) == 64 || len(line_parts[0]) == 16 {
_, err := hex.DecodeString(line_parts[0])
if err != nil {
globals.Logger.Warnf("Error parsing payment ID, it should be in hex 16 or 64 chars")
return
}
payment_id = line_parts[0]
line_parts = line_parts[1:]
} else {
globals.Logger.Warnf("Invalid payment ID \"%s\"", line_parts[0])
return
}
}
}
// check if everything is okay, if yes build the transaction
if len(addr_list) == 0 {
globals.Logger.Warnf("Destination address not provided")
return
}
payment_id_integrated := false
for i := range addr_list {
if addr_list[i].IsIntegratedAddress() {
payment_id_integrated = true
globals.Logger.Infof("Payment ID is integrated in address ID:%x", addr_list[i].PaymentID)
}
}
offline := false
tx, inputs, input_sum, change, err := wallet.Transfer(addr_list, amount_list, 0, payment_id, 0, 0)
build_relay_transaction(l, tx, inputs, input_sum, change, err, offline, amount_list)
*/
case "q", "bye", "exit", "quit":
globals.Exit_In_Progress = true
if wallet != nil {
wallet.Close_Encrypted_Wallet() // overwrite previous instance
}
case "flush": // flush wallet pool
if wallet != nil {
fmt.Fprintf(l.Stderr(), "Flushed %d transactions from wallet pool\n", wallet.PoolClear())
}
case "": // blank enter key just loop
default:
//fmt.Fprintf(l.Stderr(), "you said: %s", strconv.Quote(line))
globals.Logger.Warnf("No such command")
}
}
// handle all commands while in prompt mode
func handle_set_command(l *readline.Instance, line string) {
//var err error
line = strings.TrimSpace(line)
line_parts := strings.Fields(line)
if len(line_parts) < 1 { // if no command return
return
}
command := ""
if len(line_parts) >= 2 {
command = strings.ToLower(line_parts[1])
}
help := false
switch command {
case "help":
case "ringsize":
if len(line_parts) != 3 {
globals.Logger.Warnf("Wrong number of arguments, see help eg")
help = true
break
}
s, err := strconv.ParseUint(line_parts[2], 10, 64)
if err != nil {
globals.Logger.Warnf("Error parsing ringsize")
return
}
wallet.SetRingSize(int(s))
globals.Logger.Infof("Ring size = %d", wallet.GetRingSize())
case "priority":
if len(line_parts) != 3 {
globals.Logger.Warnf("Wrong number of arguments, see help eg")
help = true
break
}
s, err := strconv.ParseFloat(line_parts[2], 64)
if err != nil {
globals.Logger.Warnf("Error parsing priority")
return
}
wallet.SetFeeMultiplier(float32(s))
globals.Logger.Infof("Transaction priority = %.02f", wallet.GetFeeMultiplier())
case "seed": // seed only has one setting, language, so handle it now
language := choose_seed_language(l)
globals.Logger.Infof("Setting seed language to \"%s\"", wallet.SetSeedLanguage(language))
default:
help = true
}
if help == true || len(line_parts) == 1 { // user type plain set command, give out all settings and help
fmt.Fprintf(l.Stderr(), color_extra_white+"Current settings"+color_extra_white+"\n")
fmt.Fprintf(l.Stderr(), color_normal+"Seed Language: "+color_extra_white+"%s\t"+color_normal+"eg. "+color_extra_white+"set seed language\n"+color_normal, wallet.GetSeedLanguage())
fmt.Fprintf(l.Stderr(), color_normal+"Ringsize: "+color_extra_white+"%d\t"+color_normal+"eg. "+color_extra_white+"set ringsize 16\n"+color_normal, wallet.GetRingSize())
fmt.Fprintf(l.Stderr(), color_normal+"Priority: "+color_extra_white+"%0.2f\t"+color_normal+"eg. "+color_extra_white+"set priority 4.0\t"+color_normal+"Transaction priority on DERO network \n", wallet.GetFeeMultiplier())
fmt.Fprintf(l.Stderr(), "\t\tMinimum priority is 1.00. High priority = high fees\n")
}
}
// read an address with all goodies such as color encoding and other things in prompt
func ReadAddress(l *readline.Instance) (a *rpc.Address, err error) {
setPasswordCfg := l.GenPasswordConfig()
setPasswordCfg.EnableMask = false
prompt_mutex.Lock()
defer prompt_mutex.Unlock()
setPasswordCfg.SetListener(func(line []rune, pos int, key rune) (newLine []rune, newPos int, ok bool) {
error_message := ""
color := color_green
if len(line) >= 1 {
_, err := globals.ParseValidateAddress(string(line))
if err != nil {
error_message = " " //err.Error()
}
}
if error_message != "" {
color = color_red // Should we display the error message here??
l.SetPrompt(fmt.Sprintf("%sEnter Destination Address: ", color))
} else {
l.SetPrompt(fmt.Sprintf("%sEnter Destination Address: ", color))
}
l.Refresh()
return nil, 0, false
})
line, err := l.ReadPasswordWithConfig(setPasswordCfg)
if err != nil {
return
}
a, err = globals.ParseValidateAddress(string(line))
l.SetPrompt(prompt)
l.Refresh()
return
}
func ReadFloat64(l *readline.Instance, cprompt string, default_value float64) (a float64, err error) {
setPasswordCfg := l.GenPasswordConfig()
setPasswordCfg.EnableMask = false
prompt_mutex.Lock()
defer prompt_mutex.Unlock()
setPasswordCfg.SetListener(func(line []rune, pos int, key rune) (newLine []rune, newPos int, ok bool) {
error_message := ""
color := color_green
if len(line) >= 1 {
_, err := strconv.ParseFloat(string(line), 64)
if err != nil {
error_message = " " //err.Error()
}
}
if error_message != "" {
color = color_red // Should we display the error message here??
l.SetPrompt(fmt.Sprintf("%sEnter %s (default %f): ", color, cprompt, default_value))
} else {
l.SetPrompt(fmt.Sprintf("%sEnter %s (default %f): ", color, cprompt, default_value))
}
l.Refresh()
return nil, 0, false
})
line, err := l.ReadPasswordWithConfig(setPasswordCfg)
if err != nil {
return
}
a, err = strconv.ParseFloat(string(line), 64)
l.SetPrompt(cprompt)
l.Refresh()
return
}
func ReadUint64(l *readline.Instance, cprompt string, default_value uint64) (a uint64, err error) {
setPasswordCfg := l.GenPasswordConfig()
setPasswordCfg.EnableMask = false
prompt_mutex.Lock()
defer prompt_mutex.Unlock()
setPasswordCfg.SetListener(func(line []rune, pos int, key rune) (newLine []rune, newPos int, ok bool) {
error_message := ""
color := color_green
if len(line) >= 1 {
_, err := strconv.ParseUint(string(line), 0, 64)
if err != nil {
error_message = " " //err.Error()
}
}
if error_message != "" {
color = color_red // Should we display the error message here??
l.SetPrompt(fmt.Sprintf("%sEnter %s (default %d): ", color, cprompt, default_value))
} else {
l.SetPrompt(fmt.Sprintf("%sEnter %s (default %d): ", color, cprompt, default_value))
}
l.Refresh()
return nil, 0, false
})
line, err := l.ReadPasswordWithConfig(setPasswordCfg)
if err != nil {
return
}
a, err = strconv.ParseUint(string(line), 0, 64)
l.SetPrompt(cprompt)
l.Refresh()
return
}
func ReadInt64(l *readline.Instance, cprompt string, default_value int64) (a int64, err error) {
setPasswordCfg := l.GenPasswordConfig()
setPasswordCfg.EnableMask = false
prompt_mutex.Lock()
defer prompt_mutex.Unlock()
setPasswordCfg.SetListener(func(line []rune, pos int, key rune) (newLine []rune, newPos int, ok bool) {
error_message := ""
color := color_green
if len(line) >= 1 {
_, err := strconv.ParseInt(string(line), 0, 64)
if err != nil {
error_message = " " //err.Error()
}
}
if error_message != "" {
color = color_red // Should we display the error message here??
l.SetPrompt(fmt.Sprintf("%sEnter %s (default %d): ", color, cprompt, default_value))
} else {
l.SetPrompt(fmt.Sprintf("%sEnter %s (default %d): ", color, cprompt, default_value))
}
l.Refresh()
return nil, 0, false
})
line, err := l.ReadPasswordWithConfig(setPasswordCfg)
if err != nil {
return
}
a, err = strconv.ParseInt(string(line), 0, 64)
l.SetPrompt(cprompt)
l.Refresh()
return
}
func ReadString(l *readline.Instance, cprompt string, default_value string) (a string, err error) {
setPasswordCfg := l.GenPasswordConfig()
setPasswordCfg.EnableMask = false
prompt_mutex.Lock()
defer prompt_mutex.Unlock()
setPasswordCfg.SetListener(func(line []rune, pos int, key rune) (newLine []rune, newPos int, ok bool) {
error_message := ""
color := color_green
if len(line) < 1 {
error_message = " " //err.Error()
}
if error_message != "" {
color = color_red // Should we display the error message here??
l.SetPrompt(fmt.Sprintf("%sEnter %s (default '%s'): ", color, cprompt, default_value))
} else {
l.SetPrompt(fmt.Sprintf("%sEnter %s (default '%s'): ", color, cprompt, default_value))
}
l.Refresh()
return nil, 0, false
})
line, err := l.ReadPasswordWithConfig(setPasswordCfg)
if err != nil {
return
}
a = string(line)
l.SetPrompt(cprompt)
l.Refresh()
return
}
// confirms whether the user wants to confirm yes
func ConfirmYesNoDefaultYes(l *readline.Instance, prompt_temporary string) bool {
prompt_mutex.Lock()
defer prompt_mutex.Unlock()
l.SetPrompt(prompt_temporary)
line, err := l.Readline()
if err == readline.ErrInterrupt {
if len(line) == 0 {
globals.Logger.Infof("Ctrl-C received, Exiting\n")
os.Exit(0)
}
} else if err == io.EOF {
os.Exit(0)
}
l.SetPrompt(prompt)
l.Refresh()
if strings.TrimSpace(line) == "n" || strings.TrimSpace(line) == "N" {
return false
}
return true
}
// confirms whether the user wants to confirm NO
func ConfirmYesNoDefaultNo(l *readline.Instance, prompt_temporary string) bool {
prompt_mutex.Lock()
defer prompt_mutex.Unlock()
l.SetPrompt(prompt_temporary)
line, err := l.Readline()
if err == readline.ErrInterrupt {
if len(line) == 0 {
globals.Logger.Infof("Ctrl-C received, Exiting\n")
os.Exit(0)
}
} else if err == io.EOF {
os.Exit(0)
}
l.SetPrompt(prompt)
if strings.TrimSpace(line) == "y" || strings.TrimSpace(line) == "Y" {
return true
}
return false
}
// confirms whether the user knows the current password for the wallet
// this is triggered while transferring amounts, changing settings and so on
func ValidateCurrentPassword(l *readline.Instance, wallet *walletapi.Wallet_Disk) bool {
prompt_mutex.Lock()
defer prompt_mutex.Unlock()
// if user requested wallet to be open/unlocked, keep it open
if globals.Arguments["--unlocked"].(bool) == true {
return true
}
setPasswordCfg := l.GenPasswordConfig()
setPasswordCfg.SetListener(func(line []rune, pos int, key rune) (newLine []rune, newPos int, ok bool) {
l.SetPrompt(fmt.Sprintf("Enter current wallet password(%v): ", len(line)))
l.Refresh()
return nil, 0, false
})
//pswd, err := l.ReadPassword("please enter your password: ")
pswd, err := l.ReadPasswordWithConfig(setPasswordCfg)
if err != nil {
return false
}
// something was read, check whether it's the password setup in the wallet
return wallet.Check_Password(string(pswd))
}
// reads a password to open the wallet
func ReadPassword(l *readline.Instance, filename string) string {
prompt_mutex.Lock()
defer prompt_mutex.Unlock()
try_again:
setPasswordCfg := l.GenPasswordConfig()
setPasswordCfg.SetListener(func(line []rune, pos int, key rune) (newLine []rune, newPos int, ok bool) {
l.SetPrompt(fmt.Sprintf("Enter wallet password for %s (%v): ", filename, len(line)))
l.Refresh()
return nil, 0, false
})
//pswd, err := l.ReadPassword("please enter your password: ")
pswd, err := l.ReadPasswordWithConfig(setPasswordCfg)
if err != nil {
goto try_again
}
// something was read, return it; the caller validates it against the wallet
return string(pswd)
}
func ReadConfirmedPassword(l *readline.Instance, first_prompt string, second_prompt string) (password string) {
prompt_mutex.Lock()
defer prompt_mutex.Unlock()
for {
setPasswordCfg := l.GenPasswordConfig()
setPasswordCfg.SetListener(func(line []rune, pos int, key rune) (newLine []rune, newPos int, ok bool) {
l.SetPrompt(fmt.Sprintf("%s(%v): ", first_prompt, len(line)))
l.Refresh()
return nil, 0, false
})
password_bytes, err := l.ReadPasswordWithConfig(setPasswordCfg)
if err != nil {
//return
continue
}
setPasswordCfg = l.GenPasswordConfig()
setPasswordCfg.SetListener(func(line []rune, pos int, key rune) (newLine []rune, newPos int, ok bool) {
l.SetPrompt(fmt.Sprintf("%s(%v): ", second_prompt, len(line)))
l.Refresh()
return nil, 0, false
})
confirmed_bytes, err := l.ReadPasswordWithConfig(setPasswordCfg)
if err != nil {
//return
continue
}
if bytes.Equal(password_bytes, confirmed_bytes) {
password = string(password_bytes)
err = nil
return
}
globals.Logger.Warnf("Passwords mismatch. Retrying.")
}
}
// asks the user to press a key to continue
// this is triggered while transferring amounts, changing settings and so on
func PressAnyKey(l *readline.Instance, wallet *walletapi.Wallet_Disk) {
prompt_mutex.Lock()
defer prompt_mutex.Unlock()
setPasswordCfg := l.GenPasswordConfig()
setPasswordCfg.SetListener(func(line []rune, pos int, key rune) (newLine []rune, newPos int, ok bool) {
l.SetPrompt("Press ENTER key to continue...")
l.Refresh()
return nil, 0, false
})
// any error or any key is the same
l.ReadPasswordWithConfig(setPasswordCfg)
return
}
// this completer is used to complete the commands at the prompt
// BUG, this needs to be disabled in menu mode
var completer = readline.NewPrefixCompleter(
readline.PcItem("help"),
readline.PcItem("address"),
readline.PcItem("balance"),
readline.PcItem("integrated_address"),
readline.PcItem("get_tx_key"),
readline.PcItem("menu"),
readline.PcItem("rescan_bc"),
readline.PcItem("payment_id"),
readline.PcItem("print_height"),
readline.PcItem("seed"),
readline.PcItem("set",
readline.PcItem("mixin"),
readline.PcItem("seed"),
readline.PcItem("priority"),
),
readline.PcItem("show_transfers"),
readline.PcItem("spendkey"),
readline.PcItem("status"),
readline.PcItem("version"),
readline.PcItem("transfer"),
readline.PcItem("transfer_all"),
readline.PcItem("walletviewkey"),
readline.PcItem("bye"),
readline.PcItem("exit"),
readline.PcItem("quit"),
)
// help command screen
func usage(w io.Writer) {
io.WriteString(w, "commands:\n")
io.WriteString(w, "\t\033[1mhelp\033[0m\t\tthis help\n")
io.WriteString(w, "\t\033[1maddress\033[0m\t\tDisplay user address\n")
io.WriteString(w, "\t\033[1mbalance\033[0m\t\tDisplay user balance\n")
io.WriteString(w, "\t\033[1mget_tx_key\033[0m\tDisplay tx proof to prove receiver for specific transaction\n")
io.WriteString(w, "\t\033[1mintegrated_address\033[0m\tDisplay random integrated address (with encrypted payment ID)\n")
io.WriteString(w, "\t\033[1mmenu\033[0m\t\tEnable menu mode\n")
io.WriteString(w, "\t\033[1mrescan_bc\033[0m\tRescan blockchain to re-obtain transaction history \n")
io.WriteString(w, "\t\033[1mpassword\033[0m\tChange wallet password\n")
io.WriteString(w, "\t\033[1mpayment_id\033[0m\tPrint random Payment ID (for encrypted version see integrated_address)\n")
io.WriteString(w, "\t\033[1mseed\033[0m\t\tDisplay seed\n")
io.WriteString(w, "\t\033[1mshow_transfers\033[0m\tShow all transactions to/from current wallet\n")
io.WriteString(w, "\t\033[1mset\033[0m\t\tSet/get various settings\n")
io.WriteString(w, "\t\033[1mstatus\033[0m\t\tShow general information and balance\n")
io.WriteString(w, "\t\033[1mspendkey\033[0m\tView secret key\n")
io.WriteString(w, "\t\033[1mtransfer\033[0m\tTransfer/Send DERO to another address\n")
io.WriteString(w, "\t\t\tEg. transfer <address> <amount>\n")
io.WriteString(w, "\t\033[1mtransfer_all\033[0m\tTransfer everything to another address\n")
io.WriteString(w, "\t\033[1mflush\033[0m\tFlush local wallet pool (for testing purposes)\n")
io.WriteString(w, "\t\033[1mversion\033[0m\t\tShow version\n")
io.WriteString(w, "\t\033[1mbye\033[0m\t\tQuit wallet\n")
io.WriteString(w, "\t\033[1mexit\033[0m\t\tQuit wallet\n")
io.WriteString(w, "\t\033[1mquit\033[0m\t\tQuit wallet\n")
}
// display seed to the user in his preferred language
func display_seed(l *readline.Instance, wallet *walletapi.Wallet_Disk) {
seed := wallet.GetSeed()
fmt.Fprintf(l.Stderr(), color_green+"PLEASE NOTE: the following 25 words can be used to recover access to your wallet. Please write them down and store them somewhere safe and secure. Please do not store them in your email or on file storage services outside of your immediate control."+color_white+"\n")
fmt.Fprintf(os.Stderr, color_red+"%s"+color_white+"\n", seed)
}
// display spend key
// view-only wallets do not have a spend secret key
// TODO we need to warn the user when printing the secret
func display_spend_key(l *readline.Instance, wallet *walletapi.Wallet_Disk) {
keys := wallet.Get_Keys()
h := "0000000000000000000000000000000000000000000000" + keys.Secret.Text(16)
fmt.Fprintf(os.Stderr, "secret key: "+color_red+"%s"+color_white+"\n", h[len(h)-64:])
fmt.Fprintf(os.Stderr, "public key: %s\n", keys.Public.StringHex())
}
// start a rescan from block 0
func rescan_bc(wallet *walletapi.Wallet_Disk) {
if wallet.GetMode() { // trigger rescan when the wallet is online
wallet.Clean() // clean existing data from wallet
//wallet.Rescan_From_Height(0)
}
}
func valid_registration_or_display_error(l *readline.Instance, wallet *walletapi.Wallet_Disk) bool {
if !wallet.IsRegistered() {
globals.Logger.Warnf("Your account is not registered. Please register.")
return false
}
return true
}
// show the transfers to the user originating from this account
func show_transfers(l *readline.Instance, wallet *walletapi.Wallet_Disk, limit uint64) {
in := true
out := true
coinbase := true
min_height := uint64(0)
max_height := uint64(0)
line := ""
line_parts := strings.Fields(line)
if len(line_parts) >= 2 {
switch strings.ToLower(line_parts[1]) {
case "coinbase":
out = false
in = false
case "in":
coinbase = false
in = true
out = false
case "out":
coinbase = false
in = false
out = true
}
}
if len(line_parts) >= 3 { // user supplied min height
s, err := strconv.ParseUint(line_parts[2], 10, 64)
if err != nil {
globals.Logger.Warnf("Error parsing minimum height")
return
}
min_height = s
}
if len(line_parts) >= 4 { // user supplied max height
s, err := strconv.ParseUint(line_parts[3], 10, 64)
if err != nil {
globals.Logger.Warnf("Error parsing maximum height")
return
}
max_height = s
}
// request payments without payment id
transfers := wallet.Show_Transfers(coinbase, in, out, min_height, max_height, "", "", 0, 0) // receives sorted list of transfers
if len(transfers) == 0 {
globals.Logger.Warnf("No transfers available")
return
}
// we need to paginate on say 20 transactions
paging := 20
//if limit != 0 && uint64(len(transfers)) > limit {
// transfers = transfers[uint64(len(transfers))-limit:]
//}
for i := len(transfers) - 1; i >= 0; i-- {
switch transfers[i].Status {
case 0:
if transfers[i].Coinbase {
io.WriteString(l.Stderr(), fmt.Sprintf(color_green+"%s Height %d TopoHeight %d Coinbase (miner reward) received %s DERO"+color_white+"\n", transfers[i].Time.Format(time.RFC822), transfers[i].Height, transfers[i].TopoHeight, globals.FormatMoney(transfers[i].Amount)))
} else {
args, err := transfers[i].ProcessPayload()
if err != nil {
io.WriteString(l.Stderr(), fmt.Sprintf(color_green+"%s Height %d TopoHeight %d transaction %s received %s DERO Proof: %s"+color_white+"\n", transfers[i].Time.Format(time.RFC822), transfers[i].Height, transfers[i].TopoHeight, transfers[i].TXID, globals.FormatMoney(transfers[i].Amount), transfers[i].Proof))
io.WriteString(l.Stderr(), fmt.Sprintf("Full Entry %+v\n", transfers[i])) // dump entire entry for debugging purposes
} else if len(args) == 0 { // no rpc
io.WriteString(l.Stderr(), fmt.Sprintf(color_green+"%s Height %d TopoHeight %d transaction %s received %s DERO Proof: %s NO RPC CALL"+color_white+"\n", transfers[i].Time.Format(time.RFC822), transfers[i].Height, transfers[i].TopoHeight, transfers[i].TXID, globals.FormatMoney(transfers[i].Amount), transfers[i].Proof))
} else { // yes, its rpc
io.WriteString(l.Stderr(), fmt.Sprintf(color_green+"%s Height %d TopoHeight %d transaction %s received %s DERO Proof: %s RPC CALL arguments %s "+color_white+"\n", transfers[i].Time.Format(time.RFC822), transfers[i].Height, transfers[i].TopoHeight, transfers[i].TXID, globals.FormatMoney(transfers[i].Amount), transfers[i].Proof, args))
}
}
case 1:
args, err := transfers[i].ProcessPayload()
if err != nil {
io.WriteString(l.Stderr(), fmt.Sprintf(color_yellow+"%s Height %d TopoHeight %d transaction %s spent %s DERO Destination: %s Proof: %s\n"+color_white+"\n", transfers[i].Time.Format(time.RFC822), transfers[i].Height, transfers[i].TopoHeight, transfers[i].TXID, globals.FormatMoney(transfers[i].Amount), transfers[i].Destination, transfers[i].Proof))
io.WriteString(l.Stderr(), fmt.Sprintf("Err decoding entry %s\nFull Entry %+v\n", err, transfers[i])) // dump entire entry for debugging purposes
} else if len(args) == 0 { // no rpc
io.WriteString(l.Stderr(), fmt.Sprintf(color_yellow+"%s Height %d TopoHeight %d transaction %s spent %s DERO Destination: %s Proof: %s NO RPC CALL"+color_white+"\n", transfers[i].Time.Format(time.RFC822), transfers[i].Height, transfers[i].TopoHeight, transfers[i].TXID, globals.FormatMoney(transfers[i].Amount), transfers[i].Destination, transfers[i].Proof))
} else { // yes, its rpc
io.WriteString(l.Stderr(), fmt.Sprintf(color_yellow+"%s Height %d TopoHeight %d transaction %s spent %s DERO Destination: %s Proof: %s RPC CALL arguments %s "+color_white+"\n", transfers[i].Time.Format(time.RFC822), transfers[i].Height, transfers[i].TopoHeight, transfers[i].TXID, globals.FormatMoney(transfers[i].Amount), transfers[i].Destination, transfers[i].Proof, args))
}
case 2:
fallthrough
default:
globals.Logger.Warnf("Transaction status unknown TXID %s status %d", transfers[i].TXID, transfers[i].Status)
}
j := len(transfers) - i
if j != 0 && j%paging == 0 && (j+1) < len(transfers) { // ask the user whether they want to see more until they quit
if !ConfirmYesNoDefaultNo(l, "Want to see more history (y/N)?") {
break // break loop
}
}
}
}

RESEARCH LICENSE
Version 1.1.2
I. DEFINITIONS.
"Licensee " means You and any other party that has entered into and has in effect a version of this License.
“Licensor” means DERO PROJECT(GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8) and its successors and assignees.
"Modifications" means any (a) change or addition to the Technology or (b) new source or object code implementing any portion of the Technology.
"Research Use" means research, evaluation, or development for the purpose of advancing knowledge, teaching, learning, or customizing the Technology for personal use. Research Use expressly excludes use or distribution for direct or indirect commercial (including strategic) gain or advantage.
"Technology" means the source code, object code and specifications of the technology made available by Licensor pursuant to this License.
"Technology Site" means the website designated by Licensor for accessing the Technology.
"You" means the individual executing this License or the legal entity or entities represented by the individual executing this License.
II. PURPOSE.
Licensor is licensing the Technology under this Research License (the "License") to promote research, education, innovation, and development using the Technology.
COMMERCIAL USE AND DISTRIBUTION OF TECHNOLOGY AND MODIFICATIONS IS PERMITTED ONLY UNDER AN APPROPRIATE COMMERCIAL USE LICENSE AVAILABLE FROM LICENSOR AT <url>.
III. RESEARCH USE RIGHTS.
A. Subject to the conditions contained herein, Licensor grants to You a non-exclusive, non-transferable, worldwide, and royalty-free license to do the following for Your Research Use only:
1. reproduce, create Modifications of, and use the Technology alone, or with Modifications;
2. share source code of the Technology alone, or with Modifications, with other Licensees;
3. distribute object code of the Technology, alone, or with Modifications, to any third parties for Research Use only, under a license of Your choice that is consistent with this License; and
4. publish papers and books discussing the Technology which may include relevant excerpts that do not in the aggregate constitute a significant portion of the Technology.
B. Residual Rights. You may use any information in intangible form that you remember after accessing the Technology, except when such use violates Licensor's copyrights or patent rights.
C. No Implied Licenses. Other than the rights granted herein, Licensor retains all rights, title, and interest in the Technology, and You retain all rights, title, and interest in Your Modifications and associated specifications, subject to the terms of this License.
D. Open Source Licenses. Portions of the Technology may be provided with notices and open source licenses from open source communities and third parties that govern the use of those portions, and any licenses granted hereunder do not alter any rights and obligations you may have under such open source licenses, however, the disclaimer of warranty and limitation of liability provisions in this License will apply to all Technology in this distribution.
IV. INTELLECTUAL PROPERTY REQUIREMENTS
As a condition to Your License, You agree to comply with the following restrictions and responsibilities:
A. License and Copyright Notices. You must include a copy of this License in a Readme file for any Technology or Modifications you distribute. You must also include the following statement, "Use and distribution of this technology is subject to the Research License included herein", (a) once prominently in the source code tree and/or specifications for Your source code distributions, and (b) once in the same file as Your copyright or proprietary notices for Your binary code distributions. You must cause any files containing Your Modification to carry prominent notice stating that You changed the files. You must not remove or alter any copyright or other proprietary notices in the Technology.
B. Licensee Exchanges. Any Technology and Modifications You receive from any Licensee are governed by this License.
V. GENERAL TERMS.
A. Disclaimer Of Warranties.
TECHNOLOGY IS PROVIDED "AS IS", WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, WARRANTIES THAT ANY SUCH TECHNOLOGY IS FREE OF DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR PURPOSE, OR NON-INFRINGING OF THIRD PARTY RIGHTS. YOU AGREE THAT YOU BEAR THE ENTIRE RISK IN CONNECTION WITH YOUR USE AND DISTRIBUTION OF ANY AND ALL TECHNOLOGY UNDER THIS LICENSE.
B. Infringement; Limitation Of Liability.
1. If any portion of, or functionality implemented by, the Technology becomes the subject of a claim or threatened claim of infringement ("Affected Materials"), Licensor may, in its unrestricted discretion, suspend Your rights to use and distribute the Affected Materials under this License. Such suspension of rights will be effective immediately upon Licensor's posting of notice of suspension on the Technology Site.
2. IN NO EVENT WILL LICENSOR BE LIABLE FOR ANY DIRECT, INDIRECT, PUNITIVE, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH OR ARISING OUT OF THIS LICENSE (INCLUDING, WITHOUT LIMITATION, LOSS OF PROFITS, USE, DATA, OR ECONOMIC ADVANTAGE OF ANY SORT), HOWEVER IT ARISES AND ON ANY THEORY OF LIABILITY (including negligence), WHETHER OR NOT LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. LIABILITY UNDER THIS SECTION V.B.2 SHALL BE SO LIMITED AND EXCLUDED, NOTWITHSTANDING FAILURE OF THE ESSENTIAL PURPOSE OF ANY REMEDY.
C. Termination.
1. You may terminate this License at any time by notifying Licensor in writing.
2. All Your rights will terminate under this License if You fail to comply with any of its material terms or conditions and do not cure such failure within thirty (30) days after becoming aware of such noncompliance.
3. Upon termination, You must discontinue all uses and distribution of the Technology, and all provisions of this Section V shall survive termination.
D. Miscellaneous.
1. Trademark. You agree to comply with Licensor's Trademark & Logo Usage Requirements, if any and as modified from time to time, available at the Technology Site. Except as expressly provided in this License, You are granted no rights in or to any Licensor's trademarks now or hereafter used or licensed by Licensor.
2. Integration. This License represents the complete agreement of the parties concerning the subject matter hereof.
3. Severability. If any provision of this License is held unenforceable, such provision shall be reformed to the extent necessary to make it enforceable unless to do so would defeat the intent of the parties, in which case, this License shall terminate.
4. Governing Law. This License is governed by the laws of the United States and the State of California, as applied to contracts entered into and performed in California between California residents. In no event shall this License be construed against the drafter.
5. Export Control. You agree to comply with the U.S. export controls and trade laws of other countries that apply to Technology and Modifications.
READ ALL THE TERMS OF THIS LICENSE CAREFULLY BEFORE ACCEPTING.
BY CLICKING ON THE YES BUTTON BELOW OR USING THE TECHNOLOGY, YOU ARE ACCEPTING AND AGREEING TO ABIDE BY THE TERMS AND CONDITIONS OF THIS LICENSE. YOU MUST BE AT LEAST 18 YEARS OF AGE AND OTHERWISE COMPETENT TO ENTER INTO CONTRACTS.
IF YOU DO NOT MEET THESE CRITERIA, OR YOU DO NOT AGREE TO ANY OF THE TERMS OF THIS LICENSE, DO NOT USE THIS SOFTWARE IN ANY FORM.


@ -1,12 +0,0 @@
// Copyright 2017-2018 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
package main
import "testing"
func Test_Part1(t *testing.T) {
}

File diff suppressed because it is too large


@ -1,174 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
//import "io"
//import "os"
//import "fmt"
//import "bytes"
//import "bufio"
//import "strings"
//import "strconv"
//import "runtime"
//import "crypto/sha1"
//import "encoding/hex"
//import "encoding/json"
//import "path/filepath"
/*
import "github.com/romana/rlog"
import "github.com/chzyer/readline"
import "github.com/docopt/docopt-go"
import log "github.com/sirupsen/logrus"
import "github.com/deroproject/derohe/address"
//import "github.com/deroproject/derosuite/p2pv2"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/transaction"
//import "github.com/deroproject/derosuite/checkpoints"
import "github.com/deroproject/derohe/crypto"
*/
import "github.com/deroproject/derohe/astrobwt"
//import "github.com/deroproject/derosuite/crypto/ringct"
//import "github.com/deroproject/derosuite/blockchain/rpcserver"
import "time"
import "sync"
import "math/big"
import "crypto/rand"
import "sync/atomic"
//import "encoding/hex"
import "encoding/binary"
import "github.com/deroproject/derohe/block"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/globals"
import "github.com/deroproject/derohe/rpc"
import "github.com/deroproject/derohe/blockchain"
import "github.com/deroproject/derohe/transaction"
// p2p needs to export a variable declaring whether the chain is in synchronising mode
var counter uint64 = 0 // used to track speeds of current miner
var mining bool // whether system is mining
// request a block template from the chain, see if the tip changes, then continuously mine
func start_miner(chain *blockchain.Blockchain, addr *rpc.Address, tx *transaction.Transaction, threads int) {
//tip_counter := 0
for { // once started keep generating blocks after every 10 secs
mining = true
counter = 0
for {
//time.Sleep(50 * time.Millisecond)
if !mining {
break
}
if chain.MINING_BLOCK {
time.Sleep(10 * time.Millisecond)
continue
}
cbl, bl := chain.Create_new_miner_block(*addr, tx)
difficulty := chain.Get_Difficulty_At_Tips(bl.Tips)
//globals.Logger.Infof("Difficulty of new block is %s", difficulty.String())
// calculate difficulty once
// update job from chain
wg := sync.WaitGroup{}
wg.Add(threads) // add one work unit per mining thread
for i := 0; i < threads; i++ {
go generate_valid_PoW(chain, 0, cbl, cbl.Bl, difficulty, &wg) // work should complete in approx 100 ms; on a 12-CPU system this adds the cost of launching 12 goroutines per second
}
wg.Wait()
}
time.Sleep(10 * time.Second)
}
}
// each invocation will take at least 250 milliseconds
func generate_valid_PoW(chain *blockchain.Blockchain, hf_version uint64, cbl *block.Complete_Block, bl *block.Block, current_difficulty *big.Int, wg *sync.WaitGroup) {
var powhash crypto.Hash
block_work := bl.GetBlockWork()
// extra nonce is always at offset 36; only its first 16 bytes are randomized here
var extra_nonce [16]byte
rand.Read(extra_nonce[:]) // fill extra nonce with random buffer
bl.SetExtraNonce(extra_nonce[:])
// TODO this time must be replaced by detecting TIP change
start := time.Now()
//deadline := time.Now().Add(250*time.Millisecond)
i := uint32(0)
nonce_buf := block_work[39 : 39+4] // take 4 bytes as a nonce counter and bruteforce it; since slices share backing storage, writes modify the parent buffer
for {
//time.Sleep(1000 * time.Millisecond)
atomic.AddUint64(&counter, 1)
binary.BigEndian.PutUint32(nonce_buf, i)
//PoW := crypto.Scrypt_1024_1_1_256(block_work)
//PoW := crypto.Keccak256(block_work)
//PoW := cryptonight.SlowHash(block_work)
PoW := astrobwt.POW_0alloc(block_work)
copy(powhash[:], PoW[:])
if blockchain.CheckPowHashBig(powhash, current_difficulty) {
bl.CopyNonceFromBlockWork(block_work)
//globals.Logger.Infof("Pow Successfully solved, Submitting block")
if _, ok := chain.Add_Complete_Block(cbl); ok {
globals.Logger.Infof("Block %s successfully accepted diff %s", bl.GetHash(), current_difficulty.String())
chain.P2P_Block_Relayer(cbl, 0) // broadcast block to network ASAP
mining = false // this line enables single block mining in 1 go
break
}
}
if time.Since(start) > 250*time.Millisecond {
break
}
i++
}
wg.Done()
}


@ -1,75 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
import "fmt"
import "context"
import "encoding/hex"
import "encoding/json"
import "runtime/debug"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/rpc"
//import "github.com/deroproject/derosuite/blockchain"
func (DERO_RPC_APIS) GetBlock(ctx context.Context, p rpc.GetBlock_Params) (result rpc.GetBlock_Result, err error) {
defer func() { // safety so if anything wrong happens, we return error
if r := recover(); r != nil {
err = fmt.Errorf("panic occurred. stack trace %s", debug.Stack())
}
}()
var hash crypto.Hash
if crypto.HashHexToHash(p.Hash) == hash { // user requested using height
if int64(p.Height) > chain.Load_TOPO_HEIGHT() {
err = fmt.Errorf("user requested block at topoheight greater than chain topoheight")
return
}
hash, err = chain.Load_Block_Topological_order_at_index(int64(p.Height))
if err != nil { // if err return err
logger.Warnf("User requested %d height block, chain height %d but error occurred %s", p.Height, chain.Get_Height(), err)
return
}
} else {
hash = crypto.HashHexToHash(p.Hash)
}
block_header, err := chain.GetBlockHeader(hash)
if err != nil { // if err return err
return
}
bl, err := chain.Load_BL_FROM_ID(hash)
if err != nil { // if err return err
return
}
json_encoded_bytes, err := json.Marshal(bl)
if err != nil { // if err return err
return
}
return rpc.GetBlock_Result{ // return success
Block_Header: block_header,
Blob: hex.EncodeToString(bl.Serialize()),
Json: string(json_encoded_bytes),
Status: "OK",
}, nil
}


@ -1,27 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
import "context"
import "github.com/deroproject/derohe/rpc"
func (DERO_RPC_APIS) GetBlockCount(ctx context.Context) rpc.GetBlockCount_Result {
return rpc.GetBlockCount_Result{
Count: uint64(chain.Get_Height()),
Status: "OK",
}
}


@ -1,39 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
import "fmt"
import "context"
import "runtime/debug"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/rpc"
func (DERO_RPC_APIS) GetBlockHeaderByHash(ctx context.Context, p rpc.GetBlockHeaderByHash_Params) (result rpc.GetBlockHeaderByHash_Result, err error) {
defer func() { // safety so if anything wrong happens, we return error
if r := recover(); r != nil {
err = fmt.Errorf("panic occurred. stack trace %s", debug.Stack())
}
}()
hash := crypto.HashHexToHash(p.Hash)
if block_header, err := chain.GetBlockHeader(hash); err == nil { // if err return err
return rpc.GetBlockHeaderByHash_Result{ // return success
Block_Header: block_header,
Status: "OK",
}, nil
}
return
}


@ -1,55 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
import "fmt"
import "context"
import "runtime/debug"
import "github.com/deroproject/derohe/rpc"
func (DERO_RPC_APIS) GetBlockHeaderByTopoHeight(ctx context.Context, p rpc.GetBlockHeaderByTopoHeight_Params) (result rpc.GetBlockHeaderByHeight_Result, err error) {
defer func() { // safety so if anything wrong happens, we return error
if r := recover(); r != nil {
err = fmt.Errorf("panic occurred. stack trace %s", debug.Stack())
}
}()
if int64(p.TopoHeight) > chain.Load_TOPO_HEIGHT() {
err = fmt.Errorf("Too big topo height: %d, current blockchain height = %d", p.TopoHeight, chain.Load_TOPO_HEIGHT())
return
}
//return nil, &jsonrpc.Error{Code: -2, Message: fmt.Sprintf("NOT SUPPORTED height: %d, current blockchain height = %d", p.Height, chain.Get_Height())}
hash, err := chain.Load_Block_Topological_order_at_index(int64(p.TopoHeight))
if err != nil { // if err return err
err = fmt.Errorf("User requested %d height block, chain topo height %d but error occurred %s", p.TopoHeight, chain.Load_TOPO_HEIGHT(), err)
return
}
block_header, err := chain.GetBlockHeader(hash)
if err != nil { // if err return err
err = fmt.Errorf("User requested %d height block, chain topo height %d but error occurred %s", p.TopoHeight, chain.Load_TOPO_HEIGHT(), err)
return
}
return rpc.GetBlockHeaderByHeight_Result{ // return success
Block_Header: block_header,
Status: "OK",
}, nil
}


@ -1,78 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
import "fmt"
import "time"
import "context"
import "runtime/debug"
import "golang.org/x/time/rate"
import "github.com/deroproject/derohe/rpc"
// a rate limiter is deployed in case the RPC is exposed over the internet,
// so that someone cannot simply feed fake inputs and delay chain syncing
var get_block_limiter = rate.NewLimiter(16.0, 8) // 16 req per sec, burst of 8 req is okay
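`get_block_limiter` is a `golang.org/x/time/rate` token bucket: tokens refill at 16 per second up to a burst capacity of 8, and each allowed request spends one token. A rough stand-alone model of those semantics with an injected clock for determinism; the `bucket` type and its fields are illustrative, not the x/time/rate API:

```go
package main

import "fmt"

// bucket is a minimal token-bucket sketch: tokens refill at `rate` per
// second up to `burst`; allowAt spends one token if available.
type bucket struct {
	rate   float64 // tokens added per second
	burst  float64 // maximum stored tokens
	tokens float64
	last   float64 // time of last update, in seconds (injected clock)
}

func (b *bucket) allowAt(now float64) bool {
	// refill proportionally to elapsed time, capped at burst
	b.tokens += (now - b.last) * b.rate
	if b.tokens > b.burst {
		b.tokens = b.burst
	}
	b.last = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	b := &bucket{rate: 16, burst: 8, tokens: 8}
	granted := 0
	for i := 0; i < 20; i++ { // 20 requests arriving at the same instant
		if b.allowAt(0) {
			granted++
		}
	}
	fmt.Println(granted) // only the burst of 8 is absorbed
}
```

A flood of simultaneous requests is capped by the burst size; sustained traffic is capped by the refill rate, which is why the commented-out check below would reject bursts of fake template requests.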
func (DERO_RPC_APIS) GetBlockTemplate(ctx context.Context, p rpc.GetBlockTemplate_Params) (result rpc.GetBlockTemplate_Result, err error) {
defer func() { // safety so if anything wrong happens, we return error
if r := recover(); r != nil {
err = fmt.Errorf("panic occured. stack trace %s", debug.Stack())
}
}()
/*
if !get_block_limiter.Allow() { // if rate limiter allows, then add block to chain
logger.Warnf("Too many get block template requests per sec rejected by chain.")
return nil,&jsonrpc.Error{
Code: jsonrpc.ErrorCodeInvalidRequest,
Message: "Too many get block template requests per sec rejected by chain.",
}
}
*/
// validate address
miner_address, err := rpc.NewAddress(p.Wallet_Address)
if err != nil {
return result, fmt.Errorf("Address could not be parsed, err:%s", err)
}
if p.Reserve_size > 255 || p.Reserve_size < 1 {
return result, fmt.Errorf("Reserve size should be between 1 and 255")
}
bl, block_hashing_blob_hex, block_template_hex, reserved_pos := chain.Create_new_block_template_mining(chain.Get_Top_ID(), *miner_address, int(p.Reserve_size))
prev_hash := ""
for i := range bl.Tips {
prev_hash = prev_hash + bl.Tips[i].String()
}
return rpc.GetBlockTemplate_Result{
Blocktemplate_blob: block_template_hex,
Blockhashing_blob: block_hashing_blob_hex,
Reserved_Offset: uint64(reserved_pos),
Expected_reward: 0, // fill in actual reward
Height: bl.Height,
Prev_Hash: prev_hash,
Epoch: uint64(time.Now().UTC().Unix()), // expiry time of this block
Difficulty: chain.Get_Difficulty_At_Tips(bl.Tips).Uint64(),
Status: "OK",
}, nil
}


@ -1,218 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
import "fmt"
import "math"
import "context"
import "runtime/debug"
import "golang.org/x/xerrors"
import "github.com/deroproject/graviton"
import "github.com/deroproject/derohe/globals"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/errormsg"
import "github.com/deroproject/derohe/rpc"
import "github.com/deroproject/derohe/blockchain"
//import "github.com/deroproject/derohe/dvm"
//import "github.com/deroproject/derohe/cryptography/crypto"
func (DERO_RPC_APIS) GetEncryptedBalance(ctx context.Context, p rpc.GetEncryptedBalance_Params) (result rpc.GetEncryptedBalance_Result, err error) {
defer func() { // safety so if anything wrong happens, we return error
if r := recover(); r != nil {
err = fmt.Errorf("panic occurred. stack trace %s", debug.Stack())
fmt.Printf("panic stack trace %s\n", debug.Stack())
}
}()
uaddress, err := globals.ParseValidateAddress(p.Address)
if err != nil {
panic(err)
}
registration := LocatePointOfRegistration(uaddress)
topoheight := chain.Load_TOPO_HEIGHT()
if p.Merkle_Balance_TreeHash == "" && p.TopoHeight >= 0 && p.TopoHeight <= topoheight { // get balance tree at specific topoheight
topoheight = p.TopoHeight
}
switch p.TopoHeight {
case rpc.RECENT_BATCH_BLOCK: // give data of specific point from where tx could be built
chain_height := chain.Get_Height()
var topo_list []int64
for topoheight > 0 {
toporecord, err := chain.Store.Topo_store.Read(topoheight)
if err != nil {
panic(err)
}
if blockchain.Verify_Transaction_Height(uint64(toporecord.Height), uint64(chain_height)){
if chain_height - toporecord.Height <= (3*config.BLOCK_BATCH_SIZE)/2 { // give us enough leeway
topo_list = append(topo_list, topoheight)
}
}
if chain_height-toporecord.Height >= 2*config.BLOCK_BATCH_SIZE {
break
}
topoheight--
}
topoheight = topo_list[len(topo_list)-1]
case rpc.RECENT_BLOCK: fallthrough
default:
}
toporecord, err := chain.Store.Topo_store.Read(topoheight)
if err != nil {
panic(err)
}
ss, err := chain.Store.Balance_store.LoadSnapshot(toporecord.State_Version)
if err != nil {
panic(err)
}
var balance_tree *graviton.Tree
treename := config.BALANCE_TREE
keyname := uaddress.Compressed()
if !p.SCID.IsZero() {
treename = string(p.SCID[:])
}
if balance_tree, err = ss.GetTree(treename); err != nil {
panic(err)
}
bits, _, balance_serialized, err := balance_tree.GetKeyValueFromKey(keyname)
//fmt.Printf("balance_serialized %x err %s, scid %s keyname %x treename %x\n", balance_serialized,err,p.SCID, keyname, treename)
if err != nil {
if xerrors.Is(err, graviton.ErrNotFound) { // address needs registration
return rpc.GetEncryptedBalance_Result{ // return success
Registration: registration,
Status: errormsg.ErrAccountUnregistered.Error(),
}, errormsg.ErrAccountUnregistered
} else {
panic(err)
}
}
merkle_hash, err := chain.Load_Merkle_Hash(topoheight)
if err != nil {
panic(err)
}
// calculate top height merkle tree hash
//var dmerkle_hash crypto.Hash
dmerkle_hash, err := chain.Load_Merkle_Hash(chain.Load_TOPO_HEIGHT())
if err != nil {
panic(err)
}
return rpc.GetEncryptedBalance_Result{ // return success
Data: fmt.Sprintf("%x", balance_serialized),
Registration: registration,
Bits: bits, // no. of bits required
Height: toporecord.Height,
Topoheight: topoheight,
BlockHash: fmt.Sprintf("%x", toporecord.BLOCK_ID),
Merkle_Balance_TreeHash: fmt.Sprintf("%x", merkle_hash[:]),
DHeight: chain.Get_Height(),
DTopoheight: chain.Load_TOPO_HEIGHT(),
DMerkle_Balance_TreeHash: fmt.Sprintf("%x", dmerkle_hash[:]),
Status: "OK",
}, nil
}
// if address is unregistered, returns -1
func LocatePointOfRegistration(uaddress *rpc.Address) int64 {
addr := uaddress.Compressed()
low := chain.LocatePruneTopo() // in case of purging DB, this should start from N
topoheight := chain.Load_TOPO_HEIGHT()
high := int64(topoheight)
if !IsRegisteredAtTopoHeight(addr, topoheight) {
return -1
}
if IsRegisteredAtTopoHeight(addr, low) {
return low
}
lowest := int64(math.MaxInt64)
for low <= high {
median := (low + high) / 2
if IsRegisteredAtTopoHeight(addr, median) {
if lowest > median {
lowest = median
}
high = median - 1
} else {
low = median + 1
}
}
//fmt.Printf("found point %d\n", lowest)
return lowest
}
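LocatePointOfRegistration above is a binary search for the earliest topoheight at which the registration predicate flips from false to true, which works because registration is monotone: once an address is registered it stays registered at every later topoheight. A stand-alone sketch of the same search over a hypothetical monotone predicate (the helper name and the example registration height are illustrative):

```go
package main

import "fmt"

// firstRegistered returns the lowest topoheight in [low, high] at which
// registered reports true, assuming the predicate is monotone
// (false ... false true ... true). Returns -1 if it is never true.
func firstRegistered(low, high int64, registered func(int64) bool) int64 {
	if !registered(high) {
		return -1 // not registered anywhere in the range
	}
	lowest := high
	for low <= high {
		median := (low + high) / 2
		if registered(median) {
			if median < lowest {
				lowest = median
			}
			high = median - 1 // keep searching for an earlier true
		} else {
			low = median + 1
		}
	}
	return lowest
}

func main() {
	// hypothetical chain where the address registers at topoheight 42
	pred := func(t int64) bool { return t >= 42 }
	fmt.Println(firstRegistered(0, 1000, pred)) // prints 42
}
```

Each probe in the real code costs a snapshot load and a tree lookup, so the O(log n) search matters: locating a registration in a million-block chain takes about 20 probes instead of a linear scan.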
func IsRegisteredAtTopoHeight(addr []byte, topoheight int64) bool {
toporecord, err := chain.Store.Topo_store.Read(topoheight)
if err != nil {
panic(err)
}
ss, err := chain.Store.Balance_store.LoadSnapshot(toporecord.State_Version)
if err != nil {
panic(err)
}
var balance_tree *graviton.Tree
balance_tree, err = ss.GetTree(config.BALANCE_TREE)
if err != nil {
panic(err)
}
_, err = balance_tree.Get(addr)
if err != nil {
if xerrors.Is(err, graviton.ErrNotFound) { // address needs registration
return false
} else {
panic(err)
}
}
return true
}


@ -1,29 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
import "context"
import "github.com/deroproject/derohe/rpc"
func (DERO_RPC_APIS) GetHeight(ctx context.Context) rpc.Daemon_GetHeight_Result {
return rpc.Daemon_GetHeight_Result{
Height: uint64(chain.Get_Height()),
StableHeight: chain.Get_Stable_Height(),
TopoHeight: chain.Load_TOPO_HEIGHT(),
Status: "OK",
}
}


@ -1,87 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
import "fmt"
import "context"
import "runtime/debug"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/globals"
import "github.com/deroproject/derohe/rpc"
//import "github.com/deroproject/derohe/blockchain"
func (DERO_RPC_APIS) GetInfo(ctx context.Context) (result rpc.GetInfo_Result, err error) {
defer func() { // safety so if anything wrong happens, we return error
if r := recover(); r != nil {
err = fmt.Errorf("panic occurred. stack trace %s", debug.Stack())
}
}()
//result.Difficulty = chain.Get_Difficulty_At_Block(top_id)
result.Height = chain.Get_Height()
result.StableHeight = chain.Get_Stable_Height()
result.TopoHeight = chain.Load_TOPO_HEIGHT()
{
balance_merkle_hash, err := chain.Load_Merkle_Hash(result.TopoHeight)
if err != nil {
panic(err)
}
result.Merkle_Balance_TreeHash = fmt.Sprintf("%X", balance_merkle_hash[:])
}
blid, err := chain.Load_Block_Topological_order_at_index(result.TopoHeight)
if err == nil {
result.Difficulty = chain.Get_Difficulty_At_Tips(chain.Get_TIPS()).Uint64()
}
result.Status = "OK"
result.Version = config.Version.String()
result.Top_block_hash = blid.String()
result.Target = chain.Get_Current_BlockTime()
if result.TopoHeight-chain.LocatePruneTopo() > 100 {
blid50, err := chain.Load_Block_Topological_order_at_index(result.TopoHeight - 50)
if err == nil {
now := chain.Load_Block_Timestamp(blid)
now50 := chain.Load_Block_Timestamp(blid50)
result.AverageBlockTime50 = float32(now-now50) / 50.0
}
}
//result.Target_Height = uint64(chain.Get_Height())
//result.Tx_pool_size = uint64(len(chain.Mempool.Mempool_List_TX()))
// get dynamic fees per kb, used by wallet for tx creation
//result.Dynamic_fee_per_kb = config.FEE_PER_KB
//result.Median_Block_Size = config.CRYPTONOTE_MAX_BLOCK_SIZE
//result.Total_Supply = chain.Load_Already_Generated_Coins_for_Topo_Index( result.TopoHeight)
result.Total_Supply = 0
if result.Total_Supply > (1000000 * 1000000000000) {
result.Total_Supply -= (1000000 * 1000000000000) // remove premine
}
result.Total_Supply = result.Total_Supply / 1000000000000
if globals.Config.Name != config.Mainnet.Name { // anything other than mainnet is testnet at this point in time
result.Testnet = true
}
return result, nil
}
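The AverageBlockTime50 computation above divides the timestamp difference between the tip block and the block 50 positions earlier by 50. A minimal standalone sketch of that arithmetic (the helper name is illustrative, not part of the daemon; timestamps in seconds):

```go
package main

import "fmt"

// averageBlockTime returns the mean seconds per block between two block
// timestamps that are `span` blocks apart, mirroring the
// AverageBlockTime50 calculation in GetInfo.
func averageBlockTime(tipTimestamp, olderTimestamp int64, span int64) float32 {
	return float32(tipTimestamp-olderTimestamp) / float32(span)
}

func main() {
	// 50 blocks mined over 900 seconds -> 18 seconds per block on average
	fmt.Println(averageBlockTime(10900, 10000, 50))
}
```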


@ -1,123 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
import "fmt"
import "context"
import "runtime/debug"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/globals"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/rpc"
//import "github.com/deroproject/derohe/blockchain"
func (DERO_RPC_APIS) GetRandomAddress(ctx context.Context, p rpc.GetRandomAddress_Params) (result rpc.GetRandomAddress_Result, err error) {
defer func() { // safety: if anything goes wrong, return an error
if r := recover(); r != nil {
err = fmt.Errorf("panic occurred. stack trace %s", debug.Stack())
}
}()
topoheight := chain.Load_TOPO_HEIGHT()
if topoheight > 100 {
topoheight -= 5
}
var cursor_list []string
{
toporecord, err := chain.Store.Topo_store.Read(topoheight)
if err != nil {
panic(err)
}
ss, err := chain.Store.Balance_store.LoadSnapshot(toporecord.State_Version)
if err != nil {
panic(err)
}
treename := config.BALANCE_TREE
if !p.SCID.IsZero() {
treename = string(p.SCID[:])
}
balance_tree, err := ss.GetTree(treename)
if err != nil {
panic(err)
}
account_map := map[string]bool{}
for i := 0; i < 100; i++ {
k, _, err := balance_tree.Random()
if err != nil {
continue
}
var acckey crypto.Point
if err := acckey.DecodeCompressed(k[:]); err != nil {
continue
}
addr := rpc.NewAddressFromKeys(&acckey)
addr.Mainnet = true
if globals.Config.Name != config.Mainnet.Name { // anything other than mainnet is testnet at this point in time
addr.Mainnet = false
}
account_map[addr.String()] = true
if len(account_map) > 140 {
break
}
}
for k := range account_map {
cursor_list = append(cursor_list, k)
}
}
/*
c := balance_tree.Cursor()
for k, v, err := c.First(); err == nil; k, v, err = c.Next() {
_ = v
//fmt.Printf("key=%x, value=%x err %s\n", k, v, err)
var acckey crypto.Point
if err := acckey.DecodeCompressed(k[:]); err != nil {
panic(err)
}
addr := address.NewAddressFromKeys(&acckey)
if globals.Config.Name != config.Mainnet.Name { // anything other than mainnet is testnet at this point in time
addr.Network = globals.Config.Public_Address_Prefix
}
cursor_list = append(cursor_list, addr.String())
if len(cursor_list) >= 20 {
break
}
}
}
*/
result.Address = cursor_list
result.Status = "OK"
return result, nil
}


@ -1,165 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
import "fmt"
import "context"
//import "encoding/hex"
import "runtime/debug"
//import "github.com/romana/rlog"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/rpc"
import "github.com/deroproject/derohe/dvm"
//import "github.com/deroproject/derohe/transaction"
import "github.com/deroproject/derohe/blockchain"
import "github.com/deroproject/graviton"
func (DERO_RPC_APIS) GetSC(ctx context.Context, p rpc.GetSC_Params) (result rpc.GetSC_Result, err error) {
defer func() { // safety: if anything goes wrong, return an error
if r := recover(); r != nil {
err = fmt.Errorf("panic occurred. stack trace %s", debug.Stack())
}
}()
scid := crypto.HashHexToHash(p.SCID)
topoheight := chain.Load_TOPO_HEIGHT()
if p.TopoHeight >= 1 {
topoheight = p.TopoHeight
}
toporecord, err := chain.Store.Topo_store.Read(topoheight)
// read the topo record at the requested height
if err == nil {
var ss *graviton.Snapshot
ss, err = chain.Store.Balance_store.LoadSnapshot(toporecord.State_Version)
if err == nil {
var sc_meta_tree *graviton.Tree
if sc_meta_tree, err = ss.GetTree(config.SC_META); err == nil {
var meta_bytes []byte
if meta_bytes, err = sc_meta_tree.Get(blockchain.SC_Meta_Key(scid)); err == nil {
var meta blockchain.SC_META_DATA
if err = meta.UnmarshalBinary(meta_bytes); err == nil {
result.Balance = meta.Balance
}
}
} else {
return
}
if sc_data_tree, err := ss.GetTree(string(scid[:])); err == nil {
if p.Code { // give SC code
var code_bytes []byte
var v dvm.Variable
if code_bytes, err = sc_data_tree.Get(blockchain.SC_Code_Key(scid)); err == nil {
if err = v.UnmarshalBinary(code_bytes); err != nil {
result.Code = "Unmarshal error"
} else {
result.Code = v.Value.(string)
}
}
}
// give any uint64 keys data if any
for _, value := range p.KeysUint64 {
var v dvm.Variable
key, _ := dvm.Variable{Type: dvm.Uint64, Value: value}.MarshalBinary()
var value_bytes []byte
if value_bytes, err = sc_data_tree.Get(key); err != nil {
result.ValuesUint64 = append(result.ValuesUint64, fmt.Sprintf("NOT AVAILABLE err: %s", err))
continue
}
if err = v.UnmarshalBinary(value_bytes); err != nil {
result.ValuesUint64 = append(result.ValuesUint64, "Unmarshal error")
continue
}
switch v.Type {
case dvm.Uint64:
result.ValuesUint64 = append(result.ValuesUint64, fmt.Sprintf("%d", v.Value))
case dvm.String:
result.ValuesUint64 = append(result.ValuesUint64, fmt.Sprintf("%s", v.Value))
default:
result.ValuesUint64 = append(result.ValuesUint64, "UNKNOWN Data type")
}
}
for _, value := range p.KeysString {
var v dvm.Variable
key, _ := dvm.Variable{Type: dvm.String, Value: value}.MarshalBinary()
var value_bytes []byte
if value_bytes, err = sc_data_tree.Get(key); err != nil {
fmt.Printf("Getting key %x\n", key)
result.ValuesString = append(result.ValuesString, fmt.Sprintf("NOT AVAILABLE err: %s", err))
continue
}
if err = v.UnmarshalBinary(value_bytes); err != nil {
result.ValuesString = append(result.ValuesString, "Unmarshal error")
continue
}
switch v.Type {
case dvm.Uint64:
result.ValuesString = append(result.ValuesString, fmt.Sprintf("%d", v.Value))
case dvm.String:
result.ValuesString = append(result.ValuesString, fmt.Sprintf("%s", v.Value))
default:
result.ValuesString = append(result.ValuesString, "UNKNOWN Data type")
}
}
for _, value := range p.KeysBytes {
var v dvm.Variable
key, _ := dvm.Variable{Type: dvm.String, Value: string(value)}.MarshalBinary()
var value_bytes []byte
if value_bytes, err = sc_data_tree.Get(key); err != nil {
result.ValuesBytes = append(result.ValuesBytes, "NOT AVAILABLE")
continue
}
if err = v.UnmarshalBinary(value_bytes); err != nil {
result.ValuesBytes = append(result.ValuesBytes, "Unmarshal error")
continue
}
switch v.Type {
case dvm.Uint64:
result.ValuesBytes = append(result.ValuesBytes, fmt.Sprintf("%d", v.Value))
case dvm.String:
result.ValuesBytes = append(result.ValuesBytes, fmt.Sprintf("%s", v.Value))
default:
result.ValuesBytes = append(result.ValuesBytes, "UNKNOWN Data type")
}
}
}
}
}
result.Status = "OK"
err = nil
//logger.Debugf("result %+v\n", result);
return
}


@ -1,189 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
import "fmt"
import "context"
import "encoding/hex"
import "runtime/debug"
//import "github.com/romana/rlog"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/rpc"
import "github.com/deroproject/derohe/transaction"
import "github.com/deroproject/derohe/blockchain"
import "github.com/deroproject/graviton"
func (DERO_RPC_APIS) GetTransaction(ctx context.Context, p rpc.GetTransaction_Params) (result rpc.GetTransaction_Result, err error) {
defer func() { // safety: if anything goes wrong, return an error
if r := recover(); r != nil {
err = fmt.Errorf("panic occurred. stack trace %s", debug.Stack())
}
}()
for i := 0; i < len(p.Tx_Hashes); i++ {
hash := crypto.HashHexToHash(p.Tx_Hashes[i])
// check whether we can get the tx from the pool
{
tx := chain.Mempool.Mempool_Get_TX(hash)
// logger.Debugf("checking tx in pool %+v", tx);
if tx != nil { // found the tx in the mempool
var related rpc.Tx_Related_Info
related.Block_Height = -1 // not mined
related.In_pool = true
result.Txs_as_hex = append(result.Txs_as_hex, hex.EncodeToString(tx.Serialize()))
result.Txs = append(result.Txs, related)
continue // no more processing required
}
}
{ // check if tx is from blockchain
var tx transaction.Transaction
var tx_bytes []byte
if tx_bytes, err = chain.Store.Block_tx_store.ReadTX(hash); err != nil { // if tx not found return empty rpc
var related rpc.Tx_Related_Info
result.Txs_as_hex = append(result.Txs_as_hex, "") // a not found tx will return ""
result.Txs = append(result.Txs, related)
continue
} else {
//fmt.Printf("txhash %s loaded %d bytes\n", hash, len(tx_bytes))
if err = tx.DeserializeHeader(tx_bytes); err != nil {
logger.Warnf("rpc txhash %s could not be decoded, err %s\n", hash, err)
return
}
if err == nil {
var related rpc.Tx_Related_Info
// check whether tx is orphan
//if chain.Is_TX_Orphan(hash) {
// result.Txs_as_hex = append(result.Txs_as_hex, "") // given empty data
// result.Txs = append(result.Txs, related) // should we have an orphan tx marker
//} else
if tx.IsCoinbase() { // fill reward but only for coinbase
//blhash, err := chain.Load_Block_Topological_order_at_index(nil, int64(related.Block_Height))
//if err == nil { // if err return err
related.Reward = 999999 //chain.Load_Block_Total_Reward(nil, blhash)
//}
}
// also fill where the tx is found and in which block is valid and in which it is invalid
blid_list, state_block, state_block_topo := chain.IS_TX_Mined(hash)
//logger.Infof(" tx %s related info valid_blid %s invalid_blid %+v valid %v ",hash, valid_blid, invalid_blid, valid)
if state_block_topo > 0 {
related.StateBlock = state_block.String()
related.Block_Height = state_block_topo
if tx.TransactionType != transaction.REGISTRATION {
// we must now fill in compressed ring members
if toporecord, err := chain.Store.Topo_store.Read(state_block_topo); err == nil {
if ss, err := chain.Store.Balance_store.LoadSnapshot(toporecord.State_Version); err == nil {
if tx.TransactionType == transaction.SC_TX {
scid := tx.GetHash()
if tx.SCDATA.Has(rpc.SCACTION, rpc.DataUint64) && rpc.SC_INSTALL == rpc.SC_ACTION(tx.SCDATA.Value(rpc.SCACTION, rpc.DataUint64).(uint64)) {
if sc_meta_tree, err := ss.GetTree(config.SC_META); err == nil {
var meta_bytes []byte
if meta_bytes, err = sc_meta_tree.Get(blockchain.SC_Meta_Key(scid)); err == nil {
var meta blockchain.SC_META_DATA // the meta contains the link to the SC bytes
if err = meta.UnmarshalBinary(meta_bytes); err == nil {
related.Balance = meta.Balance
}
}
}
if sc_data_tree, err := ss.GetTree(string(scid[:])); err == nil {
var code_bytes []byte
if code_bytes, err = sc_data_tree.Get(blockchain.SC_Code_Key(scid)); err == nil {
related.Code = string(code_bytes)
}
}
}
}
for t := range tx.Payloads {
var ring [][]byte
var tree *graviton.Tree
if tx.Payloads[t].SCID.IsZero() {
tree, err = ss.GetTree(config.BALANCE_TREE)
} else {
tree, err = ss.GetTree(string(tx.Payloads[t].SCID[:]))
}
if err != nil {
fmt.Printf("no such SC %s\n", tx.Payloads[t].SCID)
}
for j := 0; j < int(tx.Payloads[t].Statement.RingSize); j++ {
key_pointer := tx.Payloads[t].Statement.Publickeylist_pointers[j*int(tx.Payloads[t].Statement.Bytes_per_publickey) : (j+1)*int(tx.Payloads[t].Statement.Bytes_per_publickey)]
_, key_compressed, _, err := tree.GetKeyValueFromHash(key_pointer)
if err == nil {
ring = append(ring, key_compressed)
} else { // we should somehow report the error
fmt.Printf("Error expanding member for txid %s t %d err %s key_compressed %x\n", hash, t, err, key_compressed)
}
}
related.Ring = append(related.Ring, ring)
}
}
}
}
}
for i := range blid_list {
related.MinedBlock = append(related.MinedBlock, blid_list[i].String())
}
result.Txs_as_hex = append(result.Txs_as_hex, hex.EncodeToString(tx.Serialize()))
result.Txs = append(result.Txs, related)
}
continue
}
}
{ // we could not fetch the tx, return an empty string
result.Txs_as_hex = append(result.Txs_as_hex, "")
err = fmt.Errorf("TX NOT FOUND %s", hash)
return
}
}
result.Status = "OK"
err = nil
//logger.Debugf("result %+v\n", result)
return
}


@ -1,31 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
import "fmt"
import "context"
import "github.com/deroproject/derohe/rpc"
func (DERO_RPC_APIS) GetTxPool(ctx context.Context) (result rpc.GetTxPool_Result) {
result.Status = "OK"
pool_list := chain.Mempool.Mempool_List_TX()
for i := range pool_list {
result.Tx_list = append(result.Tx_list, fmt.Sprintf("%s", pool_list[i]))
}
return result
}


@ -1,71 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
import "fmt"
import "context"
import "encoding/hex"
import "runtime/debug"
import "github.com/romana/rlog"
import "github.com/deroproject/derohe/rpc"
import "github.com/deroproject/derohe/transaction"
//NOTE: finally we have shifted to json api
func (DERO_RPC_APIS) SendRawTransaction(ctx context.Context, p rpc.SendRawTransaction_Params) (result rpc.SendRawTransaction_Result, err error) {
defer func() { // safety: if anything goes wrong, return an error
if r := recover(); r != nil {
err = fmt.Errorf("panic occurred. stack trace %s", debug.Stack())
}
}()
var tx transaction.Transaction
rlog.Debugf("Incoming TX from RPC Server")
//lets decode the tx from hex
tx_bytes, err := hex.DecodeString(p.Tx_as_hex)
if err != nil {
result.Status = "TX could not be hex decoded"
return
}
if len(tx_bytes) < 99 {
result.Status = "TX insufficient length"
return
}
// fmt.Printf("txbytes length %d data %s\n", len(p.Tx_as_hex), p.Tx_as_hex)
// lets add tx to pool, if we can do it, so can every one else
err = tx.DeserializeHeader(tx_bytes)
if err != nil {
rlog.Debugf("Incoming TX from RPC Server could NOT be deserialized")
return
}
rlog.Debugf("Incoming TXID %s from RPC Server", tx.GetHash())
// lets try to add it to pool
if err = chain.Add_TX_To_Pool(&tx); err == nil {
result.Status = "OK"
rlog.Debugf("Incoming TXID %s from RPC Server successfully accepted by MEMPOOL", tx.GetHash())
} else {
rlog.Warnf("Incoming TXID %s from RPC Server rejected by POOL err '%s'", tx.GetHash(), err)
err = fmt.Errorf("Transaction %s rejected by daemon err '%s'", tx.GetHash(), err)
}
return
}
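The pre-checks above (hex decode, then a minimum serialized length before attempting deserialization) can be exercised in isolation; the function name here and the reuse of the 99-byte floor are illustrative, not the daemon's API:

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// decodeRawTX mirrors the validation in SendRawTransaction: the payload
// must be valid hex and at least 99 bytes once decoded.
func decodeRawTX(txAsHex string) ([]byte, error) {
	txBytes, err := hex.DecodeString(txAsHex)
	if err != nil {
		return nil, fmt.Errorf("TX could not be hex decoded: %s", err)
	}
	if len(txBytes) < 99 {
		return nil, fmt.Errorf("TX insufficient length: %d bytes", len(txBytes))
	}
	return txBytes, nil
}

func main() {
	if _, err := decodeRawTX("zz"); err != nil {
		fmt.Println("rejected:", err)
	}
}
```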


@ -1,65 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
import "fmt"
import "context"
import "encoding/hex"
import "runtime/debug"
import "github.com/deroproject/derohe/rpc"
func (DERO_RPC_APIS) SubmitBlock(ctx context.Context, block_data [2]string) (result rpc.SubmitBlock_Result, err error) {
defer func() { // safety: if anything goes wrong, return an error
if r := recover(); r != nil {
err = fmt.Errorf("panic occurred. stack trace %s", debug.Stack())
}
}()
block_data_bytes, err := hex.DecodeString(block_data[0])
if err != nil {
logger.Infof("Submitting block could not be decoded")
return result, fmt.Errorf("Submitting block could not be decoded. err: %s", err)
}
hashing_blob, err := hex.DecodeString(block_data[1])
if err != nil || len(block_data[1]) == 0 {
logger.Infof("Submitting block hashing_blob could not be decoded")
return result, fmt.Errorf("block hashing blob could not be decoded. err: %s", err)
}
blid, sresult, err := chain.Accept_new_block(block_data_bytes, hashing_blob)
if sresult {
logger.Infof("Submitted block %s accepted", blid)
return rpc.SubmitBlock_Result{
BLID: blid.String(),
Status: "OK",
}, nil
}
if err != nil {
logger.Infof("Submitting block %s err %s", blid, err)
return result, err
}
logger.Infof("Submitting block rejected")
return rpc.SubmitBlock_Result{
Status: "REJECTED",
}, nil
}


@ -1,42 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
// get block template handler not implemented
import "fmt"
import "context"
import "runtime/debug"
import "github.com/deroproject/derohe/rpc"
func (DERO_RPC_APIS) GetLastBlockHeader(ctx context.Context) (result rpc.GetLastBlockHeader_Result, err error) {
defer func() { // safety: if anything goes wrong, return an error
if r := recover(); r != nil {
err = fmt.Errorf("panic occurred. stack trace %s", debug.Stack())
}
}()
top_hash := chain.Get_Top_ID()
block_header, err := chain.GetBlockHeader(top_hash)
if err != nil {
return
}
return rpc.GetLastBlockHeader_Result{
Block_Header: block_header,
Status: "OK",
}, nil
}


@ -1,19 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
// this file implements a stratum server for efficient mining


@ -1,104 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
/*
import "io"
import "os"
import "fmt"
import "bytes"
import "bufio"
import "strings"
import "strconv"
import "runtime"
import "crypto/sha1"
import "encoding/hex"
import "encoding/json"
import "path/filepath"
import "github.com/romana/rlog"
import "github.com/chzyer/readline"
import "github.com/docopt/docopt-go"
import log "github.com/sirupsen/logrus"
import "github.com/deroproject/derosuite/address"
import "github.com/deroproject/derosuite/p2pv2"
import "github.com/deroproject/derosuite/config"
import "github.com/deroproject/derosuite/transaction"
//import "github.com/deroproject/derosuite/checkpoints"
import "github.com/deroproject/derosuite/crypto"
import "github.com/deroproject/derosuite/crypto/ringct"
import "github.com/deroproject/derosuite/blockchain/rpcserver"
*/
//import "fmt"
import "time"
import "math/rand"
import "github.com/beevik/ntp"
import "github.com/romana/rlog"
import "github.com/deroproject/derohe/globals"
// these servers automatically rotate every hour as per documentation
// we also rotate them randomly
// TODO support ipv6
var timeservers = []string{
"0.pool.ntp.org",
"1.pool.ntp.org",
"2.pool.ntp.org",
"3.pool.ntp.org",
}
// continuously checks time for deviation if possible
func time_check_routine() {
// the initial warning should NOT get hidden in other messages
random := rand.New(globals.NewCryptoRandSource())
timeinsync := false
for {
if !timeinsync {
time.Sleep(5 * time.Second)
} else {
time.Sleep(2 * 60 * time.Second) // check every 2 minutes
}
server := timeservers[random.Int()%len(timeservers)]
response, err := ntp.Query(server)
if err != nil {
rlog.Warnf("error while querying time server %s err %s", server, err)
} else {
//globals.Logger.Infof("Local UTC time %+v server UTC time %+v", time.Now().UTC(), response.Time.UTC())
if response.ClockOffset.Seconds() > -1.1 && response.ClockOffset.Seconds() < 1.1 {
timeinsync = true
} else {
globals.Logger.Warnf("\nYour system time deviation is more than 1 second (%s)."+
"\nYou may experience chain sync issues and/or other side-effects."+
"\nIf you are mining, your blocks may get rejected."+
"\nPlease sync your system using NTP software (available by default in all OSes)."+
"\n eg. ntpdate pool.ntp.org (for linux/unix)", response.ClockOffset)
}
}
}
}


@ -1,319 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
//import "fmt"
import "net"
import "time"
import "io"
//import "io/ioutil"
//import "net/http"
import "context"
import "strings"
import "math/rand"
import "encoding/base64"
import "encoding/json"
import "runtime/debug"
import "encoding/binary"
//import "crypto/tls"
import "github.com/blang/semver"
import "github.com/miekg/dns"
import "github.com/romana/rlog"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/globals"
/* this needs to be set on update.dero.io. as a TXT record, in base64-encoded form
*
{ "version" : "1.0.2",
"message" : "\n\n\u001b[32m This is a mandatory update\u001b[0m",
"critical" : ""
}
base64 eyAidmVyc2lvbiIgOiAiMS4wLjIiLAogIm1lc3NhZ2UiIDogIlxuXG5cdTAwMWJbMzJtIFRoaXMgaXMgYSBtYW5kYXRvcnkgdXBkYXRlXHUwMDFiWzBtIiwgCiJjcml0aWNhbCIgOiAiIiAKfQ==
TXT record should be set as update=eyAidmVyc2lvbiIgOiAiMS4wLjIiLAogIm1lc3NhZ2UiIDogIlxuXG5cdTAwMWJbMzJtIFRoaXMgaXMgYSBtYW5kYXRvcnkgdXBkYXRlXHUwMDFiWzBtIiwgCiJjcml0aWNhbCIgOiAiIiAKfQ==
*/
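The base64 blob documented above decodes to the JSON update message; a minimal sketch of that decoding, using the example payload from the comment (struct field names assumed from the example JSON, not taken from the daemon's types):

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
)

// updateMessage mirrors the JSON documented in the TXT-record example.
type updateMessage struct {
	Version  string `json:"version"`
	Message  string `json:"message"`
	Critical string `json:"critical"`
}

// parseUpdate decodes the base64 TXT-record payload and unmarshals the
// JSON update message it carries.
func parseUpdate(b64 string) (u updateMessage, err error) {
	raw, err := base64.StdEncoding.DecodeString(b64)
	if err != nil {
		return
	}
	err = json.Unmarshal(raw, &u)
	return
}

func main() {
	// the example payload from the comment above
	u, err := parseUpdate("eyAidmVyc2lvbiIgOiAiMS4wLjIiLAogIm1lc3NhZ2UiIDogIlxuXG5cdTAwMWJbMzJtIFRoaXMgaXMgYSBtYW5kYXRvcnkgdXBkYXRlXHUwMDFiWzBtIiwgCiJjcml0aWNhbCIgOiAiIiAKfQ==")
	if err != nil {
		panic(err)
	}
	fmt.Println(u.Version)
}
```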
func check_update_loop() {
for {
if config.DNS_NOTIFICATION_ENABLED {
globals.Logger.Debugf("Checking update..")
check_update()
}
time.Sleep(2 * 3600 * time.Second) // check every 2 hours
}
}
// wrapper to make requests using proxy
func dialContextwrapper(ctx context.Context, network, address string) (net.Conn, error) {
return globals.Dialer.Dial(network, address)
}
type socks_dialer net.Dialer
func (d *socks_dialer) Dial(network, address string) (net.Conn, error) {
globals.Logger.Infof("Using our dial")
return globals.Dialer.Dial(network, address)
}
func (d *socks_dialer) DialContext(ctx context.Context, network, address string) (net.Conn, error) {
globals.Logger.Infof("Using our context dial")
return globals.Dialer.Dial(network, address)
}
func dial_random_read_response(in []byte) (out []byte, err error) {
defer func() {
if r := recover(); r != nil {
rlog.Warnf("Recovered while checking updates %+v, Stack trace below", r)
rlog.Warnf("Stack trace \n%s", debug.Stack())
}
}()
// since we may be connecting through socks, grab the remote ip for our purpose right now
//conn, err := globals.Dialer.Dial("tcp", "208.67.222.222:53")
//conn, err := net.Dial("tcp", "8.8.8.8:53")
random_feeder := rand.New(globals.NewCryptoRandSource()) // use crypto secure resource
server_address := config.DNS_servers[random_feeder.Intn(len(config.DNS_servers))] // choose a random server cryptographically
conn, err := net.Dial("tcp", server_address)
//conn, err := tls.Dial("tcp", remote_ip.String(),&tls.Config{InsecureSkipVerify: true})
if err != nil {
rlog.Warnf("Dial failed err %s", err.Error())
return
}
defer conn.Close() // close connection at end
// upgrade connection TO TLS ( tls.Dial does NOT support proxy)
//conn = tls.Client(conn, &tls.Config{InsecureSkipVerify: true})
rlog.Tracef(1, "Sending %d bytes", len(in))
var buf [2]byte
binary.BigEndian.PutUint16(buf[:], uint16(len(in)))
conn.Write(buf[:]) // write length in bigendian format
conn.Write(in) // write data
// now we must wait for response to arrive
var frame_length_buf [2]byte
conn.SetReadDeadline(time.Now().Add(20 * time.Second))
nbyte, err := io.ReadFull(conn, frame_length_buf[:])
if err != nil || nbyte != 2 {
// error while reading from connection we must disconnect it
rlog.Warnf("Could not read DNS length prefix err %s", err)
return
}
frame_length := binary.BigEndian.Uint16(frame_length_buf[:])
if frame_length == 0 {
// a zero-length frame is most probably a memory DDoS attack, kill the connection
rlog.Warnf("Frame length is zero")
return
}
out = make([]byte, frame_length)
conn.SetReadDeadline(time.Now().Add(20 * time.Second))
data_size, err := io.ReadFull(conn, out)
if err != nil || data_size <= 0 || uint16(data_size) != frame_length {
// error while reading from connection we must kill it
rlog.Warnf("Could not read DNS data size read %d, frame length %d err %s", data_size, frame_length, err)
return
}
out = out[:frame_length]
return
}
func check_update() {
// add panic handler, in case DNS acts rogue and tries to attack
defer func() {
if r := recover(); r != nil {
rlog.Warnf("Recovered while checking updates, r = %+v. Stack trace below", r)
rlog.Warnf("Stack trace \n%s", debug.Stack())
}
}()
if !config.DNS_NOTIFICATION_ENABLED { // if DNS notifications are disabled bail out
return
}
/* var u update_message
u.Version = "2.0.0"
u.Message = "critical msg txt\x1b[35m should \n be in RED"
globals.Logger.Infof("testing %s",u.Message)
j,err := json.Marshal(u)
globals.Logger.Infof("json format %s err %s",j,err)
*/
/*extract_parse_version("update=eyAidmVyc2lvbiIgOiAiMS4xLjAiLCAibWVzc2FnZSIgOiAiXG5cblx1MDAxYlszMm0gVGhpcyBpcyBhIG1hbmRhdG9yeSB1cGdyYWRlIHBsZWFzZSB1cGdyYWRlIGZyb20geHl6IFx1MDAxYlswbSIsICJjcml0aWNhbCIgOiAiIiB9")
return
*/
m1 := new(dns.Msg)
// m1.SetEdns0(65000, true) is disabled: DNSSEC probably leaks the current timestamp, keep it off until more investigation
m1.Id = dns.Id()
m1.RecursionDesired = true
m1.Question = make([]dns.Question, 1)
m1.Question[0] = dns.Question{config.DNS_UPDATE_CHECK, dns.TypeTXT, dns.ClassINET}
packed, err := m1.Pack()
if err != nil {
globals.Logger.Warnf("Error while packing DNS query for program update, err %s", err)
return
}
/*
// setup a http client
httpTransport := &http.Transport{}
httpClient := &http.Client{Transport: httpTransport}
// set our socks5 as the dialer
httpTransport.Dial = globals.Dialer.Dial
packed_base64:= base64.RawURLEncoding.EncodeToString(packed)
response, err := httpClient.Get("https://1.1.1.1/dns-query?ct=application/dns-udpwireformat&dns="+packed_base64)
_ = packed_base64
if err != nil {
rlog.Warnf("error making DOH request err %s",err)
return
}
defer response.Body.Close()
contents, err := ioutil.ReadAll(response.Body)
if err != nil {
rlog.Warnf("error reading DOH response err %s",err)
return
}
*/
contents, err := dial_random_read_response(packed)
if err != nil {
rlog.Warnf("error reading response from DNS server err %s", err)
return
}
rlog.Debugf("DNS response length from DNS server %d bytes", len(contents))
err = m1.Unpack(contents)
if err != nil {
rlog.Warnf("error decoding DNS response err %s", err)
return
}
for i := range m1.Answer {
if t, ok := m1.Answer[i].(*dns.TXT); ok {
// join the TXT record chunks into a single string
rlog.Tracef(1, "Process record %+v", t.Txt)
joined := strings.Join(t.Txt, "")
extract_parse_version(joined)
}
}
//globals.Logger.Infof("response %+v err ",m1,err)
}
type update_message struct {
Version string `json:"version"`
Message string `json:"message"`
Critical string `json:"critical"` // always broadcasted, without checks for version
}
// our update notices are TXT records of the following format
// version=base64 encoded json
func extract_parse_version(str string) {
strl := strings.ToLower(str)
if !strings.HasPrefix(strl, "update=") {
rlog.Tracef(1, "Skipping record %s", str)
return
}
parts := strings.SplitN(str, "=", 2)
if len(parts) != 2 {
return
}
rlog.Tracef(1, "parts %s", parts[1])
data, err := base64.StdEncoding.DecodeString(parts[1])
if err != nil {
rlog.Tracef(1, "Could NOT decode base64 update message %s", err)
return
}
var u update_message
err = json.Unmarshal(data, &u)
//globals.Logger.Infof("data %+v", u)
if err != nil {
rlog.Tracef(1, "Could NOT decode json update message %s", err)
return
}
uversion, err := semver.ParseTolerant(u.Version)
if err != nil {
rlog.Tracef(1, "Could NOT parse update version %s", err)
}
current_version := config.Version
current_version.Pre = current_version.Pre[:0]
current_version.Build = current_version.Build[:0]
// give warning to update the daemon
if u.Message != "" && err == nil { // check semver
if current_version.LT(uversion) {
if current_version.Major != uversion.Major { // if major version is different give extra warning
globals.Logger.Infof("\033[31m CRITICAL MAJOR update, please upgrade ASAP.\033[0m")
}
globals.Logger.Infof("%s", u.Message) // give the version upgrade message
globals.Logger.Infof("\033[33mCurrent Version %s \033[32m-> Upgrade Version %s\033[0m ", current_version.String(), uversion.String())
}
}
if u.Critical != "" { // give the critical upgrade message
globals.Logger.Infof("%s", u.Critical)
}
}

View File

@ -1,285 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
import "io"
import "net"
import "fmt"
import "net/http"
import "time"
import "sync"
import "sync/atomic"
import "context"
import "strings"
import "runtime/debug"
import "net/http/pprof"
import "github.com/romana/rlog"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/globals"
import "github.com/deroproject/derohe/metrics"
import "github.com/deroproject/derohe/blockchain"
import "github.com/deroproject/derohe/glue/rwc"
import log "github.com/sirupsen/logrus"
import "github.com/gorilla/websocket"
import "github.com/creachadair/jrpc2"
import "github.com/creachadair/jrpc2/handler"
import "github.com/creachadair/jrpc2/channel"
import "github.com/creachadair/jrpc2/server"
import "github.com/creachadair/jrpc2/jhttp"
/* this file implements the rpc server api, so that wallet and block explorer tools can work without migration */
// all components requiring access to the blockchain must use this struct to communicate
// this structure must be updated while holding the mutex
type RPCServer struct {
srv *http.Server
mux *http.ServeMux
Exit_Event chan bool // blockchain is shutting down and we must quit ASAP
sync.RWMutex
}
//var Exit_In_Progress bool
var chain *blockchain.Blockchain
var logger *log.Entry
var client_connections sync.Map
var options = &jrpc2.ServerOptions{AllowPush: true}
// this function triggers notification to all clients that they should repoll
func Notify_Block_Addition() {
for {
chain.RPC_NotifyNewBlock.L.Lock()
chain.RPC_NotifyNewBlock.Wait()
chain.RPC_NotifyNewBlock.L.Unlock()
client_connections.Range(func(key, value interface{}) bool {
key.(*jrpc2.Server).Notify(context.Background(), "Repoll", nil)
return true
})
}
}
func Notify_Height_Changes() {
for {
chain.RPC_NotifyNewBlock.L.Lock()
chain.RPC_NotifyNewBlock.Wait()
chain.RPC_NotifyNewBlock.L.Unlock()
client_connections.Range(func(key, value interface{}) bool {
key.(*jrpc2.Server).Notify(context.Background(), "HRepoll", nil)
return true
})
}
}
func RPCServer_Start(params map[string]interface{}) (*RPCServer, error) {
var err error
var r RPCServer
_ = err
r.Exit_Event = make(chan bool)
logger = globals.Logger.WithFields(log.Fields{"com": "RPC"}) // all components must use this logger
chain = params["chain"].(*blockchain.Blockchain)
/*
// test whether chain is okay
if chain.Get_Height() == 0 {
return nil, fmt.Errorf("Chain DOES NOT have genesis block")
}
*/
go r.Run()
logger.Infof("RPC/Websocket server started")
atomic.AddUint32(&globals.Subsystem_Active, 1) // increment subsystem
return &r, nil
}
// shutdown the rpc server component
func (r *RPCServer) RPCServer_Stop() {
r.Lock()
defer r.Unlock()
close(r.Exit_Event) // send signal to all connections to exit
if r.srv != nil {
r.srv.Shutdown(context.Background()) // shutdown the server
}
// TODO we must wait for connections to kill themselves
time.Sleep(1 * time.Second)
logger.Infof("RPC Shutdown")
atomic.AddUint32(&globals.Subsystem_Active, ^uint32(0)) // this decrements 1 from subsystem
}
// setup handlers
func (r *RPCServer) Run() {
// create a new mux
r.mux = http.NewServeMux()
default_address := "127.0.0.1:" + fmt.Sprintf("%d", config.Mainnet.RPC_Default_Port)
if !globals.IsMainnet() {
default_address = "127.0.0.1:" + fmt.Sprintf("%d", config.Testnet.RPC_Default_Port)
}
if _, ok := globals.Arguments["--rpc-bind"]; ok && globals.Arguments["--rpc-bind"] != nil {
addr, err := net.ResolveTCPAddr("tcp", globals.Arguments["--rpc-bind"].(string))
if err != nil {
logger.Warnf("--rpc-bind address is invalid, err = %s", err)
} else {
if addr.Port == 0 {
logger.Infof("RPC server is disabled, No ports will be opened for RPC")
return
} else {
default_address = addr.String()
}
}
}
logger.Infof("RPC will listen on %s", default_address)
r.Lock()
r.srv = &http.Server{Addr: default_address, Handler: r.mux}
r.Unlock()
r.mux.HandleFunc("/json_rpc", translate_http_to_jsonrpc_and_vice_versa)
r.mux.HandleFunc("/metrics", metrics.WritePrometheus) // write all the metrics
r.mux.HandleFunc("/ws", ws_handler)
r.mux.HandleFunc("/", hello)
r.mux.HandleFunc("/debug/pprof/", pprof.Index)
r.mux.HandleFunc("/debug/pprof/cmdline", pprof.Cmdline)
r.mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
r.mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
r.mux.HandleFunc("/debug/pprof/trace", pprof.Trace)
//if DEBUG_MODE {
// r.mux.HandleFunc("/debug/pprof/", pprof.Index)
// Register pprof handlers individually if required
/* r.mux.HandleFunc("/debug/pprof/", pprof.Index)
r.mux.HandleFunc("/debug/pprof/cmdline", pprof.Cmdline)
r.mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
r.mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
r.mux.HandleFunc("/debug/pprof/trace", pprof.Trace)
*/
go Notify_Block_Addition() // process all blocks
go Notify_Height_Changes() // gives notification of changed height
if err := r.srv.ListenAndServe(); err != http.ErrServerClosed {
logger.Warnf("ERR listening to address err %s", err)
}
}
func hello(w http.ResponseWriter, r *http.Request) {
io.WriteString(w, "DERO BLOCKCHAIN Hello world!")
}
var upgrader = websocket.Upgrader{CheckOrigin: func(r *http.Request) bool { return true }} // use default options
func ws_handler(w http.ResponseWriter, r *http.Request) {
var ws_server *jrpc2.Server
defer func() {
// safety so if anything wrong happens, verification fails
if r := recover(); r != nil {
logger.Warnf("Recovered while processing websocket request, r = %+v. Stack trace below", r)
logger.Warnf("Stack trace \n%s", debug.Stack())
}
if ws_server != nil {
client_connections.Delete(ws_server)
}
}()
c, err := upgrader.Upgrade(w, r, nil)
if err != nil {
rlog.Warnf("upgrade err: %s", err)
return
}
defer c.Close()
input_output := rwc.New(c)
ws_server = jrpc2.NewServer(assigner, options).Start(channel.RawJSON(input_output, input_output))
client_connections.Store(ws_server, 1)
ws_server.Wait()
}
var assigner = handler.ServiceMap{
"DAEMON": handler.NewService(DAEMON_RPC_APIS{}),
"DERO": handler.NewService(DERO_RPC_APIS{}),
}
type DAEMON_RPC_APIS struct{} // exports daemon status and other RPC apis
func (DAEMON_RPC_APIS) Echo(ctx context.Context, args []string) string {
return "DAEMON " + strings.Join(args, " ")
}
type DERO_RPC_APIS struct{} // exports DERO specific apis, such as transaction
// used to verify whether the connection is alive
func (DERO_RPC_APIS) Ping(ctx context.Context) string {
return "Pong "
}
func (DERO_RPC_APIS) Echo(ctx context.Context, args []string) string {
return "DERO " + strings.Join(args, " ")
}
//var internal_server = server.NewLocal(assigner,nil) // Use DERO.GetInfo names
var internal_server = server.NewLocal(historical_apis, nil) // uses traditional "getinfo" for compatibility reasons
// Bridge HTTP to the JSON-RPC server.
var bridge = jhttp.NewBridge(internal_server.Client)
var dero_apis DERO_RPC_APIS
var historical_apis = handler.Map{"getinfo": handler.New(dero_apis.GetInfo),
"get_info": handler.New(dero_apis.GetInfo), // this is just an alias to above
"getblock": handler.New(dero_apis.GetBlock),
"getblockheaderbytopoheight": handler.New(dero_apis.GetBlockHeaderByTopoHeight),
"getblockheaderbyhash": handler.New(dero_apis.GetBlockHeaderByHash),
"gettxpool": handler.New(dero_apis.GetTxPool),
"getrandomaddress": handler.New(dero_apis.GetRandomAddress),
"gettransactions": handler.New(dero_apis.GetTransaction),
"sendrawtransaction": handler.New(dero_apis.SendRawTransaction),
"submitblock": handler.New(dero_apis.SubmitBlock),
"getheight": handler.New(dero_apis.GetHeight),
"getblockcount": handler.New(dero_apis.GetBlockCount),
"getlastblockheader": handler.New(dero_apis.GetLastBlockHeader),
"getblocktemplate": handler.New(dero_apis.GetBlockTemplate),
"getencryptedbalance": handler.New(dero_apis.GetEncryptedBalance),
"getsc": handler.New(dero_apis.GetSC)}
func translate_http_to_jsonrpc_and_vice_versa(w http.ResponseWriter, r *http.Request) {
bridge.ServeHTTP(w, r)
}

View File

@ -1,90 +0,0 @@
RESEARCH LICENSE
Version 1.1.2
I. DEFINITIONS.
"Licensee " means You and any other party that has entered into and has in effect a version of this License.
“Licensor” means DERO PROJECT(GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8) and its successors and assignees.
"Modifications" means any (a) change or addition to the Technology or (b) new source or object code implementing any portion of the Technology.
"Research Use" means research, evaluation, or development for the purpose of advancing knowledge, teaching, learning, or customizing the Technology for personal use. Research Use expressly excludes use or distribution for direct or indirect commercial (including strategic) gain or advantage.
"Technology" means the source code, object code and specifications of the technology made available by Licensor pursuant to this License.
"Technology Site" means the website designated by Licensor for accessing the Technology.
"You" means the individual executing this License or the legal entity or entities represented by the individual executing this License.
II. PURPOSE.
Licensor is licensing the Technology under this Research License (the "License") to promote research, education, innovation, and development using the Technology.
COMMERCIAL USE AND DISTRIBUTION OF TECHNOLOGY AND MODIFICATIONS IS PERMITTED ONLY UNDER AN APPROPRIATE COMMERCIAL USE LICENSE AVAILABLE FROM LICENSOR AT <url>.
III. RESEARCH USE RIGHTS.
A. Subject to the conditions contained herein, Licensor grants to You a non-exclusive, non-transferable, worldwide, and royalty-free license to do the following for Your Research Use only:
1. reproduce, create Modifications of, and use the Technology alone, or with Modifications;
2. share source code of the Technology alone, or with Modifications, with other Licensees;
3. distribute object code of the Technology, alone, or with Modifications, to any third parties for Research Use only, under a license of Your choice that is consistent with this License; and
4. publish papers and books discussing the Technology which may include relevant excerpts that do not in the aggregate constitute a significant portion of the Technology.
B. Residual Rights. You may use any information in intangible form that you remember after accessing the Technology, except when such use violates Licensor's copyrights or patent rights.
C. No Implied Licenses. Other than the rights granted herein, Licensor retains all rights, title, and interest in Technology , and You retain all rights, title, and interest in Your Modifications and associated specifications, subject to the terms of this License.
D. Open Source Licenses. Portions of the Technology may be provided with notices and open source licenses from open source communities and third parties that govern the use of those portions, and any licenses granted hereunder do not alter any rights and obligations you may have under such open source licenses, however, the disclaimer of warranty and limitation of liability provisions in this License will apply to all Technology in this distribution.
IV. INTELLECTUAL PROPERTY REQUIREMENTS
As a condition to Your License, You agree to comply with the following restrictions and responsibilities:
A. License and Copyright Notices. You must include a copy of this License in a Readme file for any Technology or Modifications you distribute. You must also include the following statement, "Use and distribution of this technology is subject to the Java Research License included herein", (a) once prominently in the source code tree and/or specifications for Your source code distributions, and (b) once in the same file as Your copyright or proprietary notices for Your binary code distributions. You must cause any files containing Your Modification to carry prominent notice stating that You changed the files. You must not remove or alter any copyright or other proprietary notices in the Technology.
B. Licensee Exchanges. Any Technology and Modifications You receive from any Licensee are governed by this License.
V. GENERAL TERMS.
A. Disclaimer Of Warranties.
TECHNOLOGY IS PROVIDED "AS IS", WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, WARRANTIES THAT ANY SUCH TECHNOLOGY IS FREE OF DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR PURPOSE, OR NON-INFRINGING OF THIRD PARTY RIGHTS. YOU AGREE THAT YOU BEAR THE ENTIRE RISK IN CONNECTION WITH YOUR USE AND DISTRIBUTION OF ANY AND ALL TECHNOLOGY UNDER THIS LICENSE.
B. Infringement; Limitation Of Liability.
1. If any portion of, or functionality implemented by, the Technology becomes the subject of a claim or threatened claim of infringement ("Affected Materials"), Licensor may, in its unrestricted discretion, suspend Your rights to use and distribute the Affected Materials under this License. Such suspension of rights will be effective immediately upon Licensor's posting of notice of suspension on the Technology Site.
2. IN NO EVENT WILL LICENSOR BE LIABLE FOR ANY DIRECT, INDIRECT, PUNITIVE, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH OR ARISING OUT OF THIS LICENSE (INCLUDING, WITHOUT LIMITATION, LOSS OF PROFITS, USE, DATA, OR ECONOMIC ADVANTAGE OF ANY SORT), HOWEVER IT ARISES AND ON ANY THEORY OF LIABILITY (including negligence), WHETHER OR NOT LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. LIABILITY UNDER THIS SECTION V.B.2 SHALL BE SO LIMITED AND EXCLUDED, NOTWITHSTANDING FAILURE OF THE ESSENTIAL PURPOSE OF ANY REMEDY.
C. Termination.
1. You may terminate this License at any time by notifying Licensor in writing.
2. All Your rights will terminate under this License if You fail to comply with any of its material terms or conditions and do not cure such failure within thirty (30) days after becoming aware of such noncompliance.
3. Upon termination, You must discontinue all uses and distribution of the Technology , and all provisions of this Section V shall survive termination.
D. Miscellaneous.
1. Trademark. You agree to comply with Licensor's Trademark & Logo Usage Requirements, if any and as modified from time to time, available at the Technology Site. Except as expressly provided in this License, You are granted no rights in or to any Licensor's trademarks now or hereafter used or licensed by Licensor.
2. Integration. This License represents the complete agreement of the parties concerning the subject matter hereof.
3. Severability. If any provision of this License is held unenforceable, such provision shall be reformed to the extent necessary to make it enforceable unless to do so would defeat the intent of the parties, in which case, this License shall terminate.
4. Governing Law. This License is governed by the laws of the United States and the State of California, as applied to contracts entered into and performed in California between California residents. In no event shall this License be construed against the drafter.
5. Export Control. You agree to comply with the U.S. export controlsand trade laws of other countries that apply to Technology and Modifications.
READ ALL THE TERMS OF THIS LICENSE CAREFULLY BEFORE ACCEPTING.
BY CLICKING ON THE YES BUTTON BELOW OR USING THE TECHNOLOGY, YOU ARE ACCEPTING AND AGREEING TO ABIDE BY THE TERMS AND CONDITIONS OF THIS LICENSE. YOU MUST BE AT LEAST 18 YEARS OF AGE AND OTHERWISE COMPETENT TO ENTER INTO CONTRACTS.
IF YOU DO NOT MEET THESE CRITERIA, OR YOU DO NOT AGREE TO ANY OF THE TERMS OF THIS LICENSE, DO NOT USE THIS SOFTWARE IN ANY FORM.

View File

@ -1,12 +0,0 @@
// Copyright 2017-2018 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
package main
import "testing"
func Test_Part1(t *testing.T) {
}

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

View File

@ -1,182 +0,0 @@
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
//import "os"
import "fmt"
import "time"
import "crypto/sha1"
import "github.com/romana/rlog"
import "etcd.io/bbolt"
import "github.com/deroproject/derohe/rpc"
import "github.com/deroproject/derohe/walletapi"
import "github.com/ybbus/jsonrpc"
const PLUGIN_NAME = "pong_server"
const DEST_PORT = uint64(0x1234567812345678)
var expected_arguments = rpc.Arguments{
{rpc.RPC_DESTINATION_PORT, rpc.DataUint64, DEST_PORT},
// { rpc.RPC_EXPIRY , rpc.DataTime, time.Now().Add(time.Hour).UTC()},
{rpc.RPC_COMMENT, rpc.DataString, "Purchase PONG"},
//{"float64", rpc.DataFloat64, float64(0.12345)}, // in atomic units
{rpc.RPC_VALUE_TRANSFER, rpc.DataUint64, uint64(12345)}, // in atomic units
}
// currently the interpreter seems to have a glitch if this gets initialized within the code
// see limitations github.com/traefik/yaegi
var response = rpc.Arguments{
{rpc.RPC_DESTINATION_PORT, rpc.DataUint64, uint64(0)},
{rpc.RPC_SOURCE_PORT, rpc.DataUint64, DEST_PORT},
{rpc.RPC_COMMENT, rpc.DataString, "Successfully purchased pong (this could be serial/license key or download link or further)"},
}
var rpcClient = jsonrpc.NewClient("http://127.0.0.1:40403/json_rpc")
// empty place holder
func main() {
var err error
fmt.Printf("Pong Server to demonstrate RPC over dero chain.\n")
var addr *rpc.Address
var addr_result rpc.GetAddress_Result
err = rpcClient.CallFor(&addr_result, "GetAddress")
if err != nil || addr_result.Address == "" {
fmt.Printf("Could not obtain address from wallet err %s\n", err)
return
}
if addr, err = rpc.NewAddress(addr_result.Address); err != nil {
fmt.Printf("address could not be parsed: addr:%s err:%s\n", addr_result.Address, err)
return
}
shasum := fmt.Sprintf("%x", sha1.Sum([]byte(addr.String())))
db_name := fmt.Sprintf("%s_%s.bbolt.db", PLUGIN_NAME, shasum)
db, err := bbolt.Open(db_name, 0600, nil)
if err != nil {
fmt.Printf("could not open db err:%s\n", err)
return
}
//defer db.Close()
err = db.Update(func(tx *bbolt.Tx) error {
_, err := tx.CreateBucketIfNotExists([]byte("SALE"))
return err
})
if err != nil {
fmt.Printf("err creating bucket. err %s\n", err)
}
fmt.Printf("Persistent store created in '%s'\n", db_name)
fmt.Printf("Wallet Address: %s\n", addr)
service_address_without_amount := addr.Clone()
service_address_without_amount.Arguments = expected_arguments[:len(expected_arguments)-1]
fmt.Printf("Integrated address to activate '%s', (without hardcoded amount) service: \n%s\n", PLUGIN_NAME, service_address_without_amount.String())
// service address can be created client side for now
service_address := addr.Clone()
service_address.Arguments = expected_arguments
fmt.Printf("Integrated address to activate '%s', service: \n%s\n", PLUGIN_NAME, service_address.String())
processing_thread(db) // keep processing
//time.Sleep(time.Second)
//return
}
func processing_thread(db *bbolt.DB) {
var err error
for { // currently we traverse entire history
time.Sleep(time.Second)
var transfers rpc.Get_Transfers_Result
err = rpcClient.CallFor(&transfers, "GetTransfers", rpc.Get_Transfers_Params{In: true, DestinationPort: DEST_PORT})
if err != nil {
rlog.Warnf("Could not obtain gettransfers from wallet err %s\n", err)
continue
}
for _, e := range transfers.Entries {
if e.Coinbase || !e.Incoming { // skip coinbase or outgoing, self generated transactions
continue
}
// check whether the entry has been processed before, if yes skip it
var already_processed bool
db.View(func(tx *bbolt.Tx) error {
if b := tx.Bucket([]byte("SALE")); b != nil {
if ok := b.Get([]byte(e.TXID)); ok != nil { // if existing in bucket
already_processed = true
}
}
return nil
})
if already_processed { // if already processed skip it
continue
}
// check whether this service should handle the transfer
if !e.Payload_RPC.Has(rpc.RPC_DESTINATION_PORT, rpc.DataUint64) ||
DEST_PORT != e.Payload_RPC.Value(rpc.RPC_DESTINATION_PORT, rpc.DataUint64).(uint64) { // this service only handles transfers addressed to DEST_PORT
continue
}
rlog.Infof("tx should be processed %s\n", e.TXID)
if expected_arguments.Has(rpc.RPC_VALUE_TRANSFER, rpc.DataUint64) { // this service is expecting the value to be specific
value_expected := expected_arguments.Value(rpc.RPC_VALUE_TRANSFER, rpc.DataUint64).(uint64)
if e.Amount != value_expected { // TODO we should mark it as faulty
rlog.Warnf("user transferred %d, we were expecting %d. so we will not do anything\n", e.Amount, value_expected) // this is an unexpected situation
continue
}
// value received is what we are expecting, so time for response
response[0].Value = e.SourcePort // source port now becomes destination port, similar to TCP
response[2].Value = fmt.Sprintf("Successfully purchased pong (could be serial, license or download link or anything). You sent %s at height %d", walletapi.FormatMoney(e.Amount), e.Height)
//_, err := response.CheckPack(transaction.PAYLOAD0_LIMIT)) // we only have 144 bytes for RPC
// sender of ping now becomes destination
var str string
tparams := rpc.Transfer_Params{Transfers: []rpc.Transfer{{Destination: e.Sender, Amount: uint64(1), Payload_RPC: response}}}
err = rpcClient.CallFor(&str, "Transfer", tparams)
if err != nil {
rlog.Warnf("sending reply tx err %s\n", err)
continue
}
err = db.Update(func(tx *bbolt.Tx) error {
b := tx.Bucket([]byte("SALE"))
return b.Put([]byte(e.TXID), []byte("done"))
})
if err != nil {
rlog.Warnf("err updating db to err %s\n", err)
} else {
rlog.Infof("ping replied successfully with pong")
}
}
}
}
}

View File

@ -1,2 +0,0 @@
Various RPC servers can be developed, which can represent various activities not representable on any existing blockchain.

View File

@ -1,90 +0,0 @@
RESEARCH LICENSE
Version 1.1.2
I. DEFINITIONS.
"Licensee " means You and any other party that has entered into and has in effect a version of this License.
“Licensor” means DERO PROJECT(GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8) and its successors and assignees.
"Modifications" means any (a) change or addition to the Technology or (b) new source or object code implementing any portion of the Technology.
"Research Use" means research, evaluation, or development for the purpose of advancing knowledge, teaching, learning, or customizing the Technology for personal use. Research Use expressly excludes use or distribution for direct or indirect commercial (including strategic) gain or advantage.
"Technology" means the source code, object code and specifications of the technology made available by Licensor pursuant to this License.
"Technology Site" means the website designated by Licensor for accessing the Technology.
"You" means the individual executing this License or the legal entity or entities represented by the individual executing this License.
II. PURPOSE.
Licensor is licensing the Technology under this Research License (the "License") to promote research, education, innovation, and development using the Technology.
COMMERCIAL USE AND DISTRIBUTION OF TECHNOLOGY AND MODIFICATIONS IS PERMITTED ONLY UNDER AN APPROPRIATE COMMERCIAL USE LICENSE AVAILABLE FROM LICENSOR AT <url>.
III. RESEARCH USE RIGHTS.
A. Subject to the conditions contained herein, Licensor grants to You a non-exclusive, non-transferable, worldwide, and royalty-free license to do the following for Your Research Use only:
1. reproduce, create Modifications of, and use the Technology alone, or with Modifications;
2. share source code of the Technology alone, or with Modifications, with other Licensees;
3. distribute object code of the Technology, alone, or with Modifications, to any third parties for Research Use only, under a license of Your choice that is consistent with this License; and
4. publish papers and books discussing the Technology which may include relevant excerpts that do not in the aggregate constitute a significant portion of the Technology.
B. Residual Rights. You may use any information in intangible form that you remember after accessing the Technology, except when such use violates Licensor's copyrights or patent rights.
C. No Implied Licenses. Other than the rights granted herein, Licensor retains all rights, title, and interest in Technology , and You retain all rights, title, and interest in Your Modifications and associated specifications, subject to the terms of this License.
D. Open Source Licenses. Portions of the Technology may be provided with notices and open source licenses from open source communities and third parties that govern the use of those portions, and any licenses granted hereunder do not alter any rights and obligations you may have under such open source licenses, however, the disclaimer of warranty and limitation of liability provisions in this License will apply to all Technology in this distribution.
IV. INTELLECTUAL PROPERTY REQUIREMENTS
As a condition to Your License, You agree to comply with the following restrictions and responsibilities:
A. License and Copyright Notices. You must include a copy of this License in a Readme file for any Technology or Modifications you distribute. You must also include the following statement, "Use and distribution of this technology is subject to the Java Research License included herein", (a) once prominently in the source code tree and/or specifications for Your source code distributions, and (b) once in the same file as Your copyright or proprietary notices for Your binary code distributions. You must cause any files containing Your Modification to carry prominent notice stating that You changed the files. You must not remove or alter any copyright or other proprietary notices in the Technology.
B. Licensee Exchanges. Any Technology and Modifications You receive from any Licensee are governed by this License.
V. GENERAL TERMS.
A. Disclaimer Of Warranties.
TECHNOLOGY IS PROVIDED "AS IS", WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, WITHOUT LIMITATION, WARRANTIES THAT ANY SUCH TECHNOLOGY IS FREE OF DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR PURPOSE, OR NON-INFRINGING OF THIRD PARTY RIGHTS. YOU AGREE THAT YOU BEAR THE ENTIRE RISK IN CONNECTION WITH YOUR USE AND DISTRIBUTION OF ANY AND ALL TECHNOLOGY UNDER THIS LICENSE.
B. Infringement; Limitation Of Liability.
1. If any portion of, or functionality implemented by, the Technology becomes the subject of a claim or threatened claim of infringement ("Affected Materials"), Licensor may, in its unrestricted discretion, suspend Your rights to use and distribute the Affected Materials under this License. Such suspension of rights will be effective immediately upon Licensor's posting of notice of suspension on the Technology Site.
2. IN NO EVENT WILL LICENSOR BE LIABLE FOR ANY DIRECT, INDIRECT, PUNITIVE, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH OR ARISING OUT OF THIS LICENSE (INCLUDING, WITHOUT LIMITATION, LOSS OF PROFITS, USE, DATA, OR ECONOMIC ADVANTAGE OF ANY SORT), HOWEVER IT ARISES AND ON ANY THEORY OF LIABILITY (including negligence), WHETHER OR NOT LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. LIABILITY UNDER THIS SECTION V.B.2 SHALL BE SO LIMITED AND EXCLUDED, NOTWITHSTANDING FAILURE OF THE ESSENTIAL PURPOSE OF ANY REMEDY.
C. Termination.
1. You may terminate this License at any time by notifying Licensor in writing.
2. All Your rights will terminate under this License if You fail to comply with any of its material terms or conditions and do not cure such failure within thirty (30) days after becoming aware of such noncompliance.
3. Upon termination, You must discontinue all uses and distribution of the Technology, and all provisions of this Section V shall survive termination.
D. Miscellaneous.
1. Trademark. You agree to comply with Licensor's Trademark & Logo Usage Requirements, if any and as modified from time to time, available at the Technology Site. Except as expressly provided in this License, You are granted no rights in or to any Licensor's trademarks now or hereafter used or licensed by Licensor.
2. Integration. This License represents the complete agreement of the parties concerning the subject matter hereof.
3. Severability. If any provision of this License is held unenforceable, such provision shall be reformed to the extent necessary to make it enforceable unless to do so would defeat the intent of the parties, in which case, this License shall terminate.
4. Governing Law. This License is governed by the laws of the United States and the State of California, as applied to contracts entered into and performed in California between California residents. In no event shall this License be construed against the drafter.
5. Export Control. You agree to comply with the U.S. export controls and trade laws of other countries that apply to Technology and Modifications.
READ ALL THE TERMS OF THIS LICENSE CAREFULLY BEFORE ACCEPTING.
BY CLICKING ON THE YES BUTTON BELOW OR USING THE TECHNOLOGY, YOU ARE ACCEPTING AND AGREEING TO ABIDE BY THE TERMS AND CONDITIONS OF THIS LICENSE. YOU MUST BE AT LEAST 18 YEARS OF AGE AND OTHERWISE COMPETENT TO ENTER INTO CONTRACTS.
IF YOU DO NOT MEET THESE CRITERIA, OR YOU DO NOT AGREE TO ANY OF THE TERMS OF THIS LICENSE, DO NOT USE THIS SOFTWARE IN ANY FORM.


@ -1,127 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package config
import "github.com/satori/go.uuid"
import "github.com/deroproject/derohe/cryptography/crypto"
// all global configuration variables are picked from here
// though testing has completed successfully with a 3 sec block time,
// considering home users/developing countries we will be targeting 9 secs
// later hardforks can lower it by 1 sec, say every 6 months or so, until the system reaches 3 secs
// by that time, networking, space and processing requirements will probably outgrow home users
// since most mining nodes will be running in datacenters, 3 sec blocks c
const BLOCK_TIME = uint64(18)
// note we are keeping the tree names small for disk savings, since they will be stored n times (at least for archival nodes)
// this is used by graviton
const BALANCE_TREE = "B" // keeps main balance
const SC_META = "M" // keeps all SCs balance, their state, their OWNER, their data tree top hash is stored here
// some are open SCs, which provide i/o privacy
// some are private SCs, which are truly private, in which no one has visibility of i/o or functionality
// a 1.25 MB block every 12 secs is roughly 75 TX per second
// if we consider side blocks, TPS increases to > 100 TPS
// we can easily improve TPS by changing a few parameters in this file
// the compute/network resources may not be easy for developing countries
// we need to trade off TPS as per the community
const STARGATE_HE_MAX_BLOCK_SIZE = uint64((1 * 1024 * 1024) + (256 * 1024)) // max block size limit
const STARGATE_HE_MAX_TX_SIZE = 300 * 1024 // max tx size limit
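The arithmetic behind the TPS comment above can be sketched as follows. Note that the ~1456-byte average transaction size is an inferred assumption chosen to reproduce the quoted figure, not a project constant:

```go
package main

import "fmt"

// Back-of-envelope check of the "1.25 MB block every 12 secs ~= 75 TPS"
// comment above. maxBlockSize mirrors STARGATE_HE_MAX_BLOCK_SIZE; the
// average tx size is an assumption, not a constant from this file.
const maxBlockSize = 1*1024*1024 + 256*1024 // bytes

func estimateTPS(avgTxSize, blockTimeSecs float64) float64 {
	txPerBlock := maxBlockSize / avgTxSize
	return txPerBlock / blockTimeSecs
}

func main() {
	fmt.Printf("~%.0f TPS\n", estimateTPS(1456, 12))
}
```

Shrinking the block time or raising the block size limit both scale this figure linearly, which is why the surrounding comments call these "few parameters" the main TPS lever.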
const MIN_MIXIN = 2 // >= 2 , mixin will be accepted
const MAX_MIXIN = 128 // <= 128, mixin will be accepted
// ATLANTIS FEE calculation constants are here
const FEE_PER_KB = uint64(1000000000) // .001 dero per kb
const MAINNET_BOOTSTRAP_DIFFICULTY = uint64(800 * BLOCK_TIME) // atlantis mainnet bootstrapped at 200 MH/s
const MAINNET_MINIMUM_DIFFICULTY = uint64(800 * BLOCK_TIME) // 5 KH/s
// testnet bootstraps at 1 MH
//const TESTNET_BOOTSTRAP_DIFFICULTY = uint64(1000*1000*BLOCK_TIME)
const TESTNET_BOOTSTRAP_DIFFICULTY = uint64(800 * BLOCK_TIME) // testnet bootstrap at 800 H/s
const TESTNET_MINIMUM_DIFFICULTY = uint64(800 * BLOCK_TIME) // 800 H
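The difficulty constants above all follow the same pattern, hashrate multiplied by block time; a minimal sketch of that relationship, with values copied from the testnet constants above:

```go
package main

import "fmt"

// The bootstrap/minimum difficulty constants above are all of the form
// target_hashrate * BLOCK_TIME: a network producing H hashes/sec finds a
// block of difficulty H*T roughly every T seconds on average.
func difficultyFor(hashrateHS, blockTimeSecs uint64) uint64 {
	return hashrateHS * blockTimeSecs
}

func main() {
	// 800 H/s at 18-sec blocks, as in TESTNET_BOOTSTRAP_DIFFICULTY above
	fmt.Println(difficultyFor(800, 18))
}
```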
// this controls the batch size, which controls for how many blocks incoming funds cannot be spent
const BLOCK_BATCH_SIZE = crypto.BLOCK_BATCH_SIZE
// this single parameter controls lots of various parameters
// within the consensus; it should never go below 7
// if changed responsibly, we can have one second or lower blocks (ignoring chain bloat/size issues)
// giving immense scalability
const STABLE_LIMIT = int64(8)
// we can have number of chains running for testing reasons
type CHAIN_CONFIG struct {
Name string
Network_ID uuid.UUID // network ID
P2P_Default_Port int
RPC_Default_Port int
Wallet_RPC_Default_Port int
Dev_Address string // to which address the dev's share of fees must go
Genesis_Nonce uint32
Genesis_Block_Hash crypto.Hash
Genesis_Tx string
}
var Mainnet = CHAIN_CONFIG{Name: "mainnet",
Network_ID: uuid.FromBytesOrNil([]byte{0x59, 0xd7, 0xf7, 0xe9, 0xdd, 0x48, 0xd5, 0xfd, 0x13, 0x0a, 0xf6, 0xe0, 0x9a, 0x44, 0x45, 0x0}),
P2P_Default_Port: 10101,
RPC_Default_Port: 10102,
Wallet_RPC_Default_Port: 10103,
Genesis_Nonce: 10000,
Genesis_Block_Hash: crypto.HashHexToHash("e14e318562db8d22f8d00bd41c7938807c7ff70e4380acc6f7f2427cf49f474a"),
Genesis_Tx: "" +
"01" + // version
"00" + // PREMINE_FLAG
"8fff7f" + // PREMINE_VALUE
"a01f9bcc1208dee302769931ad378a4c0c4b2c21b0cfb3e752607e12d2b6fa6425", // miners public key
Dev_Address: "deto1qxsplx7vzgydacczw6vnrtfh3fxqcjevyxcvlvl82fs8uykjkmaxgfgulfha5",
}
var Testnet = CHAIN_CONFIG{Name: "testnet", // testnet will always have last 3 bytes 0
Network_ID: uuid.FromBytesOrNil([]byte{0x59, 0xd7, 0xf7, 0xe9, 0xdd, 0x48, 0xd5, 0xfd, 0x13, 0x0a, 0xf6, 0xe0, 0x26, 0x00, 0x02, 0x00}),
P2P_Default_Port: 40401,
RPC_Default_Port: 40402,
Wallet_RPC_Default_Port: 40403,
Genesis_Nonce: 10000,
Genesis_Block_Hash: crypto.HashHexToHash("7be4a8f27bcadf556132dba38c2d3d78214beec8a959be17caf172317122927a"),
Genesis_Tx: "" +
"01" + // version
"00" + // PREMINE_FLAG
"8fff7f" + // PREMINE_VALUE
"a01f9bcc1208dee302769931ad378a4c0c4b2c21b0cfb3e752607e12d2b6fa6425", // miners public key
Dev_Address: "deto1qxsplx7vzgydacczw6vnrtfh3fxqcjevyxcvlvl82fs8uykjkmaxgfgulfha5",
}
// mainnet has a remote daemon node, which can be used by default, if the user provides a --remote flag
const REMOTE_DAEMON = "https://rwallet.dero.live"
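The Mainnet/Testnet pattern above is typically consumed by picking one CHAIN_CONFIG at startup. A stdlib-only sketch, with the struct and values mirrored locally (the selectConfig helper is hypothetical, not part of package config; the port numbers are copied from the variables above):

```go
package main

import "fmt"

// Local mirror of the CHAIN_CONFIG selection pattern above; the real
// Mainnet/Testnet values live in package config.
type chainConfig struct {
	Name    string
	P2PPort int
	RPCPort int
}

var (
	mainnet = chainConfig{"mainnet", 10101, 10102}
	testnet = chainConfig{"testnet", 40401, 40402}
)

// selectConfig is a hypothetical helper: choose the chain once at startup
// and thread the resulting config through the rest of the program.
func selectConfig(useTestnet bool) chainConfig {
	if useTestnet {
		return testnet
	}
	return mainnet
}

func main() {
	cfg := selectConfig(false)
	fmt.Printf("chain=%s p2p=%d rpc=%d\n", cfg.Name, cfg.P2PPort, cfg.RPCPort)
}
```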


@ -1,12 +0,0 @@
// Copyright 2017-2018 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
package config
import "testing"
func Test_Part1(t *testing.T) {
}


@ -1,33 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package config
// all global configuration variables are picked from here
// some seed nodes for mainnet (these seed nodes are not compliant with earlier protocols)
// only version 2
var Mainnet_seed_nodes = []string{
"212.8.250.158:20202",
"190.2.135.218:20202",
"212.8.242.60:20202",
"89.38.97.110:20202",
}
// some seed nodes for testnet
var Testnet_seed_nodes = []string{
"212.8.242.60:40401",
}


@ -1,33 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package config
var DNS_NOTIFICATION_ENABLED = true // set to false to disable update notifications
const DNS_UPDATE_CHECK = "update.dero.io." // in dns form
var DNS_servers = []string{
//"127.0.1.1:53", // for testing
"8.8.8.8:53",
"8.8.4.4:53",
"1.1.1.1:53",
"208.67.222.222:53",
"208.67.220.220:53",
"77.88.8.8:53", // yandex
"77.88.8.1:53", // yandex
}


@ -1,23 +0,0 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package config
import "github.com/blang/semver"
// right now it has to be changed manually
// do we need to include the git commit sha??
var Version = semver.MustParse("3.2.15-1.DEROHE.STARGATE+08082021")
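The Version string above carries pre-release ("-1.DEROHE.STARGATE") and build-metadata ("+08082021") segments on top of the major.minor.patch core. A stdlib-only sketch of splitting such a string (parseCore is a hypothetical helper for illustration; the project itself relies on blang/semver for this):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseCore extracts major.minor.patch from a semver string like the
// Version constant above. Hypothetical stdlib-only helper; the real
// parsing is done by semver.MustParse from github.com/blang/semver.
func parseCore(v string) (major, minor, patch int, err error) {
	core := v
	if i := strings.IndexAny(core, "-+"); i >= 0 { // strip pre-release/build metadata
		core = core[:i]
	}
	parts := strings.Split(core, ".")
	if len(parts) != 3 {
		return 0, 0, 0, fmt.Errorf("bad version %q", v)
	}
	nums := make([]int, 3)
	for i, p := range parts {
		if nums[i], err = strconv.Atoi(p); err != nil {
			return 0, 0, 0, err
		}
	}
	return nums[0], nums[1], nums[2], nil
}

func main() {
	major, minor, patch, _ := parseCore("3.2.15-1.DEROHE.STARGATE+08082021")
	fmt.Println(major, minor, patch)
}
```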


@ -1,47 +0,0 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at [opensource@clearmatics.com][email]. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version]
[email]: mailto:opensource@clearmatics.com
[homepage]: http://contributor-covenant.org
[version]: http://contributor-covenant.org/version/1/4/


@ -1,27 +0,0 @@
Copyright (c) 2009 The Go Authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


@ -1,34 +0,0 @@
SHELL = bash
GO_FILES = $(shell find . -name "*.go" | grep -vE ".git")
GO_COVER_FILE = `find . -name "coverage.out"`
.PHONY: all test format cover-clean check fmt vet lint
test: $(GO_FILES)
go test ./...
format:
gofmt -s -w ${GO_FILES}
cover: $(GO_FILES)
go test -coverprofile=coverage.out ./...
go tool cover -html=coverage.out
cover-clean:
rm -f $(GO_COVER_FILE)
deps:
go mod download
check:
if [ -n "$(shell gofmt -l ${GO_FILES})" ]; then \
echo 1>&2 'The following files need to be formatted:'; \
gofmt -l .; \
exit 1; \
fi
vet:
go vet $(GO_FILES)
lint:
golint $(GO_FILES)


@ -1,33 +0,0 @@
# BN256
[![Build Status](https://travis-ci.org/clearmatics/bn256.svg?branch=master)](https://travis-ci.org/clearmatics/bn256)
This package implements a [particular](https://eprint.iacr.org/2013/507.pdf) bilinear group.
The code is imported from https://github.com/ethereum/go-ethereum/tree/master/crypto/bn256/cloudflare
:rotating_light: **WARNING** This package originally claimed to operate at a 128-bit security level. However, [recent work](https://ellipticnews.wordpress.com/2016/05/02/kim-barbulescu-variant-of-the-number-field-sieve-to-compute-discrete-logarithms-in-finite-fields/) suggests that **this is no longer the case**.
## A note on the selection of the bilinear group
The parameters defined in the `constants.go` file follow the parameters used in [alt-bn128 (libff)](https://github.com/scipr-lab/libff/blob/master/libff/algebra/curves/alt_bn128/alt_bn128_init.cpp). These parameters were selected so that `r1` has a high 2-adic order. This is key to improving the efficiency of the key and proof generation algorithms of the SNARK used.
## Installation
go get github.com/clearmatics/bn256
## Development
This project uses [go modules](https://github.com/golang/go/wiki/Modules).
If you develop in your `GOPATH` and use Go 1.11, make sure to run:
```bash
export GO111MODULE=on
```
In fact:
> (Inside $GOPATH/src, for compatibility, the go command still runs in the old GOPATH mode, even if a go.mod is found.)
See: https://blog.golang.org/using-go-modules
> For more fine-grained control, the module support in Go 1.11 respects a temporary environment variable, GO111MODULE, which can be set to one of three string values: off, on, or auto (the default). If GO111MODULE=off, then the go command never uses the new module support. Instead it looks in vendor directories and GOPATH to find dependencies; we now refer to this as "GOPATH mode." If GO111MODULE=on, then the go command requires the use of modules, never consulting GOPATH. We refer to this as the command being module-aware or running in "module-aware mode". If GO111MODULE=auto or is unset, then the go command enables or disables module support based on the current directory. Module support is enabled only when the current directory is outside GOPATH/src and itself contains a go.mod file or is below a directory containing a go.mod file.
See: https://golang.org/cmd/go/#hdr-Preliminary_module_support
The project follows standard Go conventions using `gofmt`. If you wish to contribute to the project please follow standard Go conventions. The CI server automatically runs these checks.


@ -1,490 +0,0 @@
// Package bn256 implements a particular bilinear group at the 128-bit security
// level.
//
// Bilinear groups are the basis of many of the new cryptographic protocols that
// have been proposed over the past decade. They consist of a triplet of groups
// (G₁, G₂ and GT) such that there exists a function e(g₁ˣ,g₂ʸ)=gTˣʸ (where gₓ
// is a generator of the respective group). That function is called a pairing
// function.
//
// This package specifically implements the Optimal Ate pairing over a 256-bit
// Barreto-Naehrig curve as described in
// http://cryptojedi.org/papers/dclxvi-20100714.pdf. Its output is compatible
// with the implementation described in that paper.
package bn256
import (
"crypto/rand"
"errors"
"io"
"math/big"
)
func randomK(r io.Reader) (k *big.Int, err error) {
for {
k, err = rand.Int(r, Order)
if k.Sign() > 0 || err != nil {
return
}
}
}
// G1 is an abstract cyclic group. The zero value is suitable for use as the
// output of an operation, but cannot be used as an input.
type G1 struct {
p *curvePoint
}
// RandomG1 returns x and g₁ˣ where x is a random, non-zero number read from r.
func RandomG1(r io.Reader) (*big.Int, *G1, error) {
k, err := randomK(r)
if err != nil {
return nil, nil, err
}
return k, new(G1).ScalarBaseMult(k), nil
}
func (e *G1) String() string {
return "bn256.G1" + e.p.String()
}
// ScalarBaseMult sets e to g*k where g is the generator of the group and then
// returns e.
func (e *G1) ScalarBaseMult(k *big.Int) *G1 {
if e.p == nil {
e.p = &curvePoint{}
}
e.p.Mul(curveGen, k)
return e
}
// ScalarMult sets e to a*k and then returns e.
func (e *G1) ScalarMult(a *G1, k *big.Int) *G1 {
if e.p == nil {
e.p = &curvePoint{}
}
e.p.Mul(a.p, k)
return e
}
// Add sets e to a+b and then returns e.
func (e *G1) Add(a, b *G1) *G1 {
if e.p == nil {
e.p = &curvePoint{}
}
e.p.Add(a.p, b.p)
return e
}
// Neg sets e to -a and then returns e.
func (e *G1) Neg(a *G1) *G1 {
if e.p == nil {
e.p = &curvePoint{}
}
e.p.Neg(a.p)
return e
}
// Set sets e to a and then returns e.
func (e *G1) Set(a *G1) *G1 {
if e.p == nil {
e.p = &curvePoint{}
}
e.p.Set(a.p)
return e
}
// Marshal converts e to a byte slice.
func (e *G1) Marshal() []byte {
// Each value is a 256-bit number.
const numBytes = 256 / 8
if e.p == nil {
e.p = &curvePoint{}
}
e.p.MakeAffine()
ret := make([]byte, numBytes*2)
if e.p.IsInfinity() {
return ret
}
temp := &gfP{}
montDecode(temp, &e.p.x)
temp.Marshal(ret)
montDecode(temp, &e.p.y)
temp.Marshal(ret[numBytes:])
return ret
}
// Unmarshal sets e to the result of converting the output of Marshal back into
// a group element and then returns e.
func (e *G1) Unmarshal(m []byte) ([]byte, error) {
// Each value is a 256-bit number.
const numBytes = 256 / 8
if len(m) < 2*numBytes {
return nil, errors.New("bn256: not enough data")
}
// Unmarshal the points and check their caps
if e.p == nil {
e.p = &curvePoint{}
} else {
e.p.x, e.p.y = gfP{0}, gfP{0}
}
var err error
if err = e.p.x.Unmarshal(m); err != nil {
return nil, err
}
if err = e.p.y.Unmarshal(m[numBytes:]); err != nil {
return nil, err
}
// Encode into Montgomery form and ensure it's on the curve
montEncode(&e.p.x, &e.p.x)
montEncode(&e.p.y, &e.p.y)
zero := gfP{0}
if e.p.x == zero && e.p.y == zero {
// This is the point at infinity.
e.p.y = *newGFp(1)
e.p.z = gfP{0}
e.p.t = gfP{0}
} else {
e.p.z = *newGFp(1)
e.p.t = *newGFp(1)
if !e.p.IsOnCurve() {
return nil, errors.New("bn256: malformed point")
}
}
return m[2*numBytes:], nil
}
// G2 is an abstract cyclic group. The zero value is suitable for use as the
// output of an operation, but cannot be used as an input.
type G2 struct {
p *twistPoint
}
// RandomG2 returns x and g₂ˣ where x is a random, non-zero number read from r.
func RandomG2(r io.Reader) (*big.Int, *G2, error) {
k, err := randomK(r)
if err != nil {
return nil, nil, err
}
return k, new(G2).ScalarBaseMult(k), nil
}
func (e *G2) String() string {
return "bn256.G2" + e.p.String()
}
// ScalarBaseMult sets e to g*k where g is the generator of the group and then
// returns e.
func (e *G2) ScalarBaseMult(k *big.Int) *G2 {
if e.p == nil {
e.p = &twistPoint{}
}
e.p.Mul(twistGen, k)
return e
}
// ScalarMult sets e to a*k and then returns e.
func (e *G2) ScalarMult(a *G2, k *big.Int) *G2 {
if e.p == nil {
e.p = &twistPoint{}
}
e.p.Mul(a.p, k)
return e
}
// Add sets e to a+b and then returns e.
func (e *G2) Add(a, b *G2) *G2 {
if e.p == nil {
e.p = &twistPoint{}
}
e.p.Add(a.p, b.p)
return e
}
// Neg sets e to -a and then returns e.
func (e *G2) Neg(a *G2) *G2 {
if e.p == nil {
e.p = &twistPoint{}
}
e.p.Neg(a.p)
return e
}
// Set sets e to a and then returns e.
func (e *G2) Set(a *G2) *G2 {
if e.p == nil {
e.p = &twistPoint{}
}
e.p.Set(a.p)
return e
}
// Marshal converts e into a byte slice.
func (e *G2) Marshal() []byte {
// Each value is a 256-bit number.
const numBytes = 256 / 8
if e.p == nil {
e.p = &twistPoint{}
}
e.p.MakeAffine()
ret := make([]byte, numBytes*4)
if e.p.IsInfinity() {
return ret
}
temp := &gfP{}
montDecode(temp, &e.p.x.x)
temp.Marshal(ret)
montDecode(temp, &e.p.x.y)
temp.Marshal(ret[numBytes:])
montDecode(temp, &e.p.y.x)
temp.Marshal(ret[2*numBytes:])
montDecode(temp, &e.p.y.y)
temp.Marshal(ret[3*numBytes:])
return ret
}
// Unmarshal sets e to the result of converting the output of Marshal back into
// a group element and then returns e.
func (e *G2) Unmarshal(m []byte) ([]byte, error) {
// Each value is a 256-bit number.
const numBytes = 256 / 8
if len(m) < 4*numBytes {
return nil, errors.New("bn256: not enough data")
}
// Unmarshal the points and check their caps
if e.p == nil {
e.p = &twistPoint{}
}
var err error
if err = e.p.x.x.Unmarshal(m); err != nil {
return nil, err
}
if err = e.p.x.y.Unmarshal(m[numBytes:]); err != nil {
return nil, err
}
if err = e.p.y.x.Unmarshal(m[2*numBytes:]); err != nil {
return nil, err
}
if err = e.p.y.y.Unmarshal(m[3*numBytes:]); err != nil {
return nil, err
}
// Encode into Montgomery form and ensure it's on the curve
montEncode(&e.p.x.x, &e.p.x.x)
montEncode(&e.p.x.y, &e.p.x.y)
montEncode(&e.p.y.x, &e.p.y.x)
montEncode(&e.p.y.y, &e.p.y.y)
if e.p.x.IsZero() && e.p.y.IsZero() {
// This is the point at infinity.
e.p.y.SetOne()
e.p.z.SetZero()
e.p.t.SetZero()
} else {
e.p.z.SetOne()
e.p.t.SetOne()
if !e.p.IsOnCurve() {
return nil, errors.New("bn256: malformed point")
}
}
return m[4*numBytes:], nil
}
// GT is an abstract cyclic group. The zero value is suitable for use as the
// output of an operation, but cannot be used as an input.
type GT struct {
p *gfP12
}
// Pair calculates an Optimal Ate pairing.
func Pair(g1 *G1, g2 *G2) *GT {
return &GT{optimalAte(g2.p, g1.p)}
}
// PairingCheck calculates the Optimal Ate pairing for a set of points.
func PairingCheck(a []*G1, b []*G2) bool {
acc := new(gfP12)
acc.SetOne()
for i := 0; i < len(a); i++ {
if a[i].p.IsInfinity() || b[i].p.IsInfinity() {
continue
}
acc.Mul(acc, miller(b[i].p, a[i].p))
}
return finalExponentiation(acc).IsOne()
}
// Miller applies Miller's algorithm, which is a bilinear function from the
// source groups to F_p^12. Miller(g1, g2).Finalize() is equivalent to Pair(g1,
// g2).
func Miller(g1 *G1, g2 *G2) *GT {
return &GT{miller(g2.p, g1.p)}
}
func (e *GT) String() string {
return "bn256.GT" + e.p.String()
}
// ScalarMult sets e to a*k and then returns e.
func (e *GT) ScalarMult(a *GT, k *big.Int) *GT {
if e.p == nil {
e.p = &gfP12{}
}
e.p.Exp(a.p, k)
return e
}
// Add sets e to a+b and then returns e.
func (e *GT) Add(a, b *GT) *GT {
if e.p == nil {
e.p = &gfP12{}
}
e.p.Mul(a.p, b.p)
return e
}
// Neg sets e to -a and then returns e.
func (e *GT) Neg(a *GT) *GT {
if e.p == nil {
e.p = &gfP12{}
}
e.p.Conjugate(a.p)
return e
}
// Set sets e to a and then returns e.
func (e *GT) Set(a *GT) *GT {
if e.p == nil {
e.p = &gfP12{}
}
e.p.Set(a.p)
return e
}
// Finalize is a linear function from F_p^12 to GT.
func (e *GT) Finalize() *GT {
ret := finalExponentiation(e.p)
e.p.Set(ret)
return e
}
// Marshal converts e into a byte slice.
func (e *GT) Marshal() []byte {
// Each value is a 256-bit number.
const numBytes = 256 / 8
if e.p == nil {
e.p = &gfP12{}
e.p.SetOne()
}
ret := make([]byte, numBytes*12)
temp := &gfP{}
montDecode(temp, &e.p.x.x.x)
temp.Marshal(ret)
montDecode(temp, &e.p.x.x.y)
temp.Marshal(ret[numBytes:])
montDecode(temp, &e.p.x.y.x)
temp.Marshal(ret[2*numBytes:])
montDecode(temp, &e.p.x.y.y)
temp.Marshal(ret[3*numBytes:])
montDecode(temp, &e.p.x.z.x)
temp.Marshal(ret[4*numBytes:])
montDecode(temp, &e.p.x.z.y)
temp.Marshal(ret[5*numBytes:])
montDecode(temp, &e.p.y.x.x)
temp.Marshal(ret[6*numBytes:])
montDecode(temp, &e.p.y.x.y)
temp.Marshal(ret[7*numBytes:])
montDecode(temp, &e.p.y.y.x)
temp.Marshal(ret[8*numBytes:])
montDecode(temp, &e.p.y.y.y)
temp.Marshal(ret[9*numBytes:])
montDecode(temp, &e.p.y.z.x)
temp.Marshal(ret[10*numBytes:])
montDecode(temp, &e.p.y.z.y)
temp.Marshal(ret[11*numBytes:])
return ret
}
// Unmarshal sets e to the result of converting the output of Marshal back into
// a group element and then returns e.
func (e *GT) Unmarshal(m []byte) ([]byte, error) {
// Each value is a 256-bit number.
const numBytes = 256 / 8
if len(m) < 12*numBytes {
return nil, errors.New("bn256: not enough data")
}
if e.p == nil {
e.p = &gfP12{}
}
var err error
if err = e.p.x.x.x.Unmarshal(m); err != nil {
return nil, err
}
if err = e.p.x.x.y.Unmarshal(m[numBytes:]); err != nil {
return nil, err
}
if err = e.p.x.y.x.Unmarshal(m[2*numBytes:]); err != nil {
return nil, err
}
if err = e.p.x.y.y.Unmarshal(m[3*numBytes:]); err != nil {
return nil, err
}
if err = e.p.x.z.x.Unmarshal(m[4*numBytes:]); err != nil {
return nil, err
}
if err = e.p.x.z.y.Unmarshal(m[5*numBytes:]); err != nil {
return nil, err
}
if err = e.p.y.x.x.Unmarshal(m[6*numBytes:]); err != nil {
return nil, err
}
if err = e.p.y.x.y.Unmarshal(m[7*numBytes:]); err != nil {
return nil, err
}
if err = e.p.y.y.x.Unmarshal(m[8*numBytes:]); err != nil {
return nil, err
}
if err = e.p.y.y.y.Unmarshal(m[9*numBytes:]); err != nil {
return nil, err
}
if err = e.p.y.z.x.Unmarshal(m[10*numBytes:]); err != nil {
return nil, err
}
if err = e.p.y.z.y.Unmarshal(m[11*numBytes:]); err != nil {
return nil, err
}
montEncode(&e.p.x.x.x, &e.p.x.x.x)
montEncode(&e.p.x.x.y, &e.p.x.x.y)
montEncode(&e.p.x.y.x, &e.p.x.y.x)
montEncode(&e.p.x.y.y, &e.p.x.y.y)
montEncode(&e.p.x.z.x, &e.p.x.z.x)
montEncode(&e.p.x.z.y, &e.p.x.z.y)
montEncode(&e.p.y.x.x, &e.p.y.x.x)
montEncode(&e.p.y.x.y, &e.p.y.x.y)
montEncode(&e.p.y.y.x, &e.p.y.y.x)
montEncode(&e.p.y.y.y, &e.p.y.y.y)
montEncode(&e.p.y.z.x, &e.p.y.z.x)
montEncode(&e.p.y.z.y, &e.p.y.z.y)
return m[12*numBytes:], nil
}


@ -1,116 +0,0 @@
package bn256
import (
"bytes"
"crypto/rand"
"testing"
)
func TestG1Marshal(t *testing.T) {
_, Ga, err := RandomG1(rand.Reader)
if err != nil {
t.Fatal(err)
}
ma := Ga.Marshal()
Gb := new(G1)
_, err = Gb.Unmarshal(ma)
if err != nil {
t.Fatal(err)
}
mb := Gb.Marshal()
if !bytes.Equal(ma, mb) {
t.Fatal("bytes are different")
}
}
func TestG2Marshal(t *testing.T) {
_, Ga, err := RandomG2(rand.Reader)
if err != nil {
t.Fatal(err)
}
ma := Ga.Marshal()
Gb := new(G2)
_, err = Gb.Unmarshal(ma)
if err != nil {
t.Fatal(err)
}
mb := Gb.Marshal()
if !bytes.Equal(ma, mb) {
t.Fatal("bytes are different")
}
}
func TestBilinearity(t *testing.T) {
for i := 0; i < 2; i++ {
a, p1, _ := RandomG1(rand.Reader)
b, p2, _ := RandomG2(rand.Reader)
e1 := Pair(p1, p2)
e2 := Pair(&G1{curveGen}, &G2{twistGen})
e2.ScalarMult(e2, a)
e2.ScalarMult(e2, b)
if *e1.p != *e2.p {
t.Fatalf("bad pairing result: %s", e1)
}
}
}
func TestTripartiteDiffieHellman(t *testing.T) {
a, _ := rand.Int(rand.Reader, Order)
b, _ := rand.Int(rand.Reader, Order)
c, _ := rand.Int(rand.Reader, Order)
pa, pb, pc := new(G1), new(G1), new(G1)
qa, qb, qc := new(G2), new(G2), new(G2)
pa.Unmarshal(new(G1).ScalarBaseMult(a).Marshal())
qa.Unmarshal(new(G2).ScalarBaseMult(a).Marshal())
pb.Unmarshal(new(G1).ScalarBaseMult(b).Marshal())
qb.Unmarshal(new(G2).ScalarBaseMult(b).Marshal())
pc.Unmarshal(new(G1).ScalarBaseMult(c).Marshal())
qc.Unmarshal(new(G2).ScalarBaseMult(c).Marshal())
k1 := Pair(pb, qc)
k1.ScalarMult(k1, a)
k1Bytes := k1.Marshal()
k2 := Pair(pc, qa)
k2.ScalarMult(k2, b)
k2Bytes := k2.Marshal()
k3 := Pair(pa, qb)
k3.ScalarMult(k3, c)
k3Bytes := k3.Marshal()
if !bytes.Equal(k1Bytes, k2Bytes) || !bytes.Equal(k2Bytes, k3Bytes) {
t.Errorf("keys didn't agree")
}
}
func BenchmarkG1(b *testing.B) {
x, _ := rand.Int(rand.Reader, Order)
b.ResetTimer()
for i := 0; i < b.N; i++ {
new(G1).ScalarBaseMult(x)
}
}
func BenchmarkG2(b *testing.B) {
x, _ := rand.Int(rand.Reader, Order)
b.ResetTimer()
for i := 0; i < b.N; i++ {
new(G2).ScalarBaseMult(x)
}
}
func BenchmarkPairing(b *testing.B) {
for i := 0; i < b.N; i++ {
Pair(&G1{curveGen}, &G2{twistGen})
}
}

// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package bn256
import (
"math/big"
)
func bigFromBase10(s string) *big.Int {
n, _ := new(big.Int).SetString(s, 10)
return n
}
// u is the BN parameter.
var u = bigFromBase10("4965661367192848881")
// Order is the number of elements in both G₁ and G₂: 36u⁴+36u³+18u²+6u+1.
var Order = bigFromBase10("21888242871839275222246405745257275088548364400416034343698204186575808495617")
// P is a prime over which we form a basic field: 36u⁴+36u³+24u²+6u+1.
var P = bigFromBase10("21888242871839275222246405745257275088696311157297823662689037894645226208583")
// p2 is p, represented as little-endian 64-bit words.
var p2 = [4]uint64{0x3c208c16d87cfd47, 0x97816a916871ca8d, 0xb85045b68181585d, 0x30644e72e131a029}
// np is the negative inverse of p, mod 2^256.
var np = [4]uint64{0x87d20782e4866389, 0x9ede7d651eca6ac9, 0xd8afcbd01833da80, 0xf57a22b791888c6b}
// <sage>
// p = 21888242871839275222246405745257275088696311157297823662689037894645226208583; Fp = GF(p)
// r = Fp(2^256) # 6350874878119819312338956282401532409788428879151445726012394534686998597021
// rInv = 1/r # 20988524275117001072002809824448087578619730785600314334253784976379291040311
// hex(20988524275117001072002809824448087578619730785600314334253784976379291040311)
// # 2e67157159e5c639 cf63e9cfb74492d9 eb2022850278edf8 ed84884a014afa37
// <\sage>
//
// rN1 is R^-1 where R = 2^256 mod p.
var rN1 = &gfP{0xed84884a014afa37, 0xeb2022850278edf8, 0xcf63e9cfb74492d9, 0x2e67157159e5c639}
// <sage>
// r2 = r^2 # 3096616502983703923843567936837374451735540968419076528771170197431451843209
// hex(3096616502983703923843567936837374451735540968419076528771170197431451843209)
// # 06d89f71cab8351f 47ab1eff0a417ff6 b5e71911d44501fb f32cfc5b538afa89
// <\sage>
//
// r2 is R^2 where R = 2^256 mod p.
var r2 = &gfP{0xf32cfc5b538afa89, 0xb5e71911d44501fb, 0x47ab1eff0a417ff6, 0x06d89f71cab8351f}
// r3 is R^3 where R = 2^256 mod p.
var r3 = &gfP{0xb1cd6dafda1530df, 0x62f210e6a7283db6, 0xef7f0b0c0ada0afb, 0x20fd6e902d592544}
// <sage>
// xiToPMinus1Over6 = Fp2(i + 9) ^ ((p-1)/6); xiToPMinus1Over6
// # 16469823323077808223889137241176536799009286646108169935659301613961712198316*i + 8376118865763821496583973867626364092589906065868298776909617916018768340080
// <\sage>
//
// The value of `xiToPMinus1Over6` below is the same as the one obtained in sage, but where every field element is montgomery encoded
// xiToPMinus1Over6 is ξ^((p-1)/6) where ξ = i+9.
var xiToPMinus1Over6 = &gfP2{gfP{0xa222ae234c492d72, 0xd00f02a4565de15b, 0xdc2ff3a253dfc926, 0x10a75716b3899551}, gfP{0xaf9ba69633144907, 0xca6b1d7387afb78a, 0x11bded5ef08a2087, 0x02f34d751a1f3a7c}}
// xiToPMinus1Over3 is ξ^((p-1)/3) where ξ = i+9.
var xiToPMinus1Over3 = &gfP2{gfP{0x6e849f1ea0aa4757, 0xaa1c7b6d89f89141, 0xb6e713cdfae0ca3a, 0x26694fbb4e82ebc3}, gfP{0xb5773b104563ab30, 0x347f91c8a9aa6454, 0x7a007127242e0991, 0x1956bcd8118214ec}}
// xiToPMinus1Over2 is ξ^((p-1)/2) where ξ = i+9.
var xiToPMinus1Over2 = &gfP2{gfP{0xa1d77ce45ffe77c7, 0x07affd117826d1db, 0x6d16bd27bb7edc6b, 0x2c87200285defecc}, gfP{0xe4bbdd0c2936b629, 0xbb30f162e133bacb, 0x31a9d1b6f9645366, 0x253570bea500f8dd}}
// xiToPSquaredMinus1Over3 is ξ^((p²-1)/3) where ξ = i+9.
var xiToPSquaredMinus1Over3 = &gfP{0x3350c88e13e80b9c, 0x7dce557cdb5e56b9, 0x6001b4b8b615564a, 0x2682e617020217e0}
// xiTo2PSquaredMinus2Over3 is ξ^((2p²-2)/3) where ξ = i+9 (a cubic root of unity, mod p).
var xiTo2PSquaredMinus2Over3 = &gfP{0x71930c11d782e155, 0xa6bb947cffbe3323, 0xaa303344d4741444, 0x2c3b3f0d26594943}
// xiToPSquaredMinus1Over6 is ξ^((p²-1)/6) where ξ = i+9 (a cubic root of -1, mod p).
var xiToPSquaredMinus1Over6 = &gfP{0xca8d800500fa1bf2, 0xf0c5d61468b39769, 0x0e201271ad0d4418, 0x04290f65bad856e6}
// xiTo2PMinus2Over3 is ξ^((2p-2)/3) where ξ = i+9.
var xiTo2PMinus2Over3 = &gfP2{gfP{0x5dddfd154bd8c949, 0x62cb29a5a4445b60, 0x37bc870a0c7dd2b9, 0x24830a9d3171f0fd}, gfP{0x7361d77f843abe92, 0xa5bb2bd3273411fb, 0x9c941f314b3e2399, 0x15df9cddbb9fd3ec}}

package bn256
import (
"math/big"
)
// curvePoint implements the elliptic curve y²=x³+3. Points are kept in Jacobian
// form and t=z² when valid. G₁ is the set of points of this curve on GF(p).
type curvePoint struct {
x, y, z, t gfP
}
var curveB = newGFp(3)
// curveGen is the generator of G₁.
var curveGen = &curvePoint{
x: *newGFp(1),
y: *newGFp(2),
z: *newGFp(1),
t: *newGFp(1),
}
func (c *curvePoint) String() string {
c.MakeAffine()
x, y := &gfP{}, &gfP{}
montDecode(x, &c.x)
montDecode(y, &c.y)
return "(" + x.String() + ", " + y.String() + ")"
}
func (c *curvePoint) Set(a *curvePoint) {
c.x.Set(&a.x)
c.y.Set(&a.y)
c.z.Set(&a.z)
c.t.Set(&a.t)
}
// IsOnCurve returns true iff c is on the curve.
func (c *curvePoint) IsOnCurve() bool {
c.MakeAffine()
if c.IsInfinity() {
return true
}
y2, x3 := &gfP{}, &gfP{}
gfpMul(y2, &c.y, &c.y)
gfpMul(x3, &c.x, &c.x)
gfpMul(x3, x3, &c.x)
gfpAdd(x3, x3, curveB)
return *y2 == *x3
}
func (c *curvePoint) SetInfinity() {
c.x = gfP{0}
c.y = *newGFp(1)
c.z = gfP{0}
c.t = gfP{0}
}
func (c *curvePoint) IsInfinity() bool {
return c.z == gfP{0}
}
func (c *curvePoint) Add(a, b *curvePoint) {
if a.IsInfinity() {
c.Set(b)
return
}
if b.IsInfinity() {
c.Set(a)
return
}
// See http://hyperelliptic.org/EFD/g1p/auto-code/shortw/jacobian-0/addition/add-2007-bl.op3
// Normalize the points by replacing a = [x1:y1:z1] and b = [x2:y2:z2]
// by [u1:s1:z1·z2] and [u2:s2:z1·z2]
// where u1 = x1·z2², s1 = y1·z2³ and u2 = x2·z1², s2 = y2·z1³
z12, z22 := &gfP{}, &gfP{}
gfpMul(z12, &a.z, &a.z)
gfpMul(z22, &b.z, &b.z)
u1, u2 := &gfP{}, &gfP{}
gfpMul(u1, &a.x, z22)
gfpMul(u2, &b.x, z12)
t, s1 := &gfP{}, &gfP{}
gfpMul(t, &b.z, z22)
gfpMul(s1, &a.y, t)
s2 := &gfP{}
gfpMul(t, &a.z, z12)
gfpMul(s2, &b.y, t)
// Compute x = (2h)²(s²-u1-u2)
// where s = (s2-s1)/(u2-u1) is the slope of the line through
// (u1,s1) and (u2,s2). The extra factor 2h = 2(u2-u1) comes from the value of z below.
// This is also:
// 4(s2-s1)² - 4h²(u1+u2) = 4(s2-s1)² - 4h³ - 4h²(2u1)
// = r² - j - 2v
// with the notations below.
h := &gfP{}
gfpSub(h, u2, u1)
xEqual := *h == gfP{0}
gfpAdd(t, h, h)
// i = 4h²
i := &gfP{}
gfpMul(i, t, t)
// j = 4h³
j := &gfP{}
gfpMul(j, h, i)
gfpSub(t, s2, s1)
yEqual := *t == gfP{0}
if xEqual && yEqual {
c.Double(a)
return
}
r := &gfP{}
gfpAdd(r, t, t)
v := &gfP{}
gfpMul(v, u1, i)
// t4 = 4(s2-s1)²
t4, t6 := &gfP{}, &gfP{}
gfpMul(t4, r, r)
gfpAdd(t, v, v)
gfpSub(t6, t4, j)
gfpSub(&c.x, t6, t)
// Set y = -(2h)³(s1 + s*(x/4h²-u1))
// This is also
// y = - 2·s1·j - (s2-s1)(2x - 2i·u1) = r(v-x) - 2·s1·j
gfpSub(t, v, &c.x) // t7
gfpMul(t4, s1, j) // t8
gfpAdd(t6, t4, t4) // t9
gfpMul(t4, r, t) // t10
gfpSub(&c.y, t4, t6)
// Set z = 2(u2-u1)·z1·z2 = 2h·z1·z2
gfpAdd(t, &a.z, &b.z) // t11
gfpMul(t4, t, t) // t12
gfpSub(t, t4, z12) // t13
gfpSub(t4, t, z22) // t14
gfpMul(&c.z, t4, h)
}
func (c *curvePoint) Double(a *curvePoint) {
// See http://hyperelliptic.org/EFD/g1p/auto-code/shortw/jacobian-0/doubling/dbl-2009-l.op3
A, B, C := &gfP{}, &gfP{}, &gfP{}
gfpMul(A, &a.x, &a.x)
gfpMul(B, &a.y, &a.y)
gfpMul(C, B, B)
t, t2 := &gfP{}, &gfP{}
gfpAdd(t, &a.x, B)
gfpMul(t2, t, t)
gfpSub(t, t2, A)
gfpSub(t2, t, C)
d, e, f := &gfP{}, &gfP{}, &gfP{}
gfpAdd(d, t2, t2)
gfpAdd(t, A, A)
gfpAdd(e, t, A)
gfpMul(f, e, e)
gfpAdd(t, d, d)
gfpSub(&c.x, f, t)
gfpAdd(t, C, C)
gfpAdd(t2, t, t)
gfpAdd(t, t2, t2)
gfpSub(&c.y, d, &c.x)
gfpMul(t2, e, &c.y)
gfpSub(&c.y, t2, t)
gfpMul(t, &a.y, &a.z)
gfpAdd(&c.z, t, t)
}
func (c *curvePoint) Mul(a *curvePoint, scalar *big.Int) {
precomp := [1 << 2]*curvePoint{nil, {}, {}, {}}
precomp[1].Set(a)
precomp[2].Set(a)
gfpMul(&precomp[2].x, &precomp[2].x, xiTo2PSquaredMinus2Over3)
precomp[3].Add(precomp[1], precomp[2])
multiScalar := curveLattice.Multi(scalar)
sum := &curvePoint{}
sum.SetInfinity()
t := &curvePoint{}
for i := len(multiScalar) - 1; i >= 0; i-- {
t.Double(sum)
if multiScalar[i] == 0 {
sum.Set(t)
} else {
sum.Add(t, precomp[multiScalar[i]])
}
}
c.Set(sum)
}
// Transforms Jacobian coordinates to Affine coordinates
// (X' : Y' : Z) -> (X'/(Z^2) : Y'/(Z^3) : 1)
func (c *curvePoint) MakeAffine() {
// point0 := *newGFp(0)
// point1 := *newGFp(1)
if c.z == point1 {
return
} else if c.z == point0 { // return point at infinity if z = 0
c.x = gfP{0}
c.y = point1
c.t = gfP{0}
return
}
zInv := &gfP{}
zInv.Invert(&c.z)
t, zInv2 := &gfP{}, &gfP{}
gfpMul(t, &c.y, zInv) // t = y/z
gfpMul(zInv2, zInv, zInv) // zInv2 = 1/(z^2)
gfpMul(&c.x, &c.x, zInv2) // x = x/(z^2)
gfpMul(&c.y, t, zInv2) // y = y/(z^3)
c.z = point1
c.t = point1
}
func (c *curvePoint) Neg(a *curvePoint) {
c.x.Set(&a.x)
gfpNeg(&c.y, &a.y)
c.z.Set(&a.z)
c.t = gfP{0}
}
var point0 = *newGFp(0)
var point1 = *newGFp(1)
// G1Array performs batch inversions (Montgomery's batch inversion trick)
// and thus optimizes lookup table generation
type G1Array []*G1
func (points G1Array) MakeAffine() {
// point0 := *newGFp(0)
// point1 := *newGFp(1)
accum := newGFp(1)
var scratch_backup [256]gfP
var scratch []gfP
if len(points) <= 256 {
scratch = scratch_backup[:0] // avoid allocation if possible
}
for _, e := range points {
if e.p == nil {
e.p = &curvePoint{}
}
scratch = append(scratch, *accum)
if e.p.z == point1 {
continue
} else if e.p.z == point0 { // return point at infinity if z = 0
e.p.x = gfP{0}
e.p.y = point1
e.p.t = gfP{0}
continue
}
gfpMul(accum, accum, &e.p.z) // accum *= z
/*
zInv := &gfP{}
zInv.Invert(&e.p.z)
fmt.Printf("%d inv %s\n",i, zInv)
*/
}
zInv_accum := gfP{}
zInv_accum.Invert(accum)
tmp := gfP{}
zInv := &gfP{}
for i := len(points) - 1; i >= 0; i-- {
e := points[i]
if e.p.z == point1 {
continue
} else if e.p.z == point0 { // return point at infinity if z = 0
continue
}
tmp = gfP{}
gfpMul(&tmp, &zInv_accum, &e.p.z)
gfpMul(zInv, &zInv_accum, &scratch[i])
zInv_accum = tmp
// fmt.Printf("%d inv %s\n",i, zInv)
t, zInv2 := &gfP{}, &gfP{}
gfpMul(t, &e.p.y, zInv) // t = y/z
gfpMul(zInv2, zInv, zInv) // zInv2 = 1/(z^2)
gfpMul(&e.p.x, &e.p.x, zInv2) // x = x/(z^2)
gfpMul(&e.p.y, t, zInv2) // y = y/(z^3)
e.p.z = point1
e.p.t = point1
}
}

// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package bn256
import (
"crypto/rand"
"testing"
"github.com/stretchr/testify/require"
)
func TestG1Array(t *testing.T) {
count := 8
var g1array G1Array
var g1array_opt G1Array
for i := 0; i < count; i++ {
a, _ := rand.Int(rand.Reader, Order)
g1array = append(g1array, new(G1).ScalarBaseMult(a))
g1array_opt = append(g1array_opt, new(G1).ScalarBaseMult(a))
}
g1array_opt.MakeAffine()
for i := range g1array_opt {
require.Equal(t, g1array_opt[i].p.z, *newGFp(1)) // currently we are not testing points at infinity
}
}
func benchmarksingleinverts(count int, b *testing.B) {
var g1array, g1backup G1Array
for i := 0; i < count; i++ {
a, _ := rand.Int(rand.Reader, Order)
g1backup = append(g1backup, new(G1).ScalarBaseMult(a))
}
for n := 0; n < b.N; n++ {
g1array = g1array[:0]
for i := range g1backup {
g1array = append(g1array, new(G1).Set(g1backup[i]))
g1array[i].p.MakeAffine()
}
}
}
func benchmarkbatchedinverts(count int, b *testing.B) {
var g1array, g1backup G1Array
for i := 0; i < count; i++ {
a, _ := rand.Int(rand.Reader, Order)
g1backup = append(g1backup, new(G1).ScalarBaseMult(a))
}
for n := 0; n < b.N; n++ {
g1array = g1array[:0]
for i := range g1backup {
g1array = append(g1array, new(G1).Set(g1backup[i]))
}
g1array.MakeAffine()
}
}
func BenchmarkInverts_Single_256(b *testing.B) { benchmarksingleinverts(256, b) }
func BenchmarkInverts_Batched_256(b *testing.B) { benchmarkbatchedinverts(256, b) }

// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package bn256
import (
"crypto/rand"
"testing"
"github.com/stretchr/testify/require"
)
func TestExamplePair(t *testing.T) {
// This implements the tripartite Diffie-Hellman algorithm from "A One
// Round Protocol for Tripartite Diffie-Hellman", A. Joux.
// http://www.springerlink.com/content/cddc57yyva0hburb/fulltext.pdf
// Each of three parties, a, b and c, generate a private value.
a, _ := rand.Int(rand.Reader, Order)
b, _ := rand.Int(rand.Reader, Order)
c, _ := rand.Int(rand.Reader, Order)
// Then each party calculates g₁ and g₂ times their private value.
pa := new(G1).ScalarBaseMult(a)
qa := new(G2).ScalarBaseMult(a)
pb := new(G1).ScalarBaseMult(b)
qb := new(G2).ScalarBaseMult(b)
pc := new(G1).ScalarBaseMult(c)
qc := new(G2).ScalarBaseMult(c)
// Now each party exchanges its public values with the other two and
// all parties can calculate the shared key.
k1 := Pair(pb, qc)
k1.ScalarMult(k1, a)
k2 := Pair(pc, qa)
k2.ScalarMult(k2, b)
k3 := Pair(pa, qb)
k3.ScalarMult(k3, c)
// k1, k2 and k3 will all be equal.
require.Equal(t, k1, k2)
require.Equal(t, k1, k3)
require.Equal(t, len(np), 4) //Avoid gometalinter varcheck err on np
}

// Package bn256 implements a particular bilinear group at the 128-bit security
// level.
//
// Bilinear groups are the basis of many of the new cryptographic protocols that
// have been proposed over the past decade. They consist of a triplet of groups
// (G₁, G₂ and GT) such that there exists a function e(g₁ˣ,g₂ʸ)=gTˣʸ (where gₓ
// is a generator of the respective group). That function is called a pairing
// function.
//
// This package specifically implements the Optimal Ate pairing over a 256-bit
// Barreto-Naehrig curve as described in
// http://cryptojedi.org/papers/dclxvi-20100714.pdf. Its output is compatible
// with the implementation described in that paper.
package bn256
// This file implement some util functions for the MPC
// especially the serialization and deserialization functions for points in G1
import (
"errors"
"math/big"
)
// Constants related to the bn256 pairing friendly curve
const (
FqElementSize = 32
G1CompressedSize = FqElementSize + 1 // + 1 accounts for the additional byte used for masking
G1UncompressedSize = 2*FqElementSize + 1 // + 1 accounts for the additional byte used for masking
)
// https://github.com/ebfull/pairing/tree/master/src/bls12_381#serialization
// Bytes used to detect the formatting. By reading the first byte of the encoded point we can know it's nature
// ie: we can know if the point is the point at infinity, if it is encoded uncompressed or if it is encoded compressed
// Bit masking used to detect the serialization of the points and their nature
//
// The BLS12-381 curve is built over a 381-bit prime field.
// Each point coordinate is thus represented over 381 bits = 47 bytes + 5 bits,
// so a coordinate is stored in 48 bytes, with the 3 most significant bits of the leading byte left at 0.
// Those spare bits are the ones used to implement the masking, hence the flags proposed by ebfull:
const (
serializationMask = (1 << 5) - 1 // 0001 1111 // Keeps the 5 LSBs and clears the 3 MSBs used as serialization flags
serializationCompressed = 1 << 7 // 1000 0000
serializationInfinity = 1 << 6 // 0100 0000
serializationBigY = 1 << 5 // 0010 0000
)
// IsHigherY is used to distinguish between the 2 points of E
// that have the same x-coordinate
// The point e is assumed to be given in the affine form
func (e *G1) IsHigherY() bool {
// Check nil pointers
if e.p == nil {
e.p = &curvePoint{}
}
var yCoord gfP
//yCoord.Set(&e.p.y)
yCoord = e.p.y
var yCoordNeg gfP
gfpNeg(&yCoordNeg, &yCoord)
res := gfpCmp(&yCoord, &yCoordNeg)
if res == 1 { // yCoord > yCoordNeg
return true
} else if res == -1 {
return false
}
return false
}
// EncodeCompressed converts the point e into its compressed byte encoding
// This function takes a point in the Jacobian form
// This function does not modify the point e
// (the variable `temp` is introduced to avoid modifying e)
func (e *G1) EncodeCompressed() []byte {
// Check nil pointers
if e.p == nil {
e.p = &curvePoint{}
}
e.p.MakeAffine()
ret := make([]byte, G1CompressedSize)
// Flag the encoding with the compressed flag
ret[0] |= serializationCompressed
if e.p.IsInfinity() {
// Flag the encoding with the infinity flag
ret[0] |= serializationInfinity
return ret
}
if e.IsHigherY() {
// Flag the encoding with the bigY flag
ret[0] |= serializationBigY
}
// We start the serialization of the coordinates at index 1
// Since the index 0 in the `ret` corresponds to the masking
temp := &gfP{}
montDecode(temp, &e.p.x)
temp.Marshal(ret[1:])
return ret
}
// EncodeCompressedToBuf writes into the caller-provided buffer rather than allocating a new slice
func (e *G1) EncodeCompressedToBuf(ret []byte) {
// Check nil pointers
if e.p == nil {
e.p = &curvePoint{}
}
e.p.MakeAffine()
//ret := make([]byte, G1CompressedSize)
// Flag the encoding with the compressed flag
ret[0] |= serializationCompressed
if e.p.IsInfinity() {
// Flag the encoding with the infinity flag
ret[0] |= serializationInfinity
return
}
if e.IsHigherY() {
// Flag the encoding with the bigY flag
ret[0] |= serializationBigY
}
// We start the serialization of the coordinates at index 1
// Since the index 0 in the `ret` corresponds to the masking
temp := &gfP{}
montDecode(temp, &e.p.x)
temp.Marshal(ret[1:])
return
}
// EncodeUncompressed converts the point e into its uncompressed byte encoding
// Takes a point P in Jacobian form (where each coordinate is MontEncoded)
// and encodes it by going back to affine coordinates and montDecoding all coordinates
// This function does not modify the point e
// (the variable `temp` is introduced to avoid modifying e)
/*
func (e *G1) EncodeUncompressed() []byte {
// Check nil pointers
if e.p == nil {
e.p = &curvePoint{}
}
e.p.MakeAffine()
ret := make([]byte, G1UncompressedSize)
if e.p.IsInfinity() {
// Flag the encoding with the infinity flag
ret[0] |= serializationInfinity
return ret
}
// We start the serialization of the coordinates at the index 1
// Since the index 0 in the `ret` corresponds to the masking
temp := &gfP{}
montDecode(temp, &e.p.x) // Store the montgomery decoding in temp
temp.Marshal(ret[1:33]) // Write temp in the `ret` slice, this is the x-coordinate
montDecode(temp, &e.p.y)
temp.Marshal(ret[33:]) // this is the y-coordinate
return ret
}
*/
func (e *G1) EncodeUncompressed() []byte {
// Check nil pointers
if e.p == nil {
e.p = &curvePoint{}
}
// Set the right flags
ret := make([]byte, G1UncompressedSize)
if e.p.IsInfinity() {
// Flag the encoding with the infinity flag
ret[0] |= serializationInfinity
return ret
}
// Marshal
marshal := e.Marshal()
// The encoding = flags || marshalledPoint
copy(ret[1:], marshal)
return ret
}
// Takes a MontEncoded x and finds the corresponding y (one of the two possible y's)
func getYFromMontEncodedX(x *gfP) (*gfP, error) {
// Check nil pointers
if x == nil {
return nil, errors.New("Cannot retrieve the y-coordinate from a nil pointer")
}
// Operations on montgomery encoded field elements
x2 := &gfP{}
gfpMul(x2, x, x)
x3 := &gfP{}
gfpMul(x3, x2, x)
rhs := &gfP{}
gfpAdd(rhs, x3, curveB) // curveB is MontEncoded, since it is created with newGFp
// Montgomery decode rhs
// Needed because when we create a GFp element
// with gfP{}, then it is not montEncoded. However
// if we create an element of GFp by using `newGFp()`
// then this field element is Montgomery encoded
// Above, we have been working on Montgomery encoded field elements
// here we solve the quad. resid. over F (not encoded)
// and then we encode back and return the encoded result
//
// Eg:
// - Px := &gfP{1} => 0000000000000000000000000000000000000000000000000000000000000001
// - PxNew := newGFp(1) => 0e0a77c19a07df2f666ea36f7879462c0a78eb28f5c70b3dd35d438dc58f0d9d
montDecode(rhs, rhs)
rhsBig, err := rhs.gFpToBigInt()
if err != nil {
return nil, err
}
// Note, if we use the ModSqrt method, we don't need the exponent, so we can comment these lines
yCoord := big.NewInt(0)
res := yCoord.ModSqrt(rhsBig, P)
if res == nil {
return nil, errors.New("not a square mod P")
}
yCoordGFp := newGFpFromBigInt(yCoord)
montEncode(yCoordGFp, yCoordGFp)
return yCoordGFp, nil
}
// DecodeCompressed decodes a point in the compressed form
// Takes a point P encoded in compressed form (ie: written in affine form where each coordinate is MontDecoded)
// and decodes it by going back to Jacobian coordinates and montEncoding all coordinates
func (e *G1) DecodeCompressed(encoding []byte) error {
if len(encoding) != G1CompressedSize {
return errors.New("wrong encoded point size")
}
if encoding[0]&serializationCompressed == 0 { // Also test the length of the encoding to make sure it is 33bytes
return errors.New("point isn't compressed")
}
// Unmarshal the points and check their caps
if e.p == nil {
e.p = &curvePoint{}
}
{
e.p.x, e.p.y = gfP{0}, gfP{0}
e.p.z, e.p.t = *newGFp(1), *newGFp(1)
}
// Removes the bits of the masking (This does a bitwise AND with `0001 1111`)
// And thus removes the first 3 bits corresponding to the masking
bin := make([]byte, G1CompressedSize)
copy(bin, encoding)
bin[0] &= serializationMask
// Decode the point at infinity in the compressed form
if encoding[0]&serializationInfinity != 0 {
if encoding[0]&serializationBigY != 0 {
return errors.New("high Y bit improperly set")
}
// Similar to `for i:=0; i<len(bin); i++ {}`
for i := range bin {
// Makes sense to check that all bytes of bin are 0x0 since we removed the masking above
if bin[i] != 0 {
return errors.New("invalid infinity encoding")
}
}
e.p.SetInfinity()
//panic("point is infinity")
return nil
}
// Decompress the point P (P =/= ∞)
var err error
if err = e.p.x.Unmarshal(bin[1:]); err != nil {
return err
}
// MontEncode our field elements for fast finite field arithmetic
// Needs to be done since the z and t coordinates are also encoded (ie: created with newGFp)
montEncode(&e.p.x, &e.p.x)
y, err := getYFromMontEncodedX(&e.p.x)
if err != nil {
return err
}
e.p.y = *y
// The flag serializationBigY is set (so the point pt with the higher Y is encoded)
// but the point e retrieved from the `getYFromX` is NOT the higher, then we inverse
if !e.IsHigherY() {
if encoding[0]&serializationBigY != 0 {
e.Neg(e)
}
} else {
if encoding[0]&serializationBigY == 0 { // The point given by getYFromX is the higher but the mask is not set for higher y
e.Neg(e)
}
}
// No need to check that the point e.p is on the curve
// since we retrieved y from x by using the curve equation.
// Adding it would be redundant
return nil
}
// DecodeUncompressed decodes a point in the uncompressed form
// Takes a point P encoded in uncompressed form (ie: written in affine form where each coordinate is MontDecoded)
// and decodes it by going back to Jacobian coordinates and montEncoding all coordinates
/*
func (e *G1) DecodeUncompressed(encoding []byte) error {
if len(encoding) != G1UncompressedSize {
return errors.New("wrong encoded point size")
}
if encoding[0]&serializationCompressed != 0 { // Also test the length of the encoding to make sure it is 65bytes
return errors.New("point is compressed")
}
if encoding[0]&serializationBigY != 0 { // Also test that the bigY flag if not set
return errors.New("bigY flag should not be set")
}
// Unmarshal the points and check their caps
if e.p == nil {
e.p = &curvePoint{}
} else {
e.p.x, e.p.y = gfP{0}, gfP{0}
e.p.z, e.p.t = *newGFp(1), *newGFp(1)
}
// Removes the bits of the masking (This does a bitwise AND with `0001 1111`)
// And thus removes the first 3 bits corresponding to the masking
// Useless for now because in bn256, we added a full byte to enable masking
// However, this is needed if we work over BLS12 and its underlying field
bin := make([]byte, G1UncompressedSize)
copy(bin, encoding)
bin[0] &= serializationMask
// Decode the point at infinity in the compressed form
if encoding[0]&serializationInfinity != 0 {
// Makes sense to check that all bytes of bin are 0x0 since we removed the masking above}
for i := range bin {
if bin[i] != 0 {
return errors.New("invalid infinity encoding")
}
}
e.p.SetInfinity()
return nil
}
// Decode the point P (P =/= ∞)
var err error
// Decode the x-coordinate
if err = e.p.x.Unmarshal(bin[1:33]); err != nil {
return err
}
// Decode the y-coordinate
if err = e.p.y.Unmarshal(bin[33:]); err != nil {
return err
}
// MontEncode our field elements for fast finite field arithmetic
montEncode(&e.p.x, &e.p.x)
montEncode(&e.p.y, &e.p.y)
if !e.p.IsOnCurve() {
return errors.New("malformed point: Not on the curve")
}
return nil
}
*/
func (e *G1) DecodeUncompressed(encoding []byte) error {
if len(encoding) != G1UncompressedSize {
return errors.New("wrong encoded point size")
}
if encoding[0]&serializationCompressed != 0 { // Also test the length of the encoding to make sure it is 65bytes
return errors.New("point is compressed")
}
if encoding[0]&serializationBigY != 0 { // Also check that the bigY flag is not set
return errors.New("bigY flag should not be set")
}
// Unmarshal the points and check their caps
if e.p == nil {
e.p = &curvePoint{}
}
// Removes the bits of the masking (This does a bitwise AND with `0001 1111`)
// And thus removes the first 3 bits corresponding to the masking
// Useless for now because in bn256, we added a full byte to enable masking
// However, this is needed if we work over BLS12 and its underlying field
bin := make([]byte, G1UncompressedSize)
copy(bin, encoding)
bin[0] &= serializationMask
// Decode the point at infinity in the compressed form
if encoding[0]&serializationInfinity != 0 {
// Makes sense to check that all bytes of bin are 0x0 since we removed the masking above
for i := range bin {
if bin[i] != 0 {
return errors.New("invalid infinity encoding")
}
}
e.p.SetInfinity()
return nil
}
// We remove the flags and unmarshal the data
_, err := e.Unmarshal(encoding[1:])
return err
}

package bn256
import (
"crypto/rand"
"fmt"
"math/big"
"testing"
"github.com/stretchr/testify/assert"
)
func assertGFpEqual(t *testing.T, a, b *gfP) {
for i := 0; i < FpUint64Size; i++ {
assert.Equal(t, a[i], b[i], fmt.Sprintf("The %d's elements differ between the 2 field elements", i))
}
}
func TestEncodeCompressed(t *testing.T) {
// Case1: Create random point (Jacobian form)
_, GaInit, err := RandomG1(rand.Reader)
if err != nil {
t.Fatal(err)
}
// Affine form of GaInit
GaAffine := new(G1)
GaAffine.Set(GaInit)
GaAffine.p.MakeAffine()
// Encode GaCopy1 with the EncodeCompress function
GaCopy1 := new(G1)
GaCopy1.Set(GaInit)
compressed := GaCopy1.EncodeCompressed()
// Encode GaCopy2 with the Marshal function
GaCopy2 := new(G1)
GaCopy2.Set(GaInit)
marshalled := GaCopy2.Marshal() // Careful Marshal modifies the point since it makes it an affine point!
// Make sure that the x-coordinate is encoded as it is when we call the Marshal function
assert.Equal(
t,
compressed[1:], // Ignore the masking byte
marshalled[:32], // Get only the x-coordinate
"The EncodeCompressed and Marshal function yield different results for the x-coordinate")
// Unmarshal the point Ga with the unmarshal function
Gb1 := new(G1)
_, err = Gb1.Unmarshal(marshalled)
assert.Nil(t, err)
assert.Equal(t, GaAffine.p.x.String(), Gb1.p.x.String(), "The x-coord of the unmarshalled point should equal the x-coord of the initial point")
assert.Equal(t, GaAffine.p.y.String(), Gb1.p.y.String(), "The y-coord of the unmarshalled point should equal the y-coord of the initial point")
// Decode the point Ga with the decodeCompress function
Gb2 := new(G1)
err = Gb2.DecodeCompressed(compressed)
assert.Nil(t, err)
assert.Equal(t, GaAffine.p.x.String(), Gb2.p.x.String(), "The x-coord of the decompressed point should equal the x-coord of the initial point")
assert.Equal(t, GaAffine.p.y.String(), Gb2.p.y.String(), "The y-coord of the decompressed point should equal the y-coord of the initial point")
// Case2: Encode the point at infinity
GInfinity := new(G1)
GInfinity.p = &curvePoint{}
GInfinity.p.SetInfinity()
// Get the point in affine form
GInfinityAffine := new(G1)
GInfinityAffine.Set(GInfinity)
GInfinityAffine.p.MakeAffine()
// Encode GaCopy1 with the EncodeCompress function
GInfinityCopy1 := new(G1)
GInfinityCopy1.Set(GInfinity)
compressed = GInfinityCopy1.EncodeCompressed()
// Encode GaCopy2 with the Marshal function
GInfinityCopy2 := new(G1)
GInfinityCopy2.Set(GInfinity)
marshalled = GInfinityCopy2.Marshal() // Careful Marshal modifies the point since it makes it an affine point!
// Make sure that the x-coordinate is encoded as it is when we call the Marshal function
assert.Equal(
t,
compressed[1:], // Ignore the masking byte
marshalled[:32],
"The EncodeCompressed and Marshal function yield different results")
// Unmarshal the point Ga with the unmarshal function
Gb1 = new(G1)
_, err = Gb1.Unmarshal(marshalled)
assert.Nil(t, err)
assert.Equal(t, GInfinityAffine.p.x.String(), Gb1.p.x.String(), "The x-coord of the unmarshalled point should equal the x-coord of the initial point")
assert.Equal(t, GInfinityAffine.p.y.String(), Gb1.p.y.String(), "The y-coord of the unmarshalled point should equal the y-coord of the initial point")
// Decode the point Ga with the decodeCompress function
Gb2 = new(G1)
err = Gb2.DecodeCompressed(compressed)
assert.Nil(t, err)
assert.Equal(t, GInfinityAffine.p.x.String(), Gb2.p.x.String(), "The x-coord of the decompressed point should equal the x-coord of the initial point")
assert.Equal(t, GInfinityAffine.p.y.String(), Gb2.p.y.String(), "The y-coord of the decompressed point should equal the y-coord of the initial point")
}
func TestIsHigherY(t *testing.T) {
_, Ga, err := RandomG1(rand.Reader)
if err != nil {
t.Fatal(err)
}
Ga.p.MakeAffine()
GaYString := Ga.p.y.String()
GaYBig := new(big.Int)
_, ok := GaYBig.SetString(GaYString, 16)
assert.True(t, ok, "ok should be True")
GaNeg := new(G1)
GaNeg.Neg(Ga)
GaNeg.p.MakeAffine()
GaNegYString := GaNeg.p.y.String()
GaNegYBig := new(big.Int)
_, ok = GaNegYBig.SetString(GaNegYString, 16)
assert.True(t, ok, "ok should be True")
// Verify that Ga.p.y + GaNeg.p.y == 0
sumYs := &gfP{}
fieldZero := newGFp(0)
gfpAdd(sumYs, &Ga.p.y, &GaNeg.p.y)
assert.Equal(t, *sumYs, *fieldZero, "The y-coordinates of P and -P should add up to zero")
// Find which point between Ga and GaNeg is the one with the higher Y
res := gfpCmp(&GaNeg.p.y, &Ga.p.y)
if res > 0 { // GaNeg.p.y > Ga.p.y
assert.True(t, GaNeg.IsHigherY(), "GaNeg.IsHigherY should be true if GaNeg.p.y > Ga.p.y")
// Also test the comparison of the big.Ints, which should give the same result
assert.Equal(t, GaNegYBig.Cmp(GaYBig), 1, "GaNegYBig should be bigger than GaYBig")
} else if res < 0 { // GaNeg.p.y < Ga.p.y
assert.False(t, GaNeg.IsHigherY(), "GaNeg.IsHigherY should be false if GaNeg.p.y < Ga.p.y")
// Also test the comparison of the big.Ints, which should give the same result
assert.Equal(t, GaYBig.Cmp(GaNegYBig), 1, "GaYBig should be bigger than GaNegYBig")
}
}
func TestGetYFromMontEncodedX(t *testing.T) {
// We know that the generator of the curve is P = (x: 1, y: 2, z: 1, t: 1)
// We take x = 1 and we see if we retrieve P such that y = 2, or -P such that y' = Neg(2)
// Create the GFp element 1 and MontEncode it
PxMontEncoded := newGFp(1)
yRetrieved, err := getYFromMontEncodedX(PxMontEncoded)
assert.Nil(t, err)
smallYMontEncoded := newGFp(2)
bigYMontEncoded := &gfP{}
gfpNeg(bigYMontEncoded, smallYMontEncoded)
testCondition := (*yRetrieved == *smallYMontEncoded) || (*yRetrieved == *bigYMontEncoded)
assert.True(t, testCondition, "The retrieved Y should equal either 2 or Neg(2)")
}
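// As a standalone illustration of the y-recovery idea tested above, here is a
// sketch with math/big over the raw (non-Montgomery) BN254 base field, where
// the curve equation is y^2 = x^3 + 3 mod p. recoverYDemo is a hypothetical
// helper, not part of this package; it skips the MontEncoding entirely.
func recoverYDemo(x *big.Int) *big.Int {
	// p is the BN254 base-field prime, i.e. the modulus underlying gfP.
	p, _ := new(big.Int).SetString("21888242871839275222246405745257275088696311157297823662689037894645226208583", 10)
	// rhs = x^3 + 3 mod p
	rhs := new(big.Int).Exp(x, big.NewInt(3), p)
	rhs.Add(rhs, big.NewInt(3))
	rhs.Mod(rhs, p)
	// ModSqrt returns nil when rhs is a quadratic non-residue,
	// i.e. when x is not the x-coordinate of any curve point.
	return new(big.Int).ModSqrt(rhs, p)
}

// For x = 1 this returns one of the two roots of y^2 = 4, i.e. 2 or p-2,
// matching the "2 or Neg(2)" outcome checked in the test above.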
func TestEncodeUncompressed(t *testing.T) {
// Case1: Create random point (Jacobian form)
_, GaInit, err := RandomG1(rand.Reader)
if err != nil {
t.Fatal(err)
}
// Affine form of GaInit
GaAffine := new(G1)
GaAffine.Set(GaInit)
GaAffine.p.MakeAffine()
// Encode GaCopy1 with the EncodeUncompressed function
GaCopy1 := new(G1)
GaCopy1.Set(GaInit)
encoded := GaCopy1.EncodeUncompressed()
// Encode GaCopy2 with the Marshal function
GaCopy2 := new(G1)
GaCopy2.Set(GaInit)
marshalled := GaCopy2.Marshal() // Careful: Marshal modifies the point, since it converts it to affine form!
// Make sure that the x-coordinate is encoded the same way as by the Marshal function
assert.Equal(
t,
encoded[1:], // Ignore the masking byte
marshalled[:],
"The EncodeUncompressed and Marshal functions yield different results")
// Unmarshal the point Ga with the unmarshal function
Gb1 := new(G1)
_, err = Gb1.Unmarshal(marshalled)
assert.Nil(t, err)
assert.Equal(t, GaAffine.p.x.String(), Gb1.p.x.String(), "The x-coord of the unmarshalled point should equal the x-coord of the initial point")
assert.Equal(t, GaAffine.p.y.String(), Gb1.p.y.String(), "The y-coord of the unmarshalled point should equal the y-coord of the initial point")
// Decode the point Ga with the DecodeUncompressed function
Gb2 := new(G1)
err = Gb2.DecodeUncompressed(encoded)
assert.Nil(t, err)
assert.Equal(t, GaAffine.p.x.String(), Gb2.p.x.String(), "The x-coord of the decoded point should equal the x-coord of the initial point")
assert.Equal(t, GaAffine.p.y.String(), Gb2.p.y.String(), "The y-coord of the decoded point should equal the y-coord of the initial point")
// Case2: Encode the point at infinity
GInfinity := new(G1)
GInfinity.p = &curvePoint{}
GInfinity.p.SetInfinity()
// Get the point in affine form
GInfinityAffine := new(G1)
GInfinityAffine.Set(GInfinity)
GInfinityAffine.p.MakeAffine()
// Encode GInfinityCopy1 with the EncodeUncompressed function
GInfinityCopy1 := new(G1)
GInfinityCopy1.Set(GInfinity)
encoded = GInfinityCopy1.EncodeUncompressed()
// Encode GInfinityCopy2 with the Marshal function
GInfinityCopy2 := new(G1)
GInfinityCopy2.Set(GInfinity)
marshalled = GInfinityCopy2.Marshal() // Careful: Marshal modifies the point, since it converts it to affine form!
// Make sure that the x-coordinate is encoded the same way as by the Marshal function
assert.Equal(
t,
encoded[1:], // Ignore the masking byte
marshalled[:],
"The EncodeUncompressed and Marshal functions yield different results")
// Unmarshal the point at infinity with the Unmarshal function
Gb1 = new(G1)
_, err = Gb1.Unmarshal(marshalled)
assert.Nil(t, err)
assert.Equal(t, GInfinityAffine.p.x.String(), Gb1.p.x.String(), "The x-coord of the unmarshalled point should equal the x-coord of the initial point")
assert.Equal(t, GInfinityAffine.p.y.String(), Gb1.p.y.String(), "The y-coord of the unmarshalled point should equal the y-coord of the initial point")
// Decode the point at infinity with the DecodeUncompressed function
Gb2 = new(G1)
err = Gb2.DecodeUncompressed(encoded)
assert.Nil(t, err)
assert.Equal(t, GInfinityAffine.p.x.String(), Gb2.p.x.String(), "The x-coord of the decoded point should equal the x-coord of the initial point")
assert.Equal(t, GInfinityAffine.p.y.String(), Gb2.p.y.String(), "The y-coord of the decoded point should equal the y-coord of the initial point")
}

// Package bn256 implements a particular bilinear group at the 128-bit security
// level.
//
// Bilinear groups are the basis of many of the new cryptographic protocols that
// have been proposed over the past decade. They consist of a triplet of groups
// (G₁, G₂ and GT) such that there exists a function e(g₁ˣ,g₂ʸ)=gTˣʸ (where gₓ
// is a generator of the respective group). That function is called a pairing
// function.
//
// This package specifically implements the Optimal Ate pairing over a 256-bit
// Barreto-Naehrig curve as described in
// http://cryptojedi.org/papers/dclxvi-20100714.pdf. Its output is compatible
// with the implementation described in that paper.
package bn256
import (
"errors"
)
// This file implements some utility functions for the MPC,
// especially the serialization and deserialization functions for points in G2
// Constants related to the bn256 pairing friendly curve
const (
Fq2ElementSize = 2 * FqElementSize
G2CompressedSize = Fq2ElementSize + 1 // + 1 accounts for the additional byte used for masking
G2UncompressedSize = 2*Fq2ElementSize + 1 // + 1 accounts for the additional byte used for masking
)
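// The constants above fix the encoded sizes; the first byte of every encoding
// carries the serialization flags in its top three bits, leaving the mask
// 0001 1111 for the data. A minimal standalone sketch of that flag scheme
// (flagByteDemo is a hypothetical helper; the bit values 0x80/0x40/0x20 are
// assumptions mirroring the usual layout, the real constants live in the G1
// counterpart of this file):
func flagByteDemo(compressed, infinity, bigY bool) byte {
	var b byte
	if compressed {
		b |= 0x80 // assumed value of serializationCompressed
	}
	if infinity {
		b |= 0x40 // assumed value of serializationInfinity
	}
	if bigY {
		b |= 0x20 // assumed value of serializationBigY
	}
	return b
}

// Stripping the flags is a bitwise AND with the mask 0x1f,
// which is exactly what `bin[0] &= serializationMask` does below.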
// EncodeUncompressed converts the point e into its uncompressed byte encoding.
// It takes a point P in Jacobian form (where each coordinate is MontEncoded)
// and encodes it by going back to affine coordinates and MontDecoding all coordinates.
// Note: like Marshal (which it calls), this converts the receiver to affine form as a side effect
func (e *G2) EncodeUncompressed() []byte {
// Check nil pointers
if e.p == nil {
e.p = &twistPoint{}
}
// Set the right flags
ret := make([]byte, G2UncompressedSize)
if e.p.IsInfinity() {
// Flag the encoding with the infinity flag
ret[0] |= serializationInfinity
return ret
}
// Marshal
marshal := e.Marshal()
// The encoding = flags || marshalledPoint
copy(ret[1:], marshal)
return ret
}
// DecodeUncompressed decodes a point in the uncompressed form.
// It takes an encoded point P (ie: written in affine form where each coordinate is MontDecoded)
// and decodes it by going back to Jacobian coordinates and MontEncoding all coordinates
func (e *G2) DecodeUncompressed(encoding []byte) error {
if len(encoding) != G2UncompressedSize {
return errors.New("wrong encoded point size")
}
if encoding[0]&serializationCompressed != 0 { // The compressed flag must not be set in an uncompressed encoding
return errors.New("point is compressed")
}
if encoding[0]&serializationBigY != 0 { // The bigY flag must not be set in an uncompressed encoding
return errors.New("bigY flag should not be set")
}
// Unmarshal the points and check their caps
if e.p == nil {
e.p = &twistPoint{}
}
// Removes the bits of the masking (This does a bitwise AND with `0001 1111`)
// And thus removes the first 3 bits corresponding to the masking
// Useless for now because in bn256, we added a full byte to enable masking
// However, this is needed if we work over BLS12 and its underlying field
bin := make([]byte, G2UncompressedSize)
copy(bin, encoding)
bin[0] &= serializationMask
// Decode the point at infinity in the compressed form
if encoding[0]&serializationInfinity != 0 {
// Makes sense to check that all bytes of bin are 0x0 since we removed the masking above
for i := range bin {
if bin[i] != 0 {
return errors.New("invalid infinity encoding")
}
}
e.p.SetInfinity()
return nil
}
// We remove the flags and unmarshal the data
_, err := e.Unmarshal(encoding[1:])
return err
}
func (e *G2) IsHigherY() bool {
// Check nil pointers
if e.p == nil {
e.p = &twistPoint{}
e.p.MakeAffine()
}
// Note: the structure's attributes are quite confusing here
// In fact, each element of Fp2 is a polynomial with 2 terms
// the `x` and `y` denote these coefficients, ie: xi + y
// However, `x` and `y` are also used to denote the x and y **coordinates**
// of an elliptic curve point. Hence, e.p.y represents the y-coordinate of the
// point e, and e.p.y.y represents the **coefficient** y of the y-coordinate
// of the elliptic curve point e.
//
// TODO: Rename the coefficients of the elements of Fp2 as c0 and c1 to clarify the code
yCoordY := &gfP{}
yCoordY.Set(&e.p.y.y)
yCoordYNeg := &gfP{}
gfpNeg(yCoordYNeg, yCoordY)
res := gfpCmp(yCoordY, yCoordYNeg)
// res == 1 iff yCoordY > yCoordYNeg; for res == -1 or res == 0 the point is not the higher one
return res == 1
}
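// IsHigherY above compares y against its field negation -y = p - y. The same
// test, sketched standalone with math/big (isHigherYDemo is a hypothetical
// helper, not part of this package): exactly one of {y, p-y} compares as
// higher, except when y == 0, where neither does.
func isHigherYDemo(y, p *big.Int) bool {
	// Compute the field negation -y = p - y mod p
	neg := new(big.Int).Sub(p, y)
	neg.Mod(neg, p)
	// y is "higher" iff it is strictly greater than its negation as an integer
	return y.Cmp(neg) > 0
}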
func (e *G2) EncodeCompressed() []byte {
// Check nil pointers
if e.p == nil {
e.p = &twistPoint{}
}
e.p.MakeAffine()
ret := make([]byte, G2CompressedSize)
// Flag the encoding with the compressed flag
ret[0] |= serializationCompressed
if e.p.IsInfinity() {
// Flag the encoding with the infinity flag
ret[0] |= serializationInfinity
return ret
}
if e.IsHigherY() {
// Flag the encoding with the bigY flag
ret[0] |= serializationBigY
}
// We start the serialization of the coordinates at the index 1
// Since the index 0 in the `ret` corresponds to the masking
//
// `temp` contains the x-coordinate of the point
// Thus, to fully encode `temp`, we need to Marshal its x and y coefficients
temp := gfP2Decode(&e.p.x)
temp.x.Marshal(ret[1:])
temp.y.Marshal(ret[FqElementSize+1:])
return ret
}
// Takes a MontEncoded x and finds the corresponding y (one of the two possible y's)
func getYFromMontEncodedXG2(x *gfP2) (*gfP2, error) {
// Check nil pointers
if x == nil {
return nil, errors.New("Cannot retrieve the y-coordinate from a nil pointer")
}
x2 := new(gfP2).Mul(x, x)
x3 := new(gfP2).Mul(x2, x)
rhs := new(gfP2).Add(x3, twistB) // twistB is MontEncoded, since it is created with newGFp
yCoord, err := rhs.Sqrt()
if err != nil {
return nil, err
}
return yCoord, nil
}
// DecodeCompressed decodes a point in the compressed form.
// It takes an encoded point P in G2 (ie: written in affine form where each coordinate is MontDecoded)
// and decodes it by going back to Jacobian coordinates and MontEncoding all coordinates
func (e *G2) DecodeCompressed(encoding []byte) error {
if len(encoding) != G2CompressedSize {
return errors.New("wrong encoded point size")
}
if encoding[0]&serializationCompressed == 0 { // The compressed flag must be set in a compressed encoding
return errors.New("point isn't compressed")
}
// Unmarshal the points and check their caps
if e.p == nil {
e.p = &twistPoint{}
} else {
e.p.x.SetZero()
e.p.y.SetZero()
e.p.z.SetOne()
e.p.t.SetOne()
}
// Removes the bits of the masking (This does a bitwise AND with `0001 1111`)
// And thus removes the first 3 bits corresponding to the masking
bin := make([]byte, G2CompressedSize)
copy(bin, encoding)
bin[0] &= serializationMask
// Decode the point at infinity in the compressed form
if encoding[0]&serializationInfinity != 0 {
if encoding[0]&serializationBigY != 0 {
return errors.New("high Y bit improperly set")
}
// Similar to `for i:=0; i<len(bin); i++ {}`
for i := range bin {
// Makes sense to check that all bytes of bin are 0x0 since we removed the masking above
if bin[i] != 0 {
return errors.New("invalid infinity encoding")
}
}
e.p.SetInfinity()
return nil
}
// Decompress the point P (P =/= ∞)
var err error
if err = e.p.x.x.Unmarshal(bin[1:]); err != nil {
return err
}
if err = e.p.x.y.Unmarshal(bin[FqElementSize+1:]); err != nil {
return err
}
// MontEncode our field elements for fast finite field arithmetic
// Needs to be done since the z and t coordinates are also encoded (ie: created with newGFp)
montEncode(&e.p.x.x, &e.p.x.x)
montEncode(&e.p.x.y, &e.p.x.y)
y, err := getYFromMontEncodedXG2(&e.p.x)
if err != nil {
return err
}
e.p.y = *y
// If the serializationBigY flag is set (ie: the point with the higher Y was encoded)
// but the point e retrieved from getYFromMontEncodedXG2 is NOT the higher one, we negate it
if !e.IsHigherY() {
if encoding[0]&serializationBigY != 0 {
e.Neg(e)
}
} else {
if encoding[0]&serializationBigY == 0 { // The point given by getYFromX is the higher but the mask is not set for higher y
e.Neg(e)
}
}
// No need to check that the point e.p is on the curve
// since we retrieved y from x by using the curve equation.
// Adding it would be redundant
return nil
}
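// The infinity branch of DecodeCompressed above insists that, once the flag
// bits are masked off, every byte of the encoding is zero. A standalone sketch
// of that validation step (validInfinityEncoding is a hypothetical helper;
// the 0x1f mask is assumed to match serializationMask):
func validInfinityEncoding(enc []byte) bool {
	// Work on a copy so the caller's slice is left untouched,
	// just as DecodeCompressed copies `encoding` into `bin`
	bin := make([]byte, len(enc))
	copy(bin, enc)
	bin[0] &= 0x1f // strip the three flag bits
	for _, b := range bin {
		if b != 0 {
			return false // a non-zero payload is not a valid point at infinity
		}
	}
	return true
}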
