DERO-HE STARGATE Testnet Release35

This commit is contained in:
Captain 2021-12-04 16:42:11 +00:00
parent afaa747e94
commit f0d3e7a6e8
No known key found for this signature in database
GPG Key ID: 18CDB3ED5E85D2D4
4766 changed files with 1379954 additions and 1 deletions

Captain_Dero_pub.txt Normal file

@ -0,0 +1,52 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQSuBFpgP9IRDAC5HFDj9beW/6THlCHMPmjSCUeT0lKtT22uHbTA5CpZFTRvrjF8
l1QFpECuax2LiQUWCg2rl5LZtjE2BL53uNhPagGiUOnMC7w50i3YD/KWoanM9or4
8uNmkYRp7pgnjQKX+NK9TWJmLE94UMUgCUach+WXRG4ito/mc2U2A37Lonokpjb2
hnc3d2wSESg+N0Am91TNSiEo80/JVRcKlttyEHJo6FE1sW5Ll84hW8QeROwYa/kU
N8/jAAVTUc2KzMKknlVlGYRcfNframwCu2xUMlyX5Ghjrr3PmLgQX3qc3k/eTwAr
fHifdvZnsBTquLuOxFHk0xlvdSyoGeX3F0LKAXw1+Y6uyX9v7F4Ap7vEGsuCWfNW
hNIayxIM8iOeb6AOFQycL/GkI0Mv+SCd/8KqdAHT8FWjsJUnOWcYYKvFdN5jcORw
C6OVxf296Sj1Zrti6XVQv63/iaJ9at142AcVwbnvaR2h5IqyXdmzmszmoYVvf7jG
JVsmkwTrRvIgyMcBAOLrwQ7I4JGlL54nKr1mIvGRLZ2lH/2sfM2QHcTgcCQ5DACi
P0wOKlt6UgRQ27Aeh0LtOuFuZReXE8dIpD8f6l+zLS5Kii1SB1yffeSsQbTD6bvt
Ic6h88iUKypNHiFcFNncyad6f4zFYPB1ULXyFoZcpPo3jKjwNW/h//AymgfbqFUa
4dWgdVhdkSKB1BzSMamxKSv9O87Q/Zc2vTcA/0j9RjPsrRIfOCziob+kIcpuylA9
a71R9dJ7r2ivwvdOK2De/VHkEanM8qyPgmxdD03jLsx159fX7B9ItSdxg5i0K9sV
6mgfyGiHETminsW28f36O/WMH0SUnwjdG2eGJsZE2IOS/BqTXHRXQeFVR4b44Ubg
U9h8moORPxc1+/0IFN2Bq4AiLQZ9meCtTmCe3QHOWbKRZ3JydMpoohdU3l96ESXl
hNpD6C+froqQgemID51xe3iPRY947oXjeTD87AHDBcLD/vwE6Ys2Vi9mD5bXwoym
hrXCIh+v823HsJSQiN8QUDFfIMIgbATNemJTXs84EnWwBGLozvmuUvpVWXZSstcL
/ROivKTKRkTYqVZ+sX/yXzQM5Rp2LPF13JDeeATwrgTR9j8LSiycOOFcp3n+ndvy
tNg+GQAKYC5NZWL/OrrqRuFmjWkZu0234qZIFd0/oUQ5tqDGwy84L9f6PGPvshTR
yT6B4FpOqvPt10OQFfpD/h9ocFguNBw0AELjXUHk89bnBTU5cKGLkb1iOnGwtAgJ
mV6MJRjS/TKL6Ne2ddiv46fXlY05zJfg0ZHehe49BIZXQK8/9h5YJGmtcUZP19+6
xPTF5zXWs0k3yzoTGP2iCW/Ksf6b0t0fIIASGFAhQJUmGW1lKAcZTTt425G3NYOc
jmhJaFzcLpTnoqB8RKOTUzWXESXmA86cq4DtyQ2yzeLKBkroRGdpwvpZLH3MeDJ4
EIWSmcKPxm8oafMk6Ni9I4qQLFeSTHcF2qFoBMLKai1lqLd+NAzQmbXHDw6gOac8
+DBfIcaj0f5AK/0G39dOV+pg29pISt2PWDDhZ/XsjetrqcrnhsqNNRyplmmy0xR0
srQwQ2FwdGFpbiBEZXJvIChodHRwczovL2Rlcm8uaW8pIDxzdXBwb3J0QGRlcm8u
aW8+iJAEExEIADgWIQQPOeQljGU5R3AqgjQIsgNgoDqd6AUCWmA/0gIbAwULCQgH
AgYVCAkKCwIEFgIDAQIeAQIXgAAKCRAIsgNgoDqd6FYnAQChtgDnzVwe28s6WDTK
4bBa60dSZf1T08PCKl3+c3xx1QEA2R9K2CLQ6IsO9NXD5kA/pTQs5AxYc9bLo/eD
CZSe/4u5Aw0EWmA/0hAMALjwoBe35jZ7blE9n5mg6e57H0Bri43dkGsQEQ1fNaDq
7XByD0JAiZ20vrrfDsbXZQc+1SBGGOa38pGi6RKEf/q4krGe7EYx4hihHQuc+hco
PqOs6rN3+hfHerUolKpYlkGOSxO1ZjpvMOPBF1hz0Bj9NoPMWwVb5fdWis2BzKAu
GHFAX5Ls86KKZs19DRejWsdFtytEiqM7bAjUW75o3O24faxtByTa2SVmmkavCFS4
BpjDhIU2d5RqhJRkb9fqBU8MDFrmCQqSraQs/CqmOTYzM7E8wlk1SwylXN6yBFX3
RAwq1koFMw8yRMVzswEy917kTHS4IyM2yfYjbnENmWJuHiYJmgn8Lqw1QA3syIfP
E4qpzGBTBq3YXXOSymsNKZmKH0rK/G0l3p33rIagl5UXfr1LVd5XJRu6BzjKuk+q
uL3zb6d0ZSaT+aQ/Sju3shhWjGdCRVoT1shvBbQeyEU5ZLe5by6sp0FH9As3hRkN
0PDALEkhgQwl5hU8aIkwewADBQv/Xt31aVh+k/l+CwThAt9rMCDf2PQl0FKDH0pd
7Tcg1LgbqM20sF62PeLpRq+9iMe/pD/rNDEq94ANnCoqC5yyZvxganjG2Sxryzwc
jseZeq3t/He8vhiDxs3WwFbJSylzPG3u9xgyGkKDfGA74Iu+ASPOPOEOT4oLjI5E
s/tB7muD8l/lpkWij2BOopiZzieQntn8xW8eCFTocSAjZW52SoI1x/gw3NasILoB
nrTy0yOYlM01ucZOTB/0JKpzidkJg336amZdF4bLkfUPyCTE6kzG0PrLrQSeycr4
jkDfWfuFmRhKD2lDtoWDHqiPfe9IJkcTMnp5XfXAG3V2pAc+Mer1WIYajuHieO8m
oFNCzBc0obe9f+zEIBjoINco4FumxP78UZMzwe+hHrj8nFtju7WbKqGWumYH0L34
47tUoWXkCZs9Ni9DUIBVYWzEobgS7pl/H1HLR36klfAHLut0T9PZgipKRjSx1Ljz
M78wxVhupdDvHDEdKnq9E9lD6018iHgEGBEIACAWIQQPOeQljGU5R3AqgjQIsgNg
oDqd6AUCWmA/0gIbDAAKCRAIsgNgoDqd6LTZAQDESAvVHbtyKTwMmrx88p6Ljmtp
pKxKP0O5AFM7b7INbQEAtE3lAIBUA31x3fjC5L6UyGk/a2ssOWTsJx98YxMcPhs=
=H4Qj
-----END PGP PUBLIC KEY BLOCK-----

Changelog.md Normal file

@ -0,0 +1,66 @@
### Welcome to the DEROHE Testnet
[Explorer](https://testnetexplorer.dero.io) [Source](https://github.com/deroproject/derohe) [Twitter](https://twitter.com/DeroProject) [Discord](https://discord.gg/H95TJDp) [Wiki](https://wiki.dero.io) [Github](https://github.com/deroproject/derohe) [DERO CryptoNote Mainnet Stats](http://network.dero.io) [Mainnet WebWallet](https://wallet.dero.io/)
### DERO HE Changelog
[From Wikipedia: ](https://en.wikipedia.org/wiki/Homomorphic_encryption)
### At this point in time, the DERO blockchain has the first-mover advantage in the following:
* Private SCs ( no one knows who owns what tokens and who is transferring to whom and how much is being transferred.)
* Homomorphic protocol
* Ability to do instant sync (takes a couple of seconds or minutes, depending on network bandwidth).
* DAG/MINIDAG with 1 miniblock every second
* Mining decentralization. No more mining pools: daily 100,000 reward blocks remove the need for pools and thus the attacks they enable.
* Erasure coded blocks, lower bandwidth requirements, very low propagation time.
* Ability to deliver encrypted license keys and other data.
* Pruned chains are the core.
* Ability to model 99.9% of the world's financial use cases.
* Privacy by design, backed by crypto algorithms. Many years of research in place.
* Sample token contract is available with a guide.
* Multi-send is now possible, sending to multiple destinations per tx.
* DERO Simulator for faster development/testing.
* A few more ideas implemented; they will be tested for review in an upcoming technology preview.
### 3.4
- DAG/MINIDAG with blocks flowing every second
- Mining decentralization. No more mining pools: daily 100,000 reward blocks remove the need for pools and thus the attacks they enable.
- Erasure coded blocks, lower bandwidth requirements, very low propagation time. Tested with up to 20 MB blocks.
- DERO Simulator for a faster development cycle
### 3.3
* Private SCs are now supported (90% completed).
* Sample token contract is available with a guide.
* Multi-send is now possible, sending to multiple destinations per tx.
* A few more ideas implemented; they will be tested for review in an upcoming technology preview.
### 3.2
* Open SCs are now supported.
* Private SCs, which have their balance encrypted at all times, are under implementation.
* SCs can now update themselves; however, new code will only run on the next invocation.
* Multi-send is under implementation.
### 3.1
* TXs now have significant savings of around 31 * ringsize bytes for every tx.
* Daemon now supports pruned chains.
* Daemon by default bootstraps a pruned chain.
* Daemon currently syncs a full node by using the --fullnode option.
* P2P has been rewritten for various improvements and easier understanding of the state machine.
* Address specification now enables embedding various RPC parameters for easier transactions.
* DERO blockchain represents transaction finality in a couple of blocks (less than 1 minute), unlike other blockchains.
* Proving and parsing of embedded data is now available in explorer.
* Senders/Receivers both have proofs which confirm data sent on execution.
* All tx now have inbuilt space of 144 bytes for user defined data
* User defined space has an inbuilt RPC which can be used to implement most practical use cases. All user defined data is encrypted.
* The model currently defines data on chain while execution is deferred to wallet extensions. A dummy example, the pongserver extension, showcases how to enable purchases/delivery of license keys/information privately.
* Burn transactions which burn value are now working.
### 3.0
* DERO HE implemented

LICENSE Normal file

@ -0,0 +1,90 @@
RESEARCH LICENSE
Version 1.1.2
I. DEFINITIONS.
"Licensee" means You and any other party that has entered into and has in effect a version of this License.
"Licensor" means DERO PROJECT (GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8) and its successors and assignees.
"Modifications" means any (a) change or addition to the Technology or (b) new source or object code implementing any portion of the Technology.
"Research Use" means research, evaluation, or development for the purpose of advancing knowledge, teaching, learning, or customizing the Technology for personal use. Research Use expressly excludes use or distribution for direct or indirect commercial (including strategic) gain or advantage.
"Technology" means the source code, object code and specifications of the technology made available by Licensor pursuant to this License.
"Technology Site" means the website designated by Licensor for accessing the Technology.
"You" means the individual executing this License or the legal entity or entities represented by the individual executing this License.
II. PURPOSE.
Licensor is licensing the Technology under this Research License (the "License") to promote research, education, innovation, and development using the Technology.
COMMERCIAL USE AND DISTRIBUTION OF TECHNOLOGY AND MODIFICATIONS IS PERMITTED ONLY UNDER AN APPROPRIATE COMMERCIAL USE LICENSE AVAILABLE FROM LICENSOR AT <url>.
III. RESEARCH USE RIGHTS.
A. Subject to the conditions contained herein, Licensor grants to You a non-exclusive, non-transferable, worldwide, and royalty-free license to do the following for Your Research Use only:
1. reproduce, create Modifications of, and use the Technology alone, or with Modifications;
2. share source code of the Technology alone, or with Modifications, with other Licensees;
3. distribute object code of the Technology, alone, or with Modifications, to any third parties for Research Use only, under a license of Your choice that is consistent with this License; and
4. publish papers and books discussing the Technology which may include relevant excerpts that do not in the aggregate constitute a significant portion of the Technology.
B. Residual Rights. You may use any information in intangible form that you remember after accessing the Technology, except when such use violates Licensor's copyrights or patent rights.
C. No Implied Licenses. Other than the rights granted herein, Licensor retains all rights, title, and interest in Technology, and You retain all rights, title, and interest in Your Modifications and associated specifications, subject to the terms of this License.
D. Open Source Licenses. Portions of the Technology may be provided with notices and open source licenses from open source communities and third parties that govern the use of those portions, and any licenses granted hereunder do not alter any rights and obligations you may have under such open source licenses, however, the disclaimer of warranty and limitation of liability provisions in this License will apply to all Technology in this distribution.
IV. INTELLECTUAL PROPERTY REQUIREMENTS
As a condition to Your License, You agree to comply with the following restrictions and responsibilities:
A. License and Copyright Notices. You must include a copy of this License in a Readme file for any Technology or Modifications you distribute. You must also include the following statement, "Use and distribution of this technology is subject to the Java Research License included herein", (a) once prominently in the source code tree and/or specifications for Your source code distributions, and (b) once in the same file as Your copyright or proprietary notices for Your binary code distributions. You must cause any files containing Your Modification to carry prominent notice stating that You changed the files. You must not remove or alter any copyright or other proprietary notices in the Technology.
B. Licensee Exchanges. Any Technology and Modifications You receive from any Licensee are governed by this License.
V. GENERAL TERMS.
A. Disclaimer Of Warranties.
TECHNOLOGY IS PROVIDED "AS IS", WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, WARRANTIES THAT ANY SUCH TECHNOLOGY IS FREE OF DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR PURPOSE, OR NON-INFRINGING OF THIRD PARTY RIGHTS. YOU AGREE THAT YOU BEAR THE ENTIRE RISK IN CONNECTION WITH YOUR USE AND DISTRIBUTION OF ANY AND ALL TECHNOLOGY UNDER THIS LICENSE.
B. Infringement; Limitation Of Liability.
1. If any portion of, or functionality implemented by, the Technology becomes the subject of a claim or threatened claim of infringement ("Affected Materials"), Licensor may, in its unrestricted discretion, suspend Your rights to use and distribute the Affected Materials under this License. Such suspension of rights will be effective immediately upon Licensor's posting of notice of suspension on the Technology Site.
2. IN NO EVENT WILL LICENSOR BE LIABLE FOR ANY DIRECT, INDIRECT, PUNITIVE, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH OR ARISING OUT OF THIS LICENSE (INCLUDING, WITHOUT LIMITATION, LOSS OF PROFITS, USE, DATA, OR ECONOMIC ADVANTAGE OF ANY SORT), HOWEVER IT ARISES AND ON ANY THEORY OF LIABILITY (including negligence), WHETHER OR NOT LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. LIABILITY UNDER THIS SECTION V.B.2 SHALL BE SO LIMITED AND EXCLUDED, NOTWITHSTANDING FAILURE OF THE ESSENTIAL PURPOSE OF ANY REMEDY.
C. Termination.
1. You may terminate this License at any time by notifying Licensor in writing.
2. All Your rights will terminate under this License if You fail to comply with any of its material terms or conditions and do not cure such failure within thirty (30) days after becoming aware of such noncompliance.
3. Upon termination, You must discontinue all uses and distribution of the Technology, and all provisions of this Section V shall survive termination.
D. Miscellaneous.
1. Trademark. You agree to comply with Licensor's Trademark & Logo Usage Requirements, if any and as modified from time to time, available at the Technology Site. Except as expressly provided in this License, You are granted no rights in or to any Licensor's trademarks now or hereafter used or licensed by Licensor.
2. Integration. This License represents the complete agreement of the parties concerning the subject matter hereof.
3. Severability. If any provision of this License is held unenforceable, such provision shall be reformed to the extent necessary to make it enforceable unless to do so would defeat the intent of the parties, in which case, this License shall terminate.
4. Governing Law. This License is governed by the laws of the United States and the State of California, as applied to contracts entered into and performed in California between California residents. In no event shall this License be construed against the drafter.
5. Export Control. You agree to comply with the U.S. export controls and trade laws of other countries that apply to Technology and Modifications.
READ ALL THE TERMS OF THIS LICENSE CAREFULLY BEFORE ACCEPTING.
BY CLICKING ON THE YES BUTTON BELOW OR USING THE TECHNOLOGY, YOU ARE ACCEPTING AND AGREEING TO ABIDE BY THE TERMS AND CONDITIONS OF THIS LICENSE. YOU MUST BE AT LEAST 18 YEARS OF AGE AND OTHERWISE COMPETENT TO ENTER INTO CONTRACTS.
IF YOU DO NOT MEET THESE CRITERIA, OR YOU DO NOT AGREE TO ANY OF THE TERMS OF THIS LICENSE, DO NOT USE THIS SOFTWARE IN ANY FORM.

Readme.md

@ -1 +1,293 @@
### Welcome to the DEROHE Testnet
[Explorer](https://testnetexplorer.dero.io) [Source](https://github.com/deroproject/derohe) [Twitter](https://twitter.com/DeroProject) [Discord](https://discord.gg/H95TJDp) [Wiki](https://wiki.dero.io) [Github](https://github.com/deroproject/derohe) [DERO CryptoNote Mainnet Stats](http://network.dero.io) [Mainnet WebWallet](https://wallet.dero.io/)
### DERO HE [ DERO Homomorphic Encryption]
[From Wikipedia: ](https://en.wikipedia.org/wiki/Homomorphic_encryption)
**Homomorphic encryption is a form of encryption allowing one to perform calculations on encrypted data without decrypting it first. The result of the computation is in an encrypted form; when decrypted, the output is the same as if the operations had been performed on the unencrypted data.**
Homomorphic encryption can be used for privacy-preserving outsourced storage and computation. This allows data to be encrypted and out-sourced to commercial cloud environments for processing, all while encrypted. In highly regulated industries, such as health care, homomorphic encryption can be used to enable new services by removing privacy barriers inhibiting data sharing. For example, predictive analytics in health care can be hard to apply via a third party service provider due to medical data privacy concerns, but if the predictive analytics service provider can operate on encrypted data instead, these privacy concerns are diminished.
**DERO is pleased to announce release of DERO Homomorphic Encryption Protocol testnet.**
DERO will migrate from the existing CryptoNote Protocol to its own DERO Homomorphic Encryption Blockchain Protocol (DHEBP).
### Table of Contents [DEROHE]
1. [ABOUT DERO PROJECT](#about-dero-project)
2. [DERO HE Features](#dero-he-features)
3. [DERO HE TX Sizes](#dero-he-tx-sizes)
4. [DERO Crypto](#dero-crypto)
5. [DERO HE PORTS](#dero-he-ports)
6. [Technical](#technical)
7. [DERO blockchain salient features](#dero-blockchain-salient-features)
8. [DERO Innovations](#dero-innovations)
1. [Dero DAG](#dero-dag)
2. [Client Protocol](#client-protocol)
3. [Dero Rocket Bulletproofs](#dero-rocket-bulletproofs)
4. [51% Attack Resistant](#51-attack-resistant)
9. [DERO Mining](#dero-mining)
10. [DERO Installation](#dero-installation)
1. [Installation From Source](#installation-from-source)
2. [Installation From Binary](#installation-from-binary)
11. [Next Step After DERO Installation](#next-step-after-dero-installation)
1. [Running DERO Daemon](#running-dero-daemon)
2. [Running DERO wallet](#running-dero-wallet)
1. [DERO Cmdline Wallet](#dero-cmdline-wallet)
2. [DERO WebWallet](#dero-web-wallet)
3. [DERO Gui Wallet ](#dero-gui-wallet)
12. [DERO Explorer](#dero-explorer)
13. [Proving DERO Transactions](#proving-dero-transactions)
#### ABOUT DERO PROJECT
&nbsp; &nbsp; &nbsp; &nbsp; [DERO](https://github.com/deroproject/derosuite) is a decentralized DAG (Directed Acyclic Graph) based blockchain with enhanced reliability, privacy, security, and usability. The consensus algorithm is PoW, based on [DERO AstroBWT: ASIC/FPGA/GPU resistant CPU mining algorithm](https://github.com/deroproject/astrobwt). DERO is industry leading and the first blockchain to have bulletproofs and a TLS-encrypted network.
&nbsp; &nbsp; &nbsp; &nbsp; DERO is the first crypto project to combine a Proof of Work blockchain with a DAG block structure and fully anonymous transactions based on [Homomorphic Encryption](https://en.wikipedia.org/wiki/Homomorphic_encryption). The fully distributed ledger processes transactions with a sixty-second average block time and is secure against majority-hashrate attacks. DERO will be the first Homomorphic Encryption based blockchain to have smart contracts on its native chain without any extra layers or secondary blockchains. At present DERO has Smart Contracts on the old CryptoNote protocol [testnet](https://github.com/deroproject/documentation/blob/master/testnet/stargate.md).
#### DERO HE Features
1. **Homomorphic account based model.** First privacy chain to have this (see blockchain/transaction_execute.go lines 82-95).
2. Instant account balances [only 66 bytes of data need to be fetched from the blockchain].
3. DAG/MINIDAG with 1 miniblock every second
4. Mining decentralization. No more mining pools: daily 100,000 reward blocks remove the need for pools and thus the attacks they enable.
5. Erasure coded blocks, lower bandwidth requirements, very low propagation time.
6. No more chain scanning or wallet scanning to detect funds, no key images, etc.
7. Truly lightweight and efficient wallets.
8. Fixed per-account cost of 66 bytes in the blockchain [immense scalability].
9. Perfectly anonymous transactions with many-out-of-many proofs [bulletproofs and sigma protocol]
10. Deniability
11. Fixed transaction size, e.g. ~2.5 KB (ring size 8) or ~3.4 KB (ring size 16), based on the chosen anonymity group size [logarithmic growth].
12. Anonymity group can be chosen in powers of 2.
13. Allows homomorphic assets ( programmable SCs with fixed overhead per asset ), with open Smart Contract but encrypted data [Internal testing/implementation not on this current testnet branch].
14. Allows open assets ( programmable SCs with fixed overhead per asset ) [Internal testing/implementation not on this current testnet branch]
15. Allows chain pruning on daemons to control growth of data on daemons.
16. Transaction generation takes less than 25 ms.
17. Transaction verification also takes less than 25 ms.
18. No trusted setup, no hidden parameters.
19. Pruning chain/history for immense scalability [while still secured using Merkle proofs].
20. Example disk requirements for 1 billion accounts (assuming the node does not keep transaction history, but keeps proofs to prove that it is in sync with all other nodes):
```
Requirement of 1 account             = 66 bytes
Assumed storage overhead per account = 128 bytes (constant)
Total for 1 billion accounts         = (66 + 128) bytes * 10^9 ≈ 194 GB ~ 200 GB
Assuming we are off by a factor of 4 = 800 GB
```
21. Note that even after 1 trillion transactions, 1 billion accounts will consume only 800 GB if history is not maintained, and everything will still be in a proved state using Merkle roots.
And so, even a Raspberry Pi can host the entire chain.
22. Senders can prove to receivers what amount they have sent (without revealing themselves).
23. World's first Erasure Coded Propagation protocol, which allows 100x block size without increasing propagation delays.
24. Entire chain is rsyncable while in operation.
25. Testnet released with source code.
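As a sanity check, the disk estimate in point 20 can be reproduced with a few lines of Go (the 128-byte overhead is the constant assumed above):

```go
package main

import "fmt"

func main() {
	const accounts = 1_000_000_000 // 1 billion accounts
	const perAccount = 66 + 128    // 66 bytes of state + assumed 128 bytes of storage overhead
	totalGB := float64(accounts) * perAccount / 1e9
	fmt.Printf("base estimate: %.0f GB\n", totalGB)   // ≈ 194 GB, i.e. ~200 GB
	fmt.Printf("off by 4x:     %.0f GB\n", totalGB*4) // ≈ 776 GB, i.e. ~800 GB
}
```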
#### DERO HE TX Sizes
| Ring Size | DEROHE TX Size |
| --------- | -------------- |
| 2 | 1553 bytes |
| 4 | 2013 bytes |
| 8 | 2605 bytes |
| 16 | 3461 bytes |
| 32 | 4825 bytes |
| 64 | 7285 bytes |
| 128 | 11839 bytes |
| 512 | ~35000 bytes |
**NB:** There are plans to reduce TX sizes further.
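To see why the growth is described as logarithmic, note that the cost per ring member falls sharply as the anonymity set doubles. A quick sketch using the table's figures:

```go
package main

import "fmt"

func main() {
	// Ring size -> DEROHE tx size in bytes, taken from the table above.
	sizes := map[int]int{2: 1553, 4: 2013, 8: 2605, 16: 3461, 32: 4825, 64: 7285, 128: 11839}
	for _, r := range []int{2, 16, 128} {
		// Total size grows slowly, so per-member cost shrinks as the ring widens.
		fmt.Printf("ring %3d: %5d bytes total, ~%3d bytes per member\n", r, sizes[r], sizes[r]/r)
	}
}
```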
#### DERO Crypto
&nbsp; &nbsp; &nbsp; &nbsp; Secure and fast crypto is a basic necessity of this project, and an adequate amount of time has been devoted to developing/studying/implementing/auditing it. Most of the crypto, such as ring signatures, has been studied by various researchers and is in production by a number of projects. As far as Bulletproofs are concerned, since DERO was the first to implement/deploy them, they have been given a more detailed look. First, a bare-bones bulletproof was implemented, then implementations in development were studied (Benedikt Bünz, XMR, Dalek Bulletproofs), thus improving our own implementation.
&nbsp; &nbsp; &nbsp; &nbsp; Some new improvements were discovered and implemented (there are a number of other improvements which are not explained here). Major improvements are in the double-base double-scalar multiplication while validating bulletproofs. A typical bulletproof takes ~15-17 ms to verify; optimized bulletproofs take ~1 to ~2 ms (simple bulletproof, no aggregation/batching). Since in the case of bulletproofs the bases are fixed, we can use a precomputed table to convert 64*2 base-scalar multiplications into doublings and additions (NOTE: we do not use Bos-Coster/Pippenger methods). This time can easily be decreased to 0.5 ms with some more optimizations. With batching and aggregation, 5000 range proofs (~2500 TX) can easily be verified on even a laptop. The implementation of bulletproofs is in github.com/deroproject/derosuite/crypto/ringct/bulletproof.go; the optimized version is in github.com/deroproject/derosuite/crypto/ringct/bulletproof_ultrafast.go
&nbsp; &nbsp; &nbsp; &nbsp; There are other optimizations, such as base-scalar multiplication in less than a microsecond. Some of these optimizations are not yet deployed and may be deployed at a later stage.
#### DEROHE PORTS
**Mainnet:**
P2P Default Port: 10101
RPC Default Port: 10102
Wallet RPC Default Port: 10103
**Testnet:**
P2P Default Port: 40401
RPC Default Port: 40402
Wallet RPC Default Port: 40403
#### Technical
&nbsp; &nbsp; &nbsp; &nbsp; For specific details of current DERO core (daemon) implementation and capabilities, see below:
1. **DAG:** No orphan blocks, No soft-forks.
2. **BulletProofs:** Zero Knowledge range-proofs(NIZK)
3. **AstroBWT:** This is a memory-bound algorithm. It provides assurance that all miners are equal (no miner has any advantage over common miners).
4. **P2P Protocol:** This layer controls the exchange of blocks, transactions, and the blockchain itself.
5. **Pedersen Commitment** (part of ring confidential transactions): The Pedersen commitment algorithm is a cryptographic primitive that allows a user to commit to a chosen value while keeping it hidden from others. Pedersen commitments are used to hide all amounts without revealing the actual amount. It is a homomorphic commitment scheme.
6. **Homomorphic Encryption:** Homomorphic encryption is used to do operations such as addition/subtraction to settle balances, with the data always remaining encrypted (balances are never decrypted before/during/after operations in any form).
7. **Homomorphic Ring Confidential Transactions:** Gives untraceability, privacy, and fungibility while making sure that the system is stable and secure.
8. **Core-Consensus Protocol implemented:** The consensus protocol serves two major purposes:
1. Protects the system from adversaries and protects it from forking and tampering.
    2. Ensures the next block in the chain is the one and only correct version of truth (balances).
9. **Proof-of-Work (PoW) algorithm:** PoW is the part of the core consensus protocol used to cryptographically prove that X amount of work has been done to successfully find a block.
10. **Difficulty algorithm:** The difficulty algorithm controls the system so that blocks are found at roughly the same speed, irrespective of the number and amount of mining power deployed.
11. **Serialization/De-serialization of blocks:** Capability to encode/decode/process blocks.
12. **Serialization/De-serialization of transactions:** Capability to encode/decode/process transactions.
13. **Transaction validity and verification:** Any transactions flowing within the DERO network are validated and verified.
14. **Socks proxy:** Socks proxy has been implemented and integrated within the daemon to decrease user identifiability and improve user anonymity.
15. **Interactive daemon** can print blocks, txs, even entire blockchain from within the daemon
16. **status, diff, print_bc, print_block, print_tx** and several other commands implemented
17. GO DERO Daemon has both mainnet, testnet support.
18. **Enhanced reliability, privacy, security, usability, and portability assured.**
#### DERO blockchain salient features
- [DAG Based: No orphan blocks, No soft-forks.](#dero-dag)
- [51% Attack resistant.](#51-attack-resistant)
- 60 Second Block time.
- Extremely fast transactions with one minute/block confirmation time.
- SSL/TLS P2P Network.
- Homomorphic: Fully Encrypted Blockchain
- [Dero Fastest Rocket BulletProofs](#dero-rocket-bulletproofs): Zero Knowledge range-proofs(NIZK).
- Ring signatures.
- Fully Auditable Supply.
- DERO blockchain is written from scratch in Golang. [See all unique blockchains from scratch.](https://twitter.com/cryptic_monk/status/999227961059528704)
- Developed and maintained by original developers.
#### DERO Innovations
&nbsp; &nbsp; &nbsp; &nbsp; Following are DERO first and leading innovations.
#### DERO DAG
&nbsp; &nbsp; &nbsp; &nbsp; The DERO DAG implementation builds out a main chain from the DAG network of blocks, which comprises main blocks (100% reward) and side blocks (8% reward).
![DERO DAG stats.dero.io](https://raw.githubusercontent.com/deroproject/documentation/master/images/Dag1.jpeg)
*DERO DAG Screenshot* [Live](https://stats.dero.io/)
![DERO DAG network.dero.io](https://raw.githubusercontent.com/deroproject/documentation/master/images/dagx4.png)
*DERO DAG Screenshot* [Live](https://network.dero.io/)
#### **Erasure Coded Blocks**
&nbsp; &nbsp; &nbsp; &nbsp; Traditional blockchains process blocks as a single unit of computation (if a double-spend tx occurs within the block, the entire block is rejected). As soon as a block is found, it is sent to all peers. The DERO blockchain instead erasure codes the block into 48 chunks, which are dispersed to peers randomly. Any peer receiving any 16 of the 48 chunks can regenerate the block, thus lowering overheads and propagation time.
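The chunk arithmetic above can be sketched as follows (the 16-of-48 parameters are from the text; the 20 MB block size is the largest figure mentioned elsewhere in this document):

```go
package main

import "fmt"

func main() {
	const blockSize = 20 * 1024 * 1024 // bytes; a 20 MB block
	const dataChunks = 16              // any 16 chunks suffice to regenerate the block
	const totalChunks = 48             // chunks the block is erasure coded into

	chunkSize := blockSize / dataChunks
	fmt.Printf("chunk size: %d KB\n", chunkSize/1024)       // each peer relays small pieces
	fmt.Printf("redundancy: %dx\n", totalChunks/dataChunks) // total data on the wire vs block size
}
```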
#### Client Protocol
&nbsp; &nbsp; &nbsp; &nbsp; Traditional blockchains process blocks as a single unit of computation (if a double-spend tx occurs within the block, the entire block is rejected). The DERO network, however, accepts such blocks, since the DERO blockchain considers a transaction as the single unit of computation. DERO blocks may contain duplicate or double-spend transactions, which are filtered by the client protocol and ignored by the network. The DERO DAG processes transactions atomically, one transaction at a time.
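A minimal sketch of that filtering step (function and variable names are illustrative, not DERO's actual code):

```go
package main

import "fmt"

// filterTxs drops transactions that were already executed, mirroring
// the client protocol's rule: a duplicate or double-spend tx inside a
// block is ignored, while the block itself remains valid.
func filterTxs(block []string, executed map[string]bool) []string {
	var accepted []string
	for _, txid := range block {
		if executed[txid] {
			continue // already processed earlier in the chain or in this block
		}
		executed[txid] = true
		accepted = append(accepted, txid)
	}
	return accepted
}

func main() {
	executed := map[string]bool{"tx1": true} // tx1 was mined in an earlier block
	block := []string{"tx1", "tx2", "tx2", "tx3"}
	fmt.Println(filterTxs(block, executed)) // [tx2 tx3]
}
```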
#### DERO Rocket Bulletproofs
- Dero ultrafast bulletproofs optimization techniques, in the form used, did not exist anywhere in publicly available cryptography literature at the time of implementation (please contact us with any source/reference to include here if one exists). The ultrafast optimizations verify Dero bulletproofs 10 times faster than other/original bulletproof implementations. See: https://github.com/deroproject/derosuite/blob/master/crypto/ringct/bulletproof_ultrafast.go
- DERO rocket bulletproof implementations are hardened, which protects DERO from a certain class of attacks.
- DERO rocket bulletproof transaction structures are not compatible with other implementations.
&nbsp; &nbsp; &nbsp; &nbsp; There are also several optimizations planned for Dero rocket bulletproofs in the near future, which will lead to a several-times performance boost. Presently they are under study for bugs, verification, compatibility, etc.
#### 51% Attack Resistant
&nbsp; &nbsp; &nbsp; &nbsp; The DERO DAG implementation builds out a main chain from the DAG network of blocks, which comprises main blocks (100% reward) and side blocks (8% reward). Side blocks contribute to chain PoW security, so traditional 51% attacks are not possible on the DERO network. If the DERO network finds another block at the same height, instead of choosing one, DERO includes both blocks, thus rendering the 51% attack futile.
#### DERO Mining
[Mining](https://github.com/deroproject/wiki/wiki/Mining)
#### DERO Installation
&nbsp; &nbsp; &nbsp; &nbsp; DERO is written in golang and very easy to install both from source and binary.
#### Installation From Source
1. Install Golang; Golang version 1.12.12 is required.
2. In go workspace: ```go get -u github.com/deroproject/derohe/...```
3. Check go workspace bin folder for binaries.
4. For example, on a Linux machine the following binaries will be created:
1. derod-linux-amd64 -> DERO daemon.
2. dero-wallet-cli-linux-amd64 -> DERO cmdline wallet.
3. explorer-linux-amd64 -> DERO Explorer. Yes, DERO also has a prebuilt personal explorer for advanced privacy users.
#### Installation From Binary
&nbsp; &nbsp; &nbsp; &nbsp; Download [DERO binaries](https://github.com/deroproject/derosuite/releases) for the ARM, INTEL, and MAC platforms and the Windows, Mac, FreeBSD, OpenBSD, Linux, etc. operating systems.
Most users require the following binaries:
[Windows 7-10, Server 64bit/amd64 ](https://github.com/deroproject/derosuite/releases/download/v2.1.6-1/dero_windows_amd64_2.1.6-1.alpha.atlantis.07032019.zip)
[Windows 32bit/x86/386](https://github.com/deroproject/derosuite/releases/download/v2.1.6-1/dero_windows_x86_2.1.6-1.alpha.atlantis.07032019.zip)
[Linux 64bit/amd64](https://github.com/deroproject/derosuite/releases/download/v2.1.6-1/dero_linux_amd64_2.1.6-1.alpha.atlantis.07032019.tar.gz)
[Linux 32bit/x86](https://github.com/deroproject/derosuite/releases/download/v2.1.6-1/dero_linux_386_2.1.6-1.alpha.atlantis.07032019.tar.gz)
[FreeBSD 64bit/amd64](https://github.com/deroproject/derosuite/releases/download/v2.1.6-1/dero_freebsd_amd64_2.1.6-1.alpha.atlantis.07032019.tar.gz)
[OpenBSD 64bit/amd64](https://github.com/deroproject/derosuite/releases/download/v2.1.6-1/dero_openbsd_amd64_2.1.6-1.alpha.atlantis.07032019.tar.gz)
[Mac OS](https://github.com/deroproject/derosuite/releases/download/v2.1.6-1/dero_apple_mac_darwin_amd64_2.1.6-1.alpha.atlantis.07032019.tar.gz)
Contact for support of other hardware and OS.
#### Next Step After DERO Installation
&nbsp; &nbsp; &nbsp; &nbsp; Running a DERO daemon supports the DERO network and shows your support for privacy.
#### Running DERO Daemon
&nbsp; &nbsp; &nbsp; &nbsp; Run derod.exe or derod-linux-amd64 depending on your operating system. It will start syncing.
1. The DERO daemon's core cryptography is highly optimized and fast.
2. Use a dedicated machine and SSD for best results.
3. A VPS with 2-4 cores, 4 GB RAM, and a 15 GB disk is recommended.
![DERO Daemon](https://raw.githubusercontent.com/deroproject/documentation/master/images/derod1.png)
*DERO Daemon Screenshot*
#### Running DERO Wallet
The DERO cmdline wallet is the most secure and reliable wallet and supports all functions.
#### DERO Cmdline Wallet
&nbsp; &nbsp; &nbsp; &nbsp; The DERO cmdline wallet is menu-based and easy to operate.
Use its options to create and recover wallets, transfer balances, etc.
**NOTE:** By default, the DERO cmdline wallet connects to a DERO daemon running on the local machine on port 20206.
If no DERO daemon is running, start the DERO wallet with the --remote option:
**./dero-wallet-cli-linux-amd64 --remote**
![DERO Wallet](https://raw.githubusercontent.com/deroproject/documentation/master/images/wallet-recover2.png)
*DERO Cmdline Wallet Screenshot*
#### DERO Explorer
[DERO Explorer](https://explorer.dero.io/) is used to check and confirm transactions on the DERO mainnet.
[DERO testnet Explorer](https://testnetexplorer.dero.io/) is used to check and confirm transactions on the DERO testnet.
DERO users can run their own explorer on a local machine and [browse](http://127.0.0.1:8080) it on port 8080.
![DERO Explorer](https://github.com/deroproject/documentation/raw/master/images/dero_explorer.png)
*DERO EXPLORER Screenshot*
#### Proving DERO Transactions
The DERO blockchain is completely private, so no one can view, confirm, or verify any other user's wallet balance or transactions.
To prove a transaction you therefore need its *TXID* and *deroproof*.
The deroproof can be obtained using the get_tx_key command in dero-wallet-cli.
Enter the *TXID* and *deroproof* in the [DERO EXPLORER](https://testnetexplorer.dero.io).
![DERO Explorer Proving Transaction](https://github.com/deroproject/documentation/raw/master/images/explorer-prove-tx.png)
*DERO Explorer Proving Transaction*

Start.md
@@ -0,0 +1,30 @@
1] ### DEROHE Installation, https://github.com/deroproject/derohe
DERO is written in Golang and is easy to install, both from source and from binaries.
Installation From Source:
Install Golang; minimum Golang 1.17 is required.
In go workspace: go get -u github.com/deroproject/derohe/...
Check go workspace bin folder for binaries.
For example, on a Linux machine the following binaries will be created:
derod-linux-amd64 -> DERO daemon.
dero-wallet-cli-linux-amd64 -> DERO cmdline wallet.
explorer-linux-amd64 -> DERO Explorer. Yes, DERO has a prebuilt personal explorer for advanced privacy users.
Installation From Binary
Download DERO binaries for ARM and Intel architectures on Windows, macOS, FreeBSD, OpenBSD, Linux, and other operating systems.
https://github.com/deroproject/derohe/releases
2] ### Running DERO Daemon
./derod-linux-amd64
3] ### Running DERO Wallet (Use local or remote daemon)
./dero-wallet-cli-linux-amd64 --remote
https://wallet.dero.io [Web wallet]
4] ### DERO Mining Quickstart
Run the miner with your wallet address and a number of threads suited to your CPU.
./dero-miner --mining-threads 2 --daemon-rpc-address=http://testnetexplorer.dero.io:40402 --wallet-address deto1qy0ehnqjpr0wxqnknyc66du2fsxyktppkr8m8e6jvplp954klfjz2qqdzcd8p
NOTE: Miners should keep their system clock synced with NTP.
E.g., on a Linux machine: ntpdate pool.ntp.org
For details visit http://wiki.dero.io

astrobwt/LICENSE.txt
@@ -0,0 +1,26 @@
Copyright (c) 2020 DERO Foundation. All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors
may be used to endorse or promote products derived from this software without
specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

astrobwt/astrobwt.go
@@ -0,0 +1,68 @@
package astrobwt

import (
	"fmt"

	"golang.org/x/crypto/salsa20/salsa"
	"golang.org/x/crypto/sha3"
)

// see here to improve the algorithms more https://github.com/y-256/libdivsufsort/blob/wiki/SACA_Benchmarks.md
var x = fmt.Sprintf
const stage1_length int = 9973 // it is a prime
func POW16(inputdata []byte) (outputhash [32]byte) {
var output [stage1_length]byte
var counter [16]byte
key := sha3.Sum256(inputdata)
var stage1 [stage1_length]byte // stages are taken from it
salsa.XORKeyStream(stage1[:stage1_length], stage1[:stage1_length], &counter, &key)
var sa [stage1_length]int16
text_16_0alloc(stage1[:], sa[:])
for i := range sa {
output[i] = stage1[sa[i]]
}
// fmt.Printf("input %+v\n",inputdata)
// fmt.Printf("sa %+v\n",sa)
outputhash = sha3.Sum256(output[:])
return
}
func text_16_0alloc(text []byte, sa []int16) {
if int(int16(len(text))) != len(text) || len(text) != len(sa) {
panic("suffixarray: misuse of text_16")
}
var memory [2 * 256]int16
sais_8_16(text, 256, sa, memory[:])
}
func POW32(inputdata []byte) (outputhash [32]byte) {
var output [stage1_length]byte
var counter [16]byte
key := sha3.Sum256(inputdata)
var stage1 [stage1_length]byte // stages are taken from it
salsa.XORKeyStream(stage1[:stage1_length], stage1[:stage1_length], &counter, &key)
var sa [stage1_length]int32
text_32_0alloc(stage1[:], sa[:])
for i := range sa {
output[i] = stage1[sa[i]]
}
outputhash = sha3.Sum256(output[:])
return
}
func text_32_0alloc(text []byte, sa []int32) {
if int(int32(len(text))) != len(text) || len(text) != len(sa) {
panic("suffixarray: misuse of text_32")
}
var memory [2 * 256]int32
sais_8_32(text, 256, sa, memory[:])
}
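The two POW variants above differ only in the suffix-array index width. The overall pipeline is: hash the input, expand the hash into a pseudorandom stage-1 buffer, build the buffer's suffix array, emit the buffer bytes in suffix-array order (the Burrows-Wheeler-flavored step), and hash the result. A minimal standalone sketch of that shape, using stdlib SHA-256 as a stand-in for the SHA-3 hash and Salsa20 keystream, and a naive sort-based suffix array in place of SAIS (this is an illustration of the structure, not the consensus algorithm):

```go
package main

import (
	"bytes"
	"crypto/sha256" // stand-in for sha3 + the salsa20 keystream expansion
	"fmt"
	"sort"
)

// naiveSuffixArray returns suffix start indexes sorted by their
// suffixes (what text_16/text_32 compute in linear time via SAIS).
func naiveSuffixArray(text []byte) []int {
	sa := make([]int, len(text))
	for i := range sa {
		sa[i] = i
	}
	sort.Slice(sa, func(a, b int) bool {
		return bytes.Compare(text[sa[a]:], text[sa[b]:]) < 0
	})
	return sa
}

// powSketch mirrors the POW16/POW32 pipeline on a small buffer.
func powSketch(input []byte) [32]byte {
	seed := sha256.Sum256(input)
	stage1 := make([]byte, 64)
	// Expand the seed into a pseudorandom stage-1 buffer
	// (the real code uses a Salsa20 keystream here).
	for i := range stage1 {
		stage1[i] = seed[i%len(seed)] ^ byte(i)
	}
	sa := naiveSuffixArray(stage1)
	out := make([]byte, len(stage1))
	for i, j := range sa {
		out[i] = stage1[j] // emit bytes in suffix-array order
	}
	return sha256.Sum256(out)
}

func main() {
	fmt.Printf("%x\n", powSketch([]byte("hello")))
}
```

Because the suffix-array step dominates, the memory-hard part of the work is the sort, which is exactly what SAIS makes linear-time in the real implementation.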

astrobwt/astrobwt_test.go
@@ -0,0 +1,159 @@
package astrobwt

import (
	"math/rand"
	"testing"
	"time"
)
// see https://www.geeksforgeeks.org/burrows-wheeler-data-transform-algorithm/
// see https://www.geeksforgeeks.org/suffix-tree-application-4-build-linear-time-suffix-array/
func TestSuffixArray(t *testing.T) {
s := "abcabxabcd"
result32 := []int32{0, 6, 3, 1, 7, 4, 2, 8, 9, 5}
var sa32 [10]int32
var sa16 [10]int16
text_32([]byte(s), sa32[:])
text_16([]byte(s), sa16[:])
for i := range result32 {
if result32[i] != sa32[i] || result32[i] != int32(sa16[i]) {
t.Fatalf("suffix array failed")
}
}
}
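A useful cross-check for hard-coded expected values like result32 above is a brute-force reference that sorts the suffix start positions by comparing the suffixes directly; it is O(n² log n) but obviously correct, so any disagreement points at the SAIS code rather than the test data. A standalone sketch (not using the package's text_32):

```go
package main

import (
	"fmt"
	"sort"
)

// refSuffixArray is a brute-force reference suffix array:
// sort the start positions by the suffixes themselves.
func refSuffixArray(s string) []int32 {
	sa := make([]int32, len(s))
	for i := range sa {
		sa[i] = int32(i)
	}
	sort.Slice(sa, func(a, b int) bool {
		return s[sa[a]:] < s[sa[b]:]
	})
	return sa
}

func main() {
	fmt.Println(refSuffixArray("abcabxabcd")) // [0 6 3 1 7 4 2 8 9 5]
}
```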
/*
func TestSuffixArrayOptimized(t *testing.T) {
s := "abcabxabcdaaaaaaa"
result := []int16{0,6,3,1,7,4,2,8,9,5}
var output [10]int16
//var sa_bytes *[10*4]uint8 = (*[stage1_length*4]uint8)(unsafe.Pointer(&sa))
sort_indices_local(10,[]byte(s),output[:])
t.Logf("output %+v\n",output[:])
for i := range result {
if result[i] != output[i] {
t.Fatalf("suffix array failed")
}
}
}
*/
func TestPows(t *testing.T) {
for loop_var := 0; loop_var < 100000; loop_var++ {
seed := time.Now().UnixNano()
//seed = 1635948770488138379
rand.Seed(seed)
var input [stage1_length + 16]byte
rand.Read(input[:stage1_length])
result16 := POW16(input[:stage1_length])
result32 := POW32(input[:stage1_length])
//resultopt := POW_optimized(input[:stage1_length])
if result16 != result32 {
t.Fatalf("pow test failed, seed %d %x %x ", seed, result16, result32)
}
}
}
/*func TestSuffixArrays(t *testing.T) {
//200 length seed 1635933734608607364
//100 length seed 1635933812384665346
//20 length seed 1635933934317660796
//10 length seed 1635933991384310043
//5 length seed 1635942855521802761
//for loop_var :=0 ; loop_var < 100000;loop_var++ {
{
seed := time.Now().UnixNano()
seed = 1635942855521802761
rand.Seed(seed)
var input [stage1_length+16]byte
var result_sa16 [stage1_length]int16
var result_sa32 [stage1_length]int32
var result_optimized [stage1_length+16]int16
rand.Read(input[:stage1_length])
text_16(input[:stage1_length], result_sa16[:])
text_32(input[:stage1_length], result_sa32[:])
sort_indices_local(stage1_length,input[:],result_optimized[:])
t.Logf("inputt %+v\n", input)
t.Logf("output16 %+v\n", result_sa16)
t.Logf("outputoo %+v\n", result_optimized[:stage1_length])
diff_count := 0
for i := range result_sa16 {
if result_sa16[i] != result_optimized[i] {
diff_count++
}}
//t.Logf("difference count %d ",diff_count)
for i := range result_sa16 {
if int32(result_sa16[i]) != result_sa32[i] || result_sa16[i] != result_optimized[i] {
t.Fatalf("suffix array internal failed %d, seed %d",i,seed)
}
}
}
}
*/
var cases [][]byte
func init() {
rand.Seed(1)
alphabet := "abcdefghjijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890"
n := len(alphabet)
_ = n
scales := []int{stage1_length}
cases = make([][]byte, len(scales))
for i, scale := range scales {
l := scale
buf := make([]byte, int(l))
for j := 0; j < int(l); j++ {
buf[j] = byte(rand.Uint32() & 0xff) //alphabet[rand.Intn(n)]
}
cases[i] = buf
}
//POW16([]byte{0x99})
}
func BenchmarkPOW16(t *testing.B) {
rand.Read(cases[0][:])
for i := 0; i < t.N; i++ {
_ = POW16(cases[0][:])
}
}
func BenchmarkPOW32(t *testing.B) {
rand.Read(cases[0][:])
for i := 0; i < t.N; i++ {
_ = POW32(cases[0][:])
}
}
/*
func BenchmarkOptimized(t *testing.B) {
rand.Read(cases[0][:])
for i := 0; i < t.N; i++ {
_ = POW_optimized(cases[0][:])
}
}
*/

astrobwt/gen.go
@@ -0,0 +1,93 @@
// Copyright 2019 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build ignore
// +build ignore
// Gen generates sais2.go by duplicating functions in sais.go
// using different input types.
// See the comment at the top of sais.go for details.
package main
import (
"bytes"
"io/ioutil"
"log"
"strings"
)
func main() {
log.SetPrefix("gen: ")
log.SetFlags(0)
data, err := ioutil.ReadFile("sais.go")
if err != nil {
log.Fatal(err)
}
x := bytes.Index(data, []byte("\n\n"))
if x < 0 {
log.Fatal("cannot find blank line after copyright comment")
}
var buf bytes.Buffer
buf.Write(data[:x])
buf.WriteString("\n\n// Code generated by go generate; DO NOT EDIT.\n\npackage suffixarray\n")
for {
x := bytes.Index(data, []byte("\nfunc "))
if x < 0 {
break
}
data = data[x:]
p := bytes.IndexByte(data, '(')
if p < 0 {
p = len(data)
}
name := string(data[len("\nfunc "):p])
x = bytes.Index(data, []byte("\n}\n"))
if x < 0 {
log.Fatalf("cannot find end of func %s", name)
}
fn := string(data[:x+len("\n}\n")])
data = data[x+len("\n}"):]
if strings.HasSuffix(name, "_32") {
buf.WriteString(fix32.Replace(fn))
}
if strings.HasSuffix(name, "_8_32") {
// x_8_32 -> x_8_64 done above
fn = fix8_32.Replace(stripByteOnly(fn))
buf.WriteString(fn)
buf.WriteString(fix32.Replace(fn))
}
}
if err := ioutil.WriteFile("sais2.go", buf.Bytes(), 0666); err != nil {
log.Fatal(err)
}
}
var fix32 = strings.NewReplacer(
"32", "64",
"int32", "int64",
)
var fix8_32 = strings.NewReplacer(
"_8_32", "_32",
"byte", "int32",
)
func stripByteOnly(s string) string {
lines := strings.SplitAfter(s, "\n")
w := 0
for _, line := range lines {
if !strings.Contains(line, "256") && !strings.Contains(line, "byte-only") {
lines[w] = line
w++
}
}
return strings.Join(lines[:w], "")
}
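The two replacers do all of the type-rewriting work in the generator: fix8_32 first rewrites a byte-input function into its integer-input form, and fix32 then widens the 32-bit names to 64-bit. Note that strings.NewReplacer tries the patterns in argument order at each position, so "int32" is matched as a whole before its trailing "32" can be. A standalone sketch of how the two rewrites compose on a function signature:

```go
package main

import (
	"fmt"
	"strings"
)

// Same pairs as gen.go: widen 32-bit names to 64-bit,
// and rewrite byte-input functions into integer-input ones.
var (
	fix32   = strings.NewReplacer("32", "64", "int32", "int64")
	fix8_32 = strings.NewReplacer("_8_32", "_32", "byte", "int32")
)

func main() {
	sig := "func sais_8_32(text []byte, textMax int, sa, tmp []int32)"
	widened := fix8_32.Replace(sig) // byte input -> int32 input
	fmt.Println(widened)
	fmt.Println(fix32.Replace(widened)) // int32 -> int64 everywhere
}
```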

astrobwt/gen16.go
@@ -0,0 +1,93 @@
// Copyright 2019 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:build ignore
// +build ignore
// Gen generates sais16.go by duplicating functions in sais.go
// using different input types.
// See the comment at the top of sais.go for details.
package main
import (
"bytes"
"io/ioutil"
"log"
"strings"
)
func main() {
log.SetPrefix("gen: ")
log.SetFlags(0)
data, err := ioutil.ReadFile("sais.go")
if err != nil {
log.Fatal(err)
}
x := bytes.Index(data, []byte("\n\n"))
if x < 0 {
log.Fatal("cannot find blank line after copyright comment")
}
var buf bytes.Buffer
buf.Write(data[:x])
buf.WriteString("\n\n// Code generated by go generate; DO NOT EDIT.\n\npackage astrobwt\n")
for {
x := bytes.Index(data, []byte("\nfunc "))
if x < 0 {
break
}
data = data[x:]
p := bytes.IndexByte(data, '(')
if p < 0 {
p = len(data)
}
name := string(data[len("\nfunc "):p])
x = bytes.Index(data, []byte("\n}\n"))
if x < 0 {
log.Fatalf("cannot find end of func %s", name)
}
fn := string(data[:x+len("\n}\n")])
data = data[x+len("\n}"):]
if strings.HasSuffix(name, "_32") {
buf.WriteString(fix32.Replace(fn))
}
if strings.HasSuffix(name, "_8_32") {
// x_8_32 -> x_8_64 done above
fn = fix8_32.Replace(stripByteOnly(fn))
//buf.WriteString(fn)
buf.WriteString(fix32.Replace(fn))
}
}
if err := ioutil.WriteFile("sais16.go", buf.Bytes(), 0666); err != nil {
log.Fatal(err)
}
}
var fix32 = strings.NewReplacer(
"32", "16",
"int32", "int16",
)
var fix8_32 = strings.NewReplacer(
"_8_32", "_16",
"byte", "int16",
)
func stripByteOnly(s string) string {
lines := strings.SplitAfter(s, "\n")
w := 0
for _, line := range lines {
if !strings.Contains(line, "256") && !strings.Contains(line, "byte-only") {
lines[w] = line
w++
}
}
return strings.Join(lines[:w], "")
}

astrobwt/sais.go
@@ -0,0 +1,899 @@
// Copyright 2019 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Suffix array construction by induced sorting (SAIS).
// See Ge Nong, Sen Zhang, and Wai Hong Chen,
// "Two Efficient Algorithms for Linear Time Suffix Array Construction",
// especially section 3 (https://ieeexplore.ieee.org/document/5582081).
// See also http://zork.net/~st/jottings/sais.html.
//
// With optimizations inspired by Yuta Mori's sais-lite
// (https://sites.google.com/site/yuta256/sais).
//
// And with other new optimizations.
// Many of these functions are parameterized by the sizes of
// the types they operate on. The generator gen.go makes
// copies of these functions for use with other sizes.
// Specifically:
//
// - A function with a name ending in _8_32 takes []byte and []int32 arguments
// and is duplicated into _32_32, _8_64, and _64_64 forms.
// The _32_32 and _64_64 suffixes are shortened to plain _32 and _64.
// Any lines in the function body that contain the text "byte-only" or "256"
// are stripped when creating _32_32 and _64_64 forms.
// (Those lines are typically 8-bit-specific optimizations.)
//
// - A function with a name ending only in _32 operates on []int32
// and is duplicated into a _64 form. (Note that it may still take a []byte,
// but there is no need for a version of the function in which the []byte
// is widened to a full integer array.)
// The overall runtime of this code is linear in the input size:
// it runs a sequence of linear passes to reduce the problem to
// a subproblem at most half as big, invokes itself recursively,
// and then runs a sequence of linear passes to turn the answer
// for the subproblem into the answer for the original problem.
// This gives T(N) = O(N) + T(N/2) = O(N) + O(N/2) + O(N/4) + ... = O(N).
//
// The outline of the code, with the forward and backward scans
// through O(N)-sized arrays called out, is:
//
// sais_I_N
// placeLMS_I_B
// bucketMax_I_B
// freq_I_B
// <scan +text> (1)
// <scan +freq> (2)
// <scan -text, random bucket> (3)
// induceSubL_I_B
// bucketMin_I_B
// freq_I_B
// <scan +text, often optimized away> (4)
// <scan +freq> (5)
// <scan +sa, random text, random bucket> (6)
// induceSubS_I_B
// bucketMax_I_B
// freq_I_B
// <scan +text, often optimized away> (7)
// <scan +freq> (8)
// <scan -sa, random text, random bucket> (9)
// assignID_I_B
// <scan +sa, random text substrings> (10)
// map_B
// <scan -sa> (11)
// recurse_B
// (recursive call to sais_B_B for a subproblem of size at most 1/2 input, often much smaller)
// unmap_I_B
// <scan -text> (12)
// <scan +sa> (13)
// expand_I_B
// bucketMax_I_B
// freq_I_B
// <scan +text, often optimized away> (14)
// <scan +freq> (15)
// <scan -sa, random text, random bucket> (16)
// induceL_I_B
// bucketMin_I_B
// freq_I_B
// <scan +text, often optimized away> (17)
// <scan +freq> (18)
// <scan +sa, random text, random bucket> (19)
// induceS_I_B
// bucketMax_I_B
// freq_I_B
// <scan +text, often optimized away> (20)
// <scan +freq> (21)
// <scan -sa, random text, random bucket> (22)
//
// Here, _B indicates the suffix array size (_32 or _64) and _I the input size (_8 or _B).
//
// The outline shows there are in general 22 scans through
// O(N)-sized arrays for a given level of the recursion.
// In the top level, operating on 8-bit input text,
// the six freq scans are fixed size (256) instead of potentially
// input-sized. Also, the frequency is counted once and cached
// whenever there is room to do so (there is nearly always room in general,
// and always room at the top level), which eliminates all but
// the first freq_I_B text scans (that is, 5 of the 6).
// So the top level of the recursion only does 22 - 6 - 5 = 11
// input-sized scans and a typical level does 16 scans.
//
// The linear scans do not cost anywhere near as much as
// the random accesses to the text made during a few of
// the scans (specifically #6, #9, #16, #19, #22 marked above).
// In real texts, there is not much but some locality to
// the accesses, due to the repetitive structure of the text
// (the same reason Burrows-Wheeler compression is so effective).
// For random inputs, there is no locality, which makes those
// accesses even more expensive, especially once the text
// no longer fits in cache.
// For example, running on 50 MB of Go source code, induceSubL_8_32
// (which runs only once, at the top level of the recursion)
// takes 0.44s, while on 50 MB of random input, it takes 2.55s.
// Nearly all the relative slowdown is explained by the text access:
//
// c0, c1 := text[k-1], text[k]
//
// That line runs for 0.23s on the Go text and 2.02s on random text.
//go:generate go run gen.go
package astrobwt
// text_32 returns the suffix array for the input text.
// It requires that len(text) fit in an int32
// and that the caller zero sa.
func text_32(text []byte, sa []int32) {
if int(int32(len(text))) != len(text) || len(text) != len(sa) {
panic("suffixarray: misuse of text_32")
}
sais_8_32(text, 256, sa, make([]int32, 2*256))
}
// sais_8_32 computes the suffix array of text.
// The text must contain only values in [0, textMax).
// The suffix array is stored in sa, which the caller
// must ensure is already zeroed.
// The caller must also provide temporary space tmp
// with len(tmp) ≥ textMax. If len(tmp) ≥ 2*textMax
// then the algorithm runs a little faster.
// If sais_8_32 modifies tmp, it sets tmp[0] = -1 on return.
func sais_8_32(text []byte, textMax int, sa, tmp []int32) {
if len(sa) != len(text) || len(tmp) < int(textMax) {
panic("suffixarray: misuse of sais_8_32")
}
// Trivial base cases. Sorting 0 or 1 things is easy.
if len(text) == 0 {
return
}
if len(text) == 1 {
sa[0] = 0
return
}
// Establish slices indexed by text character
// holding character frequency and bucket-sort offsets.
// If there's only enough tmp for one slice,
// we make it the bucket offsets and recompute
// the character frequency each time we need it.
var freq, bucket []int32
if len(tmp) >= 2*textMax {
freq, bucket = tmp[:textMax], tmp[textMax:2*textMax]
freq[0] = -1 // mark as uninitialized
} else {
freq, bucket = nil, tmp[:textMax]
}
// The SAIS algorithm.
// Each of these calls makes one scan through sa.
// See the individual functions for documentation
// about each's role in the algorithm.
numLMS := placeLMS_8_32(text, sa, freq, bucket)
if numLMS <= 1 {
// 0 or 1 items are already sorted. Do nothing.
} else {
induceSubL_8_32(text, sa, freq, bucket)
induceSubS_8_32(text, sa, freq, bucket)
length_8_32(text, sa, numLMS)
maxID := assignID_8_32(text, sa, numLMS)
if maxID < numLMS {
map_32(sa, numLMS)
recurse_32(sa, tmp, numLMS, maxID)
unmap_8_32(text, sa, numLMS)
} else {
// If maxID == numLMS, then each LMS-substring
// is unique, so the relative ordering of two LMS-suffixes
// is determined by just the leading LMS-substring.
// That is, the LMS-suffix sort order matches the
// (simpler) LMS-substring sort order.
// Copy the original LMS-substring order into the
// suffix array destination.
copy(sa, sa[len(sa)-numLMS:])
}
expand_8_32(text, freq, bucket, sa, numLMS)
}
induceL_8_32(text, sa, freq, bucket)
induceS_8_32(text, sa, freq, bucket)
// Mark for caller that we overwrote tmp.
tmp[0] = -1
}
// freq_8_32 returns the character frequencies
// for text, as a slice indexed by character value.
// If freq is nil, freq_8_32 uses and returns bucket.
// If freq is non-nil, freq_8_32 assumes that freq[0] >= 0
// means the frequencies are already computed.
// If the frequency data is overwritten or uninitialized,
// the caller must set freq[0] = -1 to force recomputation
// the next time it is needed.
func freq_8_32(text []byte, freq, bucket []int32) []int32 {
if freq != nil && freq[0] >= 0 {
return freq // already computed
}
if freq == nil {
freq = bucket
}
freq = freq[:256] // eliminate bounds check for freq[c] below
for i := range freq {
freq[i] = 0
}
for _, c := range text {
freq[c]++
}
return freq
}
// bucketMin_8_32 stores into bucket[c] the minimum index
// in the bucket for character c in a bucket-sort of text.
func bucketMin_8_32(text []byte, freq, bucket []int32) {
freq = freq_8_32(text, freq, bucket)
freq = freq[:256] // establish len(freq) = 256, so 0 ≤ i < 256 below
bucket = bucket[:256] // eliminate bounds check for bucket[i] below
total := int32(0)
for i, n := range freq {
bucket[i] = total
total += n
}
}
// bucketMax_8_32 stores into bucket[c] the maximum index
// in the bucket for character c in a bucket-sort of text.
// The bucket indexes for c are [min, max).
// That is, max is one past the final index in that bucket.
func bucketMax_8_32(text []byte, freq, bucket []int32) {
freq = freq_8_32(text, freq, bucket)
freq = freq[:256] // establish len(freq) = 256, so 0 ≤ i < 256 below
bucket = bucket[:256] // eliminate bounds check for bucket[i] below
total := int32(0)
for i, n := range freq {
total += n
bucket[i] = total
}
}
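bucketMin and bucketMax are just the exclusive and inclusive prefix sums of the character frequencies: for each byte value c, they give the first index and the one-past-last index of c's bucket in the suffix array. A standalone sketch on a tiny input (hypothetical helper name, not part of the package):

```go
package main

import "fmt"

// bucketBounds returns, for each byte value, the first index (min)
// and one-past-last index (max) of its bucket in a bucket sort of
// text: min is the exclusive prefix sum of the frequencies, max the
// inclusive one, mirroring bucketMin_8_32 / bucketMax_8_32.
func bucketBounds(text []byte) (min, max [256]int) {
	var freq [256]int
	for _, c := range text {
		freq[c]++
	}
	total := 0
	for i, n := range freq {
		min[i] = total
		total += n
		max[i] = total
	}
	return
}

func main() {
	min, max := bucketBounds([]byte("banana"))
	fmt.Println(min['a'], max['a']) // 'a' occupies sa[0:3]
	fmt.Println(min['b'], max['b']) // 'b' occupies sa[3:4]
	fmt.Println(min['n'], max['n']) // 'n' occupies sa[4:6]
}
```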
// The SAIS algorithm proceeds in a sequence of scans through sa.
// Each of the following functions implements one scan,
// and the functions appear here in the order they execute in the algorithm.
// placeLMS_8_32 places into sa the indexes of the
// final characters of the LMS substrings of text,
// sorted into the rightmost ends of their correct buckets
// in the suffix array.
//
// The imaginary sentinel character at the end of the text
// is the final character of the final LMS substring, but there
// is no bucket for the imaginary sentinel character,
// which has a smaller value than any real character.
// The caller must therefore pretend that sa[-1] == len(text).
//
// The text indexes of LMS-substring characters are always ≥ 1
// (the first LMS-substring must be preceded by one or more L-type
// characters that are not part of any LMS-substring),
// so using 0 as a “not present” suffix array entry is safe,
// both in this function and in most later functions
// (until induceL_8_32 below).
func placeLMS_8_32(text []byte, sa, freq, bucket []int32) int {
bucketMax_8_32(text, freq, bucket)
numLMS := 0
lastB := int32(-1)
bucket = bucket[:256] // eliminate bounds check for bucket[c1] below
// The next stanza of code (until the blank line) loop backward
// over text, stopping to execute a code body at each position i
// such that text[i] is an L-character and text[i+1] is an S-character.
// That is, i+1 is the position of the start of an LMS-substring.
// These could be hoisted out into a function with a callback,
// but at a significant speed cost. Instead, we just write these
// seven lines a few times in this source file. The copies below
// refer back to the pattern established by this original as the
// "LMS-substring iterator".
//
// In every scan through the text, c0, c1 are successive characters of text.
// In this backward scan, c0 == text[i] and c1 == text[i+1].
// By scanning backward, we can keep track of whether the current
// position is type-S or type-L according to the usual definition:
//
// - position len(text) is type S with text[len(text)] == -1 (the sentinel)
// - position i is type S if text[i] < text[i+1], or if text[i] == text[i+1] && i+1 is type S.
// - position i is type L if text[i] > text[i+1], or if text[i] == text[i+1] && i+1 is type L.
//
// The backward scan lets us maintain the current type,
// update it when we see c0 != c1, and otherwise leave it alone.
// We want to identify all S positions with a preceding L.
// Position len(text) is one such position by definition, but we have
// nowhere to write it down, so we eliminate it by untruthfully
// setting isTypeS = false at the start of the loop.
c0, c1, isTypeS := byte(0), byte(0), false
for i := len(text) - 1; i >= 0; i-- {
c0, c1 = text[i], c0
if c0 < c1 {
isTypeS = true
} else if c0 > c1 && isTypeS {
isTypeS = false
// Bucket the index i+1 for the start of an LMS-substring.
b := bucket[c1] - 1
bucket[c1] = b
sa[b] = int32(i + 1)
lastB = b
numLMS++
}
}
// We recorded the LMS-substring starts but really want the ends.
// Luckily, with two differences, the start indexes and the end indexes are the same.
// The first difference is that the rightmost LMS-substring's end index is len(text),
// so the caller must pretend that sa[-1] == len(text), as noted above.
// The second difference is that the first leftmost LMS-substring start index
// does not end an earlier LMS-substring, so as an optimization we can omit
// that leftmost LMS-substring start index (the last one we wrote).
//
// Exception: if numLMS <= 1, the caller is not going to bother with
// the recursion at all and will treat the result as containing LMS-substring starts.
// In that case, we don't remove the final entry.
if numLMS > 1 {
sa[lastB] = 0
}
return numLMS
}
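The S/L classification used by the backward scan above is easy to check by hand on a tiny input. A standalone sketch that classifies each position of "banana" against the implicit sentinel and reports the LMS-substring start positions (an S-type position immediately preceded by an L-type position):

```go
package main

import "fmt"

// lmsStarts classifies every position of text as type S or type L
// (relative to the implicit sentinel, smaller than any real byte)
// and returns the LMS positions: S positions preceded by an L.
func lmsStarts(text []byte) []int {
	n := len(text)
	isS := make([]bool, n+1)
	isS[n] = true // the imaginary sentinel is type S
	// Position n-1 is always type L: text[n-1] > sentinel,
	// so the loop starts at n-2 and leaves isS[n-1] false.
	for i := n - 2; i >= 0; i-- {
		isS[i] = text[i] < text[i+1] || (text[i] == text[i+1] && isS[i+1])
	}
	var lms []int
	for i := 1; i < n; i++ {
		if isS[i] && !isS[i-1] {
			lms = append(lms, i)
		}
	}
	return lms
}

func main() {
	// "banana" has types L S L S L L (plus the S sentinel),
	// so the LMS-substring starts are positions 1 and 3.
	fmt.Println(lmsStarts([]byte("banana")))
}
```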
// induceSubL_8_32 inserts the L-type text indexes of LMS-substrings
// into sa, assuming that the final characters of the LMS-substrings
// are already inserted into sa, sorted by final character, and at the
// right (not left) end of the corresponding character bucket.
// Each LMS-substring has the form (as a regexp) /S+L+S/:
// one or more S-type, one or more L-type, final S-type.
// induceSubL_8_32 leaves behind only the leftmost L-type text
// index for each LMS-substring. That is, it removes the final S-type
// indexes that are present on entry, and it inserts but then removes
// the interior L-type indexes too.
// (Only the leftmost L-type index is needed by induceSubS_8_32.)
func induceSubL_8_32(text []byte, sa, freq, bucket []int32) {
// Initialize positions for left side of character buckets.
bucketMin_8_32(text, freq, bucket)
bucket = bucket[:256] // eliminate bounds check for bucket[cB] below
// As we scan the array left-to-right, each sa[i] = j > 0 is a correctly
// sorted suffix array entry (for text[j:]) for which we know that j-1 is type L.
// Because j-1 is type L, inserting it into sa now will sort it correctly.
// But we want to distinguish a j-1 with j-2 of type L from type S.
// We can process the former but want to leave the latter for the caller.
// We record the difference by negating j-1 if it is preceded by type S.
// Either way, the insertion (into the text[j-1] bucket) is guaranteed to
// happen at sa[i´] for some i´ > i, that is, in the portion of sa we have
// yet to scan. A single pass therefore sees indexes j, j-1, j-2, j-3,
// and so on, in sorted but not necessarily adjacent order, until it finds
// one preceded by an index of type S, at which point it must stop.
//
// As we scan through the array, we clear the worked entries (sa[i] > 0) to zero,
// and we flip sa[i] < 0 to -sa[i], so that the loop finishes with sa containing
// only the indexes of the leftmost L-type indexes for each LMS-substring.
//
// The suffix array sa therefore serves simultaneously as input, output,
// and a miraculously well-tailored work queue.
// placeLMS_8_32 left out the implicit entry sa[-1] == len(text),
// corresponding to the identified type-L index len(text)-1.
// Process it before the left-to-right scan of sa proper.
// See body in loop for commentary.
k := len(text) - 1
c0, c1 := text[k-1], text[k]
if c0 < c1 {
k = -k
}
// Cache recently used bucket index:
// we're processing suffixes in sorted order
// and accessing buckets indexed by the
// byte before the sorted order, which still
// has very good locality.
// Invariant: b is cached, possibly dirty copy of bucket[cB].
cB := c1
b := bucket[cB]
sa[b] = int32(k)
b++
for i := 0; i < len(sa); i++ {
j := int(sa[i])
if j == 0 {
// Skip empty entry.
continue
}
if j < 0 {
// Leave discovered type-S index for caller.
sa[i] = int32(-j)
continue
}
sa[i] = 0
// Index j was on work queue, meaning k := j-1 is L-type,
// so we can now place k correctly into sa.
// If k-1 is L-type, queue k for processing later in this loop.
// If k-1 is S-type (text[k-1] < text[k]), queue -k to save for the caller.
k := j - 1
c0, c1 := text[k-1], text[k]
if c0 < c1 {
k = -k
}
if cB != c1 {
bucket[cB] = b
cB = c1
b = bucket[cB]
}
sa[b] = int32(k)
b++
}
}
// induceSubS_8_32 inserts the S-type text indexes of LMS-substrings
// into sa, assuming that the leftmost L-type text indexes are already
// inserted into sa, sorted by LMS-substring suffix, and at the
// left end of the corresponding character bucket.
// Each LMS-substring has the form (as a regexp) /S+L+S/:
// one or more S-type, one or more L-type, final S-type.
// induceSubS_8_32 leaves behind only the leftmost S-type text
// index for each LMS-substring, in sorted order, at the right end of sa.
// That is, it removes the L-type indexes that are present on entry,
// and it inserts but then removes the interior S-type indexes too,
// leaving the LMS-substring start indexes packed into sa[len(sa)-numLMS:].
// (Only the LMS-substring start indexes are processed by the recursion.)
func induceSubS_8_32(text []byte, sa, freq, bucket []int32) {
// Initialize positions for right side of character buckets.
bucketMax_8_32(text, freq, bucket)
bucket = bucket[:256] // eliminate bounds check for bucket[cB] below
// Analogous to induceSubL_8_32 above,
// as we scan the array right-to-left, each sa[i] = j > 0 is a correctly
// sorted suffix array entry (for text[j:]) for which we know that j-1 is type S.
// Because j-1 is type S, inserting it into sa now will sort it correctly.
// But we want to distinguish a j-1 with j-2 of type S from type L.
// We can process the former but want to leave the latter for the caller.
// We record the difference by negating j-1 if it is preceded by type L.
// Either way, the insertion (into the text[j-1] bucket) is guaranteed to
// happen at sa[i´] for some i´ < i, that is, in the portion of sa we have
// yet to scan. A single pass therefore sees indexes j, j-1, j-2, j-3,
// and so on, in sorted but not necessarily adjacent order, until it finds
// one preceded by an index of type L, at which point it must stop.
// That index (preceded by one of type L) is an LMS-substring start.
//
// As we scan through the array, we clear the worked entries (sa[i] > 0) to zero,
// and we flip sa[i] < 0 to -sa[i] and compact into the top of sa,
// so that the loop finishes with the top of sa containing exactly
// the LMS-substring start indexes, sorted by LMS-substring.
// Cache recently used bucket index:
cB := byte(0)
b := bucket[cB]
top := len(sa)
for i := len(sa) - 1; i >= 0; i-- {
j := int(sa[i])
if j == 0 {
// Skip empty entry.
continue
}
sa[i] = 0
if j < 0 {
// Leave discovered LMS-substring start index for caller.
top--
sa[top] = int32(-j)
continue
}
// Index j was on work queue, meaning k := j-1 is S-type,
// so we can now place k correctly into sa.
// If k-1 is S-type, queue k for processing later in this loop.
// If k-1 is L-type (text[k-1] > text[k]), queue -k to save for the caller.
k := j - 1
c1 := text[k]
c0 := text[k-1]
if c0 > c1 {
k = -k
}
if cB != c1 {
bucket[cB] = b
cB = c1
b = bucket[cB]
}
b--
sa[b] = int32(k)
}
}
// length_8_32 computes and records the length of each LMS-substring in text.
// The length of the LMS-substring at index j is stored at sa[j/2],
// avoiding the LMS-substring indexes already stored in the top half of sa.
// (If index j is an LMS-substring start, then index j-1 is type L and cannot be one.)
// There are two exceptions, made for optimizations in name_8_32 below.
//
// First, the final LMS-substring is recorded as having length 0, which is otherwise
// impossible, instead of giving it a length that includes the implicit sentinel.
// This ensures the final LMS-substring has length unequal to all others
// and therefore can be detected as different without text comparison
// (it is unequal because it is the only one that ends in the implicit sentinel,
// and the text comparison would be problematic since the implicit sentinel
// is not actually present at text[len(text)]).
//
// Second, to avoid text comparison entirely, if an LMS-substring is very short,
// sa[j/2] records its actual text instead of its length, so that if two such
// substrings have matching “length,” the text need not be read at all.
// The definition of “very short” is that the text bytes must pack into a uint32,
// and the unsigned encoding e must be ≥ len(text), so that it can be
// distinguished from a valid length.
func length_8_32(text []byte, sa []int32, numLMS int) {
end := 0 // index of current LMS-substring end (0 indicates final LMS-substring)
// The encoding of N text bytes into a “length” word
// adds 1 to each byte, packs them into the bottom
// N*8 bits of a word, and then bitwise inverts the result.
// That is, the text sequence A B C (hex 41 42 43)
// encodes as ^uint32(0x42_43_44).
// LMS-substrings can never start or end with 0xFF.
// Adding 1 ensures the encoded byte sequence never
// starts or ends with 0x00, so that present bytes can be
// distinguished from zero-padding in the top bits,
// so the length need not be separately encoded.
// Inverting the bytes increases the chance that a
// 4-byte encoding will still be ≥ len(text).
// In particular, if the first byte is ASCII (<= 0x7E, so +1 <= 0x7F)
// then the high bit of the inversion will be set,
// making it clearly not a valid length (it would be a negative one).
//
// cx holds the pre-inverted encoding (the packed incremented bytes).
cx := uint32(0) // byte-only
// This stanza (until the blank line) is the "LMS-substring iterator",
// described in placeLMS_8_32 above, with one line added to maintain cx.
c0, c1, isTypeS := byte(0), byte(0), false
for i := len(text) - 1; i >= 0; i-- {
c0, c1 = text[i], c0
cx = cx<<8 | uint32(c1+1) // byte-only
if c0 < c1 {
isTypeS = true
} else if c0 > c1 && isTypeS {
isTypeS = false
// Index j = i+1 is the start of an LMS-substring.
// Compute length or encoded text to store in sa[j/2].
j := i + 1
var code int32
if end == 0 {
code = 0
} else {
code = int32(end - j)
if code <= 32/8 && ^cx >= uint32(len(text)) { // byte-only
code = int32(^cx) // byte-only
} // byte-only
}
sa[j>>1] = code
end = j + 1
cx = uint32(c1 + 1) // byte-only
}
}
}
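The packed “length” encoding described in the comments above can be sketched standalone. The `encode` helper below is a toy for illustration only, not part of this package; it packs the bytes left to right to match the A-B-C example in the comment, whereas the real code accumulates `cx` incrementally during its right-to-left scan.

```go
package main

import "fmt"

// encode adds 1 to each text byte, packs the bytes into the
// bottom bits of a uint32, and bitwise-inverts the result,
// as described in the comment inside length_8_32.
func encode(text []byte) uint32 {
	cx := uint32(0)
	for _, c := range text {
		cx = cx<<8 | uint32(c+1)
	}
	return ^cx
}

func main() {
	// "ABC" (hex 41 42 43) packs as 0x42_43_44; inverting sets the
	// unused high byte to 0xFF, so the encoding is far larger than
	// any plausible len(text) and cannot be mistaken for a length.
	fmt.Printf("%#x\n", encode([]byte("ABC"))) // 0xffbdbcbb
}
```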
// assignID_8_32 assigns a dense ID numbering to the
// set of LMS-substrings respecting string ordering and equality,
// returning the maximum assigned ID.
// For example given the input "ababab", the LMS-substrings
// are "aba", "aba", and "ab", renumbered as 2 2 1.
// sa[len(sa)-numLMS:] holds the LMS-substring indexes
// sorted in string order, so to assign numbers we can
// consider each in turn, removing adjacent duplicates.
// The new ID for the LMS-substring at index j is written to sa[j/2],
// overwriting the length previously stored there (by length_8_32 above).
func assignID_8_32(text []byte, sa []int32, numLMS int) int {
id := 0
lastLen := int32(-1) // impossible
lastPos := int32(0)
for _, j := range sa[len(sa)-numLMS:] {
// Is the LMS-substring at index j new, or is it the same as the last one we saw?
n := sa[j/2]
if n != lastLen {
goto New
}
if uint32(n) >= uint32(len(text)) {
// “Length” is really encoded full text, and they match.
goto Same
}
{
// Compare actual texts.
n := int(n)
this := text[j:][:n]
last := text[lastPos:][:n]
for i := 0; i < n; i++ {
if this[i] != last[i] {
goto New
}
}
goto Same
}
New:
id++
lastPos = j
lastLen = n
Same:
sa[j/2] = int32(id)
}
return id
}
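The adjacent-duplicate renumbering can be sketched on plain strings. `denseIDs` below is a hypothetical helper for illustration; the real code compares stored lengths (or packed texts) first and falls back to byte comparison, and it writes the IDs into sa[j/2] rather than returning a slice.

```go
package main

import "fmt"

// denseIDs numbers the entries of an already-sorted slice,
// giving equal neighbors the same ID (IDs start at 1): the same
// renumbering assignID_8_32 applies to the sorted LMS-substrings.
func denseIDs(sorted []string) []int {
	ids := make([]int, len(sorted))
	id := 0
	for i, s := range sorted {
		if i == 0 || s != sorted[i-1] {
			id++ // new distinct value
		}
		ids[i] = id
	}
	return ids
}

func main() {
	// "ababab" has LMS-substrings "ab" < "aba" == "aba".
	fmt.Println(denseIDs([]string{"ab", "aba", "aba"})) // [1 2 2]
}
```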
// map_32 maps the LMS-substrings in text to their new IDs,
// producing the subproblem for the recursion.
// The mapping itself was mostly applied by assignID_8_32:
// sa[i] is either 0, the ID for the LMS-substring at index 2*i,
// or the ID for the LMS-substring at index 2*i+1.
// To produce the subproblem we need only remove the zeros
// and change ID into ID-1 (our IDs start at 1, but text chars start at 0).
//
// map_32 packs the result, which is the input to the recursion,
// into the top of sa, so that the recursion result can be stored
// in the bottom of sa, which sets up for expand_8_32 well.
func map_32(sa []int32, numLMS int) {
w := len(sa)
for i := len(sa) / 2; i >= 0; i-- {
j := sa[i]
if j > 0 {
w--
sa[w] = j - 1
}
}
}
// recurse_32 calls sais_32 recursively to solve the subproblem we've built.
// The subproblem is at the right end of sa, the suffix array result will be
// written at the left end of sa, and the middle of sa is available for use as
// temporary frequency and bucket storage.
func recurse_32(sa, oldTmp []int32, numLMS, maxID int) {
dst, saTmp, text := sa[:numLMS], sa[numLMS:len(sa)-numLMS], sa[len(sa)-numLMS:]
// Set up temporary space for recursive call.
	// We must pass sais_32 a tmp buffer with at least maxID entries.
//
// The subproblem is guaranteed to have length at most len(sa)/2,
// so that sa can hold both the subproblem and its suffix array.
// Nearly all the time, however, the subproblem has length < len(sa)/3,
// in which case there is a subproblem-sized middle of sa that
// we can reuse for temporary space (saTmp).
// When recurse_32 is called from sais_8_32, oldTmp is length 512
// (from text_32), and saTmp will typically be much larger, so we'll use saTmp.
// When deeper recursions come back to recurse_32, now oldTmp is
// the saTmp from the top-most recursion, it is typically larger than
// the current saTmp (because the current sa gets smaller and smaller
// as the recursion gets deeper), and we keep reusing that top-most
// large saTmp instead of the offered smaller ones.
//
// Why is the subproblem length so often just under len(sa)/3?
// See Nong, Zhang, and Chen, section 3.6 for a plausible explanation.
// In brief, the len(sa)/2 case would correspond to an SLSLSLSLSLSL pattern
// in the input, perfect alternation of larger and smaller input bytes.
// Real text doesn't do that. If each L-type index is randomly followed
// by either an L-type or S-type index, then half the substrings will
// be of the form SLS, but the other half will be longer. Of that half,
// half (a quarter overall) will be SLLS; an eighth will be SLLLS, and so on.
// Not counting the final S in each (which overlaps the first S in the next),
	// this works out to an average length 2×½ + 3×¼ + 4×⅛ + ... = 3.
// The space we need is further reduced by the fact that many of the
// short patterns like SLS will often be the same character sequences
// repeated throughout the text, reducing maxID relative to numLMS.
//
// For short inputs, the averages may not run in our favor, but then we
// can often fall back to using the length-512 tmp available in the
// top-most call. (Also a short allocation would not be a big deal.)
//
// For pathological inputs, we fall back to allocating a new tmp of length
// max(maxID, numLMS/2). This level of the recursion needs maxID,
// and all deeper levels of the recursion will need no more than numLMS/2,
// so this one allocation is guaranteed to suffice for the entire stack
// of recursive calls.
tmp := oldTmp
if len(tmp) < len(saTmp) {
tmp = saTmp
}
if len(tmp) < numLMS {
// TestSAIS/forcealloc reaches this code.
n := maxID
if n < numLMS/2 {
n = numLMS / 2
}
tmp = make([]int32, n)
}
// sais_32 requires that the caller arrange to clear dst,
// because in general the caller may know dst is
// freshly-allocated and already cleared. But this one is not.
for i := range dst {
dst[i] = 0
}
sais_32(text, maxID, dst, tmp)
}
// unmap_8_32 unmaps the subproblem back to the original.
// sa[:numLMS] is the LMS-substring numbers, which don't matter much anymore.
// sa[len(sa)-numLMS:] is the sorted list of those LMS-substring numbers.
// The key part is that if the list says K, that means the K'th substring.
// We can replace sa[:numLMS] with the indexes of the LMS-substrings.
// Then if the list says K it really means sa[K].
// Having mapped the list back to LMS-substring indexes,
// we can place those into the right buckets.
func unmap_8_32(text []byte, sa []int32, numLMS int) {
unmap := sa[len(sa)-numLMS:]
j := len(unmap)
// "LMS-substring iterator" (see placeLMS_8_32 above).
c0, c1, isTypeS := byte(0), byte(0), false
for i := len(text) - 1; i >= 0; i-- {
c0, c1 = text[i], c0
if c0 < c1 {
isTypeS = true
} else if c0 > c1 && isTypeS {
isTypeS = false
// Populate inverse map.
j--
unmap[j] = int32(i + 1)
}
}
// Apply inverse map to subproblem suffix array.
sa = sa[:numLMS]
for i := 0; i < len(sa); i++ {
sa[i] = unmap[sa[i]]
}
}
// expand_8_32 distributes the compacted, sorted LMS-suffix indexes
// from sa[:numLMS] into the tops of the appropriate buckets in sa,
// preserving the sorted order and making room for the L-type indexes
// to be slotted into the sorted sequence by induceL_8_32.
func expand_8_32(text []byte, freq, bucket, sa []int32, numLMS int) {
bucketMax_8_32(text, freq, bucket)
	bucket = bucket[:256] // eliminate bounds check for bucket[c] below
// Loop backward through sa, always tracking
// the next index to populate from sa[:numLMS].
// When we get to one, populate it.
// Zero the rest of the slots; they have dead values in them.
x := numLMS - 1
saX := sa[x]
c := text[saX]
b := bucket[c] - 1
bucket[c] = b
for i := len(sa) - 1; i >= 0; i-- {
if i != int(b) {
sa[i] = 0
continue
}
sa[i] = saX
// Load next entry to put down (if any).
if x > 0 {
x--
saX = sa[x] // TODO bounds check
c = text[saX]
b = bucket[c] - 1
bucket[c] = b
}
}
}
// induceL_8_32 inserts L-type text indexes into sa,
// assuming that the leftmost S-type indexes are inserted
// into sa, in sorted order, in the right bucket halves.
// It leaves all the L-type indexes in sa, but the
// leftmost L-type indexes are negated, to mark them
// for processing by induceS_8_32.
func induceL_8_32(text []byte, sa, freq, bucket []int32) {
// Initialize positions for left side of character buckets.
bucketMin_8_32(text, freq, bucket)
bucket = bucket[:256] // eliminate bounds check for bucket[cB] below
// This scan is similar to the one in induceSubL_8_32 above.
// That one arranges to clear all but the leftmost L-type indexes.
// This scan leaves all the L-type indexes and the original S-type
// indexes, but it negates the positive leftmost L-type indexes
// (the ones that induceS_8_32 needs to process).
// expand_8_32 left out the implicit entry sa[-1] == len(text),
// corresponding to the identified type-L index len(text)-1.
// Process it before the left-to-right scan of sa proper.
// See body in loop for commentary.
k := len(text) - 1
c0, c1 := text[k-1], text[k]
if c0 < c1 {
k = -k
}
// Cache recently used bucket index.
cB := c1
b := bucket[cB]
sa[b] = int32(k)
b++
for i := 0; i < len(sa); i++ {
j := int(sa[i])
if j <= 0 {
// Skip empty or negated entry (including negated zero).
continue
}
// Index j was on work queue, meaning k := j-1 is L-type,
// so we can now place k correctly into sa.
// If k-1 is L-type, queue k for processing later in this loop.
// If k-1 is S-type (text[k-1] < text[k]), queue -k to save for the caller.
// If k is zero, k-1 doesn't exist, so we only need to leave it
// for the caller. The caller can't tell the difference between
// an empty slot and a non-empty zero, but there's no need
// to distinguish them anyway: the final suffix array will end up
// with one zero somewhere, and that will be a real zero.
k := j - 1
c1 := text[k]
if k > 0 {
if c0 := text[k-1]; c0 < c1 {
k = -k
}
}
if cB != c1 {
bucket[cB] = b
cB = c1
b = bucket[cB]
}
sa[b] = int32(k)
b++
}
}
// induceS_8_32 inserts the S-type text indexes into sa,
// assuming the L-type indexes were placed by induceL_8_32
// and that the entries still to process are marked by negation.
// It restores each negated entry to its positive form as it is
// scanned, completing the suffix array.
func induceS_8_32(text []byte, sa, freq, bucket []int32) {
// Initialize positions for right side of character buckets.
bucketMax_8_32(text, freq, bucket)
bucket = bucket[:256] // eliminate bounds check for bucket[cB] below
cB := byte(0)
b := bucket[cB]
for i := len(sa) - 1; i >= 0; i-- {
j := int(sa[i])
if j >= 0 {
// Skip non-flagged entry.
// (This loop can't see an empty entry; 0 means the real zero index.)
continue
}
// Negative j is a work queue entry; rewrite to positive j for final suffix array.
j = -j
sa[i] = int32(j)
// Index j was on work queue (encoded as -j but now decoded),
	// meaning k := j-1 is S-type,
// so we can now place k correctly into sa.
// If k-1 is S-type, queue -k for processing later in this loop.
// If k-1 is L-type (text[k-1] > text[k]), queue k to save for the caller.
// If k is zero, k-1 doesn't exist, so we only need to leave it
// for the caller.
k := j - 1
c1 := text[k]
if k > 0 {
if c0 := text[k-1]; c0 <= c1 {
k = -k
}
}
if cB != c1 {
bucket[cB] = b
cB = c1
b = bucket[cB]
}
b--
sa[b] = int32(k)
}
}
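The right-to-left "LMS-substring iterator" stanza shared by placeLMS_8_32, length_8_32, and unmap_8_32 can be isolated as a standalone sketch (`lmsStarts` is a hypothetical helper, not part of this package):

```go
package main

import "fmt"

// lmsStarts reproduces the right-to-left scan used throughout this
// file: isTypeS records whether text[i+1] is S-type, and an
// LMS-substring starts wherever an S-type position follows an
// L-type one.
func lmsStarts(text []byte) []int {
	var starts []int
	c0, c1, isTypeS := byte(0), byte(0), false
	for i := len(text) - 1; i >= 0; i-- {
		c0, c1 = text[i], c0
		if c0 < c1 {
			isTypeS = true
		} else if c0 > c1 && isTypeS {
			isTypeS = false
			starts = append(starts, i+1)
		}
	}
	// The scan discovers starts right to left; reverse into text order.
	for i, j := 0, len(starts)-1; i < j; i, j = i+1, j-1 {
		starts[i], starts[j] = starts[j], starts[i]
	}
	return starts
}

func main() {
	fmt.Println(lmsStarts([]byte("banana"))) // [1 3]
}
```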

astrobwt/sais16.go (new file, 1196 lines; diff suppressed because it is too large)

astrobwt/sais2.go (new file, 1741 lines; diff suppressed because it is too large)

astrobwt/suffixarray.go (new file, 385 lines)
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package suffixarray implements substring search in logarithmic time using
// an in-memory suffix array.
//
// Example use:
//
// // create index for some data
// index := suffixarray.New(data)
//
// // lookup byte slice s
// offsets1 := index.Lookup(s, -1) // the list of all indices where s occurs in data
// offsets2 := index.Lookup(s, 3) // the list of at most 3 indices where s occurs in data
//
package astrobwt
import (
"bytes"
"encoding/binary"
"errors"
"io"
"math"
"regexp"
"sort"
)
// Can change for testing
var maxData32 int = realMaxData32
const realMaxData32 = math.MaxInt32
// Index implements a suffix array for fast substring search.
type Index struct {
data []byte
sa ints // suffix array for data; sa.len() == len(data)
}
// An ints is either an []int32 or an []int64.
// That is, one of them is empty, and one is the real data.
// The int64 form is used when len(data) > maxData32
type ints struct {
int32 []int32
int64 []int64
}
func (a *ints) len() int {
return len(a.int32) + len(a.int64)
}
func (a *ints) get(i int) int64 {
if a.int32 != nil {
return int64(a.int32[i])
}
return a.int64[i]
}
func (a *ints) set(i int, v int64) {
if a.int32 != nil {
a.int32[i] = int32(v)
} else {
a.int64[i] = v
}
}
func (a *ints) slice(i, j int) ints {
if a.int32 != nil {
return ints{a.int32[i:j], nil}
}
return ints{nil, a.int64[i:j]}
}
// New creates a new Index for data.
// Index creation time is O(N) for N = len(data).
func New(data []byte) *Index {
ix := &Index{data: data}
if len(data) <= maxData32 {
ix.sa.int32 = make([]int32, len(data))
text_32(data, ix.sa.int32)
} else {
ix.sa.int64 = make([]int64, len(data))
text_64(data, ix.sa.int64)
}
return ix
}
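Since this file is adapted from the Go standard library's index/suffixarray, the same API can be exercised with the stdlib package directly (usage sketch):

```go
package main

import (
	"fmt"
	"index/suffixarray" // the upstream package this vendored file is adapted from
	"sort"
)

// barOffsets builds an index over repeated "barbara" text and
// returns the sorted offsets of "bar".
func barOffsets() []int {
	index := suffixarray.New([]byte("barbarabarbara"))
	offsets := index.Lookup([]byte("bar"), -1) // unsorted list of all occurrences
	sort.Ints(offsets)
	return offsets
}

func main() {
	fmt.Println(barOffsets()) // [0 3 7 10]
}
```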
// writeInt writes an int x to w using buf to buffer the write.
func writeInt(w io.Writer, buf []byte, x int) error {
binary.PutVarint(buf, int64(x))
_, err := w.Write(buf[0:binary.MaxVarintLen64])
return err
}
// readInt reads an int x from r using buf to buffer the read and returns x.
func readInt(r io.Reader, buf []byte) (int64, error) {
_, err := io.ReadFull(r, buf[0:binary.MaxVarintLen64]) // ok to continue with error
x, _ := binary.Varint(buf)
return x, err
}
// writeSlice writes data[:n] to w and returns n.
// It uses buf to buffer the write.
func writeSlice(w io.Writer, buf []byte, data ints) (n int, err error) {
// encode as many elements as fit into buf
p := binary.MaxVarintLen64
m := data.len()
for ; n < m && p+binary.MaxVarintLen64 <= len(buf); n++ {
p += binary.PutUvarint(buf[p:], uint64(data.get(n)))
}
// update buffer size
binary.PutVarint(buf, int64(p))
// write buffer
_, err = w.Write(buf[0:p])
return
}
var errTooBig = errors.New("suffixarray: data too large")
// readSlice reads data[:n] from r and returns n.
// It uses buf to buffer the read.
func readSlice(r io.Reader, buf []byte, data ints) (n int, err error) {
// read buffer size
var size64 int64
size64, err = readInt(r, buf)
if err != nil {
return
}
if int64(int(size64)) != size64 || int(size64) < 0 {
// We never write chunks this big anyway.
return 0, errTooBig
}
size := int(size64)
// read buffer w/o the size
if _, err = io.ReadFull(r, buf[binary.MaxVarintLen64:size]); err != nil {
return
}
// decode as many elements as present in buf
for p := binary.MaxVarintLen64; p < size; n++ {
x, w := binary.Uvarint(buf[p:])
data.set(n, int64(x))
p += w
}
return
}
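writeSlice and readSlice frame chunks of varint-encoded elements; the underlying primitive is encoding/binary's uvarint round trip, shown in a minimal sketch:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// roundTrip encodes v with PutUvarint and decodes it back: the
// element encoding under writeSlice/readSlice's chunk format above.
func roundTrip(v uint64) (uint64, int) {
	buf := make([]byte, binary.MaxVarintLen64)
	binary.PutUvarint(buf, v)
	return binary.Uvarint(buf)
}

func main() {
	v, n := roundTrip(300)
	fmt.Println(v, n) // 300 2 (values ≥ 128 need a second varint byte)
}
```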
const bufSize = 16 << 10 // reasonable for BenchmarkSaveRestore
// Read reads the index from r into x; x must not be nil.
func (x *Index) Read(r io.Reader) error {
// buffer for all reads
buf := make([]byte, bufSize)
// read length
n64, err := readInt(r, buf)
if err != nil {
return err
}
if int64(int(n64)) != n64 || int(n64) < 0 {
return errTooBig
}
n := int(n64)
// allocate space
if 2*n < cap(x.data) || cap(x.data) < n || x.sa.int32 != nil && n > maxData32 || x.sa.int64 != nil && n <= maxData32 {
// new data is significantly smaller or larger than
// existing buffers - allocate new ones
x.data = make([]byte, n)
x.sa.int32 = nil
x.sa.int64 = nil
if n <= maxData32 {
x.sa.int32 = make([]int32, n)
} else {
x.sa.int64 = make([]int64, n)
}
} else {
// re-use existing buffers
x.data = x.data[0:n]
x.sa = x.sa.slice(0, n)
}
// read data
if _, err := io.ReadFull(r, x.data); err != nil {
return err
}
// read index
sa := x.sa
for sa.len() > 0 {
n, err := readSlice(r, buf, sa)
if err != nil {
return err
}
sa = sa.slice(n, sa.len())
}
return nil
}
// Write writes the index x to w.
func (x *Index) Write(w io.Writer) error {
// buffer for all writes
buf := make([]byte, bufSize)
// write length
if err := writeInt(w, buf, len(x.data)); err != nil {
return err
}
// write data
if _, err := w.Write(x.data); err != nil {
return err
}
// write index
sa := x.sa
for sa.len() > 0 {
n, err := writeSlice(w, buf, sa)
if err != nil {
return err
}
sa = sa.slice(n, sa.len())
}
return nil
}
// Bytes returns the data over which the index was created.
// It must not be modified.
//
func (x *Index) Bytes() []byte {
return x.data
}
func (x *Index) at(i int) []byte {
return x.data[x.sa.get(i):]
}
// lookupAll returns a slice into the matching region of the index.
// The runtime is O(log(N)*len(s)).
func (x *Index) lookupAll(s []byte) ints {
// find matching suffix index range [i:j]
// find the first index where s would be the prefix
i := sort.Search(x.sa.len(), func(i int) bool { return bytes.Compare(x.at(i), s) >= 0 })
// starting at i, find the first index at which s is not a prefix
j := i + sort.Search(x.sa.len()-i, func(j int) bool { return !bytes.HasPrefix(x.at(j+i), s) })
return x.sa.slice(i, j)
}
// Lookup returns an unsorted list of at most n indices where the byte string s
// occurs in the indexed data. If n < 0, all occurrences are returned.
// The result is nil if s is empty, s is not found, or n == 0.
// Lookup time is O(log(N)*len(s) + len(result)) where N is the
// size of the indexed data.
//
func (x *Index) Lookup(s []byte, n int) (result []int) {
if len(s) > 0 && n != 0 {
matches := x.lookupAll(s)
count := matches.len()
if n < 0 || count < n {
n = count
}
// 0 <= n <= count
if n > 0 {
result = make([]int, n)
if matches.int32 != nil {
for i := range result {
result[i] = int(matches.int32[i])
}
} else {
for i := range result {
result[i] = int(matches.int64[i])
}
}
}
}
return
}
// FindAllIndex returns a sorted list of non-overlapping matches of the
// regular expression r, where a match is a pair of indices specifying
// the matched slice of x.Bytes(). If n < 0, all matches are returned
// in successive order. Otherwise, at most n matches are returned and
// they may not be successive. The result is nil if there are no matches,
// or if n == 0.
//
func (x *Index) FindAllIndex(r *regexp.Regexp, n int) (result [][]int) {
// a non-empty literal prefix is used to determine possible
// match start indices with Lookup
prefix, complete := r.LiteralPrefix()
lit := []byte(prefix)
// worst-case scenario: no literal prefix
if prefix == "" {
return r.FindAllIndex(x.data, n)
}
// if regexp is a literal just use Lookup and convert its
// result into match pairs
if complete {
// Lookup returns indices that may belong to overlapping matches.
// After eliminating them, we may end up with fewer than n matches.
// If we don't have enough at the end, redo the search with an
// increased value n1, but only if Lookup returned all the requested
// indices in the first place (if it returned fewer than that then
// there cannot be more).
for n1 := n; ; n1 += 2 * (n - len(result)) /* overflow ok */ {
indices := x.Lookup(lit, n1)
if len(indices) == 0 {
return
}
sort.Ints(indices)
pairs := make([]int, 2*len(indices))
result = make([][]int, len(indices))
count := 0
prev := 0
for _, i := range indices {
if count == n {
break
}
// ignore indices leading to overlapping matches
if prev <= i {
j := 2 * count
pairs[j+0] = i
pairs[j+1] = i + len(lit)
result[count] = pairs[j : j+2]
count++
prev = i + len(lit)
}
}
result = result[0:count]
if len(result) >= n || len(indices) != n1 {
// found all matches or there's no chance to find more
// (n and n1 can be negative)
break
}
}
if len(result) == 0 {
result = nil
}
return
}
// regexp has a non-empty literal prefix; Lookup(lit) computes
// the indices of possible complete matches; use these as starting
// points for anchored searches
// (regexp "^" matches beginning of input, not beginning of line)
r = regexp.MustCompile("^" + r.String()) // compiles because r compiled
// same comment about Lookup applies here as in the loop above
for n1 := n; ; n1 += 2 * (n - len(result)) /* overflow ok */ {
indices := x.Lookup(lit, n1)
if len(indices) == 0 {
return
}
sort.Ints(indices)
result = result[0:0]
prev := 0
for _, i := range indices {
if len(result) == n {
break
}
m := r.FindIndex(x.data[i:]) // anchored search - will not run off
// ignore indices leading to overlapping matches
if m != nil && prev <= i {
m[0] = i // correct m
m[1] += i
result = append(result, m)
prev = m[1]
}
}
if len(result) >= n || len(indices) != n1 {
// found all matches or there's no chance to find more
// (n and n1 can be negative)
break
}
}
if len(result) == 0 {
result = nil
}
return
}
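A usage sketch of FindAllIndex, again via the stdlib index/suffixarray package this file mirrors:

```go
package main

import (
	"fmt"
	"index/suffixarray" // upstream package this vendored file mirrors
	"regexp"
)

// allMatches returns every non-overlapping match of abc? in the data,
// in successive order (n < 0 requests all matches).
func allMatches() [][]int {
	x := suffixarray.New([]byte("abcabxabcd"))
	return x.FindAllIndex(regexp.MustCompile("abc?"), -1)
}

func main() {
	fmt.Println(allMatches()) // [[0 3] [3 5] [6 9]]
}
```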

(new test file, 615 lines)
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package astrobwt
import (
"bytes"
"fmt"
"io/ioutil"
"math/rand"
"os"
"path/filepath"
"regexp"
"sort"
"strings"
"testing"
)
type testCase struct {
name string // name of test case
source string // source to index
patterns []string // patterns to lookup
}
var testCases = []testCase{
{
"empty string",
"",
[]string{
"",
"foo",
"(foo)",
".*",
"a*",
},
},
{
"all a's",
"aaaaaaaaaa", // 10 a's
[]string{
"",
"a",
"aa",
"aaa",
"aaaa",
"aaaaa",
"aaaaaa",
"aaaaaaa",
"aaaaaaaa",
"aaaaaaaaa",
"aaaaaaaaaa",
"aaaaaaaaaaa", // 11 a's
".",
".*",
"a+",
"aa+",
"aaaa[b]?",
"aaa*",
},
},
{
"abc",
"abc",
[]string{
"a",
"b",
"c",
"ab",
"bc",
"abc",
"a.c",
"a(b|c)",
"abc?",
},
},
{
"barbara*3",
"barbarabarbarabarbara",
[]string{
"a",
"bar",
"rab",
"arab",
"barbar",
"bara?bar",
},
},
{
"typing drill",
"Now is the time for all good men to come to the aid of their country.",
[]string{
"Now",
"the time",
"to come the aid",
"is the time for all good men to come to the aid of their",
"to (come|the)?",
},
},
{
"godoc simulation",
"package main\n\nimport(\n \"rand\"\n ",
[]string{},
},
}
// find all occurrences of s in source; report at most n occurrences
func find(src, s string, n int) []int {
var res []int
if s != "" && n != 0 {
// find at most n occurrences of s in src
for i := -1; n < 0 || len(res) < n; {
j := strings.Index(src[i+1:], s)
if j < 0 {
break
}
i += j + 1
res = append(res, i)
}
}
return res
}
func testLookup(t *testing.T, tc *testCase, x *Index, s string, n int) {
res := x.Lookup([]byte(s), n)
exp := find(tc.source, s, n)
// check that the lengths match
if len(res) != len(exp) {
t.Errorf("test %q, lookup %q (n = %d): expected %d results; got %d", tc.name, s, n, len(exp), len(res))
}
// if n >= 0 the number of results is limited --- unless n >= all results,
// we may obtain different positions from the Index and from find (because
// Index may not find the results in the same order as find) => in general
// we cannot simply check that the res and exp lists are equal
// check that each result is in fact a correct match and there are no duplicates
sort.Ints(res)
for i, r := range res {
if r < 0 || len(tc.source) <= r {
t.Errorf("test %q, lookup %q, result %d (n = %d): index %d out of range [0, %d[", tc.name, s, i, n, r, len(tc.source))
} else if !strings.HasPrefix(tc.source[r:], s) {
t.Errorf("test %q, lookup %q, result %d (n = %d): index %d not a match", tc.name, s, i, n, r)
}
if i > 0 && res[i-1] == r {
t.Errorf("test %q, lookup %q, result %d (n = %d): found duplicate index %d", tc.name, s, i, n, r)
}
}
if n < 0 {
// all results computed - sorted res and exp must be equal
for i, r := range res {
e := exp[i]
if r != e {
t.Errorf("test %q, lookup %q, result %d: expected index %d; got %d", tc.name, s, i, e, r)
}
}
}
}
func testFindAllIndex(t *testing.T, tc *testCase, x *Index, rx *regexp.Regexp, n int) {
res := x.FindAllIndex(rx, n)
exp := rx.FindAllStringIndex(tc.source, n)
// check that the lengths match
if len(res) != len(exp) {
t.Errorf("test %q, FindAllIndex %q (n = %d): expected %d results; got %d", tc.name, rx, n, len(exp), len(res))
}
// if n >= 0 the number of results is limited --- unless n >= all results,
// we may obtain different positions from the Index and from regexp (because
// Index may not find the results in the same order as regexp) => in general
// we cannot simply check that the res and exp lists are equal
// check that each result is in fact a correct match and the result is sorted
for i, r := range res {
if r[0] < 0 || r[0] > r[1] || len(tc.source) < r[1] {
t.Errorf("test %q, FindAllIndex %q, result %d (n == %d): illegal match [%d, %d]", tc.name, rx, i, n, r[0], r[1])
} else if !rx.MatchString(tc.source[r[0]:r[1]]) {
t.Errorf("test %q, FindAllIndex %q, result %d (n = %d): [%d, %d] not a match", tc.name, rx, i, n, r[0], r[1])
}
}
if n < 0 {
// all results computed - sorted res and exp must be equal
for i, r := range res {
e := exp[i]
if r[0] != e[0] || r[1] != e[1] {
t.Errorf("test %q, FindAllIndex %q, result %d: expected match [%d, %d]; got [%d, %d]",
tc.name, rx, i, e[0], e[1], r[0], r[1])
}
}
}
}
func testLookups(t *testing.T, tc *testCase, x *Index, n int) {
for _, pat := range tc.patterns {
testLookup(t, tc, x, pat, n)
if rx, err := regexp.Compile(pat); err == nil {
testFindAllIndex(t, tc, x, rx, n)
}
}
}
// index is used to hide the sort.Interface
type index Index
func (x *index) Len() int { return x.sa.len() }
func (x *index) Less(i, j int) bool { return bytes.Compare(x.at(i), x.at(j)) < 0 }
func (x *index) Swap(i, j int) {
if x.sa.int32 != nil {
x.sa.int32[i], x.sa.int32[j] = x.sa.int32[j], x.sa.int32[i]
} else {
x.sa.int64[i], x.sa.int64[j] = x.sa.int64[j], x.sa.int64[i]
}
}
func (x *index) at(i int) []byte {
return x.data[x.sa.get(i):]
}
func testConstruction(t *testing.T, tc *testCase, x *Index) {
if !sort.IsSorted((*index)(x)) {
t.Errorf("failed testConstruction %s", tc.name)
}
}
func equal(x, y *Index) bool {
if !bytes.Equal(x.data, y.data) {
return false
}
if x.sa.len() != y.sa.len() {
return false
}
n := x.sa.len()
for i := 0; i < n; i++ {
if x.sa.get(i) != y.sa.get(i) {
return false
}
}
return true
}
// returns the serialized index size
func testSaveRestore(t *testing.T, tc *testCase, x *Index) int {
var buf bytes.Buffer
if err := x.Write(&buf); err != nil {
t.Errorf("failed writing index %s (%s)", tc.name, err)
}
size := buf.Len()
var y Index
if err := y.Read(bytes.NewReader(buf.Bytes())); err != nil {
t.Errorf("failed reading index %s (%s)", tc.name, err)
}
if !equal(x, &y) {
t.Errorf("restored index doesn't match saved index %s", tc.name)
}
old := maxData32
defer func() {
maxData32 = old
}()
// Reread as forced 32.
y = Index{}
maxData32 = realMaxData32
if err := y.Read(bytes.NewReader(buf.Bytes())); err != nil {
t.Errorf("failed reading index %s (%s)", tc.name, err)
}
if !equal(x, &y) {
t.Errorf("restored index doesn't match saved index %s", tc.name)
}
// Reread as forced 64.
y = Index{}
maxData32 = -1
if err := y.Read(bytes.NewReader(buf.Bytes())); err != nil {
t.Errorf("failed reading index %s (%s)", tc.name, err)
}
if !equal(x, &y) {
t.Errorf("restored index doesn't match saved index %s", tc.name)
}
return size
}
func testIndex(t *testing.T) {
for _, tc := range testCases {
x := New([]byte(tc.source))
testConstruction(t, &tc, x)
testSaveRestore(t, &tc, x)
testLookups(t, &tc, x, 0)
testLookups(t, &tc, x, 1)
testLookups(t, &tc, x, 10)
testLookups(t, &tc, x, 2e9)
testLookups(t, &tc, x, -1)
}
}
func TestIndex32(t *testing.T) {
testIndex(t)
}
func TestIndex64(t *testing.T) {
maxData32 = -1
defer func() {
maxData32 = realMaxData32
}()
testIndex(t)
}
func TestNew32(t *testing.T) {
test(t, func(x []byte) []int {
sa := make([]int32, len(x))
text_32(x, sa)
out := make([]int, len(sa))
for i, v := range sa {
out[i] = int(v)
}
return out
})
}
func TestNew64(t *testing.T) {
test(t, func(x []byte) []int {
sa := make([]int64, len(x))
text_64(x, sa)
out := make([]int, len(sa))
for i, v := range sa {
out[i] = int(v)
}
return out
})
}
// test tests an arbitrary suffix array construction function.
// Generates many inputs, builds and checks suffix arrays.
func test(t *testing.T, build func([]byte) []int) {
t.Run("ababab...", func(t *testing.T) {
// Very repetitive input has numLMS = len(x)/2-1
// at top level, the largest it can be.
// But maxID is only two (aba and ab$).
size := 100000
if testing.Short() {
size = 10000
}
x := make([]byte, size)
for i := range x {
x[i] = "ab"[i%2]
}
testSA(t, x, build)
})
t.Run("forcealloc", func(t *testing.T) {
// Construct a pathological input that forces
// recurse_32 to allocate a new temporary buffer.
// The input must have more than N/3 LMS-substrings,
// which we arrange by repeating an SLSLSLSLSLSL pattern
// like ababab... above, but then we must also arrange
// for a large number of distinct LMS-substrings.
// We use this pattern:
// 1 255 1 254 1 253 1 ... 1 2 1 255 2 254 2 253 2 252 2 ...
// This gives approximately 2¹⁵ distinct LMS-substrings.
// We need to repeat at least one substring, though,
// or else the recursion can be bypassed entirely.
x := make([]byte, 100000, 100001)
lo := byte(1)
hi := byte(255)
for i := range x {
if i%2 == 0 {
x[i] = lo
} else {
x[i] = hi
hi--
if hi <= lo {
lo++
if lo == 0 {
lo = 1
}
hi = 255
}
}
}
x[:cap(x)][len(x)] = 0 // for sais.New
testSA(t, x, build)
})
t.Run("exhaustive2", func(t *testing.T) {
// All inputs over {0,1} up to length 21.
// Runs in about 10 seconds on my laptop.
x := make([]byte, 30)
numFail := 0
for n := 0; n <= 21; n++ {
if n > 12 && testing.Short() {
break
}
x[n] = 0 // for sais.New
testRec(t, x[:n], 0, 2, &numFail, build)
}
})
t.Run("exhaustive3", func(t *testing.T) {
// All inputs over {0,1,2} up to length 14.
// Runs in about 10 seconds on my laptop.
x := make([]byte, 30)
numFail := 0
for n := 0; n <= 14; n++ {
if n > 8 && testing.Short() {
break
}
x[n] = 0 // for sais.New
testRec(t, x[:n], 0, 3, &numFail, build)
}
})
}
// testRec fills x[i:] with all possible combinations of values in [1,max]
// and then calls testSA(t, x, build) for each one.
func testRec(t *testing.T, x []byte, i, max int, numFail *int, build func([]byte) []int) {
if i < len(x) {
for x[i] = 1; x[i] <= byte(max); x[i]++ {
testRec(t, x, i+1, max, numFail, build)
}
return
}
if !testSA(t, x, build) {
*numFail++
if *numFail >= 10 {
t.Errorf("stopping after %d failures", *numFail)
t.FailNow()
}
}
}
// testSA tests the suffix array build function on the input x.
// It constructs the suffix array and then checks that it is correct.
func testSA(t *testing.T, x []byte, build func([]byte) []int) bool {
defer func() {
if e := recover(); e != nil {
t.Logf("build %v", x)
panic(e)
}
}()
sa := build(x)
if len(sa) != len(x) {
t.Errorf("build %v: len(sa) = %d, want %d", x, len(sa), len(x))
return false
}
for i := 0; i+1 < len(sa); i++ {
if sa[i] < 0 || sa[i] >= len(x) || sa[i+1] < 0 || sa[i+1] >= len(x) {
t.Errorf("build %v: sa out of range: %v", x, sa)
return false
}
if bytes.Compare(x[sa[i]:], x[sa[i+1]:]) >= 0 {
t.Errorf("build %v -> %v\nsa[%d:] = %d,%d out of order", x, sa, i, sa[i], sa[i+1])
return false
}
}
return true
}
var (
benchdata = make([]byte, 1e6)
benchrand = make([]byte, 1e6)
)
// Of all possible inputs, the random bytes have the least amount of substring
// repetition, and the repeated bytes have the most. For most algorithms,
// the running time of every input will be between these two.
func benchmarkNew(b *testing.B, random bool) {
b.ReportAllocs()
b.StopTimer()
data := benchdata
if random {
data = benchrand
if data[0] == 0 {
for i := range data {
data[i] = byte(rand.Intn(256))
}
}
}
b.StartTimer()
b.SetBytes(int64(len(data)))
for i := 0; i < b.N; i++ {
New(data)
}
}
func makeText(name string) ([]byte, error) {
var data []byte
switch name {
case "opticks":
var err error
data, err = ioutil.ReadFile("../../testdata/Isaac.Newton-Opticks.txt")
if err != nil {
return nil, err
}
case "go":
err := filepath.Walk("../..", func(path string, info os.FileInfo, err error) error {
if err == nil && strings.HasSuffix(path, ".go") && !info.IsDir() {
file, err := ioutil.ReadFile(path)
if err != nil {
return err
}
data = append(data, file...)
}
return nil
})
if err != nil {
return nil, err
}
case "zero":
data = make([]byte, 50e6)
case "rand":
data = make([]byte, 50e6)
for i := range data {
data[i] = byte(rand.Intn(256))
}
default:
return nil, fmt.Errorf("unknown text %q", name)
}
return data, nil
}
func setBits(bits int) (cleanup func()) {
if bits == 32 {
maxData32 = realMaxData32
} else {
maxData32 = -1 // force use of 64-bit code
}
return func() {
maxData32 = realMaxData32
}
}
func BenchmarkNew(b *testing.B) {
for _, text := range []string{"opticks", "go", "zero", "rand"} {
b.Run("text="+text, func(b *testing.B) {
data, err := makeText(text)
if err != nil {
b.Fatal(err)
}
if testing.Short() && len(data) > 5e6 {
data = data[:5e6]
}
for _, size := range []int{100e3, 500e3, 1e6, 5e6, 10e6, 50e6} {
if len(data) < size {
continue
}
data := data[:size]
name := fmt.Sprintf("%dK", size/1e3)
if size >= 1e6 {
name = fmt.Sprintf("%dM", size/1e6)
}
b.Run("size="+name, func(b *testing.B) {
for _, bits := range []int{32, 64} {
if ^uint(0) == 0xffffffff && bits == 64 {
continue
}
b.Run(fmt.Sprintf("bits=%d", bits), func(b *testing.B) {
cleanup := setBits(bits)
defer cleanup()
b.SetBytes(int64(len(data)))
b.ReportAllocs()
for i := 0; i < b.N; i++ {
New(data)
}
})
}
})
}
})
}
}
func BenchmarkSaveRestore(b *testing.B) {
r := rand.New(rand.NewSource(0x5a77a1)) // guarantee always same sequence
data := make([]byte, 1<<20) // 1MB of data to index
for i := range data {
data[i] = byte(r.Intn(256))
}
for _, bits := range []int{32, 64} {
if ^uint(0) == 0xffffffff && bits == 64 {
continue
}
b.Run(fmt.Sprintf("bits=%d", bits), func(b *testing.B) {
cleanup := setBits(bits)
defer cleanup()
b.StopTimer()
x := New(data)
size := testSaveRestore(nil, nil, x) // verify correctness
buf := bytes.NewBuffer(make([]byte, size)) // avoid growing
b.SetBytes(int64(size))
b.StartTimer()
b.ReportAllocs()
for i := 0; i < b.N; i++ {
buf.Reset()
if err := x.Write(buf); err != nil {
b.Fatal(err)
}
var y Index
if err := y.Read(buf); err != nil {
b.Fatal(err)
}
}
})
}
}

block/LICENSE Normal file

@ -0,0 +1,90 @@
RESEARCH LICENSE
Version 1.1.2
I. DEFINITIONS.
"Licensee" means You and any other party that has entered into and has in effect a version of this License.
"Licensor" means DERO PROJECT(GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8) and its successors and assignees.
"Modifications" means any (a) change or addition to the Technology or (b) new source or object code implementing any portion of the Technology.
"Research Use" means research, evaluation, or development for the purpose of advancing knowledge, teaching, learning, or customizing the Technology for personal use. Research Use expressly excludes use or distribution for direct or indirect commercial (including strategic) gain or advantage.
"Technology" means the source code, object code and specifications of the technology made available by Licensor pursuant to this License.
"Technology Site" means the website designated by Licensor for accessing the Technology.
"You" means the individual executing this License or the legal entity or entities represented by the individual executing this License.
II. PURPOSE.
Licensor is licensing the Technology under this Research License (the "License") to promote research, education, innovation, and development using the Technology.
COMMERCIAL USE AND DISTRIBUTION OF TECHNOLOGY AND MODIFICATIONS IS PERMITTED ONLY UNDER AN APPROPRIATE COMMERCIAL USE LICENSE AVAILABLE FROM LICENSOR AT <url>.
III. RESEARCH USE RIGHTS.
A. Subject to the conditions contained herein, Licensor grants to You a non-exclusive, non-transferable, worldwide, and royalty-free license to do the following for Your Research Use only:
1. reproduce, create Modifications of, and use the Technology alone, or with Modifications;
2. share source code of the Technology alone, or with Modifications, with other Licensees;
3. distribute object code of the Technology, alone, or with Modifications, to any third parties for Research Use only, under a license of Your choice that is consistent with this License; and
4. publish papers and books discussing the Technology which may include relevant excerpts that do not in the aggregate constitute a significant portion of the Technology.
B. Residual Rights. You may use any information in intangible form that you remember after accessing the Technology, except when such use violates Licensor's copyrights or patent rights.
C. No Implied Licenses. Other than the rights granted herein, Licensor retains all rights, title, and interest in Technology, and You retain all rights, title, and interest in Your Modifications and associated specifications, subject to the terms of this License.
D. Open Source Licenses. Portions of the Technology may be provided with notices and open source licenses from open source communities and third parties that govern the use of those portions, and any licenses granted hereunder do not alter any rights and obligations you may have under such open source licenses, however, the disclaimer of warranty and limitation of liability provisions in this License will apply to all Technology in this distribution.
IV. INTELLECTUAL PROPERTY REQUIREMENTS
As a condition to Your License, You agree to comply with the following restrictions and responsibilities:
A. License and Copyright Notices. You must include a copy of this License in a Readme file for any Technology or Modifications you distribute. You must also include the following statement, "Use and distribution of this technology is subject to the Java Research License included herein", (a) once prominently in the source code tree and/or specifications for Your source code distributions, and (b) once in the same file as Your copyright or proprietary notices for Your binary code distributions. You must cause any files containing Your Modification to carry prominent notice stating that You changed the files. You must not remove or alter any copyright or other proprietary notices in the Technology.
B. Licensee Exchanges. Any Technology and Modifications You receive from any Licensee are governed by this License.
V. GENERAL TERMS.
A. Disclaimer Of Warranties.
TECHNOLOGY IS PROVIDED "AS IS", WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, WARRANTIES THAT ANY SUCH TECHNOLOGY IS FREE OF DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR PURPOSE, OR NON-INFRINGING OF THIRD PARTY RIGHTS. YOU AGREE THAT YOU BEAR THE ENTIRE RISK IN CONNECTION WITH YOUR USE AND DISTRIBUTION OF ANY AND ALL TECHNOLOGY UNDER THIS LICENSE.
B. Infringement; Limitation Of Liability.
1. If any portion of, or functionality implemented by, the Technology becomes the subject of a claim or threatened claim of infringement ("Affected Materials"), Licensor may, in its unrestricted discretion, suspend Your rights to use and distribute the Affected Materials under this License. Such suspension of rights will be effective immediately upon Licensor's posting of notice of suspension on the Technology Site.
2. IN NO EVENT WILL LICENSOR BE LIABLE FOR ANY DIRECT, INDIRECT, PUNITIVE, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH OR ARISING OUT OF THIS LICENSE (INCLUDING, WITHOUT LIMITATION, LOSS OF PROFITS, USE, DATA, OR ECONOMIC ADVANTAGE OF ANY SORT), HOWEVER IT ARISES AND ON ANY THEORY OF LIABILITY (including negligence), WHETHER OR NOT LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. LIABILITY UNDER THIS SECTION V.B.2 SHALL BE SO LIMITED AND EXCLUDED, NOTWITHSTANDING FAILURE OF THE ESSENTIAL PURPOSE OF ANY REMEDY.
C. Termination.
1. You may terminate this License at any time by notifying Licensor in writing.
2. All Your rights will terminate under this License if You fail to comply with any of its material terms or conditions and do not cure such failure within thirty (30) days after becoming aware of such noncompliance.
3. Upon termination, You must discontinue all uses and distribution of the Technology, and all provisions of this Section V shall survive termination.
D. Miscellaneous.
1. Trademark. You agree to comply with Licensor's Trademark & Logo Usage Requirements, if any and as modified from time to time, available at the Technology Site. Except as expressly provided in this License, You are granted no rights in or to any Licensor's trademarks now or hereafter used or licensed by Licensor.
2. Integration. This License represents the complete agreement of the parties concerning the subject matter hereof.
3. Severability. If any provision of this License is held unenforceable, such provision shall be reformed to the extent necessary to make it enforceable unless to do so would defeat the intent of the parties, in which case, this License shall terminate.
4. Governing Law. This License is governed by the laws of the United States and the State of California, as applied to contracts entered into and performed in California between California residents. In no event shall this License be construed against the drafter.
5. Export Control. You agree to comply with the U.S. export controls and trade laws of other countries that apply to Technology and Modifications.
READ ALL THE TERMS OF THIS LICENSE CAREFULLY BEFORE ACCEPTING.
BY CLICKING ON THE YES BUTTON BELOW OR USING THE TECHNOLOGY, YOU ARE ACCEPTING AND AGREEING TO ABIDE BY THE TERMS AND CONDITIONS OF THIS LICENSE. YOU MUST BE AT LEAST 18 YEARS OF AGE AND OTHERWISE COMPETENT TO ENTER INTO CONTRACTS.
IF YOU DO NOT MEET THESE CRITERIA, OR YOU DO NOT AGREE TO ANY OF THE TERMS OF THIS LICENSE, DO NOT USE THIS SOFTWARE IN ANY FORM.

block/block.go Normal file

@ -0,0 +1,315 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package block
import "fmt"
import "time"
import "bytes"
import "strings"
import "runtime/debug"
import "encoding/hex"
import "encoding/binary"
import "golang.org/x/crypto/sha3"
import "github.com/deroproject/derohe/cryptography/crypto"
//import "github.com/deroproject/derosuite/config"
import "github.com/deroproject/derohe/transaction"
type Block struct {
Major_Version uint64 `json:"major_version"`
Minor_Version uint64 `json:"minor_version"`
Timestamp uint64 `json:"timestamp"` // timestamp is now in milliseconds
Height uint64 `json:"height"`
Miner_TX transaction.Transaction `json:"miner_tx"`
Proof [32]byte `json:"-"` // proof is being used to record balance root hash
Tips []crypto.Hash `json:"tips"`
MiniBlocks []MiniBlock `json:"miniblocks"`
Tx_hashes []crypto.Hash `json:"tx_hashes"`
}
// we process incoming blocks in this format
type Complete_Block struct {
Bl *Block
Txs []*transaction.Transaction
}
// this function gets the block identifier hash
// this has been simplified and the varint length prefix has been removed
// the keccak (sha3) hash of the entire block, including miniblocks, gives the block id
func (bl *Block) GetHash() (hash crypto.Hash) {
return sha3.Sum256(bl.serialize(false))
}
func (bl *Block) GetHashSkipLastMiniBlock() (hash crypto.Hash) {
return sha3.Sum256(bl.SerializeWithoutLastMiniBlock())
}
// serialize entire block ( block_header + miner_tx + tx_list )
func (bl *Block) Serialize() []byte {
return bl.serialize(false) // include mini blocks
}
func (bl *Block) SerializeWithoutLastMiniBlock() []byte {
return bl.serialize(true) //skip last mini block
}
// get timestamp, it has millisecond granularity
func (bl *Block) GetTimestamp() time.Time {
return time.Unix(0, int64(bl.Timestamp*uint64(time.Millisecond)))
}
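GetTimestamp converts the millisecond-granularity Timestamp field to a time.Time by scaling milliseconds to nanoseconds before calling time.Unix. A minimal standalone sketch of the same conversion (toTime is a hypothetical helper, not part of this package):

```go
package main

import (
	"fmt"
	"time"
)

// toTime mirrors GetTimestamp: the input is milliseconds since the
// Unix epoch, scaled to nanoseconds for time.Unix.
func toTime(ms uint64) time.Time {
	return time.Unix(0, int64(ms*uint64(time.Millisecond)))
}

func main() {
	t := toTime(1500)
	fmt.Println(t.UnixNano()) // 1500 ms scaled to nanoseconds
}
```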
// stringifier
func (bl Block) String() string {
r := new(strings.Builder)
fmt.Fprintf(r, "BLID:%s\n", bl.GetHash())
fmt.Fprintf(r, "Major version:%d Minor version: %d\n", bl.Major_Version, bl.Minor_Version)
fmt.Fprintf(r, "Height:%d\n", bl.Height)
fmt.Fprintf(r, "Timestamp:%d (%s)\n", bl.Timestamp, bl.GetTimestamp())
for i := range bl.Tips {
fmt.Fprintf(r, "Past %d:%s\n", i, bl.Tips[i])
}
for i, mbl := range bl.MiniBlocks {
fmt.Fprintf(r, "Mini %d:%s\n", i, mbl)
}
for i, txid := range bl.Tx_hashes {
fmt.Fprintf(r, "tx %d:%s\n", i, txid)
}
return r.String()
}
// this function serializes a block and skips the last miniblock if requested
func (bl *Block) serialize(skiplastminiblock bool) []byte {
var serialized bytes.Buffer
buf := make([]byte, binary.MaxVarintLen64)
n := binary.PutUvarint(buf, uint64(bl.Major_Version))
serialized.Write(buf[:n])
n = binary.PutUvarint(buf, uint64(bl.Minor_Version))
serialized.Write(buf[:n])
binary.BigEndian.PutUint64(buf, bl.Timestamp)
serialized.Write(buf[:8])
n = binary.PutUvarint(buf, bl.Height)
serialized.Write(buf[:n])
// write miner address
serialized.Write(bl.Miner_TX.Serialize())
serialized.Write(bl.Proof[:])
n = binary.PutUvarint(buf, uint64(len(bl.Tips)))
serialized.Write(buf[:n])
for _, hash := range bl.Tips {
serialized.Write(hash[:])
}
if len(bl.MiniBlocks) == 0 {
serialized.WriteByte(0)
} else {
if !skiplastminiblock {
n = binary.PutUvarint(buf, uint64(len(bl.MiniBlocks)))
serialized.Write(buf[:n])
for _, mblock := range bl.MiniBlocks {
s := mblock.Serialize()
serialized.Write(s[:])
}
} else {
length := len(bl.MiniBlocks) - 1
n = binary.PutUvarint(buf, uint64(length))
serialized.Write(buf[:n])
for i := 0; i < length; i++ {
s := bl.MiniBlocks[i].Serialize()
serialized.Write(s[:])
}
}
}
n = binary.PutUvarint(buf, uint64(len(bl.Tx_hashes)))
serialized.Write(buf[:n])
for _, hash := range bl.Tx_hashes {
serialized.Write(hash[:])
}
return serialized.Bytes()
}
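Most header fields above are written with binary.PutUvarint and read back in Deserialize with binary.Uvarint. A minimal standalone round-trip sketch of that encoding (roundTrip is a hypothetical helper for illustration):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// roundTrip encodes v as an unsigned varint and decodes it back,
// the same scheme serialize uses for versions, height and counts.
func roundTrip(v uint64) (encodedLen int, decoded uint64) {
	buf := make([]byte, binary.MaxVarintLen64)
	n := binary.PutUvarint(buf, v)
	decoded, _ = binary.Uvarint(buf[:n])
	return n, decoded
}

func main() {
	n, v := roundTrip(1024)
	fmt.Println(n, v) // values below 128 take one byte; 1024 takes two
}
```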
// get block transactions tree hash
func (bl *Block) GetTipsHash() (result crypto.Hash) {
h := sha3.New256() // add all the remaining hashes
for i := range bl.Tips {
h.Write(bl.Tips[i][:])
}
r := h.Sum(nil)
copy(result[:], r)
return
}
// get block transactions
// we have discarded the merkle tree and have shifted to a plain version
func (bl *Block) GetTXSHash() (result crypto.Hash) {
h := sha3.New256()
for i := range bl.Tx_hashes {
h.Write(bl.Tx_hashes[i][:])
}
r := h.Sum(nil)
copy(result[:], r)
return
}
// parse entire block completely
func (bl *Block) Deserialize(buf []byte) (err error) {
done := 0
defer func() {
if r := recover(); r != nil {
err = fmt.Errorf("Invalid Block cannot deserialize '%s' stack %s", hex.EncodeToString(buf), string(debug.Stack()))
return
}
}()
bl.Major_Version, done = binary.Uvarint(buf)
if done <= 0 {
return fmt.Errorf("Invalid Major Version in Block\n")
}
buf = buf[done:]
bl.Minor_Version, done = binary.Uvarint(buf)
if done <= 0 {
return fmt.Errorf("Invalid Minor Version in Block\n")
}
buf = buf[done:]
if len(buf) < 8 {
return fmt.Errorf("Incomplete timestamp in Block\n")
}
bl.Timestamp = binary.BigEndian.Uint64(buf) // we have read 8 bytes
buf = buf[8:]
bl.Height, done = binary.Uvarint(buf)
if done <= 0 {
return fmt.Errorf("Invalid Height in Block\n")
}
buf = buf[done:]
// parse miner tx
err = bl.Miner_TX.Deserialize(buf)
if err != nil {
return err
}
buf = buf[len(bl.Miner_TX.Serialize()):] // skip the number of bytes processed
// read 32 byte proof
copy(bl.Proof[:], buf[0:32])
buf = buf[32:]
// header finished here
// read and parse transaction
/*err = bl.Miner_tx.DeserializeHeader(buf)
if err != nil {
return fmt.Errorf("Cannot parse miner TX %x", buf)
}
// if tx was parse, make sure it's coin base
if len(bl.Miner_tx.Vin) != 1 || bl.Miner_tx.Vin[0].(transaction.Txin_gen).Height > config.MAX_CHAIN_HEIGHT {
// serialize transaction again to get the tx size, so as parsing could continue
return fmt.Errorf("Invalid Miner TX")
}
miner_tx_serialized_size := bl.Miner_tx.Serialize()
buf = buf[len(miner_tx_serialized_size):]
*/
tips_count, done := binary.Uvarint(buf)
if done <= 0 || done > 1 {
return fmt.Errorf("Invalid Tips count in Block\n")
}
buf = buf[done:]
// read the tip hashes
for i := uint64(0); i < tips_count; i++ {
//fmt.Printf("Parsing transaction hash %d tx_count %d\n", i, tx_count)
var h crypto.Hash
copy(h[:], buf[:32])
buf = buf[32:]
bl.Tips = append(bl.Tips, h)
}
miniblocks_count, done := binary.Uvarint(buf)
if done <= 0 || done > 2 {
return fmt.Errorf("Invalid Mini blocks count in Block, done %d", done)
}
buf = buf[done:]
for i := uint64(0); i < miniblocks_count; i++ {
var mbl MiniBlock
if err = mbl.Deserialize(buf[:MINIBLOCK_SIZE]); err != nil {
return err
}
buf = buf[MINIBLOCK_SIZE:]
bl.MiniBlocks = append(bl.MiniBlocks, mbl)
}
//fmt.Printf("miner tx %x\n", miner_tx_serialized_size)
// read number of transactions
tx_count, done := binary.Uvarint(buf)
if done <= 0 {
return fmt.Errorf("Invalid Tx count in Block\n")
}
buf = buf[done:]
// remember first tx is merkle root
for i := uint64(0); i < tx_count; i++ {
//fmt.Printf("Parsing transaction hash %d tx_count %d\n", i, tx_count)
var h crypto.Hash
copy(h[:], buf[:32])
buf = buf[32:]
bl.Tx_hashes = append(bl.Tx_hashes, h)
}
//fmt.Printf("%d member in tx hashes \n",len(bl.Tx_hashes))
return
}

block/block_test.go Normal file

@ -0,0 +1,157 @@
// Copyright 2017-2018 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package block
//import "bytes"
import "testing"
import "encoding/hex"
import "github.com/deroproject/derohe/config"
//import "github.com/deroproject/derohe/crypto"
func Test_Generic_block_serdes(t *testing.T) {
var bl, bldecoded Block
genesis_tx_bytes, _ := hex.DecodeString(config.Mainnet.Genesis_Tx)
err := bl.Miner_TX.Deserialize(genesis_tx_bytes)
if err != nil {
t.Errorf("Deserialization test failed for Genesis TX err %s\n", err)
}
serialized := bl.Serialize()
err = bldecoded.Deserialize(serialized)
if err != nil {
t.Errorf("Deserialization test failed for NULL block err %s\n", err)
}
}
// this tests whether the block hash depends on every field in the BLOCK, including Proof
func Test_PoW_Dependency(t *testing.T) {
var bl Block
genesis_tx_bytes, _ := hex.DecodeString(config.Mainnet.Genesis_Tx)
err := bl.Miner_TX.Deserialize(genesis_tx_bytes)
if err != nil {
t.Errorf("Deserialization test failed for Genesis TX err %s\n", err)
}
Original_PoW := bl.GetHash()
{
temp_bl := bl
temp_bl.Major_Version++
if Original_PoW == temp_bl.GetHash() {
t.Fatalf("POW Skipping Major Version")
}
}
{
temp_bl := bl
temp_bl.Minor_Version++
if Original_PoW == temp_bl.GetHash() {
t.Fatalf("POW Skipping Minor Version")
}
}
{
temp_bl := bl
temp_bl.Timestamp++
if Original_PoW == temp_bl.GetHash() {
t.Fatalf("POW Skipping Timestamp")
}
}
{
temp_bl := bl
temp_bl.Miner_TX.Version++
if Original_PoW == temp_bl.GetHash() {
t.Fatalf("POW Skipping Miner_TX")
}
}
{
temp_bl := bl
temp_bl.Tips = append(temp_bl.Tips, Original_PoW)
if Original_PoW == temp_bl.GetHash() {
t.Fatalf("POW Skipping Tips")
}
}
{
temp_bl := bl
temp_bl.Tx_hashes = append(temp_bl.Tx_hashes, Original_PoW)
if Original_PoW == temp_bl.GetHash() {
t.Fatalf("POW Skipping TXs")
}
}
{
temp_bl := bl
temp_bl.Proof[31] = 1
if Original_PoW == temp_bl.GetHash() {
t.Fatalf("POW Skipping Proof")
}
}
}
// test all invalid edge cases, which will return error
func Test_Block_Edge_Cases(t *testing.T) {
tests := []struct {
name string
blockhex string
}{
{
name: "Invalid Major Version",
blockhex: "80808080808080808080", // Major_Version is taking more than 9 bytes, trigger error
},
{
name: "Invalid Minor Version",
blockhex: "0280808080808080808080", // Minor_Version is taking more than 9 bytes, trigger error
},
{
name: "Invalid timestamp",
blockhex: "020280808080808080808080", // timestamp is taking more than 9 bytes, trigger error
},
{
name: "Incomplete header",
blockhex: "020255", // prev hash is not provided, controlled panic
},
}
for _, test := range tests {
block, err := hex.DecodeString(test.blockhex)
if err != nil {
t.Fatalf("Block hex could not be hex decoded")
}
//t.Logf("%s failed", test.name)
var bl Block
err = bl.Deserialize(block)
if err == nil {
t.Fatalf("%s failed", test.name)
}
}
}

block/miniblock.go Normal file

@ -0,0 +1,204 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package block
import "fmt"
import "hash"
import "sync"
import "bytes"
import "strings"
import "encoding/binary"
import "golang.org/x/crypto/sha3"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/pow"
const MINIBLOCK_SIZE = 48
var hasherPool = sync.Pool{
New: func() interface{} { return sha3.New256() },
}
// it should be exactly 48 bytes after serialization
// structure size 1 + 2 + 5 + 8 + 16 + 4 + 12 bytes
type MiniBlock struct {
// below 3 fields are serialized into single byte
Version uint8 // 1 byte // lower 5 bits (0,1,2,3,4)
Final bool // bit 5
PastCount uint8 // previous count // bits 6,7
Timestamp uint16 // represents rolling time
Height uint64 // serialized in 5 bytes
Past [2]uint32 // 8 bytes used to build DAG of miniblocks and prevent number of attacks
KeyHash crypto.Hash // 16 bytes, remaining bytes are trimmed miniblock miner keyhash
Flags uint32 // can be used as flags by special miners to represent something, also used as nonce
Nonce [3]uint32 // 12 nonce bytes represent 2^96 variations, 2^96 work every ms
}
type MiniBlockKey struct {
Height uint64
Past0 uint32
Past1 uint32
}
func (mbl *MiniBlock) GetKey() (key MiniBlockKey) {
key.Height = mbl.Height
key.Past0 = mbl.Past[0]
key.Past1 = mbl.Past[1]
return
}
func (mbl MiniBlock) String() string {
r := new(strings.Builder)
fmt.Fprintf(r, "%d ", mbl.Version)
fmt.Fprintf(r, "height %d", mbl.Height)
if mbl.Final {
fmt.Fprintf(r, " Final ")
}
if mbl.PastCount == 1 {
fmt.Fprintf(r, " Past [%08x]", mbl.Past[0])
} else {
fmt.Fprintf(r, " Past [%08x %08x]", mbl.Past[0], mbl.Past[1])
}
fmt.Fprintf(r, " time %d", mbl.Timestamp)
fmt.Fprintf(r, " flags %d", mbl.Flags)
fmt.Fprintf(r, " Nonce [%08x %08x %08x]", mbl.Nonce[0], mbl.Nonce[1], mbl.Nonce[2])
return r.String()
}
// this function gets the miniblock identifier hash; it is only used to deduplicate mini blocks
func (mbl *MiniBlock) GetHash() (result crypto.Hash) {
ser := mbl.Serialize()
sha := hasherPool.Get().(hash.Hash)
sha.Reset()
sha.Write(ser[:])
x := sha.Sum(nil)
copy(result[:], x[:])
hasherPool.Put(sha)
return result
// return sha3.Sum256(ser[:])
}
// Get PoW hash; this is a very slow function
func (mbl *MiniBlock) GetPoWHash() (hash crypto.Hash) {
return pow.Pow(mbl.Serialize())
}
func (mbl *MiniBlock) SanityCheck() error {
if mbl.Version >= 31 {
return fmt.Errorf("version not supported")
}
if mbl.PastCount > 2 {
return fmt.Errorf("tips cannot be more than 2")
}
if mbl.PastCount == 0 {
return fmt.Errorf("miniblock must have tips")
}
if mbl.Height >= 0xffffffffff {
return fmt.Errorf("miniblock height not possible")
}
if mbl.PastCount == 2 && mbl.Past[0] == mbl.Past[1] {
return fmt.Errorf("tips cannot collide")
}
return nil
}
// serialize entire miniblock ( 48 bytes )
func (mbl *MiniBlock) Serialize() (result []byte) {
if err := mbl.SanityCheck(); err != nil {
panic(err)
}
var b bytes.Buffer
if mbl.Final {
b.WriteByte(mbl.Version | mbl.PastCount<<6 | 0x20)
} else {
b.WriteByte(mbl.Version | mbl.PastCount<<6)
}
binary.Write(&b, binary.BigEndian, mbl.Timestamp)
var scratch [8]byte
binary.BigEndian.PutUint64(scratch[:], mbl.Height)
b.Write(scratch[3:8]) // 1 + 5
for _, v := range mbl.Past {
binary.Write(&b, binary.BigEndian, v)
}
b.Write(mbl.KeyHash[:16])
binary.Write(&b, binary.BigEndian, mbl.Flags)
for _, v := range mbl.Nonce {
binary.Write(&b, binary.BigEndian, v)
}
return b.Bytes()
}
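The first serialized byte packs Version, Final and PastCount exactly as Serialize writes them and Deserialize reads them back. A standalone sketch of that bit layout (unpack is a hypothetical helper, not part of this package):

```go
package main

import "fmt"

// unpack splits the miniblock's first byte: bits 0-4 hold Version,
// bit 5 the Final flag, bits 6-7 the PastCount.
func unpack(b byte) (version uint8, final bool, pastCount uint8) {
	return b & 0x1f, b&0x20 != 0, b >> 6
}

func main() {
	// 0x61 = 0110_0001: version 1, final set, one past tip
	v, f, p := unpack(0x61)
	fmt.Println(v, f, p)
}
```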
// parse entire miniblock completely
func (mbl *MiniBlock) Deserialize(buf []byte) (err error) {
if len(buf) < MINIBLOCK_SIZE {
return fmt.Errorf("Expected %d bytes. Actual %d", MINIBLOCK_SIZE, len(buf))
}
if mbl.Version = buf[0] & 0x1f; mbl.Version != 1 {
return fmt.Errorf("unknown version '%d'", mbl.Version)
}
mbl.PastCount = buf[0] >> 6
if buf[0]&0x20 > 0 {
mbl.Final = true
}
mbl.Timestamp = binary.BigEndian.Uint16(buf[1:])
mbl.Height = binary.BigEndian.Uint64(buf[0:]) & 0x000000ffffffffff
var b bytes.Buffer
b.Write(buf[8:])
for i := range mbl.Past {
if err = binary.Read(&b, binary.BigEndian, &mbl.Past[i]); err != nil {
return
}
}
if err = mbl.SanityCheck(); err != nil {
return err
}
b.Read(mbl.KeyHash[:16])
if err = binary.Read(&b, binary.BigEndian, &mbl.Flags); err != nil {
return
}
for i := range mbl.Nonce {
if err = binary.Read(&b, binary.BigEndian, &mbl.Nonce[i]); err != nil {
return
}
}
return
}

block/miniblock_test.go Normal file

@ -0,0 +1,73 @@
// Copyright 2017-2018 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package block
import "bytes"
import "testing"
import "crypto/rand"
func Test_blockmini_serde(t *testing.T) {
var random_data [MINIBLOCK_SIZE]byte
random_data[0] = 0x41
var bl, bl2 MiniBlock
if err := bl2.Deserialize(random_data[:]); err != nil {
t.Fatalf("error during serdes %x err %s", random_data, err)
}
//t.Logf("bl2 %+v\n",bl2)
//t.Logf("bl2 serialized %x\n",bl2.Serialize())
if err := bl.Deserialize(bl2.Serialize()); err != nil {
t.Fatalf("error during serdes %x", random_data)
}
}
func Test_blockmini_serdes(t *testing.T) {
for i := 0; i < 10000; i++ {
var random_data [MINIBLOCK_SIZE]byte
if _, err := rand.Read(random_data[:]); err != nil {
t.Fatalf("error reading random number %s", err)
}
random_data[0] = 0x41
var bl, bl2 MiniBlock
if err := bl2.Deserialize(random_data[:]); err != nil {
t.Fatalf("error during serdes %x", random_data)
}
if err := bl.Deserialize(bl2.Serialize()); err != nil {
t.Fatalf("error during serdes %x", random_data)
}
if bl.GetHash() != bl2.GetHash() {
t.Fatalf("error during serdes %x", random_data)
}
if !bytes.Equal(bl.Serialize(), bl2.Serialize()) {
t.Fatalf("error during serdes %x", random_data)
}
}
}

block/miniblockdag.go
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package block
import "fmt"
import "sort"
import "sync"
type MiniBlocksCollection struct {
Collection map[MiniBlockKey][]MiniBlock
sync.RWMutex
}
// create a collection
func CreateMiniBlockCollection() *MiniBlocksCollection {
return &MiniBlocksCollection{Collection: map[MiniBlockKey][]MiniBlock{}}
}
// purge all heights less than this height
func (c *MiniBlocksCollection) PurgeHeight(height int64) (purge_count int) {
if height < 0 {
return
}
c.Lock()
defer c.Unlock()
for k := range c.Collection {
if k.Height <= uint64(height) {
purge_count++
delete(c.Collection, k)
}
}
return purge_count
}
func (c *MiniBlocksCollection) Count() int {
c.RLock()
defer c.RUnlock()
count := 0
for _, v := range c.Collection {
count += len(v)
}
return count
}
// check if already inserted
func (c *MiniBlocksCollection) IsAlreadyInserted(mbl MiniBlock) bool {
return c.IsCollision(mbl)
}
// check if collision will occur
func (c *MiniBlocksCollection) IsCollision(mbl MiniBlock) bool {
c.RLock()
defer c.RUnlock()
return c.isCollisionnolock(mbl)
}
// this assumes that we are already locked
func (c *MiniBlocksCollection) isCollisionnolock(mbl MiniBlock) bool {
mbls := c.Collection[mbl.GetKey()]
for i := range mbls {
if mbl == mbls[i] {
return true
}
}
return false
}
// insert a miniblock
func (c *MiniBlocksCollection) InsertMiniBlock(mbl MiniBlock) (err error, result bool) {
if mbl.Final {
return fmt.Errorf("Final cannot be inserted"), false
}
c.Lock()
defer c.Unlock()
if c.isCollisionnolock(mbl) {
return fmt.Errorf("collision %x", mbl.Serialize()), false
}
c.Collection[mbl.GetKey()] = append(c.Collection[mbl.GetKey()], mbl)
return nil, true
}
// get all the miniblocks stored under a given key
func (c *MiniBlocksCollection) GetAllMiniBlocks(key MiniBlockKey) (mbls []MiniBlock) {
c.RLock()
defer c.RUnlock()
for _, mbl := range c.Collection[key] {
mbls = append(mbls, mbl)
}
return
}
// get all the keys from the map at this height, this is at least O(n)
func (c *MiniBlocksCollection) GetAllKeys(height int64) (keys []MiniBlockKey) {
c.RLock()
defer c.RUnlock()
for k := range c.Collection {
if k.Height == uint64(height) {
keys = append(keys, k)
}
}
sort.SliceStable(keys, func(i, j int) bool { // sort descending on the basis of work done
return len(c.Collection[keys[i]]) > len(c.Collection[keys[j]])
})
return
}

// Copyright 2017-2018 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package block
//import "bytes"
import "testing"
// tests whether the purge is working as it should
func Test_blockmini_purge(t *testing.T) {
c := CreateMiniBlockCollection()
for i := 0; i < 10; i++ {
mbl := MiniBlock{Version: 1, Height: uint64(i), PastCount: 1}
if err, ok := c.InsertMiniBlock(mbl); !ok {
t.Fatalf("error inserting miniblock err: %s", err)
}
}
c.PurgeHeight(5) // purge all miniblock <= height 5
if c.Count() != 4 {
t.Fatalf("miniblocks not purged")
}
for _, mbls := range c.Collection {
for _, mbl := range mbls {
if mbl.Height <= 5 {
t.Fatalf("purge not working correctly")
}
}
}
}
// tests whether collision is working correctly
// also tests whether genesis blocks returns connected always
func Test_blockmini_collision(t *testing.T) {
c := CreateMiniBlockCollection()
mbl := MiniBlock{Version: 1, PastCount: 1}
if err, ok := c.InsertMiniBlock(mbl); !ok {
t.Fatalf("error inserting miniblock err: %s", err)
}
if !c.IsAlreadyInserted(mbl) {
t.Fatalf("already inserted block not detected")
}
if c.IsAlreadyInserted(mbl) != c.IsCollision(mbl) {
t.Fatalf("already inserted block not detected")
}
}

// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package blockchain
import "fmt"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/transaction"
// used to verify complete block which contains expanded transaction
type cbl_verify struct {
data map[crypto.Hash]map[[33]byte]uint64
}
// tx must be in expanded form
// check and insert must be done as two passes: first call with insert_for_future=false to check, then with insert_for_future=true to insert
func (b *cbl_verify) check(tx *transaction.Transaction, insert_for_future bool) (err error) {
if tx.IsRegistration() || tx.IsCoinbase() || tx.IsPremine() { // these are not used
return nil
}
if b.data == nil {
b.data = map[crypto.Hash]map[[33]byte]uint64{}
}
height := tx.Height
for _, p := range tx.Payloads {
parity := p.Proof.Parity()
if _, ok := b.data[p.SCID]; !ok { // this scid is being touched for first time, we are good to go
if !insert_for_future { // if we are not inserting, skip this entire statement
continue
}
b.data[p.SCID] = map[[33]byte]uint64{}
}
if p.Statement.RingSize != uint64(len(p.Statement.Publickeylist_compressed)) {
return fmt.Errorf("TX is not expanded. cannot cbl_verify expected %d Actual %d", p.Statement.RingSize, len(p.Statement.Publickeylist_compressed))
}
for j, pkc := range p.Statement.Publickeylist_compressed {
if (j%2 == 0) == parity { // this condition is well thought out and works well enough
if h, ok := b.data[p.SCID][pkc]; ok {
if h != height {
return fmt.Errorf("Not possible")
}
} else {
if insert_for_future {
b.data[p.SCID][pkc] = height
}
}
}
}
}
return nil
}

blockchain/blockchain.go (diff suppressed: file too large)

blockchain/difficulty.go
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package blockchain
import "fmt"
import "math"
import "math/big"
import "github.com/deroproject/derohe/block"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/globals"
var (
// bigZero is 0 represented as a big.Int. It is defined here to avoid
// the overhead of creating it multiple times.
bigZero = big.NewInt(0)
// bigOne is 1 represented as a big.Int. It is defined here to avoid
// the overhead of creating it multiple times.
bigOne = big.NewInt(1)
// oneLsh256 is 1 shifted left 256 bits. It is defined here to avoid
// the overhead of creating it multiple times.
oneLsh256 = new(big.Int).Lsh(bigOne, 256)
// enabling this will put the chain in simulation mode with hard-coded difficulty set to 1
// the variable is knowingly not exported, so no one can tinker with it
//simulation = false // simulation mode is disabled
)
// HashToBig converts a PoW hash into a big.Int that can be used to
// perform math comparisons.
func HashToBig(buf crypto.Hash) *big.Int {
// A Hash is in little-endian, but the big package wants the bytes in
// big-endian, so reverse them.
blen := len(buf) // it's hardcoded to 32 bytes, but take len anyway
for i := 0; i < blen/2; i++ {
buf[i], buf[blen-1-i] = buf[blen-1-i], buf[i]
}
return new(big.Int).SetBytes(buf[:])
}
// this function calculates the difficulty in big num form
func ConvertDifficultyToBig(difficultyi uint64) *big.Int {
if difficultyi == 0 {
panic("difficulty can never be zero")
}
	// (1 << 256) / difficulty
	difficulty := new(big.Int).SetUint64(difficultyi)
	return new(big.Int).Div(oneLsh256, difficulty)
}
func ConvertIntegerDifficultyToBig(difficultyi *big.Int) *big.Int {
if difficultyi.Cmp(bigZero) == 0 {
panic("difficulty can never be zero")
}
return new(big.Int).Div(oneLsh256, difficultyi)
}
// this function checks whether the pow hash meets the difficulty criteria
func CheckPowHash(pow_hash crypto.Hash, difficulty uint64) bool {
big_difficulty := ConvertDifficultyToBig(difficulty)
big_pow_hash := HashToBig(pow_hash)
	return big_pow_hash.Cmp(big_difficulty) <= 0 // true when pow hash is at or below the target
}
// this function checks whether the pow hash meets the difficulty criteria
// however, it takes the difficulty in big.Int format
func CheckPowHashBig(pow_hash crypto.Hash, big_difficulty_integer *big.Int) bool {
big_pow_hash := HashToBig(pow_hash)
big_difficulty := ConvertIntegerDifficultyToBig(big_difficulty_integer)
	return big_pow_hash.Cmp(big_difficulty) <= 0 // true when pow hash is at or below the target
}
const E = float64(2.71828182845905)
// integer implementation with hard-coded datatypes
func Diff(solvetime, blocktime, M int64, prev_diff int64) (diff int64) {
if blocktime <= 0 || solvetime <= 0 || M <= 0 {
panic("invalid parameters")
}
easypart := int64(math.Pow(E, ((1-float64(solvetime)/float64(blocktime))/float64(M))) * 10000)
diff = (prev_diff * easypart) / 10000
return diff
}
// big int implementation
func DiffBig(solvetime, blocktime, M int64, prev_diff *big.Int) (diff *big.Int) {
if blocktime <= 0 || solvetime <= 0 || M <= 0 {
panic("invalid parameters")
}
easypart := int64(math.Pow(E, ((1-float64(solvetime)/float64(blocktime))/float64(M))) * 10000)
diff = new(big.Int).Mul(prev_diff, new(big.Int).SetInt64(easypart))
diff.Div(diff, new(big.Int).SetUint64(10000))
return diff
}
// when creating a new block, current_time in UTC + chain_block_time must be added
// while verifying the block, the expected timestamp should be replaced with what is in the block's header
// in DERO Atlantis, difficulty is based on previous tips
// get difficulty at specific tips,
// the algorithm is given above
// this should be more thoroughly evaluated
// NOTE: we need to evaluate whether a mining adversary gains anything by setting the time diff to 1
// we need to do more simulations and evaluations
// difficulty is now processed at the second level, i.e. how many hashes are required per second to reach block time
func (chain *Blockchain) Get_Difficulty_At_Tips(tips []crypto.Hash) *big.Int {
tips_string := ""
for _, tip := range tips {
tips_string += fmt.Sprintf("%s", tip.String())
}
if diff_bytes, found := chain.cache_Get_Difficulty_At_Tips.Get(tips_string); found {
return new(big.Int).SetBytes([]byte(diff_bytes.(string)))
}
difficulty := Get_Difficulty_At_Tips(chain, tips)
if chain.cache_enabled {
chain.cache_Get_Difficulty_At_Tips.Add(tips_string, string(difficulty.Bytes())) // set in cache
}
return difficulty
}
func (chain *Blockchain) VerifyMiniblockPoW(bl *block.Block, mbl block.MiniBlock) bool {
var cachekey []byte
for i := range bl.Tips {
cachekey = append(cachekey, bl.Tips[i][:]...)
}
cachekey = append(cachekey, mbl.Serialize()...)
if _, ok := chain.cache_IsMiniblockPowValid.Get(fmt.Sprintf("%s", cachekey)); ok {
return true
}
PoW := mbl.GetPoWHash()
block_difficulty := chain.Get_Difficulty_At_Tips(bl.Tips)
// test whether the new difficulty checks are equivalent to the integer math
/*if CheckPowHash(PoW, block_difficulty.Uint64()) != CheckPowHashBig(PoW, block_difficulty) {
logger.Panicf("Difficuly mismatch between big and uint64 diff ")
}*/
if CheckPowHashBig(PoW, block_difficulty) {
if chain.cache_enabled {
chain.cache_IsMiniblockPowValid.Add(fmt.Sprintf("%s", cachekey), true) // set in cache
}
return true
}
return false
}
type DiffProvider interface {
Load_Block_Height(crypto.Hash) int64
Load_Block_Difficulty(crypto.Hash) *big.Int
Load_Block_Timestamp(crypto.Hash) uint64
Get_Block_Past(crypto.Hash) []crypto.Hash
}
func Get_Difficulty_At_Tips(source DiffProvider, tips []crypto.Hash) *big.Int {
var MinimumDifficulty *big.Int
if globals.IsMainnet() {
MinimumDifficulty = new(big.Int).SetUint64(config.Settings.MAINNET_MINIMUM_DIFFICULTY) // this must be controllable parameter
} else {
MinimumDifficulty = new(big.Int).SetUint64(config.Settings.TESTNET_MINIMUM_DIFFICULTY) // this must be controllable parameter
}
GenesisDifficulty := new(big.Int).SetUint64(1)
if chain, ok := source.(*Blockchain); ok {
if chain.simulator {
return GenesisDifficulty
}
}
if len(tips) == 0 {
return GenesisDifficulty
}
height := int64(0)
for i := range tips {
past_height := source.Load_Block_Height(tips[i])
if past_height < 0 {
panic(fmt.Errorf("could not find height for blid %s", tips[i]))
}
if height <= past_height {
height = past_height
}
}
height++
//above height code is equivalent to below code
//height := chain.Calculate_Height_At_Tips(tips)
// until we have at least 2 blocks, we cannot run the algo
if height < 3 {
return MinimumDifficulty
}
tip_difficulty := source.Load_Block_Difficulty(tips[0])
tip_time := source.Load_Block_Timestamp(tips[0])
parents := source.Get_Block_Past(tips[0])
parent_time := source.Load_Block_Timestamp(parents[0])
block_time := int64(config.BLOCK_TIME_MILLISECS)
solve_time := int64(tip_time - parent_time)
if solve_time > (block_time * 2) { // there should not be sudden decreases
solve_time = block_time * 2
}
M := int64(8)
difficulty := DiffBig(solve_time, block_time, M, tip_difficulty)
if difficulty.Cmp(MinimumDifficulty) < 0 { // we can never be below minimum difficulty
difficulty.Set(MinimumDifficulty)
}
return difficulty
}

blockchain/genesis.go
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package blockchain
import "fmt"
import "encoding/hex"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/block"
import "github.com/deroproject/derohe/globals"
// generates a genesis block
func Generate_Genesis_Block() (bl block.Block) {
genesis_tx_blob, err := hex.DecodeString(globals.Config.Genesis_Tx)
if err != nil {
panic("Failed to hex decode genesis Tx " + err.Error())
}
err = bl.Miner_TX.Deserialize(genesis_tx_blob)
if err != nil {
panic(fmt.Sprintf("Failed to parse genesis tx err %s hex %s ", err, globals.Config.Genesis_Tx))
}
if !bl.Miner_TX.IsPremine() {
panic("miner tx not premine")
}
//rlog.Tracef(2, "Hash of Genesis Tx %x\n", bl.Miner_tx.GetHash())
// verify whether tx is coinbase and valid
// setup genesis block header
bl.Major_Version = 1
bl.Minor_Version = 1
bl.Timestamp = 0 // first block timestamp
var zerohash crypto.Hash
_ = zerohash
//bl.Tips = append(bl.Tips,zerohash)
//bl.Prev_hash is automatic zero
logger.V(1).Info("Hash of genesis block", "blid", bl.GetHash())
serialized := bl.Serialize()
var bl2 block.Block
err = bl2.Deserialize(serialized)
if err != nil {
panic(fmt.Sprintf("error while serdes genesis block err %s", err))
}
if bl.GetHash() != bl2.GetHash() {
panic("hash mismatch serdes genesis block")
}
return
}

// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package blockchain
import "bytes"
import "testing"
//import "github.com/deroproject/derohe/block"
func Test_Genesis_block(t *testing.T) {
bl := Generate_Genesis_Block()
//var bl block.Block
serialized := bl.Serialize()
err := bl.Deserialize(serialized)
if err != nil {
t.Error("Deserialization test failed for genesis block\n")
}
serialized2 := bl.Serialize()
if !bytes.Equal(serialized, serialized2) {
t.Errorf("serdes test failed for genesis block \n%x\n%x\n", serialized, serialized2)
return
}
}

// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package blockchain
// this file installs hard coded contracts
//import "fmt"
import _ "embed"
/*
import "strings"
import "strconv"
import "encoding/hex"
import "encoding/binary"
import "math/big"
import "golang.org/x/xerrors"
import "github.com/deroproject/derohe/cryptography/bn256"
import "github.com/deroproject/derohe/transaction"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/premine"
import "github.com/deroproject/derohe/globals"
import "github.com/deroproject/derohe/block"
import "github.com/deroproject/derohe/rpc"
*/
import "github.com/deroproject/graviton"
import "github.com/deroproject/derohe/dvm"
import "github.com/deroproject/derohe/cryptography/crypto"
//go:embed hardcoded_sc/nameservice.bas
var source_nameservice string
// install the hard coded contracts at genesis (height 0)
func (chain *Blockchain) install_hardcoded_contracts(cache map[crypto.Hash]*graviton.Tree, ss *graviton.Snapshot, balance_tree *graviton.Tree, sc_tree *graviton.Tree, height uint64) (err error) {
if height != 0 {
return
}
if _, _, err = dvm.ParseSmartContract(source_nameservice); err != nil {
logger.Error(err, "error Parsing hard coded sc")
return
}
var name crypto.Hash
name[31] = 1
if err = chain.install_hardcoded_sc(cache, ss, balance_tree, sc_tree, source_nameservice, name); err != nil {
return
}
//fmt.Printf("source code embedded %s\n",source_nameservice)
return
}
// hard coded contracts generally do not do any initialization
func (chain *Blockchain) install_hardcoded_sc(cache map[crypto.Hash]*graviton.Tree, ss *graviton.Snapshot, balance_tree *graviton.Tree, sc_tree *graviton.Tree, source string, scid crypto.Hash) (err error) {
w_sc_tree := &Tree_Wrapper{tree: sc_tree, entries: map[string][]byte{}}
var w_sc_data_tree *Tree_Wrapper
meta := SC_META_DATA{}
w_sc_data_tree = wrapped_tree(cache, ss, scid)
// install SC, should we check for sanity now, why or why not
w_sc_data_tree.Put(SC_Code_Key(scid), dvm.Variable{Type: dvm.String, ValueString: source}.MarshalBinaryPanic())
w_sc_tree.Put(SC_Meta_Key(scid), meta.MarshalBinary())
// we must commit all the changes
// anything below should never give error
if _, ok := cache[scid]; !ok {
cache[scid] = w_sc_data_tree.tree
}
for k, v := range w_sc_data_tree.entries { // commit entire data to tree
if err = w_sc_data_tree.tree.Put([]byte(k), v); err != nil {
return
}
}
for k, v := range w_sc_tree.entries {
if err = w_sc_tree.tree.Put([]byte(k), v); err != nil {
return
}
}
return nil
}

/* Name Service SMART CONTRACT in DVM-BASIC.
Allows a user to register names which can be looked up by wallets for an easy-to-use name while transferring
*/
// This function is used to initialize parameters during install time
Function Initialize() Uint64
10 RETURN 0
End Function
// Register a name; names shorter than 6 characters are reserved for specific addresses
Function Register(name String) Uint64
10 IF EXISTS(name) THEN GOTO 50 // if name is already used, it cannot be re-registered
15 IF STRLEN(name) >= 64 THEN GOTO 50 // reject overly long names
20 IF STRLEN(name) >= 6 THEN GOTO 40
30 IF SIGNER() == address_raw("deto1qyvyeyzrcm2fzf6kyq7egkes2ufgny5xn77y6typhfx9s7w3mvyd5qqynr5hx") THEN GOTO 40
35 IF SIGNER() != address_raw("deto1qy0ehnqjpr0wxqnknyc66du2fsxyktppkr8m8e6jvplp954klfjz2qqdzcd8p") THEN GOTO 50
40 STORE(name,SIGNER())
50 RETURN 0
End Function
// This function is used to change owner
// owner is a string form of the address
Function TransferOwnership(name String,newowner String) Uint64
10 IF LOAD(name) != SIGNER() THEN GOTO 30
20 STORE(name,ADDRESS_RAW(newowner))
30 RETURN 0
End Function

blockchain/hardfork_core.go
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package blockchain
import "github.com/deroproject/derohe/block"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/globals"
// the voting for hard fork works as follows
// block major version remains constant, while block minor version contains the next hard fork number,
// at trigger height, the last window_size blocks are counted as follows
// if major_version == minor_version, it is a negative vote
// if minor_version > major_version, it is a positive vote
// if threshold votes are positive, the next hard fork triggers
// this is work in progress
// hard forking is integrated deep within the blockchain, as almost anything can be replaced in DERO without disruption
const default_voting_window_size = 6000 // this many votes will be counted
const default_vote_percent = 62 // 62 percent votes means the hard fork is locked in
type Hard_fork struct {
Version int64 // which version will trigger
Height int64 // at what height hard fork will come into effect, trigger block
Window_size int64 // how many votes to count (x number of votes)
Threshold int64 // between 0 and 99 // percent number of votes required to lock in hardfork, 0 = mandatory
Votes int64 // number of votes in favor
Voted bool // whether voting resulted in hardfork
}
// current mainnet_hard_forks
var mainnet_hard_forks = []Hard_fork{
// {1, 0,0,0,0,true}, // dummy entry so as we can directly use the fork index into this entry
{1, 0, 0, 0, 0, true}, // version 1 hard fork where genesis block landed and chain migration occurs
// version 1 has difficulty hardcoded to 1
//{2, 95551, 0, 0, 0, true}, // version 2 hard fork where Atlantis bootstraps , it's mandatory
// {3, 721000, 0, 0, 0, true}, // version 3 hard fork emission fix, it's mandatory
}
// current testnet_hard_forks
var testnet_hard_forks = []Hard_fork{
{1, 0, 0, 0, 0, true}, // version 1 hard fork where genesis block landed
//{3, 0, 0, 0, 0, true}, // version 3 hard fork where we started , it's mandatory
//{4, 3, 0, 0, 0, true}, // version 4 hard fork where we change mining algorithm it's mandatory
}
// current simulation_hard_forks
// these can be tampered with for testing and other purposes
// this variable is exported so as simulation can play/test hard fork code
var Simulation_hard_forks = []Hard_fork{
{1, 0, 0, 0, 0, true}, // version 1 hard fork where genesis block landed
{2, 1, 0, 0, 0, true}, // version 2 hard fork where we started , it's mandatory
}
// at init time, suitable versions are selected
var current_hard_forks []Hard_fork
// init suitable structure based on mainnet/testnet selection at runtime
func init_hard_forks(params map[string]interface{}) {
// if simulation , load simulation features
if params["--simulator"] == true {
current_hard_forks = Simulation_hard_forks // enable simulator mode hard forks
logger.Info("simulator hardforks are online")
} else {
if globals.IsMainnet() {
current_hard_forks = mainnet_hard_forks
logger.V(1).Info("mainnet hardforks are online")
} else {
current_hard_forks = testnet_hard_forks
logger.V(1).Info("testnet hardforks are online")
}
}
// if voting in progress, load all votes from db, since we do not store votes in disk,
// we will load all necessary blocks, counting votes
}
// check block version validity at specific height according to current network
func (chain *Blockchain) Check_Block_Version(bl *block.Block) (result bool) {
height := chain.Calculate_Height_At_Tips(bl.Tips)
if height == 0 && bl.Major_Version == 1 { // handle genesis block as exception
return true
}
// all blocks except genesis block land here
if bl.Major_Version == uint64(chain.Get_Current_Version_at_Height(height)) {
return true
}
return
}
// this func will recount votes, set whether the version is voted in
// only the main chain blocks are counted in
// this func must be called with chain in lock state
/*
func (chain *Blockchain) Recount_Votes() {
height := chain.Load_Height_for_BL_ID(chain.Get_Top_ID())
for i := len(current_hard_forks) - 1; i > 0; i-- {
// count votes only if voting is in progress
if 0 != current_hard_forks[i].Window_size && // if window_size > 0
height <= current_hard_forks[i].Height &&
height >= (current_hard_forks[i].Height-current_hard_forks[i].Window_size) { // start voting when required
hard_fork_locked := false
current_hard_forks[i].Votes = 0 // make votes zero, before counting
for j := height; j >= (current_hard_forks[i].Height - current_hard_forks[i].Window_size); j-- {
// load each block, and count the votes
hash, err := chain.Load_BL_ID_at_Height(j)
if err == nil {
bl, err := chain.Load_BL_FROM_ID(hash)
if err == nil {
if bl.Minor_Version == uint64(current_hard_forks[i].Version) {
current_hard_forks[i].Votes++
}
} else {
logger.Warnf("err loading block (%s) at height %d, chain height %d err %s", hash, j, height, err)
}
} else {
logger.Warnf("err loading block at height %d, chain height %d err %s", j, height, err)
}
}
// if necessary votes have been accumulated , lock in the hard fork
if ((current_hard_forks[i].Votes * 100) / current_hard_forks[i].Window_size) >= current_hard_forks[i].Threshold {
hard_fork_locked = true
}
current_hard_forks[i].Voted = hard_fork_locked // keep it as per status
}
}
}
*/
// this function returns information on whether the hard fork is going on schedule, whether everything is okay, etc.
func (chain *Blockchain) Get_HF_info() (state int, enabled bool, earliest_height, threshold, version, votes, window int64) {
state = 2 // default is everything is okay
enabled = true
topoheight := chain.Load_TOPO_HEIGHT()
block_id, err := chain.Load_Block_Topological_order_at_index(topoheight)
if err != nil {
return
}
bl, err := chain.Load_BL_FROM_ID(block_id)
if err != nil {
logger.Error(err, "loading block", "blid", block_id, "topoheight", topoheight)
return // without the block we cannot inspect versions below
}
height := chain.Load_Height_for_BL_ID(block_id)
version = chain.Get_Current_Version_at_Height(height)
// check top block to see if the network is going through a hard fork
if bl.Major_Version != bl.Minor_Version { // network is going through voting
state = 0
enabled = false
}
if bl.Minor_Version != uint64(chain.Get_Ideal_Version_at_Height(height)) {
// we are NOT voting for the hard fork (or we are already broken); warn the user that an upgrade is needed NOW
state = 1
enabled = false
version = int64(bl.Minor_Version)
}
if state == 0 { // voting is in progress, report back the voting info
for i := range current_hard_forks {
if version == current_hard_forks[i].Version {
earliest_height = current_hard_forks[i].Height
threshold = current_hard_forks[i].Threshold
version = current_hard_forks[i].Version
votes = current_hard_forks[i].Votes
window = current_hard_forks[i].Window_size
}
}
}
return
}
// current hard fork version, block major version
// we may be at genesis block height
func (chain *Blockchain) Get_Current_Version() int64 { // it is last version voted or mandatory update
return chain.Get_Current_Version_at_Height(chain.Get_Height())
}
func (chain *Blockchain) Get_Current_BlockTime() uint64 { // returns the configured block time
block_time := config.BLOCK_TIME
//if chain.Get_Current_Version() >= 4 {
// block_time= config.BLOCK_TIME_hf4
// }
return block_time
}
func (chain *Blockchain) Get_Current_Version_at_Height(height int64) int64 {
for i := len(current_hard_forks) - 1; i >= 0; i-- {
//logger.Infof("i %d height %d hf height %d",i, height,current_hard_forks[i].Height )
if height >= current_hard_forks[i].Height {
// if it was a mandatory fork handle it directly
if current_hard_forks[i].Threshold == 0 {
return current_hard_forks[i].Version
}
if current_hard_forks[i].Voted { // if the version was voted in, select it, otherwise try lower
return current_hard_forks[i].Version
}
}
}
return 0
}
// if we are voting, return the next expected version
func (chain *Blockchain) Get_Ideal_Version() int64 {
return chain.Get_Ideal_Version_at_Height(chain.Get_Height())
}
// used to cast vote
func (chain *Blockchain) Get_Ideal_Version_at_Height(height int64) int64 {
for i := len(current_hard_forks) - 1; i > 0; i-- {
// only voted during the period required
if height <= current_hard_forks[i].Height &&
height >= (current_hard_forks[i].Height-current_hard_forks[i].Window_size) { // start voting when required
return current_hard_forks[i].Version
}
}
// if we are not voting, return current version
return chain.Get_Current_Version_at_Height(height)
}
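The voting-window check in Get_Ideal_Version_at_Height above can be sketched standalone. This is a minimal illustration, not the chain's actual types: `hardFork` and `idealVersionAt` are hypothetical stand-ins mirroring the fields of `current_hard_forks` entries, and the `i > 0` loop bound skips the genesis fork exactly as above.

```go
package main

import "fmt"

// hardFork is a hypothetical stand-in for the real fork table entries.
type hardFork struct {
	Version    int64
	Height     int64
	WindowSize int64
}

// idealVersionAt returns the version to vote for at the given height,
// scanning forks from newest to oldest; 0 means no voting window is active.
func idealVersionAt(height int64, forks []hardFork) int64 {
	for i := len(forks) - 1; i > 0; i-- {
		// voting is open only inside [Height-WindowSize, Height]
		if height <= forks[i].Height && height >= forks[i].Height-forks[i].WindowSize {
			return forks[i].Version
		}
	}
	return 0
}

func main() {
	forks := []hardFork{
		{Version: 1, Height: 0, WindowSize: 0},
		{Version: 2, Height: 100, WindowSize: 10}, // voting open for heights 90..100
	}
	fmt.Println(idealVersionAt(95, forks)) // inside the window
	fmt.Println(idealVersionAt(50, forks)) // outside any window
}
```

In the real code, a height outside every window falls through to Get_Current_Version_at_Height instead of 0.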
/*
// if the block major version is more than what we have in our index, display warning to user
func (chain *Blockchain) Display_Warning_If_Blocks_are_New(bl *block.Block) {
// check the biggest fork
if current_hard_forks[len(current_hard_forks )-1].version < bl.Major_Version {
logger.Warnf("We have seen new blocks floating with version number bigger than ours, please update the software")
}
return
}
*/


@@ -0,0 +1,90 @@
RESEARCH LICENSE
Version 1.1.2
I. DEFINITIONS.
"Licensee" means You and any other party that has entered into and has in effect a version of this License.
"Licensor" means DERO PROJECT(GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8) and its successors and assignees.
"Modifications" means any (a) change or addition to the Technology or (b) new source or object code implementing any portion of the Technology.
"Research Use" means research, evaluation, or development for the purpose of advancing knowledge, teaching, learning, or customizing the Technology for personal use. Research Use expressly excludes use or distribution for direct or indirect commercial (including strategic) gain or advantage.
"Technology" means the source code, object code and specifications of the technology made available by Licensor pursuant to this License.
"Technology Site" means the website designated by Licensor for accessing the Technology.
"You" means the individual executing this License or the legal entity or entities represented by the individual executing this License.
II. PURPOSE.
Licensor is licensing the Technology under this Research License (the "License") to promote research, education, innovation, and development using the Technology.
COMMERCIAL USE AND DISTRIBUTION OF TECHNOLOGY AND MODIFICATIONS IS PERMITTED ONLY UNDER AN APPROPRIATE COMMERCIAL USE LICENSE AVAILABLE FROM LICENSOR AT <url>.
III. RESEARCH USE RIGHTS.
A. Subject to the conditions contained herein, Licensor grants to You a non-exclusive, non-transferable, worldwide, and royalty-free license to do the following for Your Research Use only:
1. reproduce, create Modifications of, and use the Technology alone, or with Modifications;
2. share source code of the Technology alone, or with Modifications, with other Licensees;
3. distribute object code of the Technology, alone, or with Modifications, to any third parties for Research Use only, under a license of Your choice that is consistent with this License; and
4. publish papers and books discussing the Technology which may include relevant excerpts that do not in the aggregate constitute a significant portion of the Technology.
B. Residual Rights. You may use any information in intangible form that you remember after accessing the Technology, except when such use violates Licensor's copyrights or patent rights.
C. No Implied Licenses. Other than the rights granted herein, Licensor retains all rights, title, and interest in Technology, and You retain all rights, title, and interest in Your Modifications and associated specifications, subject to the terms of this License.
D. Open Source Licenses. Portions of the Technology may be provided with notices and open source licenses from open source communities and third parties that govern the use of those portions, and any licenses granted hereunder do not alter any rights and obligations you may have under such open source licenses, however, the disclaimer of warranty and limitation of liability provisions in this License will apply to all Technology in this distribution.
IV. INTELLECTUAL PROPERTY REQUIREMENTS
As a condition to Your License, You agree to comply with the following restrictions and responsibilities:
A. License and Copyright Notices. You must include a copy of this License in a Readme file for any Technology or Modifications you distribute. You must also include the following statement, "Use and distribution of this technology is subject to the Java Research License included herein", (a) once prominently in the source code tree and/or specifications for Your source code distributions, and (b) once in the same file as Your copyright or proprietary notices for Your binary code distributions. You must cause any files containing Your Modification to carry prominent notice stating that You changed the files. You must not remove or alter any copyright or other proprietary notices in the Technology.
B. Licensee Exchanges. Any Technology and Modifications You receive from any Licensee are governed by this License.
V. GENERAL TERMS.
A. Disclaimer Of Warranties.
TECHNOLOGY IS PROVIDED "AS IS", WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, WARRANTIES THAT ANY SUCH TECHNOLOGY IS FREE OF DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR PURPOSE, OR NON-INFRINGING OF THIRD PARTY RIGHTS. YOU AGREE THAT YOU BEAR THE ENTIRE RISK IN CONNECTION WITH YOUR USE AND DISTRIBUTION OF ANY AND ALL TECHNOLOGY UNDER THIS LICENSE.
B. Infringement; Limitation Of Liability.
1. If any portion of, or functionality implemented by, the Technology becomes the subject of a claim or threatened claim of infringement ("Affected Materials"), Licensor may, in its unrestricted discretion, suspend Your rights to use and distribute the Affected Materials under this License. Such suspension of rights will be effective immediately upon Licensor's posting of notice of suspension on the Technology Site.
2. IN NO EVENT WILL LICENSOR BE LIABLE FOR ANY DIRECT, INDIRECT, PUNITIVE, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH OR ARISING OUT OF THIS LICENSE (INCLUDING, WITHOUT LIMITATION, LOSS OF PROFITS, USE, DATA, OR ECONOMIC ADVANTAGE OF ANY SORT), HOWEVER IT ARISES AND ON ANY THEORY OF LIABILITY (including negligence), WHETHER OR NOT LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. LIABILITY UNDER THIS SECTION V.B.2 SHALL BE SO LIMITED AND EXCLUDED, NOTWITHSTANDING FAILURE OF THE ESSENTIAL PURPOSE OF ANY REMEDY.
C. Termination.
1. You may terminate this License at any time by notifying Licensor in writing.
2. All Your rights will terminate under this License if You fail to comply with any of its material terms or conditions and do not cure such failure within thirty (30) days after becoming aware of such noncompliance.
3. Upon termination, You must discontinue all uses and distribution of the Technology, and all provisions of this Section V shall survive termination.
D. Miscellaneous.
1. Trademark. You agree to comply with Licensor's Trademark & Logo Usage Requirements, if any and as modified from time to time, available at the Technology Site. Except as expressly provided in this License, You are granted no rights in or to any Licensor's trademarks now or hereafter used or licensed by Licensor.
2. Integration. This License represents the complete agreement of the parties concerning the subject matter hereof.
3. Severability. If any provision of this License is held unenforceable, such provision shall be reformed to the extent necessary to make it enforceable unless to do so would defeat the intent of the parties, in which case, this License shall terminate.
4. Governing Law. This License is governed by the laws of the United States and the State of California, as applied to contracts entered into and performed in California between California residents. In no event shall this License be construed against the drafter.
5. Export Control. You agree to comply with the U.S. export controls and trade laws of other countries that apply to Technology and Modifications.
READ ALL THE TERMS OF THIS LICENSE CAREFULLY BEFORE ACCEPTING.
BY CLICKING ON THE YES BUTTON BELOW OR USING THE TECHNOLOGY, YOU ARE ACCEPTING AND AGREEING TO ABIDE BY THE TERMS AND CONDITIONS OF THIS LICENSE. YOU MUST BE AT LEAST 18 YEARS OF AGE AND OTHERWISE COMPETENT TO ENTER INTO CONTRACTS.
IF YOU DO NOT MEET THESE CRITERIA, OR YOU DO NOT AGREE TO ANY OF THE TERMS OF THIS LICENSE, DO NOT USE THIS SOFTWARE IN ANY FORM.


@@ -0,0 +1,388 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package mempool
import "fmt"
import "sync"
import "sort"
import "time"
import "sync/atomic"
import "github.com/go-logr/logr"
import "github.com/deroproject/derohe/transaction"
import "github.com/deroproject/derohe/globals"
import "github.com/deroproject/derohe/metrics"
import "github.com/deroproject/derohe/cryptography/crypto"
// this is only used for sorting and nothing else
type TX_Sorting_struct struct {
FeesPerByte uint64 // this is fees per byte
Hash crypto.Hash // transaction hash
Size uint64 // transaction size
}
// NOTE: do NOT consider this code as useless, as it is used to avoid double spending attacks within the block and within the pool
// let me explain: since we are a state machine, we add blocks to our blockchain
// so, if a double spending attack comes, 2 transactions with same inputs, we reject one of them
// the algo is documented somewhere else which explains the entire process
// at this point in time, this is an ultra-fast but hastily written mempool,
// it will not scale for more than 10000 transactions but is good enough for now
// we can always come back and rewrite it
// NOTE: the pool is now persistent
type Mempool struct {
txs sync.Map //map[crypto.Hash]*mempool_object
nonces sync.Map //map[crypto.Hash]bool // contains key images of all txs
sorted_by_fee []crypto.Hash // contains txids sorted by fees
sorted []TX_Sorting_struct // contains TX sorting information, so as new block can be forged easily
modified bool // used to monitor whether mempool contents have changed
height uint64 // track blockchain height
// global variable, but we don't see its utilisation here except for tx verification
//chain *Blockchain
Exit_Mutex chan bool
sync.Mutex
}
// this object is serialized and deserialized
type mempool_object struct {
Tx *transaction.Transaction
Added uint64 // time in epoch format
Height uint64 // at which height the tx unlocks in the mempool
Size uint64 // size in bytes of the TX
FEEperBYTE uint64 // fee per byte
}
var loggerpool logr.Logger
func Init_Mempool(params map[string]interface{}) (*Mempool, error) {
var mempool Mempool
//mempool.chain = params["chain"].(*Blockchain)
loggerpool = globals.Logger.WithName("MEMPOOL") // all components must use this logger
loggerpool.Info("Mempool started")
atomic.AddUint32(&globals.Subsystem_Active, 1) // increment subsystem
mempool.Exit_Mutex = make(chan bool)
metrics.Set.GetOrCreateGauge("mempool_count", func() float64 {
count := float64(0)
mempool.txs.Range(func(k, value interface{}) bool {
count++
return true
})
return count
})
return &mempool, nil
}
func (pool *Mempool) HouseKeeping(height uint64) {
pool.height = height
// this code is executed under the following condition:
// we have to purge old txs which can no longer be mined
var delete_list []crypto.Hash
pool.txs.Range(func(k, value interface{}) bool {
txhash := k.(crypto.Hash)
v := value.(*mempool_object)
if height >= (v.Tx.Height + 10) { // if we have moved 10 heights, chances of reorg are almost nil
delete_list = append(delete_list, txhash)
}
return true
})
for i := range delete_list {
metrics.Set.GetOrCreateCounter("mempool_discarded_total").Inc()
pool.Mempool_Delete_TX(delete_list[i])
}
}
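The purge rule used by HouseKeeping above can be shown in isolation. This is a tiny sketch with a hypothetical `shouldPurge` helper: a pooled tx is discarded once the chain has advanced 10 heights past the height the tx was built for, since a reorg that deep is considered practically impossible.

```go
package main

import "fmt"

// shouldPurge is a hypothetical helper mirroring the HouseKeeping check:
// purge when the chain is at least 10 heights past the tx's height.
func shouldPurge(chainHeight, txHeight uint64) bool {
	return chainHeight >= txHeight+10
}

func main() {
	fmt.Println(shouldPurge(100, 91)) // only 9 heights old: keep
	fmt.Println(shouldPurge(101, 91)) // 10 heights old: purge
}
```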
func (pool *Mempool) Shutdown() {
//TODO save mempool tx somewhere
close(pool.Exit_Mutex) // stop relaying
pool.Lock()
defer pool.Unlock()
loggerpool.Info("Mempool stopped")
atomic.AddUint32(&globals.Subsystem_Active, ^uint32(0)) // this decrements the subsystem count by 1
}
// start pool monitoring for changes for some specific time
// this is required so that we can add or discard transactions while selecting work for mining
func (pool *Mempool) Monitor() {
pool.Lock()
pool.modified = false
pool.Unlock()
}
// return whether pool contents have changed
func (pool *Mempool) HasChanged() (result bool) {
pool.Lock()
result = pool.modified
pool.Unlock()
return
}
// a tx should only be added to pool after verification is complete
func (pool *Mempool) Mempool_Add_TX(tx *transaction.Transaction, Height uint64) (result bool) {
result = false
pool.Lock()
defer pool.Unlock()
var object mempool_object
tx_hash := crypto.Hash(tx.GetHash())
dup_within_tx := map[crypto.Hash]bool{}
for i := range tx.Payloads {
if pool.Mempool_Nonce_Used(tx.Payloads[i].Proof.Nonce()) {
return false
}
if _, ok := dup_within_tx[tx.Payloads[i].Proof.Nonce()]; ok {
return false
}
dup_within_tx[tx.Payloads[i].Proof.Nonce()] = true
}
// check if tx already exists, skip it
if _, ok := pool.txs.Load(tx_hash); ok {
//rlog.Debugf("Pool already contains %s, skipping", tx_hash)
return false
}
for i := range tx.Payloads {
pool.nonces.Store(tx.Payloads[i].Proof.Nonce(), true)
}
// we are here means we can add it to pool
object.Tx = tx
object.Height = Height
object.Added = uint64(time.Now().UTC().Unix())
object.Size = uint64(len(tx.Serialize()))
object.FEEperBYTE = tx.Fees() / object.Size
pool.txs.Store(tx_hash, &object)
pool.modified = true // pool has been modified
//pool.sort_list() // sort and update pool list
return true
}
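The nonce-based double-spend guard inside Mempool_Add_TX above can be sketched with a toy pool. This is an illustration only: `toyTx` and `toyPool` are hypothetical stand-ins for the real transaction payloads and the `nonces` sync.Map, showing that a tx is rejected if any payload nonce is already pooled or repeated within the tx itself.

```go
package main

import "fmt"

// toyTx and toyPool are hypothetical stand-ins; nonces are plain strings
// here instead of crypto.Hash values.
type toyTx struct {
	id     string
	nonces []string
}

type toyPool struct {
	nonces map[string]bool // nonces of every pooled tx
}

// add rejects the whole tx on any nonce reuse, then registers its nonces.
func (p *toyPool) add(tx toyTx) bool {
	seen := map[string]bool{}
	for _, n := range tx.nonces {
		if p.nonces[n] || seen[n] {
			return false // double spend attempt: reject
		}
		seen[n] = true
	}
	for _, n := range tx.nonces {
		p.nonces[n] = true
	}
	return true
}

func main() {
	p := &toyPool{nonces: map[string]bool{}}
	fmt.Println(p.add(toyTx{id: "a", nonces: []string{"n1", "n2"}})) // accepted
	fmt.Println(p.add(toyTx{id: "b", nonces: []string{"n2", "n3"}})) // n2 already pooled
	fmt.Println(p.add(toyTx{id: "c", nonces: []string{"n4", "n4"}})) // duplicate within tx
}
```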
// check whether a tx exists in the pool
func (pool *Mempool) Mempool_TX_Exist(txid crypto.Hash) (result bool) {
//pool.Lock()
//defer pool.Unlock()
if _, ok := pool.txs.Load(txid); ok {
return true
}
return false
}
// check whether a keyimage exists in the pool
func (pool *Mempool) Mempool_Nonce_Used(ki crypto.Hash) (result bool) {
//pool.Lock()
//defer pool.Unlock()
if _, ok := pool.nonces.Load(ki); ok {
return true
}
return false
}
// delete specific tx from pool and return it
// if nil is returned Tx was not found in pool
func (pool *Mempool) Mempool_Delete_TX(txid crypto.Hash) (tx *transaction.Transaction) {
//pool.Lock()
//defer pool.Unlock()
var ok bool
var objecti interface{}
// check if tx already exists, skip it
if objecti, ok = pool.txs.Load(txid); !ok {
// rlog.Warnf("Pool does NOT contain %s, returning nil", txid)
return nil
}
// we reached here means we have the tx; remove it from our list, do maintenance cleanup and discard it
object := objecti.(*mempool_object)
tx = object.Tx
pool.txs.Delete(txid)
// remove all the key images
//TODO
// for i := 0; i < len(object.Tx.Vin); i++ {
// pool.nonces.Delete(object.Tx.Vin[i].(transaction.Txin_to_key).K_image)
// }
for i := range tx.Payloads {
pool.nonces.Delete(tx.Payloads[i].Proof.Nonce())
}
//pool.sort_list() // sort and update pool list
pool.modified = true // pool has been modified
return object.Tx // return the tx
}
// get specific tx from mem pool without removing it
func (pool *Mempool) Mempool_Get_TX(txid crypto.Hash) (tx *transaction.Transaction) {
// pool.Lock()
// defer pool.Unlock()
var ok bool
var objecti interface{}
if objecti, ok = pool.txs.Load(txid); !ok {
//loggerpool.Warnf("Pool does NOT contain %s, returning nil", txid)
return nil
}
// we reached here means, we have the tx, return the pointer back
//object := pool.txs[txid]
object := objecti.(*mempool_object)
return object.Tx
}
// return list of all txs in pool
func (pool *Mempool) Mempool_List_TX() []crypto.Hash {
// pool.Lock()
// defer pool.Unlock()
var list []crypto.Hash
pool.txs.Range(func(k, value interface{}) bool {
txhash := k.(crypto.Hash)
//v := value.(*mempool_object)
//objects = append(objects, *v)
list = append(list, txhash)
return true
})
//pool.sort_list() // sort and update pool list
// list should be as big as source list
//list := make([]crypto.Hash, len(pool.sorted_by_fee), len(pool.sorted_by_fee))
//copy(list, pool.sorted_by_fee) // return list sorted by fees
return list
}
// passes back sorting information and length information for easier new block forging
func (pool *Mempool) Mempool_List_TX_SortedInfo() []TX_Sorting_struct {
// pool.Lock()
// defer pool.Unlock()
_, data := pool.sort_list() // sort and update pool list
return data
/* // list should be as big as source list
list := make([]TX_Sorting_struct, len(pool.sorted), len(pool.sorted))
copy(list, pool.sorted) // return list sorted by fees
return list
*/
}
// print current mempool txs
// TODO add sorting
func (pool *Mempool) Mempool_Print() {
pool.Lock()
defer pool.Unlock()
var klist []crypto.Hash
var vlist []*mempool_object
pool.txs.Range(func(k, value interface{}) bool {
txhash := k.(crypto.Hash)
v := value.(*mempool_object)
//objects = append(objects, *v)
klist = append(klist, txhash)
vlist = append(vlist, v)
return true
})
loggerpool.Info(fmt.Sprintf("Total TX in mempool = %d\n", len(klist)))
loggerpool.Info(fmt.Sprintf("%20s %7s %6s %32s\n", "Added", "Size", "Height", "TXID"))
for i := range klist {
k := klist[i]
v := vlist[i]
loggerpool.Info(fmt.Sprintf("%20s %7d %6d %32s\n", time.Unix(int64(v.Added), 0).UTC().Format(time.RFC3339),
len(v.Tx.Serialize()), v.Height, k))
}
}
// flush mempool
func (pool *Mempool) Mempool_flush() {
var list []crypto.Hash
pool.txs.Range(func(k, value interface{}) bool {
txhash := k.(crypto.Hash)
//v := value.(*mempool_object)
//objects = append(objects, *v)
list = append(list, txhash)
return true
})
loggerpool.Info("Total TX in mempool", "txcount", len(list))
loggerpool.Info("Flushing mempool")
for i := range list {
pool.Mempool_Delete_TX(list[i])
}
}
// sorts the pool internally
// this function assumes lock is already taken
// ??? if we are selecting transactions randomly, why keep them sorted
func (pool *Mempool) sort_list() ([]crypto.Hash, []TX_Sorting_struct) {
data := make([]TX_Sorting_struct, 0, 512) // we are rarely expecting more than this many entries in mempool
// collect data from pool for sorting
pool.txs.Range(func(k, value interface{}) bool {
txhash := k.(crypto.Hash)
v := value.(*mempool_object)
if v.Height <= pool.height {
data = append(data, TX_Sorting_struct{Hash: txhash, FeesPerByte: v.FEEperBYTE, Size: v.Size})
}
return true
})
// inverted comparison in sort to get descending order
sort.SliceStable(data, func(i, j int) bool { return data[i].FeesPerByte > data[j].FeesPerByte })
sorted_list := make([]crypto.Hash, 0, len(data))
//pool.sorted_by_fee = pool.sorted_by_fee[:0] // empty old slice
for i := range data {
sorted_list = append(sorted_list, data[i].Hash)
}
//pool.sorted = data
return sorted_list, data
}
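The descending fee ordering done by sort_list above can be demonstrated standalone. A minimal sketch, with a hypothetical `entry` type standing in for TX_Sorting_struct: `sort.SliceStable` with an inverted comparison puts the highest fee-per-byte first, and stability preserves insertion order for equal fees.

```go
package main

import (
	"fmt"
	"sort"
)

// entry is a hypothetical stand-in for TX_Sorting_struct.
type entry struct {
	hash        string
	feesPerByte uint64
}

// sortByFees orders entries highest fee-per-byte first; ties keep
// their original relative order because the sort is stable.
func sortByFees(data []entry) []entry {
	sort.SliceStable(data, func(i, j int) bool { return data[i].feesPerByte > data[j].feesPerByte })
	return data
}

func main() {
	for _, e := range sortByFees([]entry{{"tx-a", 5}, {"tx-b", 20}, {"tx-c", 5}, {"tx-d", 11}}) {
		fmt.Println(e.hash, e.feesPerByte)
	}
}
```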


@@ -0,0 +1,135 @@
// Copyright 2017-2018 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package mempool
//import "fmt"
//import "bytes"
import "testing"
import "encoding/hex"
import "github.com/deroproject/derohe/transaction"
// test the mempool interface with valid TX
func Test_mempool(t *testing.T) {
// this tx is from internal testnet
// tx_id 499002f3fb7fea8a71dac93dea65c0ff74be05b0858078a27a48d78b71eacf87
tx_hex := "01000003519b9e874a9dd7fcbfb7802268123dcf970c336891101a3a0705855b72b9eb08c80100000000000000000000000000000000000000000000000000000000000000000000f394d06566a81b024fd8624b4c8592f8b9344e58f227261eb08d821bd190b4ac9c7df0660038ddcf881602a14fe5ad8eebff83d3ce3541a1f232fe61acf71109e417160936863e8c0597c798b73ef08e76b8eeebcaf20260ba91fcdef5f626361f824dc6197b02c599d4511c624a933226d8fccc125611ec80d8ad5acea4baf2f1c77e4c5525d9859b40f43daaf3d4e4450203d80419dcb026b9adbd7d6912284ee9ce20f3721076947f16e1faefd9b8c35116f4cf0044ead0a12fda3f3bcf2ed91627b922c02c10d112df9e782d3750cda151f6ba2ebe6495065141f32800894566011198c4055458005a31d3e285037446ba14bac54420bcab87cbebeaccab1f158b011e90132b7310e6c866e348cab68d21bad1d9f811b4d667a1e40bdef1a19259d3000b972ae06da958658b12ca4a45c2c6ea50d305be86f7e979fc962c687eb012f401a2c4bf0beb6c713343b94d65ee6d937660480daf670ab5b0ed421a9103e8a25119c836dd799600e67e4ec66ef680833398a99b16956aed2ee5bd9e6c8e68009a01145636e5e448d6c8ab902db394aa9c5aeee92dd150e1150d5b8f6674f330c279010912df27232ae126525a130ed8a9b4dee62eb57ce1d8a42ae9bcef867b10ce94012f94aa78846fcdda504243f83f3122aee5dc38dd32fa2a90224b3a40b4cf79c301129028709e39591c55e6e477a36cc26dc2efd7183fb59f51635dbd114dfed9d5010d66f31b781fa181c2114600ee1f03d55439355265700fb9d35ac684a19db398001e12558cc17e59fb85fb825481dee6fef3612c043bb38ca4833183da5381364600077a99d4d2df2f06067359f178a40f98a3e0943309c5ce3ce5398b9909174254012aa49cfc78ebec51c00e7824db0d8d41ef43fbb0396824d2c31b7db64124cc61000a8b51bd6c94a502f7aea3e5001f00d76b165ad580e48f8aa72813da8240286e00055f3480aa7cdb68a20d4b1e567d42389b44cf4e345da9e5a5655a80418695bc012c8f1e1ef8af646b4fb08afaeb3febde17c4c0d04b30dfd22d96a32ced9c9439010c40484e1547d307ad56b79bb7450e4eaaed7ed79bd5392dffd7449043f8783a0015104264f8d80356176ca6fdf4777e4ebc8edcbab50f8b6c366ac75ebd6bb5c701078d55053d2d41f21f51d5e617f282ba9dfa2576f0231048dd73998ceaa699fd0105821b61addd6935c7d80e65c706f0f2aa2d73ad22c5548c9c8b92cd82689dc40115f576c413a37014b9ee5377b121c69ad6e0f00f2b4f6c99d1f68b00a5d923370119b29fe
6e9cf35f1344e24d52ee1fdb6521de3176f3d2b2a2b9327ae6e6cfbb10017198369a6ff180253700361c40e7f378c8e7ead6e04f01dbf5b4c47e7e1e4ff001abaf84628764eccbc5d0f66dc0a99505bb0e598362c45312470526afad11caf00242c6a902c564b3b363eb4ae08f643f18571779428358bfbdaf9856b0ebf229f0100000000000000000000000000000000000000000000000000000000000000001b7a78b6f573d4ac5d3a8d7d10dafac0dbc92604474417b29d40448d9fbc821d0fe3f73da41b27752d5db0658fcc0bff7da21112417ac719e491fdd43c8e5d0b07bcca22c27367a276e6a4e0cfb1bfbc772842898356ad4723101d4ada5137f409bb3b290d7519da3af15feb09e183e2be809690099551f1cb984deb1295b7611dd0cfd9000482314a388512d278fc22b2f6e3fc6e95023b4bd228e22626b7440119fa2c6f45a33b008ef5b23a5522ebbf0bf5f26d8263613a7f9b5ad9a192357600107a023453f7f79f413756dfba0096ae5d13158bbbb147089a020e819ef6640728acf01c811b967d1a14d767ac20088c5ca1613aa5d94ebe7fe2840b6431d8d002137dae183071d12cd9fb14c4b4738ef750ff514ce26257d1245eaedfea625516bfcb075a33a8910dc4e40742bfaa70d95591711e9db33d0db0be06cc819817072c22a9c81f4369f68bb1ef71c7f7458671d647caef18b30cca91da4f1b201d082a03306b4a07d1f0e1a8cc8844038fc524201a6c3a346a971203fd45458d292987d44049eecdac25c5622465a8d683a43621612d2fb283724b17a7bfa3bad2266a6fb1cf669eef21c90e24e85d6ab48e8b237355e964407dde00d65fb442dd210ce0378349f533c0e8ab65293de5319e246a044ed22b1e6db28add1508dd6f0da22954eea8a431b211a8b54147e32fec6593f9fb731389f62f94d5047e0f3301216576c71a79629bc3879325280e216d59cf7e6ff2d6a0babe418f8f2ed603ce010e16785f8c855043221ed5a603d4ed4e8e90059669543738ceb2a04dc996079c01264e9beaf4bb82e1b8f47bb0c239167064001e306eeef2b5495ec42754806b65012357f738543b13594ff081a070dbc0575645ac5ce8e0f89d3afdeb3d49e9ddae012a17d5095b9cef08f6eee2ce733dbf2cfcdb8c23e39410ed973e68cf8505266d00280c52b45752dd12ba84499618898111a8f9b19f2b76c06917e57eef221f3387011cc3360b478d144118a82a3c4391814f34aac2fb0f2d043394304bed179689e501292ca5631820f5f465242b4cc517f06c153c176a43f77daca463b7e2d3e2d7ba001aa0b2c90c3172cd2556137d78748f47924432449210d8b2f5e27adb6210bc77012c92a12526a5b0b1ef60994eb4d08e9bb5a3ea330d417838a52ad86
36d37fa9e00131153c9affeefd6b98ecb6a80f5e6ea97417376030325182c064e12dd353c32011f4e25c49eea09adaa46d56642e2ea0afced2d75036acd3839435863f1ec3bf1002e19fe8456a7236c8fb77a15f87164debaad865e85c3468152a5deaffde3da6a01"
var tx, dup_tx transaction.Transaction
tx_raw, _ := hex.DecodeString(tx_hex)
err := tx.Deserialize(tx_raw)
dup_tx.Deserialize(tx_raw)
if err != nil {
t.Errorf("Tx Deserialisation failed")
}
pool, err := Init_Mempool(nil)
if err != nil {
t.Errorf("Pool initialization failed")
}
if len(pool.Mempool_List_TX()) != 0 {
t.Errorf("Pool should be initialized in empty state")
}
if pool.Mempool_Add_TX(&tx, 0) != true {
t.Errorf("Cannot Add transaction to pool in empty state")
}
if pool.Mempool_TX_Exist(tx.GetHash()) != true {
t.Errorf("TX should already be in pool")
}
list_tx := pool.Mempool_List_TX()
if len(list_tx) != 1 || list_tx[0] != tx.GetHash() {
t.Errorf("Pool List tx failed")
}
get_tx := pool.Mempool_Get_TX(tx.GetHash())
if tx.GetHash() != get_tx.GetHash() {
t.Errorf("Pool get_tx failed")
}
// re-adding tx should fail
if pool.Mempool_Add_TX(&tx, 0) == true || len(pool.Mempool_List_TX()) > 1 {
t.Errorf("Pool should not allow duplicate TX")
}
// modify tx and readd
dup_tx.DestNetwork = 1 //modify tx so txid changes, still it should be rejected
if tx.GetHash() == dup_tx.GetHash() {
t.Errorf("tx and duplicate tx must have different hash")
}
if pool.Mempool_Add_TX(&dup_tx, 0) == true {
t.Errorf("Pool should not allow duplicate Key images %d", len(pool.Mempool_List_TX()))
}
if len(pool.Mempool_List_TX()) != 1 {
t.Errorf("Pool should have only 1 tx, actual %d", len(pool.Mempool_List_TX()))
}
// pool must have 1 key_image
key_image_count := 0
pool.nonces.Range(func(k, value interface{}) bool {
key_image_count++
return true
})
if key_image_count != 1 {
t.Errorf("Pool does not have the necessary key image")
}
if pool.Mempool_Delete_TX(dup_tx.GetHash()) != nil {
t.Errorf("deleting a non-existent TX should return nil\n")
}
// pool must have 1 key_image
key_image_count = 0
pool.nonces.Range(func(k, value interface{}) bool {
key_image_count++
return true
})
if key_image_count != 1 {
t.Errorf("Pool must have necessary key image")
}
// lets delete
if pool.Mempool_Delete_TX(tx.GetHash()) == nil {
t.Errorf("existing TX could not be deleted\n")
}
key_image_count = 0
pool.nonces.Range(func(k, value interface{}) bool {
key_image_count++
return true
})
if key_image_count != 0 {
t.Errorf("Pool should not have any key image")
}
if len(pool.Mempool_List_TX()) != 0 {
t.Errorf("Pool should have 0 tx")
}
}

blockchain/miner_block.go Normal file

@@ -0,0 +1,658 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package blockchain
import "fmt"
import "bytes"
import "sort"
import "sync"
import "runtime/debug"
import "encoding/binary"
import "golang.org/x/xerrors"
import "golang.org/x/time/rate"
import "golang.org/x/crypto/sha3"
// this file creates the blobs which can be used to mine new blocks
import "github.com/deroproject/derohe/block"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/globals"
import "github.com/deroproject/derohe/rpc"
import "github.com/deroproject/derohe/errormsg"
import "github.com/deroproject/derohe/transaction"
import "github.com/deroproject/graviton"
const TX_VALIDITY_HEIGHT = 11
// structure used to rank/sort blocks on a number of factors
type BlockScore struct {
BLID crypto.Hash
//MiniCount int
Height int64 // block height
}
// Highest height is ordered first; the condition is reversed, see e.g. https://golang.org/pkg/sort/#Slice
// if heights are equal, nodes are sorted by their block ids which will never collide, hopefully
// block ids are compared lowest byte first
func sort_descending_by_height_blid(tips_scores []BlockScore) {
sort.Slice(tips_scores, func(i, j int) bool {
if tips_scores[i].Height != tips_scores[j].Height { // heights differ, higher height wins
return tips_scores[i].Height > tips_scores[j].Height
}
// heights are equal, fall back to comparing block IDs byte-wise
return bytes.Compare(tips_scores[i].BLID[:], tips_scores[j].BLID[:]) == -1
})
}
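The comparator above can be exercised standalone; a minimal sketch (plain [4]byte IDs stand in for crypto.Hash, and `score`/`sortScores` are illustrative names, not part of the codebase):

```go
package main

import (
	"bytes"
	"fmt"
	"sort"
)

type score struct {
	ID     [4]byte
	Height int64
}

// sortScores mirrors sort_descending_by_height_blid:
// higher height first, ties broken by byte-wise lower ID
func sortScores(s []score) {
	sort.Slice(s, func(i, j int) bool {
		if s[i].Height != s[j].Height {
			return s[i].Height > s[j].Height
		}
		return bytes.Compare(s[i].ID[:], s[j].ID[:]) == -1
	})
}

func main() {
	s := []score{
		{ID: [4]byte{0xbb}, Height: 5},
		{ID: [4]byte{0xaa}, Height: 7},
		{ID: [4]byte{0x01}, Height: 5},
	}
	sortScores(s)
	for _, e := range s {
		fmt.Printf("height=%d id=%02x\n", e.Height, e.ID[0])
	}
}
```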
func sort_ascending_by_height(tips_scores []BlockScore) {
sort.Slice(tips_scores, func(i, j int) bool { return tips_scores[i].Height < tips_scores[j].Height })
}
// this will sort the tips based on height and/or block IDs
// the tips will be sorted in descending order
func (chain *Blockchain) SortTips(tips []crypto.Hash) (sorted []crypto.Hash) {
if len(tips) == 0 {
panic("tips cannot be empty")
}
if len(tips) == 1 {
sorted = []crypto.Hash{tips[0]}
return
}
tips_scores := make([]BlockScore, len(tips))
for i := range tips {
tips_scores[i].BLID = tips[i]
tips_scores[i].Height = chain.Load_Block_Height(tips[i])
}
sort_descending_by_height_blid(tips_scores)
for i := range tips_scores {
sorted = append(sorted, tips_scores[i].BLID)
}
return
}
// used by tip
func convert_uint32_to_crypto_hash(i uint32) crypto.Hash {
var h crypto.Hash
binary.BigEndian.PutUint32(h[:], i)
return h
}
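A quick standalone check of this helper's behavior (a plain [32]byte stands in for crypto.Hash; `uint32ToHash` is an illustrative copy):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// uint32ToHash mirrors convert_uint32_to_crypto_hash:
// the value lands big-endian in the first 4 bytes, the rest stay zero
func uint32ToHash(i uint32) (h [32]byte) {
	binary.BigEndian.PutUint32(h[:], i)
	return
}

func main() {
	h := uint32ToHash(0x01020304)
	fmt.Printf("%x\n", h[:6]) // 010203040000
}
```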
// NOTE: this function is quite big since we do a lot of things in preparation of the next block
func (chain *Blockchain) Create_new_miner_block(miner_address rpc.Address) (cbl *block.Complete_Block, bl block.Block, err error) {
//chain.Lock()
//defer chain.Unlock()
cbl = &block.Complete_Block{}
topoheight := chain.Load_TOPO_HEIGHT()
toporecord, err := chain.Store.Topo_store.Read(topoheight)
if err != nil {
return
}
ss, err := chain.Store.Balance_store.LoadSnapshot(toporecord.State_Version)
if err != nil {
return
}
balance_tree, err := ss.GetTree(config.BALANCE_TREE)
if err != nil {
return
}
var tips []crypto.Hash
// lets fill in the tips from miniblocks, list is already sorted
if keys := chain.MiniBlocks.GetAllKeys(chain.Get_Height() + 1); len(keys) > 0 {
for _, key := range keys {
mbls := chain.MiniBlocks.GetAllMiniBlocks(key)
if len(mbls) < 1 {
continue
}
mbl := mbls[0]
tips = tips[:0]
tip := convert_uint32_to_crypto_hash(mbl.Past[0])
if ehash, ok := chain.ExpandMiniBlockTip(tip); ok {
tips = append(tips, ehash)
} else {
continue
}
if mbl.PastCount == 2 {
tip = convert_uint32_to_crypto_hash(mbl.Past[1])
if ehash, ok := chain.ExpandMiniBlockTip(tip); ok {
tips = append(tips, ehash)
} else {
continue
}
}
if mbl.PastCount == 2 && mbl.Past[0] == mbl.Past[1] {
continue
}
break
}
}
if len(tips) == 0 {
tips = chain.SortTips(chain.Get_TIPS())
}
for i := range tips {
if len(bl.Tips) < 2 { //only 2 tips max
var check_tips []crypto.Hash
check_tips = append(check_tips, bl.Tips...)
check_tips = append(check_tips, tips[i])
if chain.CheckDagStructure(check_tips) { // avoid any tips which fail structure test
bl.Tips = append(bl.Tips, tips[i])
}
}
}
height := chain.Calculate_Height_At_Tips(bl.Tips) // we are 1 higher than previous highest tip
history := map[crypto.Hash]bool{}
var history_array []crypto.Hash
for i := range bl.Tips {
h := height - 20
if h < 0 {
h = 0
}
history_array = append(history_array, chain.get_ordered_past(bl.Tips[i], h)...)
}
for _, h := range history_array {
history[h] = true
}
var tx_hash_list_included []crypto.Hash // these tx will be included ( due to block size limit )
sizeoftxs := uint64(0) // size of all non coinbase tx included within this block
// add up to 100 registration txs; each registration tx is 99 bytes, so 100 txs take 9900 bytes (~10 KiB)
{
tx_hash_list_sorted := chain.Regpool.Regpool_List_TX() // hashes of all txs expected to be included within this block, sorted by fees
for i := range tx_hash_list_sorted {
if tx := chain.Regpool.Regpool_Get_TX(tx_hash_list_sorted[i]); tx != nil {
if _, err = balance_tree.Get(tx.MinerAddress[:]); err != nil {
if xerrors.Is(err, graviton.ErrNotFound) { // address needs registration
cbl.Txs = append(cbl.Txs, tx)
tx_hash_list_included = append(tx_hash_list_included, tx_hash_list_sorted[i])
}
}
}
}
}
hf_version := chain.Get_Current_Version_at_Height(height)
//rlog.Infof("Total tx in pool %d", len(tx_hash_list_sorted))
// select txs based on fees
// first, let's find the tx fees collected by consuming txs from the mempool
tx_hash_list_sorted := chain.Mempool.Mempool_List_TX_SortedInfo() // hashes of all txs expected to be included within this block, sorted by fees
logger.V(8).Info("mempool returned tx list", "tx_list", tx_hash_list_sorted)
var pre_check cbl_verify // used to verify sanity of new block
history_tx := map[crypto.Hash]bool{} // used to build history of recent blocks
for _, h := range history_array {
var history_bl *block.Block
if history_bl, err = chain.Load_BL_FROM_ID(h); err != nil {
return
}
for i := range history_bl.Tx_hashes {
history_tx[history_bl.Tx_hashes[i]] = true
}
}
for i := range tx_hash_list_sorted {
if (sizeoftxs + tx_hash_list_sorted[i].Size) > (config.STARGATE_HE_MAX_BLOCK_SIZE - 102400) { // limit block to max possible
break
}
if _, ok := history_tx[tx_hash_list_sorted[i].Hash]; ok {
logger.V(8).Info("not selecting tx since it is already mined", "txid", tx_hash_list_sorted[i].Hash)
continue
}
if tx := chain.Mempool.Mempool_Get_TX(tx_hash_list_sorted[i].Hash); tx != nil {
if int64(tx.Height) < height {
if !history[tx.BLID] {
logger.V(8).Info("not selecting tx since the reference with which it was made is not in history", "txid", tx_hash_list_sorted[i].Hash)
continue
}
if tx.IsProofRequired() && len(bl.Tips) == 2 {
if tx.BLID == bl.Tips[0] || tx.BLID == bl.Tips[1] { // delay txs by a block if they would collide
logger.V(8).Info("not selecting tx due to probable collision", "txid", tx_hash_list_sorted[i].Hash)
continue
}
}
version, err := chain.ReadBlockSnapshotVersion(tx.BLID)
if err != nil {
continue
}
hash, err := chain.Load_Merkle_Hash(version)
if err != nil {
continue
}
if hash != tx.Payloads[0].Statement.Roothash {
//return fmt.Errorf("Tx statement roothash mismatch expected %x actual %x", tx.Payloads[0].Statement.Roothash, hash[:])
continue
}
if height-int64(tx.Height) < TX_VALIDITY_HEIGHT {
if nil == chain.Verify_Transaction_NonCoinbase_CheckNonce_Tips(hf_version, tx, bl.Tips) {
if nil == pre_check.check(tx, false) {
pre_check.check(tx, true)
sizeoftxs += tx_hash_list_sorted[i].Size
cbl.Txs = append(cbl.Txs, tx)
tx_hash_list_included = append(tx_hash_list_included, tx_hash_list_sorted[i].Hash)
logger.V(8).Info("tx selected for mining ", "txlist", tx_hash_list_sorted[i].Hash)
} else {
logger.V(8).Info("not selecting tx due to pre_check failure", "txid", tx_hash_list_sorted[i].Hash)
}
} else {
logger.V(8).Info("not selecting tx due to nonce failure", "txid", tx_hash_list_sorted[i].Hash)
}
} else {
logger.V(8).Info("not selecting tx due to height difference", "txid", tx_hash_list_sorted[i].Hash)
}
} else {
logger.V(8).Info("not selecting tx due to height", "txid", tx_hash_list_sorted[i].Hash)
}
} else {
logger.V(8).Info("not selecting tx since tx is nil", "txid", tx_hash_list_sorted[i].Hash)
}
}
// now we have all major parts of block, assemble the block
bl.Major_Version = uint64(chain.Get_Current_Version_at_Height(height))
bl.Minor_Version = uint64(chain.Get_Ideal_Version_at_Height(height)) // This is used for hard fork voting,
bl.Height = uint64(height)
bl.Timestamp = uint64(globals.Time().UTC().UnixMilli())
bl.Miner_TX.Version = 1
bl.Miner_TX.TransactionType = transaction.COINBASE // what about unregistered users
copy(bl.Miner_TX.MinerAddress[:], miner_address.Compressed())
for i := range bl.Tips { // adjust timestamp if a tip was mined at or after our current time
if chain.Load_Block_Timestamp(bl.Tips[i]) >= uint64(globals.Time().UTC().UnixMilli()) {
bl.Timestamp = chain.Load_Block_Timestamp(bl.Tips[i]) + 1
}
}
// check whether the miner address is registered
if _, err = balance_tree.Get(bl.Miner_TX.MinerAddress[:]); err != nil {
if xerrors.Is(err, graviton.ErrNotFound) { // address needs registration
err = fmt.Errorf("miner address is not registered")
}
return
}
for i := range tx_hash_list_included {
bl.Tx_hashes = append(bl.Tx_hashes, tx_hash_list_included[i])
}
// lets fill in the miniblocks, list is already sorted
var key block.MiniBlockKey
key.Height = bl.Height
key.Past0 = binary.BigEndian.Uint32(bl.Tips[0][:])
if len(bl.Tips) == 2 {
key.Past1 = binary.BigEndian.Uint32(bl.Tips[1][:])
}
if mbls := chain.MiniBlocks.GetAllMiniBlocks(key); len(mbls) > 0 {
if uint64(len(mbls)) > config.BLOCK_TIME-1 {
mbls = mbls[:config.BLOCK_TIME-1]
}
bl.MiniBlocks = mbls
}
cbl.Bl = &bl
return
}
//
func ConvertBlockToMiniblock(bl block.Block, miniblock_miner_address rpc.Address) (mbl block.MiniBlock) {
mbl.Version = 1
if len(bl.Tips) == 0 {
panic("Tips cannot be zero")
}
mbl.Height = bl.Height
timestamp := uint64(globals.Time().UTC().UnixMilli())
mbl.Timestamp = uint16(timestamp) // this will help us better understand network conditions
mbl.PastCount = byte(len(bl.Tips))
for i := range bl.Tips {
mbl.Past[i] = binary.BigEndian.Uint32(bl.Tips[i][:])
}
if uint64(len(bl.MiniBlocks)) != config.BLOCK_TIME-1 {
miner_address_hashed_key := graviton.Sum(miniblock_miner_address.Compressed())
copy(mbl.KeyHash[:], miner_address_hashed_key[:])
} else {
mbl.Final = true
block_header_hash := sha3.Sum256(bl.Serialize()) // note here this block is not present
for i := range mbl.KeyHash {
mbl.KeyHash[i] = block_header_hash[i]
}
}
// leave the flags for users as per their request
for i := range mbl.Nonce {
mbl.Nonce[i] = globals.Global_Random.Uint32() // fill with randomness
}
return
}
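MiniBlock.Timestamp keeps only the low 16 bits of the millisecond clock, so it wraps roughly every 65.5 seconds; a sketch of that truncation (`low16` is an illustrative name):

```go
package main

import "fmt"

// low16 models the uint16(timestamp) truncation used for MiniBlock.Timestamp
func low16(ms uint64) uint16 {
	return uint16(ms) // keeps ms modulo 65536
}

func main() {
	fmt.Println(low16(65535)) // 65535
	fmt.Println(low16(65536)) // wraps to 0
	fmt.Println(low16(65537)) // 1
}
```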
// returns a new block template ready for mining
// block template has the following format
// miner block header in hex +
// miner tx in hex +
// 2 bytes (4 hex characters) for the number of txs
// tx hashes that follow
var cache_block block.Block
var cache_block_mutex sync.Mutex
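The layout described in the comment above can be sketched as follows (illustrative only: `buildTemplate` and its exact field handling are assumptions, not the daemon's real serializer):

```go
package main

import (
	"encoding/binary"
	"encoding/hex"
	"fmt"
)

// buildTemplate sketches the described layout:
// header+miner tx in hex, then a 2-byte tx count (4 hex chars), then tx hashes in hex
func buildTemplate(headerAndMinerTx []byte, txHashes [][32]byte) string {
	count := make([]byte, 2)
	binary.BigEndian.PutUint16(count, uint16(len(txHashes)))
	blob := hex.EncodeToString(headerAndMinerTx) + hex.EncodeToString(count)
	for _, h := range txHashes {
		blob += hex.EncodeToString(h[:])
	}
	return blob
}

func main() {
	blob := buildTemplate([]byte{0xab}, [][32]byte{{0x01}})
	fmt.Println(blob[:6]) // "ab" + "0001"
}
```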
func (chain *Blockchain) Create_new_block_template_mining(miniblock_miner_address rpc.Address) (bl block.Block, mbl block.MiniBlock, miniblock_blob string, reserved_pos int, err error) {
cache_block_mutex.Lock()
defer cache_block_mutex.Unlock()
if (cache_block.Timestamp+100) < (uint64(globals.Time().UTC().UnixMilli())) || (cache_block.Timestamp > 0 && int64(cache_block.Height) != chain.Get_Height()+1) {
if chain.simulator {
_, bl, err = chain.Create_new_miner_block(miniblock_miner_address) // simulator lets you test everything
} else {
_, bl, err = chain.Create_new_miner_block(chain.integrator_address)
}
if err != nil {
logger.V(1).Error(err, "block template error ")
return
}
cache_block = bl // setup block cache for 100 msec
chain.mining_blocks_cache.Add(fmt.Sprintf("%d", cache_block.Timestamp), string(bl.Serialize()))
} else {
bl = cache_block
}
mbl = ConvertBlockToMiniblock(bl, miniblock_miner_address)
var miner_hash crypto.Hash
copy(miner_hash[:], mbl.KeyHash[:])
if !mbl.Final {
if !chain.IsAddressHashValid(false, miner_hash) {
logger.V(3).Error(err, "unregistered miner", "hash", miner_hash)
err = fmt.Errorf("unregistered miner or you need to wait 15 mins")
return
}
}
miniblock_blob = fmt.Sprintf("%x", mbl.Serialize())
return
}
// a rate limiter is deployed in case RPC is exposed over the internet,
// so that nobody can delay chain syncing by feeding fake inputs
var accept_limiter = rate.NewLimiter(1.0, 4) // 1 block per sec, burst of 4 blocks is okay
var accept_lock = sync.Mutex{}
var duplicate_height_check = map[uint64]bool{}
// accept work previously handed out by us
// we should verify that the transaction list supplied back by the miner exists in the mempool
// otherwise the miner is trying to attack the network
func (chain *Blockchain) Accept_new_block(tstamp uint64, miniblock_blob []byte) (mblid crypto.Hash, blid crypto.Hash, result bool, err error) {
if globals.Arguments["--sync-node"] != nil && globals.Arguments["--sync-node"].(bool) {
logger.Error(fmt.Errorf("Mining is deactivated since daemon is running with --sync-node, please check program options."), "")
return mblid, blid, false, fmt.Errorf("Please deactivate --sync-node option before mining")
}
accept_lock.Lock()
defer accept_lock.Unlock()
cbl := &block.Complete_Block{}
bl := block.Block{}
var mbl block.MiniBlock
//logger.Infof("Incoming block for accepting %x", block_template)
// safety so if anything wrong happens, verification fails
defer func() {
if r := recover(); r != nil {
logger.V(1).Error(nil, "Recovered while accepting new block", "r", r, "stack", debug.Stack())
err = fmt.Errorf("Error while parsing block")
}
}()
if err = mbl.Deserialize(miniblock_blob); err != nil {
logger.V(1).Error(err, "Error Deserializing blob")
return
}
// now lets locate the actual block from our cache
if block_data, found := chain.mining_blocks_cache.Get(fmt.Sprintf("%d", tstamp)); found {
if err = bl.Deserialize([]byte(block_data.(string))); err != nil {
logger.V(1).Error(err, "Error parsing submitted work block template ", "template", block_data)
return
}
} else {
logger.V(1).Error(nil, "Job not found in cache", "jobid", fmt.Sprintf("%d", tstamp), "tstamp", uint64(globals.Time().UTC().UnixMilli()))
err = fmt.Errorf("job not found in cache")
return
}
// lets try to check pow to detect whether the miner is cheating
if !chain.VerifyMiniblockPoW(&bl, mbl) {
logger.V(1).Error(nil, "Invalid PoW in submitted work")
err = errormsg.ErrInvalidPoW
return
}
if !mbl.Final {
var miner_hash crypto.Hash
copy(miner_hash[:], mbl.KeyHash[:])
if !chain.IsAddressHashValid(true, miner_hash) {
logger.V(3).Error(err, "unregistered miner", "hash", miner_hash)
err = fmt.Errorf("unregistered miner or you need to wait 15 mins")
return
}
if err1, ok := chain.InsertMiniBlock(mbl); ok {
//fmt.Printf("miniblock %s inserted successfully, total %d\n",mblid,len(chain.MiniBlocks.Collection) )
result = true
// notify peers, we have a miniblock and return to miner
if !chain.simulator { // if not in simulator mode, relay miniblock to the chain
go chain.P2P_MiniBlock_Relayer(mbl, 0)
}
} else {
logger.V(1).Error(err1, "miniblock insertion failed", "mbl", fmt.Sprintf("%+v", mbl))
err = err1
}
return
}
result = true // block's pow is valid
// if we reach here, everything looks ok, we can complete the block we have, lets add the final piece
bl.MiniBlocks = append(bl.MiniBlocks, mbl)
// if a duplicate block is being sent, reject the block
if _, ok := duplicate_height_check[bl.Height]; ok {
logger.V(3).Error(nil, "Block rejected by chain due to duplicate work", "blid", bl.GetHash())
err = fmt.Errorf("Error duplicate work")
return
}
// since we have passed dynamic rules, build a full block and try adding to chain
// lets build up the complete block
// collect tx list + their fees
for i := range bl.Tx_hashes {
var tx *transaction.Transaction
var tx_bytes []byte
if tx = chain.Mempool.Mempool_Get_TX(bl.Tx_hashes[i]); tx != nil {
cbl.Txs = append(cbl.Txs, tx)
continue
} else if tx = chain.Regpool.Regpool_Get_TX(bl.Tx_hashes[i]); tx != nil {
cbl.Txs = append(cbl.Txs, tx)
continue
} else if tx_bytes, err = chain.Store.Block_tx_store.ReadTX(bl.Tx_hashes[i]); err == nil {
tx = &transaction.Transaction{}
if err = tx.Deserialize(tx_bytes); err != nil {
logger.V(1).Error(err, "Tx could not be loaded from disk", "txid", bl.Tx_hashes[i].String())
return
}
cbl.Txs = append(cbl.Txs, tx)
} else {
logger.V(1).Error(err, "Tx not found in pool or DB, rejecting submitted block", "txid", bl.Tx_hashes[i].String())
return
}
}
cbl.Bl = &bl // the block is now complete, lets try to add it to chain
if !chain.simulator && !accept_limiter.Allow() { // reject the block if the rate limiter does not allow it
logger.V(1).Info("Block rejected by chain", "blid", bl.GetHash())
return
}
blid = bl.GetHash()
var result_block bool
err, result_block = chain.Add_Complete_Block(cbl)
if result_block {
duplicate_height_check[bl.Height] = true
cache_block_mutex.Lock()
cache_block.Timestamp = 0 // expire cache block
cache_block_mutex.Unlock()
logger.V(1).Info("Block successfully accepted, Notifying Network", "blid", bl.GetHash(), "height", bl.Height)
if !chain.simulator { // if not in simulator mode, relay block to the chain
chain.P2P_Block_Relayer(cbl, 0) // lets relay the block to network
}
} else {
logger.V(3).Error(err, "Block Rejected", "blid", bl.GetHash())
return
}
return
}
// this expands the 4 byte tip to the full 32 byte tip
// it is not used in consensus but used by p2p for safety checks
func (chain *Blockchain) ExpandMiniBlockTip(hash crypto.Hash) (result crypto.Hash, found bool) {
tips := chain.Get_TIPS()
for i := range tips {
if bytes.Equal(hash[:4], tips[i][:4]) {
copy(result[:], tips[i][:])
return result, true
}
}
// the block may just have been mined, so we evaluate roughly 25 past blocks to cross check
max_topo := chain.Load_TOPO_HEIGHT()
tries := 0
for i := max_topo; i >= 0 && tries < 25; i-- {
blhash, err := chain.Load_Block_Topological_order_at_index(i)
if err == nil {
if bytes.Equal(hash[:4], blhash[:4]) {
copy(result[:], blhash[:])
return result, true
}
}
tries++
}
return result, false
}
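The prefix match above can be sketched standalone ([32]byte stands in for crypto.Hash; `expand` is an illustrative name):

```go
package main

import (
	"bytes"
	"fmt"
)

// expand mirrors ExpandMiniBlockTip's core: match a 4-byte prefix
// against a candidate list and return the full hash on a hit
func expand(prefix []byte, candidates [][32]byte) ([32]byte, bool) {
	for _, c := range candidates {
		if bytes.Equal(prefix, c[:4]) {
			return c, true
		}
	}
	return [32]byte{}, false
}

func main() {
	tip := [32]byte{0xde, 0xad, 0xbe, 0xef, 0x99}
	full, ok := expand([]byte{0xde, 0xad, 0xbe, 0xef}, [][32]byte{tip})
	fmt.Println(ok, full == tip) // true true
}
```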
// it is USED by consensus and p2p to check whether the miner's address hash is valid
func (chain *Blockchain) IsAddressHashValid(skip_cache bool, hashes ...crypto.Hash) (found bool) {
if skip_cache {
for _, hash := range hashes { // check whether everything could be satisfied via cache
if _, found := chain.cache_IsAddressHashValid.Get(fmt.Sprintf("%s", hash)); !found {
goto hard_way // do things the hard way
}
}
return true
}
hard_way:
// the block may just have been mined, so we evaluate roughly 25 past blocks to cross check
max_topo := chain.Load_TOPO_HEIGHT()
if max_topo > 25 { // we can lag a bit here, basically at least around 10 mins of lag
max_topo -= 25
}
toporecord, err := chain.Store.Topo_store.Read(max_topo)
if err != nil {
return
}
ss, err := chain.Store.Balance_store.LoadSnapshot(toporecord.State_Version)
if err != nil {
return
}
var balance_tree *graviton.Tree
if balance_tree, err = ss.GetTree(config.BALANCE_TREE); err != nil {
return
}
for _, hash := range hashes {
bits, _, _, err := balance_tree.GetKeyValueFromHash(hash[0:16])
if err != nil || bits >= 120 {
return
}
if chain.cache_enabled {
chain.cache_IsAddressHashValid.Add(fmt.Sprintf("%s", hash), true) // set in cache
}
}
return true
}

// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package blockchain
import "fmt"
//import "time"
import "encoding/binary"
import "github.com/deroproject/derohe/block"
import "github.com/deroproject/derohe/cryptography/crypto"
import "golang.org/x/crypto/sha3"
const miniblock_genesis_distance = 0
const miniblock_normal_distance = 2
// last miniblock must be extra checked for corruption/attacks
func (chain *Blockchain) Verify_MiniBlocks_HashCheck(cbl *block.Complete_Block) (err error) {
last_mini_block := cbl.Bl.MiniBlocks[len(cbl.Bl.MiniBlocks)-1]
if !last_mini_block.Final {
return fmt.Errorf("corrupted block")
}
block_header_hash := sha3.Sum256(cbl.Bl.SerializeWithoutLastMiniBlock())
for i := 0; i < 16; i++ {
if last_mini_block.KeyHash[i] != block_header_hash[i] {
return fmt.Errorf("MiniBlock has corrupted header expected %x actual %x", block_header_hash[:], last_mini_block.KeyHash[:])
}
}
return nil
}
// verifies the consensus rules completely for miniblocks
func Verify_MiniBlocks(bl block.Block) (err error) {
if bl.Height == 0 && len(bl.MiniBlocks) != 0 {
err = fmt.Errorf("Genesis block cannot have miniblocks")
return
}
if bl.Height == 0 {
return nil
}
if bl.Height != 0 && len(bl.MiniBlocks) == 0 {
err = fmt.Errorf("All blocks except genesis must have miniblocks")
return
}
final_count := 0
for _, mbl := range bl.MiniBlocks {
if mbl.Final { // 50 ms passing allowed
final_count++
}
}
if final_count != 1 {
err = fmt.Errorf("block must have exactly one final miniblock")
return
}
// check that every miniblock is consistent with the block's height and tips
for _, mbl := range bl.MiniBlocks {
if bl.Height != mbl.Height {
return fmt.Errorf("MiniBlock has invalid height block height %d mbl height %d", bl.Height, mbl.Height)
}
if len(bl.Tips) != int(mbl.PastCount) {
return fmt.Errorf("MiniBlock has wrong number of tips")
}
if len(bl.Tips) == 0 {
panic("all miniblocks genesis must point to tip")
} else if len(bl.Tips) == 1 {
if binary.BigEndian.Uint32(bl.Tips[0][:]) != mbl.Past[0] {
return fmt.Errorf("MiniBlock has invalid tip")
}
} else if len(bl.Tips) == 2 {
if binary.BigEndian.Uint32(bl.Tips[0][:]) != mbl.Past[0] {
return fmt.Errorf("MiniBlock has invalid tip")
}
if binary.BigEndian.Uint32(bl.Tips[1][:]) != mbl.Past[1] {
return fmt.Errorf("MiniBlock has invalid tip")
}
if mbl.Past[0] == mbl.Past[1] {
return fmt.Errorf("MiniBlock refers to same tip twice")
}
} else {
panic("we only support 2 tips")
}
}
return nil
}
// insert a miniblock into the chain and, if successfully inserted, notify everyone in need
func (chain *Blockchain) InsertMiniBlock(mbl block.MiniBlock) (err error, result bool) {
var miner_hash crypto.Hash
copy(miner_hash[:], mbl.KeyHash[:])
if !chain.IsAddressHashValid(true, miner_hash) {
logger.V(1).Error(err, "Invalid miner address")
err = fmt.Errorf("Invalid miner address")
return err, false
}
if err, result = chain.MiniBlocks.InsertMiniBlock(mbl); result {
chain.RPC_NotifyNewMiniBlock.L.Lock()
chain.RPC_NotifyNewMiniBlock.Broadcast()
chain.RPC_NotifyNewMiniBlock.L.Unlock()
}
return err, result
}

blockchain/prune_history.go Normal file
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package blockchain
// this file will prune the history of the blockchain, making it lightweight
// the pruner works like this:
// identify a point in history before which all history is discarded
// the entire thing works cryptographically and thus everything is cryptographically verified
// this function is the only one which does not work in append-only mode
import "os"
import "fmt"
import "math/big"
import "path/filepath"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/graviton"
import "github.com/deroproject/derohe/block"
import "github.com/deroproject/derohe/globals"
const CHUNK_SIZE = 100000 // write accounts in chunks; we should be writing at least 100,000 accounts per commit
func ByteCountIEC(b int64) string {
const unit = 1024
if b < unit {
return fmt.Sprintf("%d B", b)
}
div, exp := int64(unit), 0
for n := b / unit; n >= unit; n /= unit {
div *= unit
exp++
}
return fmt.Sprintf("%.1f %ciB",
float64(b)/float64(div), "KMGTPE"[exp])
}
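A quick standalone check of ByteCountIEC's output (copied verbatim here, under a lowercase name, so it runs on its own):

```go
package main

import "fmt"

// byteCountIEC is a standalone copy of ByteCountIEC above
func byteCountIEC(b int64) string {
	const unit = 1024
	if b < unit {
		return fmt.Sprintf("%d B", b)
	}
	div, exp := int64(unit), 0
	for n := b / unit; n >= unit; n /= unit {
		div *= unit
		exp++
	}
	return fmt.Sprintf("%.1f %ciB",
		float64(b)/float64(div), "KMGTPE"[exp])
}

func main() {
	fmt.Println(byteCountIEC(512))      // 512 B
	fmt.Println(byteCountIEC(1536))     // 1.5 KiB
	fmt.Println(byteCountIEC(10 << 20)) // 10.0 MiB
}
```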
func DirSize(path string) int64 {
var size int64
err := filepath.Walk(path, func(_ string, info os.FileInfo, err error) error {
if err != nil {
return nil
}
if !info.IsDir() {
size += info.Size()
}
return err
})
_ = err
return size
}
func Prune_Blockchain(prune_topo int64) (err error) {
var store storage
// initialize store
current_path := filepath.Join(globals.GetDataDirectory())
if store.Balance_store, err = graviton.NewDiskStore(filepath.Join(current_path, "balances")); err == nil {
if err = store.Topo_store.Open(current_path); err == nil {
store.Block_tx_store.basedir = current_path
} else {
return err
}
}
max_topoheight := store.Topo_store.Count()
for ; max_topoheight >= 0; max_topoheight-- {
if toporecord, err := store.Topo_store.Read(max_topoheight); err == nil {
if !toporecord.IsClean() {
break
}
}
}
//prune_topoheight := max_topoheight - 97
prune_topoheight := prune_topo
if max_topoheight-prune_topoheight < 50 {
return fmt.Errorf("we need at least 50 blocks above the prune point")
}
err = rewrite_graviton_store(&store, prune_topoheight, max_topoheight)
if err != nil {
globals.Logger.Error(err, "error rewriting graviton store")
}
discard_blocks_and_transactions(&store, prune_topoheight)
// close original store and move new store in the same place
store.Balance_store.Close()
old_path := filepath.Join(current_path, "balances")
new_path := filepath.Join(current_path, "balances_new")
globals.Logger.Info("Old balance tree", "size", ByteCountIEC(DirSize(old_path)))
globals.Logger.Info("balance tree after pruning history", "size", ByteCountIEC(DirSize(new_path)))
os.RemoveAll(old_path)
return os.Rename(new_path, old_path)
}
// first, let's free space by discarding blocks and txs before the historical point
// any error while deleting should be considered non-fatal
func discard_blocks_and_transactions(store *storage, topoheight int64) {
globals.Logger.Info("Block store before pruning", "size", ByteCountIEC(DirSize(filepath.Join(store.Block_tx_store.basedir, "bltx_store"))))
for i := int64(0); i < topoheight-20; i++ { // leave some extra blocks untouched for sanity
if toporecord, err := store.Topo_store.Read(i); err == nil {
blid := toporecord.BLOCK_ID
var bl block.Block
if block_data, err := store.Block_tx_store.ReadBlock(blid); err == nil {
if err = bl.Deserialize(block_data); err == nil { // we should deserialize the block here
for _, txhash := range bl.Tx_hashes { // we also have to purge the tx hashes
_ = store.Block_tx_store.DeleteTX(txhash) // delete tx hashes
//fmt.Printf("DeleteTX %x\n", txhash)
}
}
// lets delete the block data also
_ = store.Block_tx_store.DeleteBlock(blid)
//fmt.Printf("DeleteBlock %x\n", blid)
}
}
}
globals.Logger.Info("Block store after pruning ", "size", ByteCountIEC(DirSize(filepath.Join(store.Block_tx_store.basedir, "bltx_store"))))
}
// clone a snapshot; this is DERO architecture dependent
// since the trees can grow large over time, we copy them in chunks
func clone_snapshot(rsource, wsource *graviton.Store, r_ssversion uint64) (latest_commit_version uint64, err error) {
var old_ss, write_ss *graviton.Snapshot
var old_balance_tree, write_balance_tree *graviton.Tree
var old_meta_tree, write_meta_tree *graviton.Tree
if old_ss, err = rsource.LoadSnapshot(r_ssversion); err != nil {
return
}
if write_ss, err = wsource.LoadSnapshot(0); err != nil {
return
}
if old_balance_tree, err = old_ss.GetTree(config.BALANCE_TREE); err != nil {
return
}
if write_balance_tree, err = write_ss.GetTree(config.BALANCE_TREE); err != nil {
return
}
{ // copy old tree to new tree, in chunks
c := old_balance_tree.Cursor()
object_counter := int64(0)
for k, v, err := c.First(); err == nil; k, v, err = c.Next() {
if object_counter != 0 && object_counter%CHUNK_SIZE == 0 {
if latest_commit_version, err = graviton.Commit(write_balance_tree); err != nil {
fmt.Printf("err while cloning %s\n", err)
return 0, err
}
}
write_balance_tree.Put(k, v)
object_counter++
}
}
/* h,_ := old_balance_tree.Hash()
fmt.Printf("old balance hash %+v\n",h )
h,_ = write_balance_tree.Hash()
fmt.Printf("write balance hash %+v\n",h )
//os.Exit(0)
*/
// copy meta tree for scid
if old_meta_tree, err = old_ss.GetTree(config.SC_META); err != nil {
return
}
if write_meta_tree, err = write_ss.GetTree(config.SC_META); err != nil {
return
}
var sc_list [][]byte
{ // copy sc tree, in chunks
c := old_meta_tree.Cursor()
object_counter := int64(0)
for k, v, err := c.First(); err == nil; k, v, err = c.Next() {
if object_counter != 0 && object_counter%CHUNK_SIZE == 0 {
if latest_commit_version, err = graviton.Commit(write_meta_tree); err != nil {
fmt.Printf("err while cloning %s\n", err)
return 0, err
}
}
write_meta_tree.Put(k, v)
sc_list = append(sc_list, k)
object_counter++
}
}
/* h,_ = old_meta_tree.Hash()
fmt.Printf("old meta hash %+v\n",h )
h,_ = write_meta_tree.Hash()
fmt.Printf("new meta hash %+v\n",h )
os.Exit(0)
*/
var sc_trees []*graviton.Tree
// now we have to copy all scs data one by one
for _, scid := range sc_list {
var old_sc_tree, write_sc_tree *graviton.Tree
if old_sc_tree, err = old_ss.GetTree(string(scid)); err != nil {
return
}
if write_sc_tree, err = write_ss.GetTree(string(scid)); err != nil {
return
}
c := old_sc_tree.Cursor()
for k, v, err := c.First(); err == nil; k, v, err = c.Next() {
write_sc_tree.Put(k, v)
}
sc_trees = append(sc_trees, write_sc_tree)
}
sc_trees = append(sc_trees, write_balance_tree)
sc_trees = append(sc_trees, write_meta_tree)
latest_commit_version, err = graviton.Commit(sc_trees...)
return
}
// diff a snapshot from block to block; this is DERO architecture dependent
// the entire block is done in a single commit
func diff_snapshot(rsource, wsource *graviton.Store, old_version uint64, new_version uint64) (latest_commit_version uint64, err error) {
var sc_trees []*graviton.Tree
var old_ss, new_ss, write_ss *graviton.Snapshot
var old_tree, new_tree, write_tree *graviton.Tree
if old_ss, err = rsource.LoadSnapshot(old_version); err != nil {
return
}
if new_ss, err = rsource.LoadSnapshot(new_version); err != nil {
return
}
if write_ss, err = wsource.LoadSnapshot(0); err != nil {
return
}
if old_tree, err = old_ss.GetTree(config.BALANCE_TREE); err != nil {
return
}
if new_tree, err = new_ss.GetTree(config.BALANCE_TREE); err != nil {
return
}
if write_tree, err = write_ss.GetTree(config.BALANCE_TREE); err != nil {
return
}
// diff and update balance tree
clone_tree_changes(old_tree, new_tree, write_tree)
sc_trees = append(sc_trees, write_tree)
// copy meta tree for scid
if old_tree, err = old_ss.GetTree(config.SC_META); err != nil {
return
}
if new_tree, err = new_ss.GetTree(config.SC_META); err != nil {
return
}
if write_tree, err = write_ss.GetTree(config.SC_META); err != nil {
return
}
var sc_list_new, sc_list_modified [][]byte
// diff and update meta tree
{
insert_handler := func(k, v []byte) {
write_tree.Put(k, v)
sc_list_new = append(sc_list_new, k)
}
modify_handler := func(k, v []byte) { // modification receives old value
new_value, _ := new_tree.Get(k)
write_tree.Put(k, new_value)
sc_list_modified = append(sc_list_modified, k)
}
graviton.Diff(old_tree, new_tree, nil, modify_handler, insert_handler)
}
sc_trees = append(sc_trees, write_tree)
// now we have to copy new scs data one by one
for _, scid := range sc_list_new {
if old_tree, err = old_ss.GetTree(string(scid)); err != nil {
return
}
if new_tree, err = new_ss.GetTree(string(scid)); err != nil {
return
}
if write_tree, err = write_ss.GetTree(string(scid)); err != nil {
return
}
c := old_tree.Cursor()
for k, v, err := c.First(); err == nil; k, v, err = c.Next() {
write_tree.Put(k, v)
}
sc_trees = append(sc_trees, write_tree)
}
for _, scid := range sc_list_modified {
if old_tree, err = old_ss.GetTree(string(scid)); err != nil {
return
}
if new_tree, err = new_ss.GetTree(string(scid)); err != nil {
return
}
if write_tree, err = write_ss.GetTree(string(scid)); err != nil {
return
}
clone_tree_changes(old_tree, new_tree, write_tree)
sc_trees = append(sc_trees, write_tree)
}
latest_commit_version, err = graviton.Commit(sc_trees...)
return
}
// this will rewrite the graviton store
func rewrite_graviton_store(store *storage, prune_topoheight int64, max_topoheight int64) (err error) {
var write_store *graviton.Store
writebalancestorepath := filepath.Join(store.Block_tx_store.basedir, "balances_new")
if write_store, err = graviton.NewDiskStore(writebalancestorepath); err != nil {
return err
}
toporecord, err := store.Topo_store.Read(prune_topoheight)
if err != nil {
return err
}
var major_copy uint64
{ // do the heavy lifting, merge all changes before this topoheight
var latest_commit_version uint64
latest_commit_version, err = clone_snapshot(store.Balance_store, write_store, toporecord.State_Version)
major_copy = latest_commit_version
if err != nil {
return err
}
}
// now we must do block to block changes till the top block
{
var new_entries []int64
var commit_versions []uint64
for i := prune_topoheight; i < max_topoheight; i++ {
var old_toporecord, new_toporecord TopoRecord
// fetch old tree data
old_topo := i
new_topo := i + 1
err = nil
if old_toporecord, err = store.Topo_store.Read(old_topo); err == nil {
if new_toporecord, err = store.Topo_store.Read(new_topo); err == nil {
var latest_commit_version uint64
latest_commit_version, err = diff_snapshot(store.Balance_store, write_store, old_toporecord.State_Version, new_toporecord.State_Version)
if err != nil {
return err
}
new_entries = append(new_entries, new_topo)
commit_versions = append(commit_versions, latest_commit_version)
}
}
}
// now let's store all the commit versions in 1 go
for i, topo := range new_entries {
old_toporecord, err := store.Topo_store.Read(topo)
if err != nil {
globals.Logger.Error(err, "err reading/writing toporecord", "topo", topo)
return err
}
store.Topo_store.Write(topo, old_toporecord.BLOCK_ID, commit_versions[i], old_toporecord.Height)
/*{
ss, _ := write_store.LoadSnapshot(commit_versions[i])
balance_tree, _ := ss.GetTree(config.BALANCE_TREE)
sc_meta_tree, _ := ss.GetTree(config.SC_META)
balance_merkle_hash, _ := balance_tree.Hash()
meta_merkle_hash, _ := sc_meta_tree.Hash()
var hash [32]byte
for i := range balance_merkle_hash {
hash[i] = balance_merkle_hash[i] ^ meta_merkle_hash[i]
}
fmt.Printf("writing toporecord %d version %d hash %x\n", topo, commit_versions[i], hash[:])
}*/
var bl block.Block
var block_data []byte
if block_data, err = store.Block_tx_store.ReadBlock(old_toporecord.BLOCK_ID); err != nil {
return err
}
if err = bl.Deserialize(block_data); err != nil { // we should deserialize the block here
return err
}
var diff *big.Int
if diff, err = store.Block_tx_store.ReadBlockDifficulty(old_toporecord.BLOCK_ID); err != nil {
return err
}
store.Block_tx_store.DeleteBlock(old_toporecord.BLOCK_ID)
err = store.Block_tx_store.WriteBlock(old_toporecord.BLOCK_ID, block_data, diff, commit_versions[i], bl.Height)
if err != nil {
return err
}
}
}
// now overwrite the starting topo mapping
for i := int64(0); i <= prune_topoheight; i++ { // overwrite the entries in the topomap
if toporecord, err := store.Topo_store.Read(i); err == nil {
store.Topo_store.Write(i, toporecord.BLOCK_ID, major_copy, toporecord.Height)
//fmt.Printf("writing toporecord %d version %d\n",i, major_copy)
} else {
globals.Logger.Error(err, "err reading toporecord", "topo", i)
return err // this is irreparable damage
}
}
// now let's remove the old graviton db
write_store.Close()
return
}
// clone the changes between 2 tree versions (old_tree -> new_tree) and commit them to write_tree
func clone_tree_changes(old_tree, new_tree, write_tree *graviton.Tree) {
if old_tree.IsDirty() || new_tree.IsDirty() || write_tree.IsDirty() {
panic("trees cannot be dirty")
}
insert_count := 0
modify_count := 0
insert_handler := func(k, v []byte) {
insert_count++
//fmt.Printf("insert %x %x\n",k,v)
write_tree.Put(k, v)
}
modify_handler := func(k, v []byte) { // modification receives old value
modify_count++
new_value, _ := new_tree.Get(k)
write_tree.Put(k, new_value)
}
graviton.Diff(old_tree, new_tree, nil, modify_handler, insert_handler)
}
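The insert/modify handler contract used with graviton.Diff above (inserts write the new key/value directly; modifications receive the OLD value, so the handler re-reads the new one from the new tree) can be sketched with plain maps, independent of graviton. `diffMaps` below is a hypothetical stand-in for illustration, not the library API.

```go
package main

import "fmt"

// diffMaps is a hypothetical, map-based stand-in for graviton.Diff:
// it invokes insert for keys present only in head, and modify (passing
// the OLD value, mirroring graviton's contract) for keys whose value changed.
func diffMaps(base, head map[string]string,
	insert func(k, v string), modify func(k, oldV string)) {
	for k, v := range head {
		oldV, ok := base[k]
		switch {
		case !ok:
			insert(k, v)
		case oldV != v:
			modify(k, oldV)
		}
	}
}

func main() {
	base := map[string]string{"a": "1", "b": "2"}
	head := map[string]string{"a": "1", "b": "3", "c": "4"}
	write := map[string]string{"a": "1", "b": "2"} // clone of base

	diffMaps(base, head,
		func(k, v string) { write[k] = v },       // insert handler: write new value
		func(k, _ string) { write[k] = head[k] }, // re-read new value, as clone_tree_changes does
	)
	fmt.Println(write["b"], write["c"]) // 3 4
}
```

This mirrors why the modify handlers above call `new_tree.Get(k)` instead of using the handler argument: the argument carries the old value.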


@@ -0,0 +1,90 @@
RESEARCH LICENSE
Version 1.1.2
I. DEFINITIONS.
"Licensee" means You and any other party that has entered into and has in effect a version of this License.
"Licensor" means DERO PROJECT (GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8) and its successors and assignees.
"Modifications" means any (a) change or addition to the Technology or (b) new source or object code implementing any portion of the Technology.
"Research Use" means research, evaluation, or development for the purpose of advancing knowledge, teaching, learning, or customizing the Technology for personal use. Research Use expressly excludes use or distribution for direct or indirect commercial (including strategic) gain or advantage.
"Technology" means the source code, object code and specifications of the technology made available by Licensor pursuant to this License.
"Technology Site" means the website designated by Licensor for accessing the Technology.
"You" means the individual executing this License or the legal entity or entities represented by the individual executing this License.
II. PURPOSE.
Licensor is licensing the Technology under this Research License (the "License") to promote research, education, innovation, and development using the Technology.
COMMERCIAL USE AND DISTRIBUTION OF TECHNOLOGY AND MODIFICATIONS IS PERMITTED ONLY UNDER AN APPROPRIATE COMMERCIAL USE LICENSE AVAILABLE FROM LICENSOR AT <url>.
III. RESEARCH USE RIGHTS.
A. Subject to the conditions contained herein, Licensor grants to You a non-exclusive, non-transferable, worldwide, and royalty-free license to do the following for Your Research Use only:
1. reproduce, create Modifications of, and use the Technology alone, or with Modifications;
2. share source code of the Technology alone, or with Modifications, with other Licensees;
3. distribute object code of the Technology, alone, or with Modifications, to any third parties for Research Use only, under a license of Your choice that is consistent with this License; and
4. publish papers and books discussing the Technology which may include relevant excerpts that do not in the aggregate constitute a significant portion of the Technology.
B. Residual Rights. You may use any information in intangible form that you remember after accessing the Technology, except when such use violates Licensor's copyrights or patent rights.
C. No Implied Licenses. Other than the rights granted herein, Licensor retains all rights, title, and interest in Technology, and You retain all rights, title, and interest in Your Modifications and associated specifications, subject to the terms of this License.
D. Open Source Licenses. Portions of the Technology may be provided with notices and open source licenses from open source communities and third parties that govern the use of those portions, and any licenses granted hereunder do not alter any rights and obligations you may have under such open source licenses, however, the disclaimer of warranty and limitation of liability provisions in this License will apply to all Technology in this distribution.
IV. INTELLECTUAL PROPERTY REQUIREMENTS
As a condition to Your License, You agree to comply with the following restrictions and responsibilities:
A. License and Copyright Notices. You must include a copy of this License in a Readme file for any Technology or Modifications you distribute. You must also include the following statement, "Use and distribution of this technology is subject to the Java Research License included herein", (a) once prominently in the source code tree and/or specifications for Your source code distributions, and (b) once in the same file as Your copyright or proprietary notices for Your binary code distributions. You must cause any files containing Your Modification to carry prominent notice stating that You changed the files. You must not remove or alter any copyright or other proprietary notices in the Technology.
B. Licensee Exchanges. Any Technology and Modifications You receive from any Licensee are governed by this License.
V. GENERAL TERMS.
A. Disclaimer Of Warranties.
TECHNOLOGY IS PROVIDED "AS IS", WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, WARRANTIES THAT ANY SUCH TECHNOLOGY IS FREE OF DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR PURPOSE, OR NON-INFRINGING OF THIRD PARTY RIGHTS. YOU AGREE THAT YOU BEAR THE ENTIRE RISK IN CONNECTION WITH YOUR USE AND DISTRIBUTION OF ANY AND ALL TECHNOLOGY UNDER THIS LICENSE.
B. Infringement; Limitation Of Liability.
1. If any portion of, or functionality implemented by, the Technology becomes the subject of a claim or threatened claim of infringement ("Affected Materials"), Licensor may, in its unrestricted discretion, suspend Your rights to use and distribute the Affected Materials under this License. Such suspension of rights will be effective immediately upon Licensor's posting of notice of suspension on the Technology Site.
2. IN NO EVENT WILL LICENSOR BE LIABLE FOR ANY DIRECT, INDIRECT, PUNITIVE, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH OR ARISING OUT OF THIS LICENSE (INCLUDING, WITHOUT LIMITATION, LOSS OF PROFITS, USE, DATA, OR ECONOMIC ADVANTAGE OF ANY SORT), HOWEVER IT ARISES AND ON ANY THEORY OF LIABILITY (including negligence), WHETHER OR NOT LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. LIABILITY UNDER THIS SECTION V.B.2 SHALL BE SO LIMITED AND EXCLUDED, NOTWITHSTANDING FAILURE OF THE ESSENTIAL PURPOSE OF ANY REMEDY.
C. Termination.
1. You may terminate this License at any time by notifying Licensor in writing.
2. All Your rights will terminate under this License if You fail to comply with any of its material terms or conditions and do not cure such failure within thirty (30) days after becoming aware of such noncompliance.
3. Upon termination, You must discontinue all uses and distribution of the Technology, and all provisions of this Section V shall survive termination.
D. Miscellaneous.
1. Trademark. You agree to comply with Licensor's Trademark & Logo Usage Requirements, if any and as modified from time to time, available at the Technology Site. Except as expressly provided in this License, You are granted no rights in or to any Licensor's trademarks now or hereafter used or licensed by Licensor.
2. Integration. This License represents the complete agreement of the parties concerning the subject matter hereof.
3. Severability. If any provision of this License is held unenforceable, such provision shall be reformed to the extent necessary to make it enforceable unless to do so would defeat the intent of the parties, in which case, this License shall terminate.
4. Governing Law. This License is governed by the laws of the United States and the State of California, as applied to contracts entered into and performed in California between California residents. In no event shall this License be construed against the drafter.
5. Export Control. You agree to comply with the U.S. export controls and trade laws of other countries that apply to Technology and Modifications.
READ ALL THE TERMS OF THIS LICENSE CAREFULLY BEFORE ACCEPTING.
BY CLICKING ON THE YES BUTTON BELOW OR USING THE TECHNOLOGY, YOU ARE ACCEPTING AND AGREEING TO ABIDE BY THE TERMS AND CONDITIONS OF THIS LICENSE. YOU MUST BE AT LEAST 18 YEARS OF AGE AND OTHERWISE COMPETENT TO ENTER INTO CONTRACTS.
IF YOU DO NOT MEET THESE CRITERIA, OR YOU DO NOT AGREE TO ANY OF THE TERMS OF THIS LICENSE, DO NOT USE THIS SOFTWARE IN ANY FORM.


@@ -0,0 +1,412 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package regpool
import "fmt"
import "sync"
import "time"
import "sync/atomic"
import "encoding/hex"
import "encoding/json"
import "github.com/go-logr/logr"
import "github.com/deroproject/derohe/transaction"
import "github.com/deroproject/derohe/globals"
import "github.com/deroproject/derohe/metrics"
import "github.com/deroproject/derohe/cryptography/crypto"
// this is only used for sorting and nothing else
type TX_Sorting_struct struct {
FeesPerByte uint64 // this is fees per byte
Hash crypto.Hash // transaction hash
Size uint64 // transaction size
}
// NOTE: do NOT consider this code useless, as it is used to avoid double spending attacks within the block and within the pool
// let me explain: since we are a state machine, we add blocks to our blockchain
// so, if a double spending attack comes, i.e. 2 transactions with the same inputs, we reject one of them
// the algo is documented somewhere else which explains the entire process
// at this point in time, this is an ultrafast written regpool,
// it will not scale for more than 10000 transactions but is good enough for now
// we can always come back and rewrite it
// NOTE: the pool is now persistent
type Regpool struct {
txs sync.Map //map[crypto.Hash]*regpool_object
address_map sync.Map //map[crypto.Hash]bool // contains key images of all txs
sorted_by_fee []crypto.Hash // contains txids sorted by fees
sorted []TX_Sorting_struct // contains TX sorting information, so as new block can be forged easily
modified bool // used to monitor whether mem pool contents have changed
height uint64 // track blockchain height
// global variable, but its utilisation is not seen here except for tx verification
//chain *Blockchain
Exit_Mutex chan bool
sync.Mutex
}
// this object is serialized and deserialized
type regpool_object struct {
Tx *transaction.Transaction
Added uint64 // time in epoch format
Height uint64 // at which height the tx unlocks in the regpool
Relayed int // relayed count
RelayedAt int64 // when was tx last relayed
Size uint64 // size in bytes of the TX
FEEperBYTE uint64 // fee per byte
}
var loggerpool logr.Logger
// marshal object as json
func (obj *regpool_object) MarshalJSON() ([]byte, error) {
return json.Marshal(&struct {
Tx string `json:"tx"` // hex encoding
Added uint64 `json:"added"`
Height uint64 `json:"height"`
Relayed int `json:"relayed"`
RelayedAt int64 `json:"relayedat"`
}{
Tx: hex.EncodeToString(obj.Tx.Serialize()),
Added: obj.Added,
Height: obj.Height,
Relayed: obj.Relayed,
RelayedAt: obj.RelayedAt,
})
}
// unmarshal object from json encoding
func (obj *regpool_object) UnmarshalJSON(data []byte) error {
aux := &struct {
Tx string `json:"tx"`
Added uint64 `json:"added"`
Height uint64 `json:"height"`
Relayed int `json:"relayed"`
RelayedAt int64 `json:"relayedat"`
}{}
if err := json.Unmarshal(data, &aux); err != nil {
return err
}
obj.Added = aux.Added
obj.Height = aux.Height
obj.Relayed = aux.Relayed
obj.RelayedAt = aux.RelayedAt
tx_bytes, err := hex.DecodeString(aux.Tx)
if err != nil {
return err
}
obj.Size = uint64(len(tx_bytes))
obj.Tx = &transaction.Transaction{}
err = obj.Tx.Deserialize(tx_bytes)
if err == nil {
obj.FEEperBYTE = 0
}
return err
}
func Init_Regpool(params map[string]interface{}) (*Regpool, error) {
var regpool Regpool
//regpool.chain = params["chain"].(*Blockchain)
loggerpool = globals.Logger.WithName("REGPOOL") // all components must use this logger
loggerpool.Info("Regpool started")
atomic.AddUint32(&globals.Subsystem_Active, 1) // increment subsystem
regpool.Exit_Mutex = make(chan bool)
metrics.Set.GetOrCreateGauge("regpool_count", func() float64 {
count := float64(0)
regpool.txs.Range(func(k, value interface{}) bool {
count++
return true
})
return count
})
// initialize maps
//regpool.txs = map[crypto.Hash]*regpool_object{}
//regpool.address_map = map[crypto.Hash]bool{}
//TODO load any transactions saved at previous exit
return &regpool, nil
}
// this is created per incoming block and then discarded
// This does not require shutting down and will be garbage collected automatically
//func Init_Block_Regpool(params map[string]interface{}) (*Regpool, error) {
// var regpool Regpool
// return &regpool, nil
//}
func (pool *Regpool) HouseKeeping(height uint64, Verifier func(*transaction.Transaction) bool) {
pool.height = height
// this code is executed in conditions where a registered user tries to register again
var delete_list []crypto.Hash
pool.txs.Range(func(k, value interface{}) bool {
txhash := k.(crypto.Hash)
v := value.(*regpool_object)
if !Verifier(v.Tx) { // this tx user has already registered
delete_list = append(delete_list, txhash)
}
return true
})
for i := range delete_list {
pool.Regpool_Delete_TX(delete_list[i])
}
}
func (pool *Regpool) Shutdown() {
//TODO save regpool tx somewhere
close(pool.Exit_Mutex) // stop relaying
pool.Lock()
defer pool.Unlock()
loggerpool.Info("Regpool stopped")
atomic.AddUint32(&globals.Subsystem_Active, ^uint32(0)) // this decrements the subsystem count by 1
}
// start pool monitoring for changes for some specific time
// this is required so as we can add or discard transactions while selecting work for mining
func (pool *Regpool) Monitor() {
pool.Lock()
pool.modified = false
pool.Unlock()
}
// return whether pool contents have changed
func (pool *Regpool) HasChanged() (result bool) {
pool.Lock()
result = pool.modified
pool.Unlock()
return
}
// a tx should only be added to pool after verification is complete
func (pool *Regpool) Regpool_Add_TX(tx *transaction.Transaction, Height uint64) (result bool) {
result = false
pool.Lock()
defer pool.Unlock()
if !tx.IsRegistration() {
return false
}
var object regpool_object
if pool.Regpool_Address_Present(tx.MinerAddress) {
// loggerpool.Infof("Rejecting TX, since address already has registration information")
return false
}
tx_hash := crypto.Hash(tx.GetHash())
// check if tx already exists, skip it
if _, ok := pool.txs.Load(tx_hash); ok {
//rlog.Debugf("Pool already contains %s, skipping", tx_hash)
return false
}
if !tx.IsRegistrationValid() {
return false
}
// add all the key images to check double spend attack within the pool
//TODO
// for i := 0; i < len(tx.Vin); i++ {
// pool.address_map.Store(tx.Vin[i].(transaction.Txin_to_key).K_image,true) // add element to map for next check
// }
pool.address_map.Store(tx.MinerAddress, true)
// we are here means we can add it to pool
object.Tx = tx
object.Height = Height
object.Added = uint64(time.Now().UTC().Unix())
object.Size = uint64(len(tx.Serialize()))
pool.txs.Store(tx_hash, &object)
pool.modified = true // pool has been modified
//pool.sort_list() // sort and update pool list
return true
}
// check whether a tx exists in the pool
func (pool *Regpool) Regpool_TX_Exist(txid crypto.Hash) (result bool) {
//pool.Lock()
//defer pool.Unlock()
if _, ok := pool.txs.Load(txid); ok {
return true
}
return false
}
// check whether a keyimage exists in the pool
func (pool *Regpool) Regpool_Address_Present(ki [33]byte) (result bool) {
//pool.Lock()
//defer pool.Unlock()
if _, ok := pool.address_map.Load(ki); ok {
return true
}
return false
}
// delete specific tx from pool and return it
// if nil is returned Tx was not found in pool
func (pool *Regpool) Regpool_Delete_TX(txid crypto.Hash) (tx *transaction.Transaction) {
//pool.Lock()
//defer pool.Unlock()
var ok bool
var objecti interface{}
// check if tx already exists, skip it
if objecti, ok = pool.txs.Load(txid); !ok {
//rlog.Warnf("Pool does NOT contain %s, returning nil", txid)
return nil
}
// we reached here means we have the tx; remove it from our list, do maintenance cleanup and discard it
object := objecti.(*regpool_object)
tx = object.Tx
pool.txs.Delete(txid)
// remove all the key images
//TODO
// for i := 0; i < len(object.Tx.Vin); i++ {
// pool.address_map.Delete(object.Tx.Vin[i].(transaction.Txin_to_key).K_image)
// }
pool.address_map.Delete(tx.MinerAddress)
//pool.sort_list() // sort and update pool list
pool.modified = true // pool has been modified
return object.Tx // return the tx
}
// get specific tx from mem pool without removing it
func (pool *Regpool) Regpool_Get_TX(txid crypto.Hash) (tx *transaction.Transaction) {
// pool.Lock()
// defer pool.Unlock()
var ok bool
var objecti interface{}
if objecti, ok = pool.txs.Load(txid); !ok {
//loggerpool.Warnf("Pool does NOT contain %s, returning nil", txid)
return nil
}
// we reached here means, we have the tx, return the pointer back
//object := pool.txs[txid]
object := objecti.(*regpool_object)
return object.Tx
}
// return list of all txs in pool
func (pool *Regpool) Regpool_List_TX() []crypto.Hash {
// pool.Lock()
// defer pool.Unlock()
var list []crypto.Hash
pool.txs.Range(func(k, value interface{}) bool {
txhash := k.(crypto.Hash)
//v := value.(*regpool_object)
//objects = append(objects, *v)
list = append(list, txhash)
return true
})
//pool.sort_list() // sort and update pool list
// list should be as big as source list
//list := make([]crypto.Hash, len(pool.sorted_by_fee), len(pool.sorted_by_fee))
//copy(list, pool.sorted_by_fee) // return list sorted by fees
return list
}
// print current regpool txs
// TODO add sorting
func (pool *Regpool) Regpool_Print() {
pool.Lock()
defer pool.Unlock()
var klist []crypto.Hash
var vlist []*regpool_object
pool.txs.Range(func(k, value interface{}) bool {
txhash := k.(crypto.Hash)
v := value.(*regpool_object)
//objects = append(objects, *v)
klist = append(klist, txhash)
vlist = append(vlist, v)
return true
})
loggerpool.Info(fmt.Sprintf("Total TX in regpool = %d\n", len(klist)))
loggerpool.Info(fmt.Sprintf("%20s %14s %7s %7s %6s %32s\n", "Added", "Last Relayed", "Relayed", "Size", "Height", "TXID"))
for i := range klist {
k := klist[i]
v := vlist[i]
loggerpool.Info(fmt.Sprintf("%20s %14s %7d %7d %6d %32s\n", time.Unix(int64(v.Added), 0).UTC().Format(time.RFC3339), time.Duration(v.RelayedAt)*time.Second, v.Relayed,
len(v.Tx.Serialize()), v.Height, k))
}
}
// flush regpool
func (pool *Regpool) Regpool_flush() {
var list []crypto.Hash
pool.txs.Range(func(k, value interface{}) bool {
txhash := k.(crypto.Hash)
//v := value.(*regpool_object)
//objects = append(objects, *v)
list = append(list, txhash)
return true
})
loggerpool.Info("Total TX in regpool", "txcount", len(list))
loggerpool.Info("Flushing regpool")
for i := range list {
pool.Regpool_Delete_TX(list[i])
}
}


@@ -0,0 +1,139 @@
// Copyright 2017-2018 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package regpool
//import "fmt"
//import "bytes"
import "testing"
import "encoding/hex"
import "github.com/deroproject/derohe/transaction"
// test the mempool interface with valid TX
func Test_regpool(t *testing.T) {
// this tx is from random internal testnet, both tx are from same wallet
tx_hex := "010000010ccf5f06ed0d8b66da41b3054438996fb57801e57b0809fec9816432715a1ae90004e22ceb7a312c7a5d1e19dd5eb6bec3ba182a77fdbd0004ac7ea2bece9cc8a00141663a9d5680f724ee9bfe4cf27e3a88e74986923e05f533d46643b052f397"
//tx_hex2 := "010000018009d704feec7161952a952f306cd96023810c6788478a1c9fc50e7281ab7893ac02939da3bb500a6cf47bdc537f97b71a430acf832933459a6d2fbbc67cb2374909ceb166a4b5c582dec1a2b8629c073c949ffae201bb2c2562e8607eb1191003"
var tx, dup_tx transaction.Transaction
tx_raw, _ := hex.DecodeString(tx_hex)
err := tx.Deserialize(tx_raw)
dup_tx.Deserialize(tx_raw)
if err != nil {
t.Errorf("Tx Deserialisation failed")
}
pool, err := Init_Regpool(nil)
if err != nil {
t.Errorf("Pool initialization failed")
}
if len(pool.Regpool_List_TX()) != 0 {
t.Errorf("Pool should be initialized in empty state")
}
if pool.Regpool_Add_TX(&tx, 0) != true {
t.Errorf("Cannot Add transaction to pool in empty state")
}
if pool.Regpool_TX_Exist(tx.GetHash()) != true {
t.Errorf("TX should already be in pool")
}
/*if len(pool.Mempool_List_TX()) != 1 {
t.Errorf("Pool should have 1 tx")
}*/
list_tx := pool.Regpool_List_TX()
if len(list_tx) != 1 || list_tx[0] != tx.GetHash() {
t.Errorf("Pool List tx failed")
}
get_tx := pool.Regpool_Get_TX(tx.GetHash())
if tx.GetHash() != get_tx.GetHash() {
t.Errorf("Pool get_tx failed")
}
// re-adding the tx should fail
if pool.Regpool_Add_TX(&tx, 0) == true || len(pool.Regpool_List_TX()) > 1 {
t.Errorf("Pool should not allow duplicate TX")
}
// modify tx and readd
dup_tx.DestNetwork = 1 //modify tx so txid changes, still it should be rejected
if tx.GetHash() == dup_tx.GetHash() {
t.Errorf("tx and duplicate tx must have different hash")
}
if pool.Regpool_Add_TX(&dup_tx, 0) == true || len(pool.Regpool_List_TX()) > 1 {
t.Errorf("Pool should not allow duplicate Key images")
}
// pool must have 1 key_image
address_count := 0
pool.address_map.Range(func(k, value interface{}) bool {
address_count++
return true
})
if address_count != 1 {
t.Errorf("Pool does not have necessary key image")
}
if pool.Regpool_Delete_TX(dup_tx.GetHash()) != nil {
t.Errorf("non existing TX cannot be deleted\n")
}
// pool must have 1 key_image
address_count = 0
pool.address_map.Range(func(k, value interface{}) bool {
address_count++
return true
})
if address_count != 1 {
t.Errorf("Pool must have necessary key image")
}
// lets delete
if pool.Regpool_Delete_TX(tx.GetHash()) == nil {
t.Errorf("existing TX cannot be deleted\n")
}
address_count = 0
pool.address_map.Range(func(k, value interface{}) bool {
address_count++
return true
})
if address_count != 0 {
t.Errorf("Pool should not have any key image")
}
if len(pool.Regpool_List_TX()) != 0 {
t.Errorf("Pool should have 0 tx")
}
}

blockchain/sc.go Normal file

@@ -0,0 +1,360 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package blockchain
// this file implements necessary structure to SC handling
import "fmt"
import "bytes"
import "runtime/debug"
import "encoding/binary"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/dvm"
//import "github.com/deroproject/graviton"
import "github.com/deroproject/derohe/rpc"
import "github.com/deroproject/derohe/globals"
import "github.com/deroproject/derohe/transaction"
// currently DERO has 2 contract types
// 1 OPEN
// 2 PRIVATE
type SC_META_DATA struct {
Type byte // 0 Open, 1 Private
DataHash crypto.Hash // hash of SC data tree is here, so as the meta tree verifies all SC DATA
}
// serialize the structure
func (meta SC_META_DATA) MarshalBinary() (buf []byte) {
buf = make([]byte, 33)
buf[0] = meta.Type
copy(buf[1:], meta.DataHash[:]) // hash occupies bytes 1..32; the old offset of 1+len(DataHash) copied nothing
return
}
func (meta *SC_META_DATA) UnmarshalBinary(buf []byte) (err error) {
if len(buf) != 1+32 {
return fmt.Errorf("input buffer should be of 33 bytes in length")
}
meta.Type = buf[0]
copy(meta.DataHash[:], buf[1:]) // read back from offset 1; the old offset of 1+len(DataHash) read nothing
return nil
}
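A self-contained round trip of the intended 33-byte layout (one type byte followed by the 32-byte data-tree hash) looks like this; `packMeta`/`unpackMeta` are illustrative helpers, not part of the codebase.

```go
package main

import (
	"bytes"
	"fmt"
)

// packMeta writes the intended SC_META_DATA layout: 1 type byte
// followed by the 32-byte data-tree hash.
func packMeta(typ byte, hash [32]byte) []byte {
	buf := make([]byte, 33)
	buf[0] = typ
	copy(buf[1:], hash[:])
	return buf
}

// unpackMeta reverses packMeta; it errors on any other length.
func unpackMeta(buf []byte) (typ byte, hash [32]byte, err error) {
	if len(buf) != 33 {
		return 0, hash, fmt.Errorf("want 33 bytes, got %d", len(buf))
	}
	typ = buf[0]
	copy(hash[:], buf[1:])
	return typ, hash, nil
}

func main() {
	var h [32]byte
	copy(h[:], bytes.Repeat([]byte{0xab}, 32)) // arbitrary illustrative hash
	blob := packMeta(1, h)                     // 1 = private contract type
	typ, got, err := unpackMeta(blob)
	fmt.Println(typ, got == h, err) // 1 true <nil>
}
```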
func SC_Meta_Key(scid crypto.Hash) []byte {
return scid[:]
}
func SC_Code_Key(scid crypto.Hash) []byte {
return dvm.Variable{Type: dvm.String, ValueString: "C"}.MarshalBinaryPanic()
}
func SC_Asset_Key(asset crypto.Hash) []byte {
return asset[:]
}
// this will process the SC transaction
// the tx should only be processed if it has not already been processed
func (chain *Blockchain) execute_sc_function(w_sc_tree *Tree_Wrapper, data_tree *Tree_Wrapper, scid crypto.Hash, bl_height, bl_topoheight, bl_timestamp uint64, bl_hash crypto.Hash, tx transaction.Transaction, entrypoint string, hard_fork_version_current int64) (gas uint64, err error) {
defer func() {
if r := recover(); r != nil { // safety so if anything wrong happens, verification fails
if err == nil {
err = fmt.Errorf("Stack trace \n%s", debug.Stack())
}
logger.V(1).Error(err, "Recovered while executing SC function", "r", r, "stack trace", string(debug.Stack()))
}
}()
//fmt.Printf("executing entrypoint %s\n", entrypoint)
//if !tx.Verify_SC_Signature() { // if tx is not SC TX, or Signature could not be verified skip it
// return
//}
tx_hash := tx.GetHash()
tx_store := dvm.Initialize_TX_store()
// used as value loader from disk
// this function is used to load any data required by the SC
balance_loader := func(key dvm.DataKey) (result uint64) {
var found bool
_ = found
result, found = chain.LoadSCAssetValue(data_tree, key.SCID, key.Asset)
return result
}
diskloader := func(key dvm.DataKey, found *uint64) (result dvm.Variable) {
var exists bool
if result, exists = chain.LoadSCValue(data_tree, key.SCID, key.MarshalBinaryPanic()); exists {
*found = uint64(1)
}
//fmt.Printf("Loading from disk %+v result %+v found status %+v \n", key, result, exists)
return
}
diskloader_raw := func(key []byte) (value []byte, found bool) {
var err error
value, err = data_tree.Get(key[:])
if err != nil {
return value, false
}
if len(value) == 0 {
return value, false
}
//fmt.Printf("Loading from disk %+v result %+v found status %+v \n", key, result, exists)
return value, true
}
balance, sc_parsed, found := chain.ReadSC(w_sc_tree, data_tree, scid)
if !found {
logger.V(1).Error(nil, "SC not found", "scid", scid)
err = fmt.Errorf("SC not found %s", scid)
return
}
//fmt.Printf("sc_parsed %+v\n", sc_parsed)
// if we found the SC in parsed form, check whether entrypoint is found
function, ok := sc_parsed.Functions[entrypoint]
if !ok {
logger.V(1).Error(fmt.Errorf("stored SC does not contain entrypoint"), "", "entrypoint", entrypoint, "scid", scid)
err = fmt.Errorf("stored SC does not contain entrypoint '%s' scid %s \n", entrypoint, scid)
return
}
_ = function
//fmt.Printf("entrypoint found '%s' scid %s\n", entrypoint, scid)
//if len(sc_tx.Params) == 0 { // initialize params if not initialized earlier
// sc_tx.Params = map[string]string{}
//}
//sc_tx.Params["value"] = fmt.Sprintf("%d", sc_tx.Value) // overide value
tx_store.DiskLoader = diskloader // hook up loading from chain
tx_store.DiskLoaderRaw = diskloader_raw
tx_store.BalanceLoader = balance_loader
tx_store.BalanceAtStart = balance
tx_store.SCID = scid
//fmt.Printf("tx store %v\n", tx_store)
// we can skip proof check, here
if err = chain.Expand_Transaction_NonCoinbase(&tx); err != nil {
return
}
signer, err := extract_signer(&tx)
if err != nil { // allow anonymous SC transactions, on the condition that the SC never calls Signer
// this allows anonymous voting and numerous other applications
// in that case the SC receives the signer as all zeroes
}
// setup block hash, height, topoheight correctly
state := &dvm.Shared_State{
Store: tx_store,
Assets: map[crypto.Hash]uint64{},
SCIDSELF: scid,
Chain_inputs: &dvm.Blockchain_Input{
BL_HEIGHT: bl_height,
BL_TOPOHEIGHT: uint64(bl_topoheight),
BL_TIMESTAMP: bl_timestamp,
SCID: scid,
BLID: bl_hash,
TXID: tx_hash,
Signer: string(signer[:]),
},
}
if _, ok = globals.Arguments["--debug"]; ok && globals.Arguments["--debug"] != nil && chain.simulator {
state.Trace = true // enable tracing for dvm simulator
}
for _, payload := range tx.Payloads {
var new_value [8]byte
stored_value, _ := chain.LoadSCAssetValue(data_tree, scid, payload.SCID)
binary.BigEndian.PutUint64(new_value[:], stored_value+payload.BurnValue)
chain.StoreSCValue(data_tree, scid, payload.SCID[:], new_value[:])
state.Assets[payload.SCID] += payload.BurnValue
}
// we have an entrypoint, now we must setup parameters and dvm
// all parameters are in string form to bypass translation issues in middle layers
params := map[string]interface{}{}
for _, p := range function.Params {
var zerohash crypto.Hash
switch {
case p.Type == dvm.Uint64 && p.Name == "value":
params[p.Name] = fmt.Sprintf("%d", state.Assets[zerohash]) // override value
case p.Type == dvm.Uint64 && tx.SCDATA.Has(p.Name, rpc.DataUint64):
params[p.Name] = fmt.Sprintf("%d", tx.SCDATA.Value(p.Name, rpc.DataUint64).(uint64))
case p.Type == dvm.String && tx.SCDATA.Has(p.Name, rpc.DataString):
params[p.Name] = tx.SCDATA.Value(p.Name, rpc.DataString).(string)
case p.Type == dvm.String && tx.SCDATA.Has(p.Name, rpc.DataHash):
h := tx.SCDATA.Value(p.Name, rpc.DataHash).(crypto.Hash)
params[p.Name] = string(h[:])
fmt.Printf("%s:%x\n", p.Name, string(h[:]))
default:
err = fmt.Errorf("entrypoint '%s' parameter type missing or not yet supported (%+v)", entrypoint, p)
return
}
}
result, err := dvm.RunSmartContract(&sc_parsed, entrypoint, state, params)
//fmt.Printf("result value %+v\n", result)
if err != nil {
logger.V(2).Error(err, "error executing SC", "entrypoint", entrypoint, "scid", scid)
return
}
if err == nil && result.Type == dvm.Uint64 && result.ValueUint64 == 0 { // confirm the changes
for k, v := range tx_store.Keys {
chain.StoreSCValue(data_tree, scid, k.MarshalBinaryPanic(), v.MarshalBinaryPanic())
}
for k, v := range tx_store.RawKeys {
chain.StoreSCValue(data_tree, scid, []byte(k), v)
}
data_tree.transfere = append(data_tree.transfere, tx_store.Transfers[scid].TransferE...)
} else { // discard all changes; since we never write to the store immediately, they are simply dropped, but we still need to return any associated value
err = fmt.Errorf("discarded knowingly")
return
}
//fmt.Printf("SC execution finished amount value %d\n", tx.Value)
return
}
// reads SC, balance
func (chain *Blockchain) ReadSC(w_sc_tree *Tree_Wrapper, data_tree *Tree_Wrapper, scid crypto.Hash) (balance uint64, sc dvm.SmartContract, found bool) {
meta_bytes, err := w_sc_tree.Get(SC_Meta_Key(scid))
if err != nil {
return
}
var meta SC_META_DATA // the meta contains the link to the SC bytes
if err := meta.UnmarshalBinary(meta_bytes); err != nil {
return
}
var zerohash crypto.Hash
balance, _ = chain.LoadSCAssetValue(data_tree, scid, zerohash)
sc_bytes, err := data_tree.Get(SC_Code_Key(scid))
if err != nil {
return
}
var v dvm.Variable
if err = v.UnmarshalBinary(sc_bytes); err != nil {
return
}
sc, pos, err := dvm.ParseSmartContract(v.ValueString)
if err != nil {
return
}
_ = pos
found = true
return
}
func (chain *Blockchain) LoadSCValue(data_tree *Tree_Wrapper, scid crypto.Hash, key []byte) (v dvm.Variable, found bool) {
//fmt.Printf("loading fromdb %s %s \n", scid, key)
object_data, err := data_tree.Get(key[:])
if err != nil {
return v, false
}
if len(object_data) == 0 {
return v, false
}
if err = v.UnmarshalBinary(object_data); err != nil {
return v, false
}
return v, true
}
func (chain *Blockchain) LoadSCAssetValue(data_tree *Tree_Wrapper, scid crypto.Hash, asset crypto.Hash) (v uint64, found bool) {
//fmt.Printf("loading fromdb %s %s \n", scid, key)
object_data, err := data_tree.Get(asset[:])
if err != nil {
return v, false
}
if len(object_data) == 0 { // all assets are by default 0
return v, true
}
if len(object_data) != 8 {
return v, false
}
return binary.BigEndian.Uint64(object_data[:]), true
}
// reads a value from an SC data tree; to read a balance use LoadSCAssetValue
func (chain *Blockchain) ReadSCValue(data_tree *Tree_Wrapper, scid crypto.Hash, key interface{}) (value interface{}) {
var keybytes []byte
if key == nil {
return
}
switch k := key.(type) {
case uint64:
keybytes = dvm.DataKey{Key: dvm.Variable{Type: dvm.Uint64, ValueUint64: k}}.MarshalBinaryPanic()
case string:
keybytes = dvm.DataKey{Key: dvm.Variable{Type: dvm.String, ValueString: k}}.MarshalBinaryPanic()
//case int64:
// keybytes = dvm.DataKey{Key: dvm.Variable{Type: dvm.String, Value: k}}.MarshalBinaryPanic()
default:
return
}
value_var, found := chain.LoadSCValue(data_tree, scid, keybytes)
//fmt.Printf("read value %+v", value_var)
if found && value_var.Type != dvm.Invalid {
switch value_var.Type {
case dvm.Uint64:
value = value_var.ValueUint64
case dvm.String:
value = value_var.ValueString
default:
panic("This variable cannot be loaded")
}
}
return
}
// store the value in the chain
func (chain *Blockchain) StoreSCValue(data_tree *Tree_Wrapper, scid crypto.Hash, key, value []byte) {
if bytes.Equal(scid[:], key) { // an scid can mint its assets infinitely
return
}
data_tree.Put(key, value)
}

blockchain/store.go Normal file

@ -0,0 +1,356 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package blockchain
import "fmt"
import "math/big"
import "path/filepath"
import "github.com/deroproject/derohe/globals"
import "github.com/deroproject/derohe/block"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/transaction"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/graviton"
// though these could live within a single DB, they are separated purely for clarity
type storage struct {
Balance_store *graviton.Store // stores most critical data, only history can be purged, its merkle tree is stored in the block
Block_tx_store storefs // stores blocks which can be discarded at any time(only past but keep recent history for rollback)
Topo_store storetopofs // stores topomapping which can only be discarded by punching holes in the start of the file
}
func (s *storage) Initialize(params map[string]interface{}) (err error) {
current_path := filepath.Join(globals.GetDataDirectory())
if s.Balance_store, err = graviton.NewDiskStore(filepath.Join(current_path, "balances")); err == nil {
if err = s.Topo_store.Open(current_path); err == nil {
s.Block_tx_store.basedir = current_path
}
}
if err != nil {
logger.Error(err, "Cannot open store")
return err
}
logger.Info("Initialized", "path", current_path)
return nil
}
func (s *storage) IsBalancesIntialized() bool {
var err error
var balancehash, random_hash [32]byte
balance_ss, _ := s.Balance_store.LoadSnapshot(0) // load most recent snapshot
balancetree, _ := balance_ss.GetTree(config.BALANCE_TREE)
// avoid hardcoding any hash
if balancehash, err = balancetree.Hash(); err == nil {
random_tree, _ := balance_ss.GetTree(config.SC_META)
if random_hash, err = random_tree.Hash(); err == nil {
if random_hash == balancehash {
return false
}
}
}
if err != nil {
panic("database issues")
}
return true
}
func (chain *Blockchain) StoreBlock(bl *block.Block, snapshot_version uint64) {
hash := bl.GetHash()
serialized_bytes := bl.Serialize() // we are storing the miner transactions within
difficulty_of_current_block := new(big.Int)
if len(bl.Tips) == 0 { // genesis block has no parent
difficulty_of_current_block.SetUint64(1) // this is never used, as genesis block is a sync block, only its cumulative difficulty is used
} else {
difficulty_of_current_block = chain.Get_Difficulty_At_Tips(bl.Tips)
}
chain.Store.Block_tx_store.DeleteBlock(hash) // what should we do on error
err := chain.Store.Block_tx_store.WriteBlock(hash, serialized_bytes, difficulty_of_current_block, snapshot_version, bl.Height)
if err != nil {
panic("error while writing block")
}
}
// loads a block from disk, deserializes it
func (chain *Blockchain) Load_BL_FROM_ID(hash [32]byte) (*block.Block, error) {
var bl block.Block
if block_data, err := chain.Store.Block_tx_store.ReadBlock(hash); err == nil {
if err = bl.Deserialize(block_data); err != nil { // we should deserialize the block here
//logger.Warnf("Error deserializing block, block id %x len(data) %d data %x err %s", hash[:], len(block_data), block_data, err)
return nil, err
}
return &bl, nil
} else {
return nil, err
}
/*else if xerrors.Is(err,graviton.ErrNotFound){
}*/
}
// confirms whether the block exists in the store
// this only confirms whether the block has been downloaded
// a separate check is required to know whether the block is valid (satisfies PoW and other conditions)
// we will not add a block to the store until it satisfies PoW
func (chain *Blockchain) Block_Exists(h crypto.Hash) bool {
if _, err := chain.Load_BL_FROM_ID(h); err == nil {
return true
}
return false
}
// This will get the biggest height of tip for hardfork version and other calculations
// get biggest height of parent, add 1
func (chain *Blockchain) Calculate_Height_At_Tips(tips []crypto.Hash) int64 {
height := int64(0)
if len(tips) == 0 { // genesis block has no parent
} else { // find the best height of past
for i := range tips {
past_height := chain.Load_Block_Height(tips[i])
if past_height < 0 {
panic(fmt.Errorf("could not find height for blid %s", tips[i]))
}
if height <= past_height {
height = past_height
}
}
height++
}
return height
}
func (chain *Blockchain) Load_Block_Timestamp(h crypto.Hash) uint64 {
bl, err := chain.Load_BL_FROM_ID(h)
if err != nil {
panic(err)
}
return bl.Timestamp
}
func (chain *Blockchain) Load_Block_Height(h crypto.Hash) (height int64) {
defer func() {
if r := recover(); r != nil {
height = -1
}
}()
if heighti, err := chain.ReadBlockHeight(h); err != nil {
return -1
} else {
return int64(heighti)
}
}
func (chain *Blockchain) Load_Height_for_BL_ID(h crypto.Hash) int64 {
return chain.Load_Block_Height(h)
}
// all the immediate past of a block
func (chain *Blockchain) Get_Block_Past(hash crypto.Hash) (blocks []crypto.Hash) {
//fmt.Printf("loading tips for block %x\n", hash)
if keysi, ok := chain.cache_BlockPast.Get(hash); ok {
keys := keysi.([]crypto.Hash)
blocks = make([]crypto.Hash, len(keys))
for i := range keys {
copy(blocks[i][:], keys[i][:])
}
return
}
bl, err := chain.Load_BL_FROM_ID(hash)
if err != nil {
panic(err)
}
blocks = make([]crypto.Hash, 0, len(bl.Tips))
for i := range bl.Tips {
blocks = append(blocks, bl.Tips[i])
}
cache_copy := make([]crypto.Hash, len(blocks))
copy(cache_copy, blocks)
if chain.cache_enabled { //set in cache
chain.cache_BlockPast.Add(hash, cache_copy)
}
return
}
func (chain *Blockchain) Load_Block_Difficulty(h crypto.Hash) *big.Int {
if diff, err := chain.Store.Block_tx_store.ReadBlockDifficulty(h); err != nil {
panic(err)
} else {
return diff
}
}
func (chain *Blockchain) Get_Top_ID() crypto.Hash {
var h crypto.Hash
topo_count := chain.Store.Topo_store.Count()
if topo_count == 0 {
return h
}
cindex := topo_count - 1
for {
r, err := chain.Store.Topo_store.Read(cindex)
if err != nil {
panic(err)
}
if !r.IsClean() {
return r.BLOCK_ID
}
if cindex == 0 {
return h
}
cindex--
}
}
// faster bootstrap
func (chain *Blockchain) Load_TOP_HEIGHT() int64 {
return chain.Load_Block_Height(chain.Get_Top_ID())
}
func (chain *Blockchain) Load_TOPO_HEIGHT() int64 {
topo_count := chain.Store.Topo_store.Count()
if topo_count == 0 {
return 0
}
cindex := topo_count - 1
for {
r, err := chain.Store.Topo_store.Read(cindex)
if err != nil {
panic(err)
}
if !r.IsClean() {
return cindex
}
if cindex == 0 {
return 0
}
cindex--
}
}
func (chain *Blockchain) Load_Block_Topological_order_at_index(index_pos int64) (hash crypto.Hash, err error) {
r, err := chain.Store.Topo_store.Read(index_pos)
if err != nil {
return hash, err
}
if !r.IsClean() {
return r.BLOCK_ID, nil
} else {
panic("cannot query clean block id")
}
}
// load the state merkle hash, combining the roots of the two trees
func (chain *Blockchain) Load_Merkle_Hash(version uint64) (hash crypto.Hash, err error) {
if hashi, ok := chain.cache_VersionMerkle.Get(version); ok {
hash = hashi.(crypto.Hash)
return
}
ss, err := chain.Store.Balance_store.LoadSnapshot(version)
if err != nil {
return
}
balance_tree, err := ss.GetTree(config.BALANCE_TREE)
if err != nil {
return
}
sc_meta_tree, err := ss.GetTree(config.SC_META)
if err != nil {
return
}
balance_merkle_hash, err := balance_tree.Hash()
if err != nil {
return
}
meta_merkle_hash, err := sc_meta_tree.Hash()
if err != nil {
return
}
for i := range balance_merkle_hash {
hash[i] = balance_merkle_hash[i] ^ meta_merkle_hash[i]
}
if chain.cache_enabled { //set in cache
chain.cache_VersionMerkle.Add(version, hash)
}
return hash, nil
}
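Load_Merkle_Hash above derives the single state hash by byte-wise XOR of the balance-tree and SC-meta-tree roots. A standalone sketch of that combine step (`combineRoots` is illustrative, not part of the codebase):

```go
package main

import "fmt"

// combineRoots XORs two 32-byte tree roots into one state hash, as
// Load_Merkle_Hash does for BALANCE_TREE and SC_META.
func combineRoots(a, b [32]byte) (out [32]byte) {
	for i := range a {
		out[i] = a[i] ^ b[i]
	}
	return
}

func main() {
	var a, b [32]byte
	a[0], b[0] = 0xf0, 0x0f
	c := combineRoots(a, b)
	fmt.Printf("%02x\n", c[0]) // ff
	// equal roots cancel to zero, which is exactly the condition
	// IsBalancesIntialized uses to detect an uninitialized database
	fmt.Println(combineRoots(a, a) == [32]byte{}) // true
}
```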
// loads a complete block from disk
func (chain *Blockchain) Load_Complete_Block(blid crypto.Hash) (cbl *block.Complete_Block, err error) {
cbl = &block.Complete_Block{}
cbl.Bl, err = chain.Load_BL_FROM_ID(blid)
if err != nil {
return
}
for _, txid := range cbl.Bl.Tx_hashes {
var tx_bytes []byte
if tx_bytes, err = chain.Store.Block_tx_store.ReadTX(txid); err != nil {
return
} else {
var tx transaction.Transaction
if err = tx.Deserialize(tx_bytes); err != nil {
return
}
cbl.Txs = append(cbl.Txs, &tx)
}
}
return
}

blockchain/storefs.go Normal file

@ -0,0 +1,225 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package blockchain
// this file implements a filesystem store which is used to store blocks/transactions directly in the file system
import "os"
import "fmt"
import "strings"
import "io/ioutil"
import "math/big"
import "path/filepath"
import "github.com/deroproject/derohe/globals"
type storefs struct {
basedir string
}
// the filename stores the following information
// hex block id (64 chars).block _ difficulty (decimal) _ snapshot version _ height
func (s *storefs) ReadBlock(h [32]byte) ([]byte, error) {
defer globals.Recover(0)
var dummy [32]byte
if h == dummy {
return nil, fmt.Errorf("empty block")
}
dir := filepath.Join(filepath.Join(s.basedir, "bltx_store"), fmt.Sprintf("%02x", h[0]), fmt.Sprintf("%02x", h[1]), fmt.Sprintf("%02x", h[2]))
files, err := os.ReadDir(dir)
if err != nil {
return nil, err
}
filename_start := fmt.Sprintf("%x.block", h[:])
for _, file := range files {
if strings.HasPrefix(file.Name(), filename_start) {
//fmt.Printf("Reading block with filename %s\n", file.Name())
file := filepath.Join(filepath.Join(s.basedir, "bltx_store"), fmt.Sprintf("%02x", h[0]), fmt.Sprintf("%02x", h[1]), fmt.Sprintf("%02x", h[2]), file.Name())
return os.ReadFile(file)
}
}
return nil, os.ErrNotExist
}
// on windows we see odd behaviour where some files cannot be deleted, since they may exist only in the cache
func (s *storefs) DeleteBlock(h [32]byte) error {
dir := filepath.Join(filepath.Join(s.basedir, "bltx_store"), fmt.Sprintf("%02x", h[0]), fmt.Sprintf("%02x", h[1]), fmt.Sprintf("%02x", h[2]))
files, err := os.ReadDir(dir)
if err != nil {
return err
}
filename_start := fmt.Sprintf("%x.block", h[:])
var found bool
for _, file := range files {
if strings.HasPrefix(file.Name(), filename_start) {
file := filepath.Join(filepath.Join(s.basedir, "bltx_store"), fmt.Sprintf("%02x", h[0]), fmt.Sprintf("%02x", h[1]), fmt.Sprintf("%02x", h[2]), file.Name())
err = os.Remove(file)
if err != nil {
//return err
}
found = true
}
}
if found {
return nil
}
return os.ErrNotExist
}
func (s *storefs) ReadBlockDifficulty(h [32]byte) (*big.Int, error) {
dir := filepath.Join(filepath.Join(s.basedir, "bltx_store"), fmt.Sprintf("%02x", h[0]), fmt.Sprintf("%02x", h[1]), fmt.Sprintf("%02x", h[2]))
files, err := os.ReadDir(dir)
if err != nil {
return nil, err
}
filename_start := fmt.Sprintf("%x.block", h[:])
for _, file := range files {
if strings.HasPrefix(file.Name(), filename_start) {
diff := new(big.Int)
parts := strings.Split(file.Name(), "_")
if len(parts) != 4 {
panic("such filename cannot occur")
}
_, err := fmt.Sscan(parts[1], diff)
if err != nil {
return nil, err
}
return diff, nil
}
}
return nil, os.ErrNotExist
}
// this cannot be cached
func (chain *Blockchain) ReadBlockSnapshotVersion(h [32]byte) (uint64, error) {
return chain.Store.Block_tx_store.ReadBlockSnapshotVersion(h)
}
func (s *storefs) ReadBlockSnapshotVersion(h [32]byte) (uint64, error) {
dir := filepath.Join(filepath.Join(s.basedir, "bltx_store"), fmt.Sprintf("%02x", h[0]), fmt.Sprintf("%02x", h[1]), fmt.Sprintf("%02x", h[2]))
files, err := os.ReadDir(dir) // this always returns the sorted list
if err != nil {
return 0, err
}
// windows has a caching issue, so earlier versions may exist at the same time
// we mitigate it by using the last version; the loop below reverses the already sorted array
for left, right := 0, len(files)-1; left < right; left, right = left+1, right-1 {
files[left], files[right] = files[right], files[left]
}
filename_start := fmt.Sprintf("%x.block", h[:])
for _, file := range files {
if strings.HasPrefix(file.Name(), filename_start) {
var ssversion uint64
parts := strings.Split(file.Name(), "_")
if len(parts) != 4 {
panic("such filename cannot occur")
}
_, err := fmt.Sscan(parts[2], &ssversion)
if err != nil {
return 0, err
}
return ssversion, nil
}
}
return 0, os.ErrNotExist
}
func (chain *Blockchain) ReadBlockHeight(h [32]byte) (uint64, error) {
if heighti, ok := chain.cache_BlockHeight.Get(h); ok {
height := heighti.(uint64)
return height, nil
}
height, err := chain.Store.Block_tx_store.ReadBlockHeight(h)
if err == nil && chain.cache_enabled {
chain.cache_BlockHeight.Add(h, height)
}
return height, err
}
func (s *storefs) ReadBlockHeight(h [32]byte) (uint64, error) {
dir := filepath.Join(filepath.Join(s.basedir, "bltx_store"), fmt.Sprintf("%02x", h[0]), fmt.Sprintf("%02x", h[1]), fmt.Sprintf("%02x", h[2]))
files, err := os.ReadDir(dir)
if err != nil {
return 0, err
}
filename_start := fmt.Sprintf("%x.block", h[:])
for _, file := range files {
if strings.HasPrefix(file.Name(), filename_start) {
var height uint64
parts := strings.Split(file.Name(), "_")
if len(parts) != 4 {
panic("such filename cannot occur")
}
_, err := fmt.Sscan(parts[3], &height)
if err != nil {
return 0, err
}
return height, nil
}
}
return 0, os.ErrNotExist
}
func (s *storefs) WriteBlock(h [32]byte, data []byte, difficulty *big.Int, ss_version uint64, height uint64) (err error) {
dir := filepath.Join(filepath.Join(s.basedir, "bltx_store"), fmt.Sprintf("%02x", h[0]), fmt.Sprintf("%02x", h[1]), fmt.Sprintf("%02x", h[2]))
file := filepath.Join(dir, fmt.Sprintf("%x.block_%s_%d_%d", h[:], difficulty.String(), ss_version, height))
if err = os.MkdirAll(dir, 0700); err != nil {
return err
}
return ioutil.WriteFile(file, data, 0600)
}
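WriteBlock above encodes difficulty, snapshot version and height into the filename, which the ReadBlock* helpers recover piecemeal via strings.Split and fmt.Sscan. A hypothetical standalone parser inverting the whole scheme at once (`parseBlockFilename` is illustrative, not part of the codebase):

```go
package main

import (
	"fmt"
	"math/big"
	"strconv"
	"strings"
)

// parseBlockFilename inverts WriteBlock's naming scheme
// "<hexid>.block_<difficulty>_<snapshot version>_<height>".
func parseBlockFilename(name string) (diff *big.Int, ssVersion, height uint64, err error) {
	parts := strings.Split(name, "_")
	if len(parts) != 4 || !strings.HasSuffix(parts[0], ".block") {
		return nil, 0, 0, fmt.Errorf("unexpected filename %q", name)
	}
	d, ok := new(big.Int).SetString(parts[1], 10)
	if !ok {
		return nil, 0, 0, fmt.Errorf("bad difficulty in %q", name)
	}
	if ssVersion, err = strconv.ParseUint(parts[2], 10, 64); err != nil {
		return nil, 0, 0, err
	}
	if height, err = strconv.ParseUint(parts[3], 10, 64); err != nil {
		return nil, 0, 0, err
	}
	return d, ssVersion, height, nil
}

func main() {
	d, v, h, err := parseBlockFilename("ab12.block_1000_7_42")
	fmt.Println(d, v, h, err) // 1000 7 42 <nil>
}
```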
func (s *storefs) ReadTX(h [32]byte) ([]byte, error) {
file := filepath.Join(filepath.Join(s.basedir, "bltx_store"), fmt.Sprintf("%02x", h[0]), fmt.Sprintf("%02x", h[1]), fmt.Sprintf("%02x", h[2]), fmt.Sprintf("%x.tx", h[:]))
return ioutil.ReadFile(file)
}
func (s *storefs) WriteTX(h [32]byte, data []byte) (err error) {
dir := filepath.Join(filepath.Join(s.basedir, "bltx_store"), fmt.Sprintf("%02x", h[0]), fmt.Sprintf("%02x", h[1]), fmt.Sprintf("%02x", h[2]))
file := filepath.Join(dir, fmt.Sprintf("%x.tx", h[:]))
if err = os.MkdirAll(dir, 0700); err != nil {
return err
}
return ioutil.WriteFile(file, data, 0600)
}
func (s *storefs) DeleteTX(h [32]byte) (err error) {
dir := filepath.Join(filepath.Join(s.basedir, "bltx_store"), fmt.Sprintf("%02x", h[0]), fmt.Sprintf("%02x", h[1]), fmt.Sprintf("%02x", h[2]))
file := filepath.Join(dir, fmt.Sprintf("%x.tx", h[:]))
return os.Remove(file)
}

blockchain/storetopofs.go Normal file

@ -0,0 +1,306 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package blockchain
import "os"
import "fmt"
import "math"
import "path/filepath"
import "encoding/binary"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/cryptography/crypto"
type TopoRecord struct {
BLOCK_ID [32]byte
State_Version uint64
Height int64
}
const TOPORECORD_SIZE int64 = 48
// this file implements a filesystem store which maps topo height to block id, together with the state version tied to it
type storetopofs struct {
topomapping *os.File
}
func (s TopoRecord) String() string {
return fmt.Sprintf("blid %x state version %d height %d", s.BLOCK_ID[:], s.State_Version, s.Height)
}
func (s *storetopofs) Open(basedir string) (err error) {
s.topomapping, err = os.OpenFile(filepath.Join(basedir, "topo.map"), os.O_RDWR|os.O_CREATE, 0700)
return err
}
func (s *storetopofs) Count() int64 {
fstat, err := s.topomapping.Stat()
if err != nil {
panic(fmt.Sprintf("cannot stat topofile. err %s", err))
}
count := int64(fstat.Size() / int64(TOPORECORD_SIZE))
for ; count >= 1; count-- {
if record, err := s.Read(count - 1); err == nil && !record.IsClean() {
break
} else if err != nil {
panic(fmt.Sprintf("cannot read topofile. err %s", err))
}
}
return count
}
// it basically represents Load_Block_Topological_order_at_index
// reads an entry at specific location
func (s *storetopofs) Read(index int64) (TopoRecord, error) {
var buf [TOPORECORD_SIZE]byte
var record TopoRecord
if n, err := s.topomapping.ReadAt(buf[:], index*TOPORECORD_SIZE); int64(n) != TOPORECORD_SIZE {
return record, err
}
copy(record.BLOCK_ID[:], buf[:])
record.State_Version = binary.LittleEndian.Uint64(buf[len(record.BLOCK_ID):])
record.Height = int64(binary.LittleEndian.Uint64(buf[len(record.BLOCK_ID)+8:]))
return record, nil
}
func (s *storetopofs) Write(index int64, blid [32]byte, state_version uint64, height int64) (err error) {
var buf [TOPORECORD_SIZE]byte
var record TopoRecord
copy(buf[:], blid[:])
binary.LittleEndian.PutUint64(buf[len(record.BLOCK_ID):], state_version)
//height := chain.Load_Height_for_BL_ID(blid)
binary.LittleEndian.PutUint64(buf[len(record.BLOCK_ID)+8:], uint64(height))
_, err = s.topomapping.WriteAt(buf[:], index*TOPORECORD_SIZE)
return err
}
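The fixed 48-byte record layout used by Read/Write above (32-byte block id, then state version and height as little-endian uint64s) can be sketched as a standalone round-trip (type and helper names are illustrative):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// topoRecord mirrors TopoRecord; encode/decode mirror Write/Read.
type topoRecord struct {
	BlockID      [32]byte
	StateVersion uint64
	Height       int64
}

func (r topoRecord) encode() (buf [48]byte) {
	copy(buf[:32], r.BlockID[:])
	binary.LittleEndian.PutUint64(buf[32:40], r.StateVersion)
	// Height is stored via a uint64 cast, so negative values survive
	binary.LittleEndian.PutUint64(buf[40:48], uint64(r.Height))
	return
}

func decode(buf [48]byte) (r topoRecord) {
	copy(r.BlockID[:], buf[:32])
	r.StateVersion = binary.LittleEndian.Uint64(buf[32:40])
	r.Height = int64(binary.LittleEndian.Uint64(buf[40:48]))
	return
}

func main() {
	in := topoRecord{StateVersion: 9, Height: -1}
	in.BlockID[0] = 0xde
	fmt.Println(decode(in.encode()) == in) // true
}
```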
func (s *storetopofs) Clean(index int64) (err error) {
var state_version uint64
var blid [32]byte
return s.Write(index, blid, state_version, 0)
}
// whether record is clean
func (r *TopoRecord) IsClean() bool {
if r.State_Version != 0 {
return false
}
for _, x := range r.BLOCK_ID {
if x != 0 {
return false
}
}
return true
}
var pruned_till int64 = -1
// locates the prune topoheight, i.e. the point up to which history has been pruned
// this is not used anywhere in consensus and can be modified in any way
// this is for the wallet
func (s *storetopofs) LocatePruneTopo() int64 {
if pruned_till >= 0 { // return cached result
return pruned_till
}
count := s.Count()
if count < 10 {
return 0
}
zero_block, err := s.Read(0)
if err != nil || zero_block.IsClean() {
return 0
}
fifth_block, err := s.Read(5)
if err != nil || fifth_block.IsClean() {
return 0
}
// we assume at least 5 blocks are pruned
if zero_block.State_Version != fifth_block.State_Version {
return 0
}
// now we must find the point where version number = zero_block.State_Version + 1
low := int64(0) // in case of purging DB, this should start from N
high := int64(count)
prune_topo := int64(math.MaxInt64)
for low <= high {
median := (low + high) / 2
median_block, _ := s.Read(median)
if median_block.State_Version >= (zero_block.State_Version + 1) {
if prune_topo > median {
prune_topo = median
}
high = median - 1
} else {
low = median + 1
}
}
prune_topo--
pruned_till = prune_topo
return prune_topo
}
// exported from chain
func (chain *Blockchain) LocatePruneTopo() int64 {
return chain.Store.Topo_store.LocatePruneTopo()
}
func (s *storetopofs) binarySearchHeight(targetheight int64) (blids []crypto.Hash, topos []int64) {
startIndex := int64(0)
total_records := int64(s.Count())
endIndex := total_records
midIndex := total_records / 2
if total_records == 0 { // no record
return
}
for startIndex <= endIndex {
record, _ := s.Read(midIndex)
if record.Height >= targetheight-((config.STABLE_LIMIT*4)/2) && record.Height <= targetheight+((config.STABLE_LIMIT*4)/2) {
break
}
if record.Height >= targetheight {
endIndex = midIndex - 1
midIndex = (startIndex + endIndex) / 2
continue
}
startIndex = midIndex + 1
midIndex = (startIndex + endIndex) / 2
}
for i, count := midIndex, 0; i >= 0 && count < 100; i, count = i-1, count+1 {
record, _ := s.Read(i)
if record.Height == targetheight {
blids = append(blids, record.BLOCK_ID)
topos = append(topos, i)
}
}
for i, count := midIndex, 0; i < total_records && count < 100; i, count = i+1, count+1 {
record, _ := s.Read(i)
if record.Height == targetheight {
blids = append(blids, record.BLOCK_ID)
topos = append(topos, i)
}
}
blids, topos = SliceUniqTopoRecord(blids, topos) // unique the record
return
}
// SliceUniq removes duplicate values in given slice
func SliceUniqTopoRecord(s []crypto.Hash, h []int64) ([]crypto.Hash, []int64) {
for i := 0; i < len(s); i++ {
for i2 := i + 1; i2 < len(s); i2++ {
if s[i] == s[i2] {
// delete
s = append(s[:i2], s[i2+1:]...)
h = append(h[:i2], h[i2+1:]...)
i2--
}
}
}
return s, h
}
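SliceUniqTopoRecord keeps the first occurrence of each hash and drops the paired topo entry at the same index. A standalone sketch of the same parallel-slice dedup, with [32]byte standing in for crypto.Hash:

```go
package main

import "fmt"

// sliceUniq mirrors SliceUniqTopoRecord: drop later duplicates from s,
// removing the paired entry of h at the same index.
func sliceUniq(s [][32]byte, h []int64) ([][32]byte, []int64) {
	for i := 0; i < len(s); i++ {
		for j := i + 1; j < len(s); j++ {
			if s[i] == s[j] {
				s = append(s[:j], s[j+1:]...)
				h = append(h[:j], h[j+1:]...)
				j-- // re-check the element shifted into position j
			}
		}
	}
	return s, h
}

func main() {
	var a, b [32]byte
	b[0] = 1
	s, h := sliceUniq([][32]byte{a, b, a}, []int64{10, 11, 12})
	fmt.Println(len(s), h) // 2 [10 11]
}
```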
func (chain *Blockchain) Get_Blocks_At_Height(height int64) []crypto.Hash {
blids, _ := chain.Store.Topo_store.binarySearchHeight(height)
return blids
}
// since topological order might mutate, instead of doing cleanup, we double check the pointers
// we first locate the block and its height, then we locate that height, then we traverse 50 blocks up and 50 blocks down
func (chain *Blockchain) Is_Block_Topological_order(blid crypto.Hash) bool {
bl_height := chain.Load_Height_for_BL_ID(blid)
blids, _ := chain.Store.Topo_store.binarySearchHeight(bl_height)
for i := range blids {
if blids[i] == blid {
return true
}
}
return false
}
func (chain *Blockchain) Load_Block_Topological_order(blid crypto.Hash) int64 {
bl_height := chain.Load_Height_for_BL_ID(blid)
blids, topos := chain.Store.Topo_store.binarySearchHeight(bl_height)
for i := range blids {
if blids[i] == blid {
return topos[i]
}
}
return -1
}
// this function is not used in core
func (chain *Blockchain) Find_Blocks_Height_Range(startheight, stopheight int64) (blids []crypto.Hash) {
_, topos_start := chain.Store.Topo_store.binarySearchHeight(startheight)
if stopheight > chain.Get_Height() {
stopheight = chain.Get_Height()
}
_, topos_end := chain.Store.Topo_store.binarySearchHeight(stopheight)
lowest := topos_start[0]
for _, t := range topos_start {
if t < lowest {
lowest = t
}
}
highest := topos_end[0]
for _, t := range topos_end {
if t > highest {
highest = t
}
}
blid_map := map[crypto.Hash]bool{}
for i := lowest; i <= highest; i++ {
if toporecord, err := chain.Store.Topo_store.Read(i); err != nil {
panic(err)
} else {
blid_map[toporecord.BLOCK_ID] = true
}
}
for k := range blid_map {
blids = append(blids, k)
}
return
}


@ -0,0 +1,569 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package blockchain
// this file implements core execution of all changes to block chain homomorphically
import "fmt"
import "bufio"
import "strings"
import "strconv"
import "runtime/debug"
import "encoding/hex"
import "encoding/binary"
import "math/big"
import "golang.org/x/xerrors"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/cryptography/bn256"
import "github.com/deroproject/derohe/transaction"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/premine"
import "github.com/deroproject/derohe/globals"
import "github.com/deroproject/derohe/block"
import "github.com/deroproject/derohe/rpc"
import "github.com/deroproject/derohe/dvm"
import "github.com/deroproject/graviton"
// convert the bitcoin model to ours, but skip the initial 4 years of supply, so our total supply comes to about 10.5 million
const RewardReductionInterval = 210000 * 600 / config.BLOCK_TIME // 210000 comes from bitcoin
const BaseReward = (50 * 100000 * config.BLOCK_TIME) / 600 // convert bitcoin reward system to our block
// CalcBlockSubsidy returns the subsidy amount a block at the provided height
// should have. This is mainly used for determining how much the coinbase for
// newly generated blocks awards as well as validating the coinbase for blocks
// has the expected value.
//
// The subsidy is halved every RewardReductionInterval blocks. Mathematically
// this is: baseSubsidy / 2^(height/RewardReductionInterval)
//
// At the target block generation rate for the main network, this is
// approximately every 4 years.
//
// basically, out of the bitcoin supply, we have wiped off the initial interval (this removes 10.5 million, so the total remaining is around 10.5 million)
func CalcBlockReward(height uint64) uint64 {
return BaseReward >> ((height + RewardReductionInterval) / RewardReductionInterval)
}
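The halving arithmetic above can be checked against a few concrete heights. A minimal sketch, assuming `config.BLOCK_TIME` is 18 seconds (an assumption for illustration; the real value lives in the `config` package). Note that the `+RewardReductionInterval` term skips the first bitcoin-style interval, so even height 0 already starts at half the base reward:

```go
package main

import "fmt"

const BLOCK_TIME = 18                                      // assumed block time in seconds
const RewardReductionInterval = 210000 * 600 / BLOCK_TIME  // 7,000,000 blocks
const BaseReward = (50 * 100000 * BLOCK_TIME) / 600        // 150,000 atomic units

func CalcBlockReward(height uint64) uint64 {
	return BaseReward >> ((height + RewardReductionInterval) / RewardReductionInterval)
}

func main() {
	fmt.Println(CalcBlockReward(0))                           // 75000: first interval already halved once
	fmt.Println(CalcBlockReward(RewardReductionInterval - 1)) // 75000: still inside the first interval
	fmt.Println(CalcBlockReward(RewardReductionInterval))     // 37500: halved again
}
```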
// process the miner tx, giving fees, miner reward etc.
func (chain *Blockchain) process_miner_transaction(bl *block.Block, genesis bool, balance_tree *graviton.Tree, fees uint64, height uint64) {
tx := bl.Miner_TX
var acckey crypto.Point
if err := acckey.DecodeCompressed(tx.MinerAddress[:]); err != nil {
panic(err)
}
if genesis { // process premine, register genesis block, dev key
balance := crypto.ConstructElGamal(acckey.G1(), crypto.ElGamal_BASE_G) // init zero balance
balance = balance.Plus(new(big.Int).SetUint64(tx.Value << 1)) // add premine to users balance homomorphically
nb := crypto.NonceBalance{NonceHeight: 0, Balance: balance}
balance_tree.Put(tx.MinerAddress[:], nb.Serialize()) // reserialize and store
// we must process premine list and register and give them balance,
premine_count := 0
scanner := bufio.NewScanner(strings.NewReader(premine.List))
for scanner.Scan() {
data := strings.Split(scanner.Text(), ",")
if len(data) < 2 {
panic("invalid premine list")
}
var raw_tx [4096]byte
var rtx transaction.Transaction
if ramount, err := strconv.ParseUint(data[0], 10, 64); err != nil {
panic(err)
} else if n, err := hex.Decode(raw_tx[:], []byte(data[1])); err != nil {
panic(err)
} else if err := rtx.Deserialize(raw_tx[:n]); err != nil {
panic(err)
} else if !rtx.IsRegistration() {
panic("tx is not registration")
} else if !rtx.IsRegistrationValid() {
panic("tx registration signature is invalid")
} else {
var racckey crypto.Point
if err := racckey.DecodeCompressed(rtx.MinerAddress[:]); err != nil {
panic(err)
}
balance := crypto.ConstructElGamal(racckey.G1(), crypto.ElGamal_BASE_G) // init zero balance
balance = balance.Plus(new(big.Int).SetUint64(ramount)) // add premine to users balance homomorphically
nb := crypto.NonceBalance{NonceHeight: 0, Balance: balance}
balance_tree.Put(rtx.MinerAddress[:], nb.Serialize()) // reserialize and store
premine_count++
}
}
logger.V(1).Info("successfully added premine accounts", "count", premine_count)
return
}
// general coin base transaction
base_reward := CalcBlockReward(uint64(height))
full_reward := base_reward + fees
// full_reward is divided into equal parts for all miniblocks + the miner address
// since perfect division is not possible (see money handling),
// any leftover change is delivered to the main miner who integrated the full block
share := full_reward / uint64(len(bl.MiniBlocks)) // one block integrator, this is integer division
leftover := full_reward - (share * uint64(len(bl.MiniBlocks))) // only integrator will get this
{ // give the integrator his reward
balance_serialized, err := balance_tree.Get(tx.MinerAddress[:])
if err != nil {
panic(err)
}
nb := new(crypto.NonceBalance).Deserialize(balance_serialized)
nb.Balance = nb.Balance.Plus(new(big.Int).SetUint64(share + leftover)) // add miners reward to miners balance homomorphically
balance_tree.Put(tx.MinerAddress[:], nb.Serialize()) // reserialize and store
}
// all the other miniblocks will get their share
for _, mbl := range bl.MiniBlocks {
if mbl.Final {
continue
}
_, key_compressed, balance_serialized, err := balance_tree.GetKeyValueFromHash(mbl.KeyHash[:16])
if err != nil {
panic(err)
}
nb := new(crypto.NonceBalance).Deserialize(balance_serialized)
nb.Balance = nb.Balance.Plus(new(big.Int).SetUint64(share)) // add miners reward to miners balance homomorphically
balance_tree.Put(key_compressed[:], nb.Serialize()) // reserialize and store
}
return
}
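The share/leftover split above relies on integer division: the per-miniblock share is rounded down, and the remainder always goes to the integrator, so no atomic units are lost. A minimal sketch with illustrative numbers (not taken from chain data):

```go
package main

import "fmt"

func main() {
	full_reward := uint64(76234) // base reward + fees, illustrative
	miniblocks := uint64(9)

	share := full_reward / miniblocks          // integer division: 8470
	leftover := full_reward - share*miniblocks // remainder: 4, paid to the integrator

	fmt.Println(share, leftover)

	// the integrator receives share+leftover, every other miniblock owner
	// receives share, so the payouts always sum back to full_reward
	fmt.Println(share+leftover+share*(miniblocks-1) == full_reward)
}
```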
// process the tx, giving fees, miner reward etc.
// this should be atomic, either all should be done or none at all
func (chain *Blockchain) process_transaction(changed map[crypto.Hash]*graviton.Tree, tx transaction.Transaction, balance_tree *graviton.Tree, height uint64) uint64 {
logger.V(2).Info("Processing/Executing transaction", "txid", tx.GetHash(), "type", tx.TransactionType.String())
switch tx.TransactionType {
case transaction.REGISTRATION: // miner address represents registration
if _, err := balance_tree.Get(tx.MinerAddress[:]); err != nil {
if !xerrors.Is(err, graviton.ErrNotFound) { // any other err except not found panic
panic(err)
}
} // address needs registration
var acckey crypto.Point
if err := acckey.DecodeCompressed(tx.MinerAddress[:]); err != nil {
panic(err)
}
zerobalance := crypto.ConstructElGamal(acckey.G1(), crypto.ElGamal_BASE_G)
if !globals.IsMainnet() { // give testnet users a dummy amount to play
zerobalance = zerobalance.Plus(new(big.Int).SetUint64(800000)) // add fix amount to every wallet to users balance for more testing
}
nb := crypto.NonceBalance{NonceHeight: 0, Balance: zerobalance}
balance_tree.Put(tx.MinerAddress[:], nb.Serialize())
return 0 // registration doesn't give any fees. why & how?
case transaction.BURN_TX, transaction.NORMAL, transaction.SC_TX: // burned amount is not added anywhere and thus lost forever
for t := range tx.Payloads {
var tree *graviton.Tree
if tx.Payloads[t].SCID.IsZero() {
tree = balance_tree
} else {
tree = changed[tx.Payloads[t].SCID]
}
parity := tx.Payloads[t].Proof.Parity()
for i := 0; i < int(tx.Payloads[t].Statement.RingSize); i++ {
key_pointer := tx.Payloads[t].Statement.Publickeylist_pointers[i*int(tx.Payloads[t].Statement.Bytes_per_publickey) : (i+1)*int(tx.Payloads[t].Statement.Bytes_per_publickey)]
_, key_compressed, balance_serialized, err := tree.GetKeyValueFromHash(key_pointer)
if err != nil && !tx.Payloads[t].SCID.IsZero() {
if xerrors.Is(err, graviton.ErrNotFound) { // if the address is not found, lookup in main tree
_, key_compressed, _, err = balance_tree.GetKeyValueFromHash(key_pointer)
if err == nil {
var p bn256.G1
if err = p.DecodeCompressed(key_compressed[:]); err != nil {
panic(fmt.Errorf("key %d could not be decompressed", i))
}
balance := crypto.ConstructElGamal(&p, crypto.ElGamal_BASE_G) // init zero balance
nb := crypto.NonceBalance{NonceHeight: 0, Balance: balance}
balance_serialized = nb.Serialize()
}
}
}
if err != nil {
panic(fmt.Errorf("balance not obtained err %s\n", err))
}
nb := new(crypto.NonceBalance).Deserialize(balance_serialized)
echanges := crypto.ConstructElGamal(tx.Payloads[t].Statement.C[i], tx.Payloads[t].Statement.D)
nb.Balance = nb.Balance.Add(echanges) // homomorphic addition of changes
if (i%2 == 0) == parity { // this condition is well thought out and works well enough
nb.NonceHeight = height
}
tree.Put(key_compressed, nb.Serialize()) // reserialize and store
}
}
return tx.Fees()
default:
panic("unknown transaction, do not know how to process it")
}
}
type Tree_Wrapper struct {
tree *graviton.Tree
entries map[string][]byte
transfere []dvm.TransferExternal
}
func (t *Tree_Wrapper) Get(key []byte) ([]byte, error) {
if value, ok := t.entries[string(key)]; ok {
return value, nil
} else {
return t.tree.Get(key)
}
}
func (t *Tree_Wrapper) Put(key []byte, value []byte) error {
t.entries[string(key)] = append([]byte{}, value...)
return nil
}
// checks cache and returns a wrapped tree if possible
func wrapped_tree(cache map[crypto.Hash]*graviton.Tree, ss *graviton.Snapshot, id crypto.Hash) *Tree_Wrapper {
if cached_tree, ok := cache[id]; ok { // tree is in cache return it
return &Tree_Wrapper{tree: cached_tree, entries: map[string][]byte{}}
}
if tree, err := ss.GetTree(string(id[:])); err != nil {
panic(err)
} else {
return &Tree_Wrapper{tree: tree, entries: map[string][]byte{}}
}
}
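The `Tree_Wrapper` above implements a write-buffer overlay: `Put` only touches the in-memory `entries` map, `Get` consults the overlay before the backing tree, and nothing hits the real tree until the caller explicitly commits. This is what lets a failed SC execution be discarded wholesale. A minimal self-contained sketch of the same pattern, using a plain map as a stand-in for `*graviton.Tree`:

```go
package main

import "fmt"

type store map[string][]byte // stand-in for the backing *graviton.Tree

type wrapper struct {
	backing store
	entries map[string][]byte // uncommitted writes
}

func (w *wrapper) Get(key []byte) ([]byte, bool) {
	if v, ok := w.entries[string(key)]; ok { // overlay wins
		return v, true
	}
	v, ok := w.backing[string(key)]
	return v, ok
}

func (w *wrapper) Put(key, value []byte) {
	w.entries[string(key)] = append([]byte{}, value...) // buffer a copy
}

func (w *wrapper) Commit() {
	for k, v := range w.entries { // flush overlay to the backing store
		w.backing[k] = v
	}
}

func main() {
	w := &wrapper{backing: store{"a": []byte("old")}, entries: map[string][]byte{}}
	w.Put([]byte("a"), []byte("new"))
	v, _ := w.Get([]byte("a"))
	fmt.Println(string(v))              // new: reads see the overlay
	fmt.Println(string(w.backing["a"])) // old: backing store untouched
	w.Commit()
	fmt.Println(string(w.backing["a"])) // new: now persisted
}
```

Dropping the wrapper without calling `Commit` is the "discard all trees" path mentioned in the comment above `process_transaction_sc`.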
// does additional processing for SC
// all processing occurs in wrapped trees; if any error occurs we discard all trees
func (chain *Blockchain) process_transaction_sc(cache map[crypto.Hash]*graviton.Tree, ss *graviton.Snapshot, bl_height, bl_topoheight, bl_timestamp uint64, blid crypto.Hash, tx transaction.Transaction, balance_tree *graviton.Tree, sc_tree *graviton.Tree) (gas uint64, err error) {
if len(tx.SCDATA) == 0 {
return tx.Fees(), nil
}
gas = tx.Fees()
w_balance_tree := &Tree_Wrapper{tree: balance_tree, entries: map[string][]byte{}}
w_sc_tree := &Tree_Wrapper{tree: sc_tree, entries: map[string][]byte{}}
_ = w_balance_tree
var w_sc_data_tree *Tree_Wrapper
txhash := tx.GetHash()
scid := txhash
defer func() {
if r := recover(); r != nil {
logger.V(1).Error(nil, "Recovered while executing SC", "txid", txhash, "error", r, "stack", string(debug.Stack()))
}
}()
if !tx.SCDATA.Has(rpc.SCACTION, rpc.DataUint64) { // tx doesn't have sc action
//err = fmt.Errorf("no scid provided")
return tx.Fees(), nil
}
action_code := rpc.SC_ACTION(tx.SCDATA.Value(rpc.SCACTION, rpc.DataUint64).(uint64))
switch action_code {
case rpc.SC_INSTALL: // request to install an SC
if !tx.SCDATA.Has(rpc.SCCODE, rpc.DataString) { // but only if it is present
break
}
sc_code := tx.SCDATA.Value(rpc.SCCODE, rpc.DataString).(string)
if sc_code == "" { // no code provided nothing to do
err = fmt.Errorf("no code provided")
break
}
// check whether sc can be parsed
//var sc_parsed dvm.SmartContract
pos := ""
var sc dvm.SmartContract
if sc, pos, err = dvm.ParseSmartContract(sc_code); err != nil {
logger.V(2).Error(err, "error Parsing sc", "txid", txhash, "pos", pos)
break
}
meta := SC_META_DATA{}
if _, ok := sc.Functions["InitializePrivate"]; ok {
meta.Type = 1
}
w_sc_data_tree = wrapped_tree(cache, ss, scid)
// install SC, should we check for sanity now, why or why not
w_sc_data_tree.Put(SC_Code_Key(scid), dvm.Variable{Type: dvm.String, ValueString: sc_code}.MarshalBinaryPanic())
w_sc_tree.Put(SC_Meta_Key(scid), meta.MarshalBinary())
if meta.Type == 1 { // if it's a private SC
gas, err = chain.execute_sc_function(w_sc_tree, w_sc_data_tree, scid, bl_height, bl_topoheight, bl_timestamp, blid, tx, "InitializePrivate", 1)
} else {
gas, err = chain.execute_sc_function(w_sc_tree, w_sc_data_tree, scid, bl_height, bl_topoheight, bl_timestamp, blid, tx, "Initialize", 1)
}
if err != nil {
return
}
//fmt.Printf("Error status after initializing SC %s\n",err)
case rpc.SC_CALL: // trigger a CALL
if !tx.SCDATA.Has(rpc.SCID, rpc.DataHash) { // but only if it is present
err = fmt.Errorf("no scid provided")
break
}
if !tx.SCDATA.Has("entrypoint", rpc.DataString) { // but only if it is present
err = fmt.Errorf("no entrypoint provided")
break
}
scid = tx.SCDATA.Value(rpc.SCID, rpc.DataHash).(crypto.Hash)
if _, err = w_sc_tree.Get(SC_Meta_Key(scid)); err != nil {
err = fmt.Errorf("scid %s not installed", scid)
return
}
w_sc_data_tree = wrapped_tree(cache, ss, scid)
entrypoint := tx.SCDATA.Value("entrypoint", rpc.DataString).(string)
//fmt.Printf("We must call the SC %s function\n", entrypoint)
gas, err = chain.execute_sc_function(w_sc_tree, w_sc_data_tree, scid, bl_height, bl_topoheight, bl_timestamp, blid, tx, entrypoint, 1)
default: // unknown what to do
err = fmt.Errorf("unknown action, scid %x", scid)
return
}
// we must commit all the changes
// check whether we are not overflowing/underflowing, means SC is not over sending
if err == nil {
total_per_asset := map[crypto.Hash]uint64{}
for _, transfer := range w_sc_data_tree.transfere { // do external transfers
if transfer.Amount == 0 {
continue
}
// an SCID can generate its own token infinitely
if transfer.Asset != scid && total_per_asset[transfer.Asset]+transfer.Amount <= total_per_asset[transfer.Asset] {
err = fmt.Errorf("Balance calculation overflow")
break
} else {
total_per_asset[transfer.Asset] = total_per_asset[transfer.Asset] + transfer.Amount
}
}
if err == nil {
for asset, value := range total_per_asset {
stored_value, _ := chain.LoadSCAssetValue(w_sc_data_tree, scid, asset)
// an SCID can generate its own token infinitely
if asset != scid && stored_value-value > stored_value {
err = fmt.Errorf("Balance calculation underflow stored_value %d transferring %d\n", stored_value, value)
break
}
var new_value [8]byte
binary.BigEndian.PutUint64(new_value[:], stored_value-value)
chain.StoreSCValue(w_sc_data_tree, scid, asset[:], new_value[:])
}
}
//also check whether all destinations are registered
if err == nil {
for _, transfer := range w_sc_data_tree.transfere {
if _, err = balance_tree.Get([]byte(transfer.Address)); err == nil || xerrors.Is(err, graviton.ErrNotFound) {
// everything is okay
} else {
err = fmt.Errorf("account is unregistered")
logger.V(1).Error(err, "account is unregistered", "txhash", txhash, "scid", scid, "address", transfer.Address)
break
}
}
}
}
if err != nil { // error occurred, give everything to SC, since we may not have information to send them back
if chain.simulator {
logger.Error(err, "error executing sc", "txid", txhash)
}
for _, payload := range tx.Payloads {
var new_value [8]byte
w_sc_data_tree = wrapped_tree(cache, ss, scid) // get a new tree, discarding everything
stored_value, _ := chain.LoadSCAssetValue(w_sc_data_tree, scid, payload.SCID)
binary.BigEndian.PutUint64(new_value[:], stored_value+payload.BurnValue)
chain.StoreSCValue(w_sc_data_tree, scid, payload.SCID[:], new_value[:])
for k, v := range w_sc_data_tree.entries { // commit incoming balances to tree
if err = w_sc_data_tree.tree.Put([]byte(k), v); err != nil {
return
}
}
//for k, v := range w_sc_tree.entries {
// if err = w_sc_tree.tree.Put([]byte(k), v); err != nil {
// return
// }
//}
}
return
}
// anything below should never give error
if _, ok := cache[scid]; !ok {
cache[scid] = w_sc_data_tree.tree
}
for k, v := range w_sc_data_tree.entries { // commit entire data to tree
if _, ok := globals.Arguments["--debug"]; ok && globals.Arguments["--debug"] != nil && chain.simulator {
logger.V(1).Info("Writing", "txid", txhash, "scid", scid, "key", fmt.Sprintf("%x", k), "value", fmt.Sprintf("%x", v))
}
if err = w_sc_data_tree.tree.Put([]byte(k), v); err != nil {
return
}
}
for k, v := range w_sc_tree.entries {
if err = w_sc_tree.tree.Put([]byte(k), v); err != nil {
return
}
}
for i, transfer := range w_sc_data_tree.transfere { // do external transfers
if transfer.Amount == 0 {
continue
}
//fmt.Printf("%d sending to external %s %x\n", i,transfer.Asset,transfer.Address)
var zeroscid crypto.Hash
var curbtree *graviton.Tree
switch transfer.Asset {
case zeroscid: // main dero balance, handle it
curbtree = balance_tree
case scid: // this scid balance, handle it
curbtree = cache[scid]
default: // any other asset scid
var ok bool
if curbtree, ok = cache[transfer.Asset]; !ok {
if curbtree, err = ss.GetTree(string(transfer.Asset[:])); err != nil {
panic(err)
}
cache[transfer.Asset] = curbtree
}
}
if curbtree == nil {
panic("tree cannot be nil at this point in time")
}
addr_bytes := []byte(transfer.Address)
if _, err = balance_tree.Get(addr_bytes); err != nil { // first check whether address is registered
err = fmt.Errorf("sending to non registered account acc %x err %s", addr_bytes, err) // this can only occur if the account is not registered or on disk corruption
panic(err)
}
var balance_serialized []byte
balance_serialized, err = curbtree.Get(addr_bytes)
if err != nil && xerrors.Is(err, graviton.ErrNotFound) { // if the address is not found, lookup in main tree
var p bn256.G1
if err = p.DecodeCompressed(addr_bytes[:]); err != nil {
panic(fmt.Errorf("key %x could not be decompressed", addr_bytes))
}
balance := crypto.ConstructElGamal(&p, crypto.ElGamal_BASE_G) // init zero balance
nb := crypto.NonceBalance{NonceHeight: 0, Balance: balance}
balance_serialized = nb.Serialize()
} else if err != nil {
fmt.Printf("%s %d could not transfer %d %+v\n", scid, i, transfer.Amount, addr_bytes)
panic(err) // only disk corruption can reach here
}
nb := new(crypto.NonceBalance).Deserialize(balance_serialized)
nb.Balance = nb.Balance.Plus(new(big.Int).SetUint64(transfer.Amount)) // add transfer to users balance homomorphically
curbtree.Put(addr_bytes, nb.Serialize()) // reserialize and store
}
//c := w_sc_data_tree.tree.Cursor()
//for k, v, err := c.First(); err == nil; k, v, err = c.Next() {
// fmt.Printf("key=%s (%x), value=%s\n", k, k, v)
//}
//fmt.Printf("cursor complete\n")
//h, err := data_tree.Hash()
//fmt.Printf("%s successfully executed sc_call data_tree hash %x %s\n", scid, h, err)
return tx.Fees(), nil
}
// extract the signer from a tx, if possible
// extracting the signer is only possible if the ring size is 2
func extract_signer(tx *transaction.Transaction) (signer [33]byte, err error) {
for t := range tx.Payloads {
if uint64(len(tx.Payloads[t].Statement.Publickeylist_compressed)) != tx.Payloads[t].Statement.RingSize {
panic("tx is not expanded")
}
if tx.Payloads[t].SCID.IsZero() && tx.Payloads[t].Statement.RingSize == 2 {
parity := tx.Payloads[t].Proof.Parity()
for i := 0; i < int(tx.Payloads[t].Statement.RingSize); i++ {
if (i%2 == 0) == parity { // this condition is well thought out and works well enough
copy(signer[:], tx.Payloads[t].Statement.Publickeylist_compressed[i][:])
return
}
}
}
}
return signer, fmt.Errorf("unknown signer")
}
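The parity trick used by `extract_signer` can be shown in isolation: for a ring of size 2, the proof's parity bit tells which of the two slots holds the sender, so exactly one index `i` satisfies `(i%2 == 0) == parity`. A minimal sketch with a stand-in parity value (hypothetical data, not the real `Proof.Parity()` API):

```go
package main

import "fmt"

// senderIndex returns the first ring slot matching the parity condition,
// which for ring size 2 uniquely identifies the sender
func senderIndex(ringSize int, parity bool) int {
	for i := 0; i < ringSize; i++ {
		if (i%2 == 0) == parity { // same condition as in extract_signer
			return i
		}
	}
	return -1
}

func main() {
	// for ring size 2, the sender is slot 0 when parity is true, slot 1 otherwise
	fmt.Println(senderIndex(2, true))  // 0
	fmt.Println(senderIndex(2, false)) // 1
}
```

For larger rings the condition matches half the slots, which is why the function above only attempts extraction when the ring size is exactly 2.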


@@ -0,0 +1,453 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package blockchain
import "fmt"
import "time"
/*import "bytes"
import "encoding/binary"
import "github.com/romana/rlog"
*/
import "sync"
import "runtime/debug"
import "golang.org/x/xerrors"
import "github.com/deroproject/graviton"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/block"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/transaction"
import "github.com/deroproject/derohe/cryptography/bn256"
// caches x of transactions validity
// it is always atomic
// the cache is txhash -> validity mapping
// if the entry exist, the tx is valid
// it stores special hash and first seen time
var transaction_valid_cache sync.Map
// this go routine continuously scans and cleans up the cache for expired entries
func clean_up_valid_cache() {
current_time := time.Now()
transaction_valid_cache.Range(func(k, value interface{}) bool {
first_seen := value.(time.Time)
if current_time.Sub(first_seen).Round(time.Second).Seconds() > 360 {
transaction_valid_cache.Delete(k)
}
return true
})
}
// Coinbase transactions need to verify registration
func (chain *Blockchain) Verify_Transaction_Coinbase(cbl *block.Complete_Block, minertx *transaction.Transaction) (err error) {
if !minertx.IsCoinbase() { // transaction is not coinbase, return failed
return fmt.Errorf("tx is not coinbase")
}
return nil // success comes last
}
// this checks the nonces of a tx against the current chain state; it basically does a comparison of state trees in limited form
func (chain *Blockchain) Verify_Transaction_NonCoinbase_CheckNonce_Tips(hf_version int64, tx *transaction.Transaction, tips []crypto.Hash) (err error) {
var tx_hash crypto.Hash
defer func() { // safety so if anything wrong happens, verification fails
if r := recover(); r != nil {
logger.V(1).Error(nil, "Recovered while verifying tx", "txid", tx_hash, "r", r, "stack", debug.Stack())
err = fmt.Errorf("Stack Trace %s", debug.Stack())
}
}()
tx_hash = tx.GetHash()
if tx.TransactionType == transaction.REGISTRATION { // all other tx must be checked
return nil
}
if len(tips) < 1 {
return fmt.Errorf("no tips provided, cannot verify")
}
tips_string := tx_hash.String()
for _, tip := range tips {
tips_string += fmt.Sprintf("%s", tip.String())
}
if _, found := chain.cache_IsNonceValidTips.Get(tips_string); found {
return nil
}
// transaction needs to be expanded. this expansion needs balance state
version, err := chain.ReadBlockSnapshotVersion(tx.BLID)
if err != nil {
return err
}
ss_tx, err := chain.Store.Balance_store.LoadSnapshot(version)
if err != nil {
return err
}
var tx_balance_tree *graviton.Tree
if tx_balance_tree, err = ss_tx.GetTree(config.BALANCE_TREE); err != nil {
return err
}
if tx_balance_tree == nil {
return fmt.Errorf("mentioned balance tree not found, cannot verify TX")
}
// now we must solve the tips, against which the nonces will be verified
for _, tip := range tips {
var tip_balance_tree *graviton.Tree
version, err := chain.ReadBlockSnapshotVersion(tip)
if err != nil {
return err
}
ss_tip, err := chain.Store.Balance_store.LoadSnapshot(version)
if err != nil {
return err
}
if tip_balance_tree, err = ss_tip.GetTree(config.BALANCE_TREE); err != nil {
return err
}
if tip_balance_tree == nil {
return fmt.Errorf("mentioned tip balance tree not found, cannot verify TX")
}
for t := range tx.Payloads {
parity := tx.Payloads[t].Proof.Parity()
var tip_tree, tx_tree *graviton.Tree
if tx.Payloads[t].SCID.IsZero() { // choose whether we use main tree or sc tree
tip_tree = tip_balance_tree
tx_tree = tx_balance_tree
} else {
if tip_tree, err = ss_tip.GetTree(string(tx.Payloads[t].SCID[:])); err != nil {
return err
}
if tx_tree, err = ss_tx.GetTree(string(tx.Payloads[t].SCID[:])); err != nil {
return err
}
}
for i := 0; i < int(tx.Payloads[t].Statement.RingSize); i++ {
if (i%2 == 0) != parity { // this condition is well thought out and works well enough
continue
}
key_pointer := tx.Payloads[t].Statement.Publickeylist_pointers[i*int(tx.Payloads[t].Statement.Bytes_per_publickey) : (i+1)*int(tx.Payloads[t].Statement.Bytes_per_publickey)]
_, key_compressed, tx_balance_serialized, err := tx_tree.GetKeyValueFromHash(key_pointer)
if err != nil && tx.Payloads[t].SCID.IsZero() {
return err
}
if err != nil && xerrors.Is(err, graviton.ErrNotFound) && !tx.Payloads[t].SCID.IsZero() { // SC used a ring member not yet part
continue
}
var tx_nb, tip_nb crypto.NonceBalance
tx_nb.UnmarshalNonce(tx_balance_serialized)
_, _, tip_balance_serialized, err := tip_tree.GetKeyValueFromKey(key_compressed)
if err != nil && xerrors.Is(err, graviton.ErrNotFound) {
continue
}
if err != nil {
return err
}
tip_nb.UnmarshalNonce(tip_balance_serialized)
//fmt.Printf("tx nonce %d tip nonce %d\n", tx_nb.NonceHeight, tip_nb.NonceHeight)
if tip_nb.NonceHeight > tx_nb.NonceHeight {
return fmt.Errorf("Invalid Nonce, not usable, expected %d actual %d", tip_nb.NonceHeight, tx_nb.NonceHeight)
}
}
}
}
if chain.cache_enabled {
chain.cache_IsNonceValidTips.Add(tips_string, true) // set in cache
}
return nil
}
func (chain *Blockchain) Verify_Transaction_NonCoinbase(tx *transaction.Transaction) (err error) {
return chain.verify_Transaction_NonCoinbase_internal(false, tx)
}
func (chain *Blockchain) Expand_Transaction_NonCoinbase(tx *transaction.Transaction) (err error) {
return chain.verify_Transaction_NonCoinbase_internal(true, tx)
}
// all non miner tx must be non-coinbase tx
// each check is placed in a separate block of code, to avoid ambiguous code or faulty checks
// all checks are placed inline and not within individual functions (so no check can be skipped)
// This function verifies tx fully, means all checks,
// if the transaction has passed the check it can be added to mempool, relayed or added to blockchain
// the transaction has already been deserialized, that's it
// It also expands the transaction, using the respective state trie
func (chain *Blockchain) verify_Transaction_NonCoinbase_internal(skip_proof bool, tx *transaction.Transaction) (err error) {
var tx_hash crypto.Hash
defer func() { // safety so if anything wrong happens, verification fails
if r := recover(); r != nil {
logger.V(1).Error(nil, "Recovered while verifying tx", "txid", tx_hash, "r", r, "stack", debug.Stack())
err = fmt.Errorf("Stack Trace %s", debug.Stack())
}
}()
if tx.Version != 1 {
return fmt.Errorf("TX should be version 1")
}
tx_hash = tx.GetHash()
if tx.TransactionType == transaction.REGISTRATION {
if _, ok := transaction_valid_cache.Load(tx_hash); ok {
return nil //logger.Infof("Found in cache %s ",tx_hash)
} else {
//logger.Infof("TX not found in cache %s len %d ",tx_hash, len(tmp_buffer))
}
if tx.IsRegistrationValid() {
if chain.cache_enabled {
transaction_valid_cache.Store(tx_hash, time.Now()) // signature got verified, cache it
}
return nil
}
return fmt.Errorf("Registration has invalid signature")
}
// currently we allow the following types of transactions
if !(tx.TransactionType == transaction.NORMAL || tx.TransactionType == transaction.SC_TX || tx.TransactionType == transaction.BURN_TX) {
return fmt.Errorf("Unknown transaction type")
}
if tx.TransactionType == transaction.BURN_TX {
if tx.Value == 0 {
return fmt.Errorf("Burn Value cannot be zero")
}
}
// avoid some bugs lurking elsewhere
if tx.Height != uint64(int64(tx.Height)) {
return fmt.Errorf("invalid tx height")
}
if len(tx.Payloads) < 1 {
return fmt.Errorf("tx must have at least one payload")
}
{ // we cannot deduct fees if there is no base, so make sure a base is there
// this restriction should be lifted under suitable conditions
has_base := false
for i := range tx.Payloads {
if tx.Payloads[i].SCID.IsZero() {
has_base = true
}
}
if !has_base {
return fmt.Errorf("tx does not contain base")
}
}
for t := range tx.Payloads {
if tx.Payloads[t].Statement.Roothash != tx.Payloads[0].Statement.Roothash {
return fmt.Errorf("Roothash corrupted")
}
}
for t := range tx.Payloads {
// check sanity
if tx.Payloads[t].Statement.RingSize != uint64(len(tx.Payloads[t].Statement.Publickeylist_pointers)/int(tx.Payloads[t].Statement.Bytes_per_publickey)) {
return fmt.Errorf("corrupted key pointers ringsize")
}
if tx.Payloads[t].Statement.RingSize < 2 { // ring size minimum 2
return fmt.Errorf("RingSize for %d statement cannot be less than 2 actual %d", t, tx.Payloads[t].Statement.RingSize)
}
if tx.Payloads[t].Statement.RingSize > 128 { // ring size current limited to 128
return fmt.Errorf("RingSize for %d statement cannot be more than 128. Actual %d", t, tx.Payloads[t].Statement.RingSize)
}
if !crypto.IsPowerOf2(len(tx.Payloads[t].Statement.Publickeylist_pointers) / int(tx.Payloads[t].Statement.Bytes_per_publickey)) {
return fmt.Errorf("corrupted key pointers")
}
// check duplicate ring members within the tx
{
key_map := map[string]bool{}
for i := 0; i < int(tx.Payloads[t].Statement.RingSize); i++ {
key_map[string(tx.Payloads[t].Statement.Publickeylist_pointers[i*int(tx.Payloads[t].Statement.Bytes_per_publickey):(i+1)*int(tx.Payloads[t].Statement.Bytes_per_publickey)])] = true
}
if len(key_map) != int(tx.Payloads[t].Statement.RingSize) {
return fmt.Errorf("key_map does not contain ringsize members, ringsize %d , bytesperkey %d data %x", tx.Payloads[t].Statement.RingSize, tx.Payloads[t].Statement.Bytes_per_publickey, tx.Payloads[t].Statement.Publickeylist_pointers[:])
}
}
tx.Payloads[t].Statement.CLn = tx.Payloads[t].Statement.CLn[:0]
tx.Payloads[t].Statement.CRn = tx.Payloads[t].Statement.CRn[:0]
}
// transaction needs to be expanded. this expansion needs balance state
version, err := chain.ReadBlockSnapshotVersion(tx.BLID)
if err != nil {
return err
}
hash, err := chain.Load_Merkle_Hash(version)
if err != nil {
return err
}
if hash != tx.Payloads[0].Statement.Roothash {
return fmt.Errorf("Tx statement roothash mismatch ref blid %x expected %x actual %x", tx.BLID, tx.Payloads[0].Statement.Roothash, hash[:])
}
// we have found the balance tree with which it was built now lets verify
ss, err := chain.Store.Balance_store.LoadSnapshot(version)
if err != nil {
return err
}
var balance_tree *graviton.Tree
if balance_tree, err = ss.GetTree(config.BALANCE_TREE); err != nil {
return err
}
if balance_tree == nil {
return fmt.Errorf("mentioned balance tree not found, cannot verify TX")
}
//logger.Infof("dTX state tree has been found")
trees := map[crypto.Hash]*graviton.Tree{}
var zerohash crypto.Hash
trees[zerohash] = balance_tree // initialize main tree by default
for t := range tx.Payloads {
tx.Payloads[t].Statement.Publickeylist_compressed = tx.Payloads[t].Statement.Publickeylist_compressed[:0]
tx.Payloads[t].Statement.Publickeylist = tx.Payloads[t].Statement.Publickeylist[:0]
var tree *graviton.Tree
if _, ok := trees[tx.Payloads[t].SCID]; ok {
tree = trees[tx.Payloads[t].SCID]
} else {
// fmt.Printf("SCID loading %s tree\n", tx.Payloads[t].SCID)
tree, _ = ss.GetTree(string(tx.Payloads[t].SCID[:]))
trees[tx.Payloads[t].SCID] = tree
}
// now lets calculate CLn and CRn
for i := 0; i < int(tx.Payloads[t].Statement.RingSize); i++ {
key_pointer := tx.Payloads[t].Statement.Publickeylist_pointers[i*int(tx.Payloads[t].Statement.Bytes_per_publickey) : (i+1)*int(tx.Payloads[t].Statement.Bytes_per_publickey)]
_, key_compressed, balance_serialized, err := tree.GetKeyValueFromHash(key_pointer)
// if the destination address could not be found in the sc balance tree, assume its balance is zero
needs_init := false
if err != nil && !tx.Payloads[t].SCID.IsZero() {
if xerrors.Is(err, graviton.ErrNotFound) { // if the address is not found, lookup in main tree
_, key_compressed, _, err = balance_tree.GetKeyValueFromHash(key_pointer)
if err != nil {
return fmt.Errorf("balance not obtained err %s\n", err)
}
needs_init = true
}
}
if err != nil {
return fmt.Errorf("balance not obtained err %s\n", err)
}
// decode public key and expand
{
var p bn256.G1
var pcopy [33]byte
copy(pcopy[:], key_compressed)
if err = p.DecodeCompressed(key_compressed[:]); err != nil {
return fmt.Errorf("key %d could not be decompressed", i)
}
tx.Payloads[t].Statement.Publickeylist_compressed = append(tx.Payloads[t].Statement.Publickeylist_compressed, pcopy)
tx.Payloads[t].Statement.Publickeylist = append(tx.Payloads[t].Statement.Publickeylist, &p)
if needs_init {
var nb crypto.NonceBalance
nb.Balance = crypto.ConstructElGamal(&p, crypto.ElGamal_BASE_G) // init zero balance
balance_serialized = nb.Serialize()
}
}
var ll, rr bn256.G1
nb := new(crypto.NonceBalance).Deserialize(balance_serialized)
ebalance := nb.Balance
ll.Add(ebalance.Left, tx.Payloads[t].Statement.C[i])
tx.Payloads[t].Statement.CLn = append(tx.Payloads[t].Statement.CLn, &ll)
rr.Add(ebalance.Right, tx.Payloads[t].Statement.D)
tx.Payloads[t].Statement.CRn = append(tx.Payloads[t].Statement.CRn, &rr)
// prepare for another sub transaction
echanges := crypto.ConstructElGamal(tx.Payloads[t].Statement.C[i], tx.Payloads[t].Statement.D)
nb = new(crypto.NonceBalance).Deserialize(balance_serialized)
nb.Balance = nb.Balance.Add(echanges) // homomorphic addition of changes
tree.Put(key_compressed, nb.Serialize()) // reserialize and store temporarily, tree will be discarded after verification
}
}
if _, ok := transaction_valid_cache.Load(tx_hash); ok {
logger.V(2).Info("Found in cache, skipping verification", "txid", tx_hash)
return nil
} else {
//logger.Infof("TX not found in cache %s len %d ",tx_hash, len(tmp_buffer))
}
if skip_proof {
return nil
}
// at this point TX has been completely expanded, verify the tx statement
scid_map := map[crypto.Hash]int{}
for t := range tx.Payloads {
index := scid_map[tx.Payloads[t].SCID]
if !tx.Payloads[t].Proof.Verify(tx.Payloads[t].SCID, index, &tx.Payloads[t].Statement, tx.GetHash(), tx.Payloads[t].BurnValue) {
// fmt.Printf("Statement %+v\n", tx.Payloads[t].Statement)
// fmt.Printf("Proof %+v\n", tx.Payloads[t].Proof)
return fmt.Errorf("transaction statement %d verification failed", t)
}
scid_map[tx.Payloads[t].SCID] = scid_map[tx.Payloads[t].SCID] + 1 // increment scid counter
}
// these transactions are done
if tx.TransactionType == transaction.NORMAL || tx.TransactionType == transaction.BURN_TX || tx.TransactionType == transaction.SC_TX {
if chain.cache_enabled {
transaction_valid_cache.Store(tx_hash, time.Now()) // signature got verified, cache it
}
return nil
}
return nil
}

blockchain/tx_fees.go

@@ -0,0 +1,45 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package blockchain
//import "math/big"
import "github.com/deroproject/derohe/config"
//import "github.com/deroproject/derosuite/emission"
// this file implements the logic to calculate fees dynamically
// get maximum size of TX
func Get_Transaction_Maximum_Size() uint64 {
return config.STARGATE_HE_MAX_TX_SIZE
}
// get the tx fee
// this function assumes that fees are per KB
// for every part of 1KB multiply by fee_per_kb
func (chain *Blockchain) Calculate_TX_fee(hard_fork_version int64, tx_size uint64) uint64 {
size_in_kb := tx_size / 1024
if (tx_size % 1024) != 0 { // for any part there of, use a full KB fee
size_in_kb += 1
}
needed_fee := size_in_kb * config.FEE_PER_KB
return needed_fee
}
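The rounding rule in Calculate_TX_fee (any started kilobyte pays a full kilobyte) can be sketched standalone. feePerKB below is an illustrative stand-in for the chain's config.FEE_PER_KB, not its real value.

```go
package main

import "fmt"

// Sketch of the per-KB fee rule: round the size up to whole kilobytes,
// then multiply by the per-KB fee.
func calculateFee(txSize, feePerKB uint64) uint64 {
	sizeInKB := txSize / 1024
	if txSize%1024 != 0 { // any partial KB is charged in full
		sizeInKB++
	}
	return sizeInKB * feePerKB
}

func main() {
	fmt.Println(calculateFee(1, 100))    // 100: a single byte still pays one full KB
	fmt.Println(calculateFee(1024, 100)) // 100: exactly one KB
	fmt.Println(calculateFee(1025, 100)) // 200: the second KB has been started
}
```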

build_all.sh

@@ -0,0 +1,52 @@
#!/usr/bin/env bash
CURDIR=`/bin/pwd`
BASEDIR=$(dirname $0)
ABSPATH=$(readlink -f $0)
ABSDIR=$(dirname $ABSPATH)
unset GOPATH
version=`cat ./config/version.go | grep -i version |cut -d\" -f 2`
cd $CURDIR
bash $ABSDIR/build_package.sh "./cmd/derod"
bash $ABSDIR/build_package.sh "./cmd/explorer"
bash $ABSDIR/build_package.sh "./cmd/dero-wallet-cli"
bash $ABSDIR/build_package.sh "./cmd/dero-miner"
#bash $ABSDIR/build_package.sh "./cmd/simulator"
bash $ABSDIR/build_package.sh "./cmd/rpc_examples/pong_server"
for d in build/*; do cp Start.md "$d"; done
cd "${ABSDIR}/build"
#windows users require zip files
zip -r dero_windows_amd64.zip dero_windows_amd64
zip -r dero_windows_amd64_$version.zip dero_windows_amd64
#macos needs universal fat binaries, so lets build them
mkdir -p dero_darwin_universal
go run github.com/randall77/makefat ./dero_darwin_universal/derod-darwin ./dero_darwin_amd64/derod-darwin-amd64 ./dero_darwin_arm64/derod-darwin-arm64
go run github.com/randall77/makefat ./dero_darwin_universal/explorer-darwin ./dero_darwin_amd64/explorer-darwin-amd64 ./dero_darwin_arm64/explorer-darwin-arm64
go run github.com/randall77/makefat ./dero_darwin_universal/dero-wallet-cli-darwin ./dero_darwin_amd64/dero-wallet-cli-darwin-amd64 ./dero_darwin_arm64/dero-wallet-cli-darwin-arm64
go run github.com/randall77/makefat ./dero_darwin_universal/dero-miner-darwin ./dero_darwin_amd64/dero-miner-darwin-amd64 ./dero_darwin_arm64/dero-miner-darwin-arm64
#go run github.com/randall77/makefat ./dero_darwin_universal/simulator-darwin ./dero_darwin_amd64/simulator-darwin-amd64 ./dero_darwin_arm64/simulator-darwin-arm64
go run github.com/randall77/makefat ./dero_darwin_universal/pong_server-darwin ./dero_darwin_amd64/pong_server-darwin-amd64 ./dero_darwin_arm64/pong_server-darwin-arm64
rm -rf dero_darwin_amd64
rm -rf dero_darwin_arm64
#all other platforms are okay with tar.gz
find . -mindepth 1 -type d -not -name '*windows*' -exec tar -cvzf {}.tar.gz {} \;
find . -mindepth 1 -type d -not -name '*windows*' -exec tar -cvzf {}_$version.tar.gz {} \;
cd $CURDIR

build_package.sh

@@ -0,0 +1,89 @@
#!/usr/bin/env bash
package=$1
package_split=(${package//\// })
package_name=${package_split[-1]}
CURDIR=`/bin/pwd`
BASEDIR=$(dirname $0)
ABSPATH=$(readlink -f $0)
ABSDIR=$(dirname $ABSPATH)
PLATFORMS="darwin/amd64 darwin/arm64" # amd64/arm64 only as of go1.16
PLATFORMS="$PLATFORMS windows/amd64" # arm compilation not available for Windows
PLATFORMS="$PLATFORMS linux/amd64"
PLATFORMS="$PLATFORMS linux/arm64"
#PLATFORMS="$PLATFORMS linux/ppc64le" is it common enough ??
#PLATFORMS="$PLATFORMS linux/mips64le" # experimental in go1.6 is it common enough ??
PLATFORMS="$PLATFORMS freebsd/amd64"
#PLATFORMS="$PLATFORMS freebsd/arm64"
#PLATFORMS="$PLATFORMS netbsd/amd64" # amd64 only as of go1.6
#PLATFORMS="$PLATFORMS openbsd/amd64" # amd64 only as of go1.6
#PLATFORMS="$PLATFORMS dragonfly/amd64" # amd64 only as of go1.5
#PLATFORMS="$PLATFORMS plan9/amd64 plan9/386" # as of go1.4, is it common enough ??
# solaris disabled due to badger error below
#vendor/github.com/dgraph-io/badger/y/mmap_unix.go:57:30: undefined: syscall.SYS_MADVISE
#PLATFORMS="$PLATFORMS solaris/amd64" # as of go1.3
#PLATFORMS_ARM="linux freebsd netbsd"
PLATFORMS_ARM="linux"
#PLATFORMS="linux/amd64"
#PLATFORMS_ARM=""
type setopt >/dev/null 2>&1
SCRIPT_NAME=`basename "$0"`
FAILURES=""
CURRENT_DIRECTORY=${PWD##*/}
OUTPUT="$package_name" # if no src file given, use current dir name
GCFLAGS=""
#if [[ "${OUTPUT}" == "dero-miner" ]]; then GCFLAGS="github.com/deroproject/derohe/astrobwt=-B"; fi
for PLATFORM in $PLATFORMS; do
GOOS=${PLATFORM%/*}
GOARCH=${PLATFORM#*/}
OUTPUT_DIR="${ABSDIR}/build/dero_${GOOS}_${GOARCH}"
BIN_FILENAME="${OUTPUT}-${GOOS}-${GOARCH}"
mkdir -p $OUTPUT_DIR
if [[ "${GOOS}" == "windows" ]]; then BIN_FILENAME="${BIN_FILENAME}.exe"; fi
CMD="GOOS=${GOOS} GOARCH=${GOARCH} go build -gcflags=${GCFLAGS} -o $OUTPUT_DIR/${BIN_FILENAME} $package"
echo "${CMD}"
eval $CMD || FAILURES="${FAILURES} ${PLATFORM}"
# build docker image for linux amd64 completely static
#if [[ "${GOOS}" == "linux" && "${GOARCH}" == "amd64" && "${OUTPUT}" != "explorer" && "${OUTPUT}" != "dero-miner" ]] ; then
# BIN_FILENAME="docker-${OUTPUT}-${GOOS}-${GOARCH}"
# CMD="GOOS=${GOOS} GOARCH=${GOARCH} CGO_ENABLED=0 go build -o $OUTPUT_DIR/${BIN_FILENAME} $package"
# echo "${CMD}"
# eval $CMD || FAILURES="${FAILURES} ${PLATFORM}"
#fi
done
for GOOS in $PLATFORMS_ARM; do
GOARCH="arm"
# build for each ARM version
# for GOARM in 7 6 5; do
for GOARM in 7; do
OUTPUT_DIR="${ABSDIR}/build/dero_${GOOS}_${GOARCH}${GOARM}"
BIN_FILENAME="${OUTPUT}-${GOOS}-${GOARCH}${GOARM}"
CMD="GOARM=${GOARM} GOOS=${GOOS} GOARCH=${GOARCH} go build -gcflags=${GCFLAGS} -o $OUTPUT_DIR/${BIN_FILENAME} $package"
echo "${CMD}"
eval "${CMD}" || FAILURES="${FAILURES} ${GOOS}/${GOARCH}${GOARM}"
done
done
# eval errors
if [[ "${FAILURES}" != "" ]]; then
echo ""
echo "${SCRIPT_NAME} failed on: ${FAILURES}"
exit 1
fi


@@ -0,0 +1,84 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
// ripoff from blockchain folder
import "math/big"
import "github.com/deroproject/derohe/cryptography/crypto"
var (
// bigZero is 0 represented as a big.Int. It is defined here to avoid
// the overhead of creating it multiple times.
bigZero = big.NewInt(0)
// bigOne is 1 represented as a big.Int. It is defined here to avoid
// the overhead of creating it multiple times.
bigOne = big.NewInt(1)
// oneLsh256 is 1 shifted left 256 bits. It is defined here to avoid
// the overhead of creating it multiple times.
oneLsh256 = new(big.Int).Lsh(bigOne, 256)
// enabling this will enable simulation mode with hard coded difficulty set to 1
// the variable is knowingly not exported, so no one can tinker with it
//simulation = false // simulation mode is disabled
)
// HashToBig converts a PoW hash into a big.Int that can be used to
// perform math comparisons.
func HashToBig(buf crypto.Hash) *big.Int {
// A Hash is in little-endian, but the big package wants the bytes in
// big-endian, so reverse them.
blen := len(buf) // hardcoded to 32 bytes, but use len for safety
for i := 0; i < blen/2; i++ {
buf[i], buf[blen-1-i] = buf[blen-1-i], buf[i]
}
return new(big.Int).SetBytes(buf[:])
}
// this function calculates the difficulty in big num form
func ConvertDifficultyToBig(difficultyi uint64) *big.Int {
if difficultyi == 0 {
panic("difficulty can never be zero")
}
// (1 << 256) / (difficultyNum )
difficulty := new(big.Int).SetUint64(difficultyi)
denominator := new(big.Int).Add(difficulty, bigZero) // above 2 lines can be merged
return new(big.Int).Div(oneLsh256, denominator)
}
func ConvertIntegerDifficultyToBig(difficultyi *big.Int) *big.Int {
if difficultyi.Cmp(bigZero) == 0 { // difficulty can never be zero
panic("difficulty can never be zero")
}
return new(big.Int).Div(oneLsh256, difficultyi)
}
// this function checks whether the PoW hash meets the difficulty criteria
// it takes the difficulty in big.Int format
func CheckPowHashBig(pow_hash crypto.Hash, big_difficulty_integer *big.Int) bool {
big_pow_hash := HashToBig(pow_hash)
big_difficulty := ConvertIntegerDifficultyToBig(big_difficulty_integer)
if big_pow_hash.Cmp(big_difficulty) <= 0 { // if work_pow is less than difficulty
return true
}
return false
}
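HashToBig and CheckPowHashBig combine into the miner's acceptance test: reverse the little-endian hash, interpret it as a big-endian integer, and compare it against the target (1<<256)/difficulty. A self-contained sketch of that check, using a plain [32]byte in place of the project's crypto.Hash:

```go
package main

import (
	"fmt"
	"math/big"
)

// A little-endian 32-byte PoW hash meets difficulty D when
// reverse(hash), read as a big integer, is <= (1<<256)/D.
var oneLsh256 = new(big.Int).Lsh(big.NewInt(1), 256)

func hashToBig(buf [32]byte) *big.Int {
	for i := 0; i < 16; i++ { // little-endian -> big-endian
		buf[i], buf[31-i] = buf[31-i], buf[i]
	}
	return new(big.Int).SetBytes(buf[:])
}

func meetsDifficulty(h [32]byte, difficulty uint64) bool {
	target := new(big.Int).Div(oneLsh256, new(big.Int).SetUint64(difficulty))
	return hashToBig(h).Cmp(target) <= 0
}

func main() {
	var zero [32]byte // the all-zero hash meets any difficulty
	var max [32]byte  // the all-ones hash fails anything above difficulty 1
	for i := range max {
		max[i] = 0xff
	}
	fmt.Println(meetsDifficulty(zero, 1<<40)) // true
	fmt.Println(meetsDifficulty(max, 2))      // false: 2^256-1 > (1<<256)/2
}
```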


@@ -0,0 +1,12 @@
// Copyright 2017-2018 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
package main
import "testing"
func Test_Part1(t *testing.T) {
}


@@ -0,0 +1,46 @@
//go:build !windows
// +build !windows
package main
import "runtime"
import "golang.org/x/sys/unix"
// we skip the type as Go will infer it
const (
UnixMax = 20
OSXMax = 20 // see this https://github.com/golang/go/issues/30401
)
type Limits struct {
Current uint64
Max uint64
}
func init() {
switch runtime.GOOS {
case "darwin":
unix.Setrlimit(unix.RLIMIT_NOFILE, &unix.Rlimit{Max: OSXMax, Cur: OSXMax})
case "linux", "netbsd", "openbsd", "freebsd":
unix.Setrlimit(unix.RLIMIT_NOFILE, &unix.Rlimit{Max: UnixMax, Cur: UnixMax})
default: // nothing to do
}
}
func Get() (*Limits, error) {
var rLimit unix.Rlimit
if err := unix.Getrlimit(unix.RLIMIT_NOFILE, &rLimit); err != nil {
return nil, err
}
return &Limits{Current: uint64(rLimit.Cur), Max: uint64(rLimit.Max)}, nil
}
/*
func Set(maxLimit uint64) error {
rLimit := unix.Rlimit {Max:maxLimit, Cur:maxLimit}
if runtime.GOOS == "darwin" && rLimit.Cur > OSXMax { //https://github.com/golang/go/issues/30401
rLimit.Cur = OSXMax
}
return unix.Setrlimit(unix.RLIMIT_NOFILE, &rLimit)
}
*/

cmd/dero-miner/miner.go

@@ -0,0 +1,528 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
import "io"
import "os"
import "fmt"
import "time"
import "net/url"
import "crypto/rand"
import "crypto/tls"
import "sync"
import "runtime"
import "math/big"
import "path/filepath"
import "encoding/hex"
import "encoding/binary"
import "os/signal"
import "sync/atomic"
import "strings"
import "strconv"
import "github.com/go-logr/logr"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/globals"
//import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/block"
import "github.com/deroproject/derohe/rpc"
import "github.com/chzyer/readline"
import "github.com/docopt/docopt-go"
import "github.com/deroproject/derohe/pow"
import "github.com/gorilla/websocket"
var mutex sync.RWMutex
var job rpc.GetBlockTemplate_Result
var job_counter int64
var maxdelay int = 10000
var threads int
var iterations int = 100
var max_pow_size int = 819200 //astrobwt.MAX_LENGTH
var wallet_address string
var daemon_rpc_address string
var counter uint64
var hash_rate uint64
var Difficulty uint64
var our_height int64
var block_counter uint64
var mini_block_counter uint64
var logger logr.Logger
var command_line string = `dero-miner
DERO CPU Miner for AstroBWT.
ONE CPU, ONE VOTE.
http://wiki.dero.io
Usage:
dero-miner --wallet-address=<wallet_address> [--daemon-rpc-address=<127.0.0.1:10102>] [--mining-threads=<threads>] [--testnet] [--debug]
dero-miner --bench [--max-pow-size=1120]
dero-miner -h | --help
dero-miner --version
Options:
-h --help Show this screen.
--version Show version.
--bench Run benchmark mode.
--daemon-rpc-address=<127.0.0.1:10102> Miner will connect to daemon RPC on this port.
--wallet-address=<wallet_address> This address is rewarded when a block is mined successfully.
--mining-threads=<threads> Number of CPU threads for mining [default: ` + fmt.Sprintf("%d", runtime.GOMAXPROCS(0)) + `]
Example Mainnet: ./dero-miner-linux-amd64 --wallet-address dero1qy0ehnqjpr0wxqnknyc66du2fsxyktppkr8m8e6jvplp954klfjz2qqhmy4zf --daemon-rpc-address=http://explorer.dero.io:10102
Example Testnet: ./dero-miner-linux-amd64 --wallet-address deto1qy0ehnqjpr0wxqnknyc66du2fsxyktppkr8m8e6jvplp954klfjz2qqdzcd8p --daemon-rpc-address=http://127.0.0.1:40402
If the daemon is running on the local machine, the '--daemon-rpc-address' argument is not required.
`
var Exit_In_Progress = make(chan bool)
func main() {
var err error
globals.Arguments, err = docopt.Parse(command_line, nil, true, config.Version.String(), false)
if err != nil {
fmt.Printf("Error while parsing options err: %s\n", err)
return
}
// We need to initialize readline first, so it changes stderr to ansi processor on windows
l, err := readline.NewEx(&readline.Config{
//Prompt: "\033[92mDERO:\033[32m»\033[0m",
Prompt: "\033[92mDERO Miner:\033[32m>>>\033[0m ",
HistoryFile: filepath.Join(os.TempDir(), "dero_miner_readline.tmp"),
AutoComplete: completer,
InterruptPrompt: "^C",
EOFPrompt: "exit",
HistorySearchFold: true,
FuncFilterInputRune: filterInput,
})
if err != nil {
panic(err)
}
defer l.Close()
// parse arguments and setup logging , print basic information
exename, _ := os.Executable()
f, err := os.Create(exename + ".log")
if err != nil {
fmt.Printf("Error while opening log file err: %s filename %s\n", err, exename+".log")
return
}
globals.InitializeLog(l.Stdout(), f)
logger = globals.Logger.WithName("miner")
logger.Info("DERO Stargate HE AstroBWT miner : It is an alpha version, use it for testing/evaluation purposes only.")
logger.Info("Copyright 2017-2021 DERO Project. All rights reserved.")
logger.Info("", "OS", runtime.GOOS, "ARCH", runtime.GOARCH, "GOMAXPROCS", runtime.GOMAXPROCS(0))
logger.Info("", "Version", config.Version.String())
logger.V(1).Info("", "Arguments", globals.Arguments)
globals.Initialize() // setup network and proxy
logger.V(0).Info("", "MODE", globals.Config.Name)
if globals.Arguments["--wallet-address"] != nil {
addr, err := globals.ParseValidateAddress(globals.Arguments["--wallet-address"].(string))
if err != nil {
logger.Error(err, "Wallet address is invalid.")
return
}
wallet_address = addr.String()
}
if !globals.Arguments["--testnet"].(bool) {
daemon_rpc_address = "127.0.0.1:10100"
} else {
daemon_rpc_address = "127.0.0.1:10100"
}
if globals.Arguments["--daemon-rpc-address"] != nil {
daemon_rpc_address = globals.Arguments["--daemon-rpc-address"].(string)
}
threads = runtime.GOMAXPROCS(0)
if globals.Arguments["--mining-threads"] != nil {
if s, err := strconv.Atoi(globals.Arguments["--mining-threads"].(string)); err == nil {
threads = s
} else {
logger.Error(err, "Mining threads argument cannot be parsed.")
}
if threads > runtime.GOMAXPROCS(0) {
logger.Info("Mining threads is more than available CPUs. This is NOT optimal", "thread_count", threads, "max_possible", runtime.GOMAXPROCS(0))
}
}
if globals.Arguments["--bench"].(bool) {
var wg sync.WaitGroup
fmt.Printf("%20s %20s %20s %20s %20s \n", "Threads", "Total Time", "Total Iterations", "Time/PoW ", "Hash Rate/Sec")
iterations = 20000
for bench := 1; bench <= threads; bench++ {
processor = 0
now := time.Now()
for i := 0; i < bench; i++ {
wg.Add(1)
go random_execution(&wg, iterations)
}
wg.Wait()
duration := time.Now().Sub(now)
fmt.Printf("%20s %20s %20s %20s %20s \n", fmt.Sprintf("%d", bench), fmt.Sprintf("%s", duration), fmt.Sprintf("%d", bench*iterations),
fmt.Sprintf("%s", duration/time.Duration(bench*iterations)), fmt.Sprintf("%.1f", float32(time.Second)/(float32(duration/time.Duration(bench*iterations)))))
}
os.Exit(0)
}
logger.Info(fmt.Sprintf("System will mine to \"%s\" with %d threads. Good Luck!!", wallet_address, threads))
//threads_ptr := flag.Int("threads", runtime.NumCPU(), "No. Of threads")
//iterations_ptr := flag.Int("iterations", 20, "No. Of DERO Stereo POW calculated/thread")
/*bench_ptr := flag.Bool("bench", false, "run bench with params")
daemon_ptr := flag.String("rpc-server-address", "127.0.0.1:18091", "DERO daemon RPC address to get work and submit mined blocks")
delay_ptr := flag.Int("delay", 1, "Fetch job every this many seconds")
wallet_address := flag.String("wallet-address", "", "Owner of this wallet will receive mining rewards")
_ = daemon_ptr
_ = delay_ptr
_ = wallet_address
*/
if threads < 1 || iterations < 1 || threads > 2048 {
panic("Invalid parameters\n")
//return
}
// This tiny goroutine continuously updates status as required
go func() {
last_our_height := int64(0)
last_best_height := int64(0)
last_counter := uint64(0)
last_counter_time := time.Now()
last_mining_state := false
_ = last_mining_state
mining := true
for {
select {
case <-Exit_In_Progress:
return
default:
}
best_height := int64(0)
// only update prompt if needed
if last_our_height != our_height || last_best_height != best_height || last_counter != counter {
// choose color based on urgency
color := "\033[33m"  // default is yellow color
pcolor := "\033[32m" // default is green color
mining_string := ""
if mining {
mining_speed := float64(counter-last_counter) / (float64(uint64(time.Since(last_counter_time))) / 1000000000.0)
last_counter = counter
last_counter_time = time.Now()
switch {
case mining_speed > 1000000:
mining_string = fmt.Sprintf("MINING @ %.3f MH/s", float32(mining_speed)/1000000.0)
case mining_speed > 1000:
mining_string = fmt.Sprintf("MINING @ %.3f KH/s", float32(mining_speed)/1000.0)
case mining_speed > 0:
mining_string = fmt.Sprintf("MINING @ %.0f H/s", mining_speed)
}
}
last_mining_state = mining
hash_rate_string := ""
switch {
case hash_rate > 1000000000000:
hash_rate_string = fmt.Sprintf("%.3f TH/s", float64(hash_rate)/1000000000000.0)
case hash_rate > 1000000000:
hash_rate_string = fmt.Sprintf("%.3f GH/s", float64(hash_rate)/1000000000.0)
case hash_rate > 1000000:
hash_rate_string = fmt.Sprintf("%.3f MH/s", float64(hash_rate)/1000000.0)
case hash_rate > 1000:
hash_rate_string = fmt.Sprintf("%.3f KH/s", float64(hash_rate)/1000.0)
case hash_rate > 0:
hash_rate_string = fmt.Sprintf("%d H/s", hash_rate)
}
testnet_string := ""
if !globals.IsMainnet() {
testnet_string = "\033[31m TESTNET"
}
l.SetPrompt(fmt.Sprintf("\033[1m\033[32mDERO Miner: \033[0m"+color+"Height %d "+pcolor+" BLOCKS %d MiniBlocks %d \033[32mNW %s %s>%s>>\033[0m ", our_height, block_counter, mini_block_counter, hash_rate_string, mining_string, testnet_string))
l.Refresh()
last_our_height = our_height
last_best_height = best_height
}
time.Sleep(1 * time.Second)
}
}()
l.Refresh() // refresh the prompt
go func() {
var gracefulStop = make(chan os.Signal, 1)
signal.Notify(gracefulStop, os.Interrupt) // listen to all signals
for {
sig := <-gracefulStop
fmt.Printf("received signal %s\n", sig)
if sig.String() == "interrupt" {
close(Exit_In_Progress)
}
}
}()
if threads > 255 {
logger.Error(nil, "This program supports a maximum of 255 CPU threads.", "available", threads)
threads = 255
}
go getwork(wallet_address)
for i := 0; i < threads; i++ {
go mineblock(i)
}
for {
line, err := l.Readline()
if err == readline.ErrInterrupt {
if len(line) == 0 {
fmt.Print("Ctrl-C received, Exit in progress\n")
close(Exit_In_Progress)
os.Exit(0)
break
} else {
continue
}
} else if err == io.EOF {
<-Exit_In_Progress
break
}
line = strings.TrimSpace(line)
line_parts := strings.Fields(line)
command := ""
if len(line_parts) >= 1 {
command = strings.ToLower(line_parts[0])
}
switch {
case line == "help":
usage(l.Stderr())
case strings.HasPrefix(line, "say"):
line := strings.TrimSpace(line[3:])
if len(line) == 0 {
fmt.Println("say what?")
break
}
case command == "version":
fmt.Printf("Version %s OS:%s ARCH:%s \n", config.Version.String(), runtime.GOOS, runtime.GOARCH)
case strings.ToLower(line) == "bye":
fallthrough
case strings.ToLower(line) == "exit":
fallthrough
case strings.ToLower(line) == "quit":
close(Exit_In_Progress)
os.Exit(0)
case line == "":
default:
fmt.Println("you said:", strconv.Quote(line))
}
}
<-Exit_In_Progress
return
}
func random_execution(wg *sync.WaitGroup, iterations int) {
var workbuf [255]byte
runtime.LockOSThread()
//threadaffinity()
rand.Read(workbuf[:])
for i := 0; i < iterations; i++ {
_ = pow.Pow(workbuf[:])
}
wg.Done()
runtime.UnlockOSThread()
}
// continuously get work
var connection *websocket.Conn
var connection_mutex sync.Mutex
func getwork(wallet_address string) {
var err error
for {
u := url.URL{Scheme: "wss", Host: daemon_rpc_address, Path: "/ws/" + wallet_address}
logger.Info("connecting to", "url", u.String())
dialer := websocket.DefaultDialer
dialer.TLSClientConfig = &tls.Config{
InsecureSkipVerify: true,
}
connection, _, err = websocket.DefaultDialer.Dial(u.String(), nil)
if err != nil {
logger.Error(err, "Error connecting to server", "server address", daemon_rpc_address)
logger.Info("Will try in 10 secs", "server address", daemon_rpc_address)
time.Sleep(10 * time.Second)
continue
}
var result rpc.GetBlockTemplate_Result
wait_for_another_job:
if err = connection.ReadJSON(&result); err != nil {
logger.Error(err, "connection error")
continue
}
mutex.Lock()
job = result
job_counter++
mutex.Unlock()
if job.LastError != "" {
logger.Error(nil, "received error", "err", job.LastError)
}
block_counter = job.Blocks
mini_block_counter = job.MiniBlocks
hash_rate = job.Difficultyuint64
our_height = int64(job.Height)
Difficulty = job.Difficultyuint64
//fmt.Printf("recv: %s", result)
goto wait_for_another_job
}
}
func mineblock(tid int) {
var diff big.Int
var work [block.MINIBLOCK_SIZE]byte
nonce_buf := work[block.MINIBLOCK_SIZE-5:] // sub-slice shares work's backing array, so writes modify the parent
runtime.LockOSThread()
threadaffinity()
var local_job_counter int64
i := uint32(0)
for {
mutex.RLock()
myjob := job
local_job_counter = job_counter
mutex.RUnlock()
n, err := hex.Decode(work[:], []byte(myjob.Blockhashing_blob))
if err != nil || n != block.MINIBLOCK_SIZE {
logger.Error(err, "Blockwork could not be decoded successfully", "blockwork", myjob.Blockhashing_blob, "n", n, "job", myjob)
time.Sleep(time.Second)
continue
}
work[block.MINIBLOCK_SIZE-1] = byte(tid)
diff.SetString(myjob.Difficulty, 10)
if work[0]&0xf != 1 { // check version
logger.Error(nil, "Unknown version, please check for updates", "version", work[0]&0x1f)
time.Sleep(time.Second)
continue
}
for local_job_counter == job_counter { // update job when it comes, expected rate 1 per second
i++
binary.BigEndian.PutUint32(nonce_buf, i)
powhash := pow.Pow(work[:])
atomic.AddUint64(&counter, 1)
if CheckPowHashBig(powhash, &diff) {
logger.V(1).Info("Successfully found DERO miniblock", "difficulty", myjob.Difficulty, "height", myjob.Height)
func() {
defer globals.Recover(1)
connection_mutex.Lock()
defer connection_mutex.Unlock()
connection.WriteJSON(rpc.SubmitBlock_Params{JobID: myjob.JobID, MiniBlockhashing_blob: fmt.Sprintf("%x", work[:])})
}()
}
}
}
}
func usage(w io.Writer) {
io.WriteString(w, "commands:\n")
io.WriteString(w, "\t\033[1mhelp\033[0m\t\tthis help\n")
io.WriteString(w, "\t\033[1mstatus\033[0m\t\tShow general information\n")
io.WriteString(w, "\t\033[1mbye\033[0m\t\tQuit the miner\n")
io.WriteString(w, "\t\033[1mversion\033[0m\t\tShow version\n")
io.WriteString(w, "\t\033[1mexit\033[0m\t\tQuit the miner\n")
io.WriteString(w, "\t\033[1mquit\033[0m\t\tQuit the miner\n")
}
var completer = readline.NewPrefixCompleter(
readline.PcItem("help"),
readline.PcItem("status"),
readline.PcItem("version"),
readline.PcItem("bye"),
readline.PcItem("exit"),
readline.PcItem("quit"),
)
func filterInput(r rune) (rune, bool) {
switch r {
// block CtrlZ feature
case readline.CharCtrlZ:
return r, false
}
return r, true
}

cmd/dero-miner/thread.go

@@ -0,0 +1,27 @@
//go:build !linux && !windows
// +build !linux,!windows
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
var processor int32
// TODO
func threadaffinity() {
}


@@ -0,0 +1,46 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
import "runtime"
import "sync/atomic"
import "golang.org/x/sys/unix"
var processor int32
// sets thread affinity to avoid cache collision and thread migration
func threadaffinity() {
var cpuset unix.CPUSet
lock_on_cpu := atomic.AddInt32(&processor, 1)
if lock_on_cpu >= int32(runtime.GOMAXPROCS(0)) { // threads are more than cpu, we do not know what to do
return
}
cpuset.Zero()
cpuset.Set(int(avoidHT(int(lock_on_cpu))))
unix.SchedSetaffinity(0, &cpuset)
}
func avoidHT(i int) int {
count := runtime.GOMAXPROCS(0)
if i < count/2 {
return i * 2
} else {
return (i-count/2)*2 + 1
}
}
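The avoidHT mapping above assumes logical CPUs are enumerated with hyperthread siblings adjacent, i.e. (0,1), (2,3), …, which is common but not universal. Under that assumption it pins the first half of the workers to even CPUs (distinct physical cores) and only then fills the odd siblings. A standalone sketch of the mapping for 8 logical CPUs:

```go
package main

import "fmt"

// avoidHT spreads worker i across physical cores first: the first count/2
// workers get even logical CPUs, the rest get the odd hyperthread siblings.
func avoidHT(i, count int) int {
	if i < count/2 {
		return i * 2
	}
	return (i-count/2)*2 + 1
}

func main() {
	for i := 0; i < 8; i++ {
		fmt.Print(avoidHT(i, 8), " ")
	}
	fmt.Println() // 0 2 4 6 1 3 5 7
}
```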


@@ -0,0 +1,85 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
import "runtime"
import "sync/atomic"
import "syscall"
import "unsafe"
import "math/bits"
var libkernel32 uintptr
var setThreadAffinityMask uintptr
func doLoadLibrary(name string) uintptr {
lib, _ := syscall.LoadLibrary(name)
return uintptr(lib)
}
func doGetProcAddress(lib uintptr, name string) uintptr {
addr, _ := syscall.GetProcAddress(syscall.Handle(lib), name)
return uintptr(addr)
}
func syscall3(trap, nargs, a1, a2, a3 uintptr) uintptr {
ret, _, _ := syscall.Syscall(trap, nargs, a1, a2, a3)
return ret
}
func init() {
libkernel32 = doLoadLibrary("kernel32.dll")
setThreadAffinityMask = doGetProcAddress(libkernel32, "SetThreadAffinityMask")
}
var processor int32
// currently we support up to 64 cores
func SetThreadAffinityMask(hThread syscall.Handle, dwThreadAffinityMask uint) *uint32 {
ret1 := syscall3(setThreadAffinityMask, 2,
uintptr(hThread),
uintptr(dwThreadAffinityMask),
0)
return (*uint32)(unsafe.Pointer(ret1))
}
// CurrentThread returns the handle for the current thread.
// It is a pseudo handle that does not need to be closed.
func CurrentThread() syscall.Handle { return syscall.Handle(^uintptr(2 - 1)) }
// sets thread affinity to avoid cache collision and thread migration
func threadaffinity() {
lock_on_cpu := atomic.AddInt32(&processor, 1)
if lock_on_cpu >= int32(runtime.GOMAXPROCS(0)) { // more threads than CPUs; leave affinity unset
return
}
if lock_on_cpu >= bits.UintSize {
return
}
var cpuset uint
cpuset = 1 << uint(avoidHT(int(lock_on_cpu)))
SetThreadAffinityMask(CurrentThread(), cpuset)
}
func avoidHT(i int) int {
count := runtime.GOMAXPROCS(0)
if i < count/2 {
return i * 2
} else {
return (i-count/2)*2 + 1
}
}
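avoidHT spreads the first half of the workers across even-numbered logical CPUs (the physical cores on a typical 2-way SMT layout) and the remainder across their odd-numbered siblings. A standalone sketch of the same mapping:

```go
package main

import "fmt"

// avoidHTMapping mirrors avoidHT above: with count logical CPUs,
// worker i is pinned to an even CPU first (a distinct physical core
// on a common 2-way SMT layout), then to the odd HT siblings.
func avoidHTMapping(i, count int) int {
	if i < count/2 {
		return i * 2
	}
	return (i-count/2)*2 + 1
}

func main() {
	// with 8 logical CPUs, workers 0..7 map to: 0 2 4 6 1 3 5 7
	for i := 0; i < 8; i++ {
		fmt.Print(avoidHTMapping(i, 8), " ")
	}
	fmt.Println()
}
```

The affinity mask used by threadaffinity is then simply `1 << avoidHTMapping(i, count)`.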

View File

@ -0,0 +1,90 @@
RESEARCH LICENSE
Version 1.1.2
I. DEFINITIONS.
"Licensee" means You and any other party that has entered into and has in effect a version of this License.
"Licensor" means DERO PROJECT (GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8) and its successors and assignees.
"Modifications" means any (a) change or addition to the Technology or (b) new source or object code implementing any portion of the Technology.
"Research Use" means research, evaluation, or development for the purpose of advancing knowledge, teaching, learning, or customizing the Technology for personal use. Research Use expressly excludes use or distribution for direct or indirect commercial (including strategic) gain or advantage.
"Technology" means the source code, object code and specifications of the technology made available by Licensor pursuant to this License.
"Technology Site" means the website designated by Licensor for accessing the Technology.
"You" means the individual executing this License or the legal entity or entities represented by the individual executing this License.
II. PURPOSE.
Licensor is licensing the Technology under this Research License (the "License") to promote research, education, innovation, and development using the Technology.
COMMERCIAL USE AND DISTRIBUTION OF TECHNOLOGY AND MODIFICATIONS IS PERMITTED ONLY UNDER AN APPROPRIATE COMMERCIAL USE LICENSE AVAILABLE FROM LICENSOR AT <url>.
III. RESEARCH USE RIGHTS.
A. Subject to the conditions contained herein, Licensor grants to You a non-exclusive, non-transferable, worldwide, and royalty-free license to do the following for Your Research Use only:
1. reproduce, create Modifications of, and use the Technology alone, or with Modifications;
2. share source code of the Technology alone, or with Modifications, with other Licensees;
3. distribute object code of the Technology, alone, or with Modifications, to any third parties for Research Use only, under a license of Your choice that is consistent with this License; and
4. publish papers and books discussing the Technology which may include relevant excerpts that do not in the aggregate constitute a significant portion of the Technology.
B. Residual Rights. You may use any information in intangible form that you remember after accessing the Technology, except when such use violates Licensor's copyrights or patent rights.
C. No Implied Licenses. Other than the rights granted herein, Licensor retains all rights, title, and interest in the Technology, and You retain all rights, title, and interest in Your Modifications and associated specifications, subject to the terms of this License.
D. Open Source Licenses. Portions of the Technology may be provided with notices and open source licenses from open source communities and third parties that govern the use of those portions, and any licenses granted hereunder do not alter any rights and obligations you may have under such open source licenses, however, the disclaimer of warranty and limitation of liability provisions in this License will apply to all Technology in this distribution.
IV. INTELLECTUAL PROPERTY REQUIREMENTS
As a condition to Your License, You agree to comply with the following restrictions and responsibilities:
A. License and Copyright Notices. You must include a copy of this License in a Readme file for any Technology or Modifications you distribute. You must also include the following statement, "Use and distribution of this technology is subject to the Research License included herein", (a) once prominently in the source code tree and/or specifications for Your source code distributions, and (b) once in the same file as Your copyright or proprietary notices for Your binary code distributions. You must cause any files containing Your Modification to carry prominent notice stating that You changed the files. You must not remove or alter any copyright or other proprietary notices in the Technology.
B. Licensee Exchanges. Any Technology and Modifications You receive from any Licensee are governed by this License.
V. GENERAL TERMS.
A. Disclaimer Of Warranties.
TECHNOLOGY IS PROVIDED "AS IS", WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, WARRANTIES THAT ANY SUCH TECHNOLOGY IS FREE OF DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR PURPOSE, OR NON-INFRINGING OF THIRD PARTY RIGHTS. YOU AGREE THAT YOU BEAR THE ENTIRE RISK IN CONNECTION WITH YOUR USE AND DISTRIBUTION OF ANY AND ALL TECHNOLOGY UNDER THIS LICENSE.
B. Infringement; Limitation Of Liability.
1. If any portion of, or functionality implemented by, the Technology becomes the subject of a claim or threatened claim of infringement ("Affected Materials"), Licensor may, in its unrestricted discretion, suspend Your rights to use and distribute the Affected Materials under this License. Such suspension of rights will be effective immediately upon Licensor's posting of notice of suspension on the Technology Site.
2. IN NO EVENT WILL LICENSOR BE LIABLE FOR ANY DIRECT, INDIRECT, PUNITIVE, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH OR ARISING OUT OF THIS LICENSE (INCLUDING, WITHOUT LIMITATION, LOSS OF PROFITS, USE, DATA, OR ECONOMIC ADVANTAGE OF ANY SORT), HOWEVER IT ARISES AND ON ANY THEORY OF LIABILITY (including negligence), WHETHER OR NOT LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. LIABILITY UNDER THIS SECTION V.B.2 SHALL BE SO LIMITED AND EXCLUDED, NOTWITHSTANDING FAILURE OF THE ESSENTIAL PURPOSE OF ANY REMEDY.
C. Termination.
1. You may terminate this License at any time by notifying Licensor in writing.
2. All Your rights will terminate under this License if You fail to comply with any of its material terms or conditions and do not cure such failure within thirty (30) days after becoming aware of such noncompliance.
3. Upon termination, You must discontinue all uses and distribution of the Technology, and all provisions of this Section V shall survive termination.
D. Miscellaneous.
1. Trademark. You agree to comply with Licensor's Trademark & Logo Usage Requirements, if any and as modified from time to time, available at the Technology Site. Except as expressly provided in this License, You are granted no rights in or to any Licensor's trademarks now or hereafter used or licensed by Licensor.
2. Integration. This License represents the complete agreement of the parties concerning the subject matter hereof.
3. Severability. If any provision of this License is held unenforceable, such provision shall be reformed to the extent necessary to make it enforceable unless to do so would defeat the intent of the parties, in which case, this License shall terminate.
4. Governing Law. This License is governed by the laws of the United States and the State of California, as applied to contracts entered into and performed in California between California residents. In no event shall this License be construed against the drafter.
5. Export Control. You agree to comply with the U.S. export controls and trade laws of other countries that apply to Technology and Modifications.
READ ALL THE TERMS OF THIS LICENSE CAREFULLY BEFORE ACCEPTING.
BY CLICKING ON THE YES BUTTON BELOW OR USING THE TECHNOLOGY, YOU ARE ACCEPTING AND AGREEING TO ABIDE BY THE TERMS AND CONDITIONS OF THIS LICENSE. YOU MUST BE AT LEAST 18 YEARS OF AGE AND OTHERWISE COMPETENT TO ENTER INTO CONTRACTS.
IF YOU DO NOT MEET THESE CRITERIA, OR YOU DO NOT AGREE TO ANY OF THE TERMS OF THIS LICENSE, DO NOT USE THIS SOFTWARE IN ANY FORM.

View File

@ -0,0 +1,13 @@
// Copyright 2017-2018 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
package main
import "testing"
// TODO: expand tests to cover failure conditions
func Test_Part1(t *testing.T) {
}

View File

@ -0,0 +1,438 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
import "io"
import "time"
import "fmt"
//import "io/ioutil"
import "strings"
//import "path/filepath"
//import "encoding/hex"
import "github.com/chzyer/readline"
import "github.com/deroproject/derohe/rpc"
import "github.com/deroproject/derohe/globals"
//import "github.com/deroproject/derohe/address"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/transaction"
// handle menu if a wallet is currently opened
func display_easymenu_post_open_command(l *readline.Instance) {
w := l.Stderr()
io.WriteString(w, "Menu:\n")
io.WriteString(w, "\t\033[1m1\033[0m\tDisplay account Address \n")
io.WriteString(w, "\t\033[1m2\033[0m\tDisplay Seed "+color_red+"(Please save seed in safe location)\n\033[0m")
io.WriteString(w, "\t\033[1m3\033[0m\tDisplay Keys (hex)\n")
if !wallet.IsRegistered() {
io.WriteString(w, "\t\033[1m4\033[0m\tAccount registration to blockchain (registration has no fee and is a precondition to using the account)\n")
io.WriteString(w, "\n")
io.WriteString(w, "\n")
} else { // hide some commands, if view only wallet
io.WriteString(w, "\t\033[1m4\033[0m\tDisplay wallet pool\n")
io.WriteString(w, "\t\033[1m5\033[0m\tTransfer (send DERO) to Another Wallet\n")
io.WriteString(w, "\t\033[1m6\033[0m\tToken transfer to another wallet\n")
io.WriteString(w, "\n")
}
io.WriteString(w, "\t\033[1m7\033[0m\tChange wallet password\n")
io.WriteString(w, "\t\033[1m8\033[0m\tClose Wallet\n")
if wallet.IsRegistered() {
io.WriteString(w, "\t\033[1m12\033[0m\tTransfer all balance (send DERO) To Another Wallet\n")
io.WriteString(w, "\t\033[1m13\033[0m\tShow transaction history\n")
io.WriteString(w, "\t\033[1m14\033[0m\tRescan transaction history\n")
}
io.WriteString(w, "\n\t\033[1m9\033[0m\tExit menu and start prompt\n")
io.WriteString(w, "\t\033[1m0\033[0m\tExit Wallet\n")
}
// this handles all the commands if wallet in menu mode and a wallet is opened
func handle_easymenu_post_open_command(l *readline.Instance, line string) (processed bool) {
var err error
_ = err
line = strings.TrimSpace(line)
line_parts := strings.Fields(line)
processed = true
if len(line_parts) < 1 { // if no command return
return
}
command := ""
if len(line_parts) >= 1 {
command = strings.ToLower(line_parts[0])
}
offline_tx := false
_ = offline_tx
switch command {
case "1":
fmt.Fprintf(l.Stderr(), "Wallet address : "+color_green+"%s"+color_white+"\n", wallet.GetAddress())
if !wallet.IsRegistered() {
reg_tx := wallet.GetRegistrationTX()
fmt.Fprintf(l.Stderr(), "Registration TX : "+color_green+"%x"+color_white+"\n", reg_tx.Serialize())
}
PressAnyKey(l, wallet)
case "2": // give user his seed
if !ValidateCurrentPassword(l, wallet) {
logger.Error(fmt.Errorf("Invalid password"), "")
PressAnyKey(l, wallet)
break
}
display_seed(l, wallet) // seed should be given only to authenticated users
PressAnyKey(l, wallet)
case "3": // give user his keys in hex form
if !ValidateCurrentPassword(l, wallet) {
logger.Error(fmt.Errorf("Invalid password"), "")
PressAnyKey(l, wallet)
break
}
display_spend_key(l, wallet)
PressAnyKey(l, wallet)
case "4": // Registration
if !wallet.IsRegistered() {
fmt.Fprintf(l.Stderr(), "Wallet address : "+color_green+"%s"+color_white+" is going to be registered. This is a pre-condition for using the online chain and takes a few seconds to complete.\n", wallet.GetAddress())
reg_tx := wallet.GetRegistrationTX()
// at this point we must send the registration transaction
fmt.Fprintf(l.Stderr(), "Registration TXID %s\n", reg_tx.GetHash())
err := wallet.SendTransaction(reg_tx)
if err != nil {
fmt.Fprintf(l.Stderr(), "sending registration tx err %s\n", err)
} else {
fmt.Fprintf(l.Stderr(), "registration tx dispatched successfully\n")
}
}
case "6":
if !valid_registration_or_display_error(l, wallet) {
break
}
if !ValidateCurrentPassword(l, wallet) {
logger.Error(fmt.Errorf("Invalid password"), "")
break
}
scid, err := ReadSCID(l)
if err != nil {
logger.Error(err, "error reading SCID")
break
}
a, err := ReadAddress(l, wallet)
if err != nil {
logger.Error(err, "error reading address")
break
}
var amount_to_transfer uint64
amount_str := read_line_with_prompt(l, fmt.Sprintf("Enter token amount to transfer in SCID (max TODO): "))
if amount_str == "" {
amount_str = ".00001"
}
amount_to_transfer, err = globals.ParseAmount(amount_str)
if err != nil {
logger.Error(err, "Err parsing amount")
break // invalid amount provided, bail out
}
if ConfirmYesNoDefaultNo(l, "Confirm Transaction (y/N)") {
tx, err := wallet.TransferPayload0([]rpc.Transfer{rpc.Transfer{SCID: scid, Amount: amount_to_transfer, Destination: a.String()}}, 0, false, rpc.Arguments{}, false) // empty SCDATA
if err != nil {
logger.Error(err, "Error while building Transaction")
break
}
if err = wallet.SendTransaction(tx); err != nil {
logger.Error(err, "Error while dispatching Transaction")
break
}
logger.Info("Dispatched tx", "txid", tx.GetHash().String())
}
case "5":
if !valid_registration_or_display_error(l, wallet) {
break
}
if !ValidateCurrentPassword(l, wallet) {
logger.Error(fmt.Errorf("Invalid password"), "")
break
}
// a , amount_to_transfer, err := collect_transfer_info(l,wallet)
a, err := ReadAddress(l, wallet)
if err != nil {
logger.Error(err, "error reading address")
break
}
var amount_to_transfer uint64
var arguments = rpc.Arguments{
// { rpc.RPC_DESTINATION_PORT, rpc.DataUint64,uint64(0x1234567812345678)},
// { rpc.RPC_VALUE_TRANSFER, rpc.DataUint64,uint64(12345)},
// { rpc.RPC_EXPIRY , rpc.DataTime, time.Now().Add(time.Hour).UTC()},
// { rpc.RPC_COMMENT , rpc.DataString, "Purchase XYZ"},
}
if a.IsIntegratedAddress() { // read everything from the address
if a.Arguments.Validate_Arguments() != nil {
logger.Error(err, "Integrated Address arguments could not be validated.")
break
}
if !a.Arguments.Has(rpc.RPC_DESTINATION_PORT, rpc.DataUint64) { // destination port must be present
logger.Error(fmt.Errorf("Integrated Address does not contain destination port."), "")
break
}
arguments = append(arguments, rpc.Argument{Name: rpc.RPC_DESTINATION_PORT, DataType: rpc.DataUint64, Value: a.Arguments.Value(rpc.RPC_DESTINATION_PORT, rpc.DataUint64).(uint64)})
// arguments = append(arguments, rpc.Argument{"Comment", rpc.DataString, "holygrail of all data is now working if you can see this"})
if a.Arguments.Has(rpc.RPC_EXPIRY, rpc.DataTime) { // only if it is present
if a.Arguments.Value(rpc.RPC_EXPIRY, rpc.DataTime).(time.Time).Before(time.Now().UTC()) {
logger.Error(nil, "This address has expired.", "expiry time", a.Arguments.Value(rpc.RPC_EXPIRY, rpc.DataTime))
break
} else {
logger.Info("This address will expire ", "expiry time", a.Arguments.Value(rpc.RPC_EXPIRY, rpc.DataTime))
}
}
logger.Info("Destination port is integrated in address.", "dst port", a.Arguments.Value(rpc.RPC_DESTINATION_PORT, rpc.DataUint64).(uint64))
if a.Arguments.Has(rpc.RPC_COMMENT, rpc.DataString) { // only if it is present
logger.Info("Integrated Message", "comment", a.Arguments.Value(rpc.RPC_COMMENT, rpc.DataString))
}
}
// arguments have been already validated
for _, arg := range a.Arguments {
if !(arg.Name == rpc.RPC_COMMENT || arg.Name == rpc.RPC_EXPIRY || arg.Name == rpc.RPC_DESTINATION_PORT || arg.Name == rpc.RPC_SOURCE_PORT || arg.Name == rpc.RPC_VALUE_TRANSFER || arg.Name == rpc.RPC_NEEDS_REPLYBACK_ADDRESS) {
switch arg.DataType {
case rpc.DataString:
if v, err := ReadString(l, arg.Name, arg.Value.(string)); err == nil {
arguments = append(arguments, rpc.Argument{Name: arg.Name, DataType: arg.DataType, Value: v})
} else {
logger.Error(fmt.Errorf("%s could not be parsed (type %s),", arg.Name, arg.DataType), "")
return
}
case rpc.DataInt64:
if v, err := ReadInt64(l, arg.Name, arg.Value.(int64)); err == nil {
arguments = append(arguments, rpc.Argument{Name: arg.Name, DataType: arg.DataType, Value: v})
} else {
logger.Error(fmt.Errorf("%s could not be parsed (type %s),", arg.Name, arg.DataType), "")
return
}
case rpc.DataUint64:
if v, err := ReadUint64(l, arg.Name, arg.Value.(uint64)); err == nil {
arguments = append(arguments, rpc.Argument{Name: arg.Name, DataType: arg.DataType, Value: v})
} else {
logger.Error(fmt.Errorf("%s could not be parsed (type %s),", arg.Name, arg.DataType), "")
return
}
case rpc.DataFloat64:
if v, err := ReadFloat64(l, arg.Name, arg.Value.(float64)); err == nil {
arguments = append(arguments, rpc.Argument{Name: arg.Name, DataType: arg.DataType, Value: v})
} else {
logger.Error(fmt.Errorf("%s could not be parsed (type %s),", arg.Name, arg.DataType), "")
return
}
case rpc.DataTime:
logger.Error(fmt.Errorf("time argument is currently not supported."), "")
break
}
}
}
if a.Arguments.Has(rpc.RPC_VALUE_TRANSFER, rpc.DataUint64) { // only if it is present
logger.Info("Transaction", "Value", globals.FormatMoney(a.Arguments.Value(rpc.RPC_VALUE_TRANSFER, rpc.DataUint64).(uint64)))
amount_to_transfer = a.Arguments.Value(rpc.RPC_VALUE_TRANSFER, rpc.DataUint64).(uint64)
} else {
amount_str := read_line_with_prompt(l, fmt.Sprintf("Enter amount to transfer in DERO (max TODO): "))
if amount_str == "" {
amount_str = ".00001"
}
amount_to_transfer, err = globals.ParseAmount(amount_str)
if err != nil {
logger.Error(err, "Err parsing amount")
break // invalid amount provided, bail out
}
}
// check whether the service needs the address of sender
// this is required to enable services which are completely invisible to external entities
// external entities means anyone except sender/receiver
if a.Arguments.Has(rpc.RPC_NEEDS_REPLYBACK_ADDRESS, rpc.DataUint64) {
logger.Info("This RPC has requested your address.")
logger.Info("If you are expecting something back, your address needs to be sent")
logger.Info("Your address will remain completely invisible to external entities (only sender/receiver can see your address)")
arguments = append(arguments, rpc.Argument{Name: rpc.RPC_REPLYBACK_ADDRESS, DataType: rpc.DataAddress, Value: wallet.GetAddress()})
}
// if no arguments, use space by embedding a small comment
if len(arguments) == 0 { // allow user to enter Comment
if v, err := ReadString(l, "Comment", ""); err == nil {
arguments = append(arguments, rpc.Argument{Name: "Comment", DataType: rpc.DataString, Value: v})
} else {
logger.Error(fmt.Errorf("%s could not be parsed (type %s),", "Comment", rpc.DataString), "")
return
}
}
if _, err := arguments.CheckPack(transaction.PAYLOAD0_LIMIT); err != nil {
logger.Error(err, "Arguments packing err")
return
}
if ConfirmYesNoDefaultNo(l, "Confirm Transaction (y/N)") {
//src_port := uint64(0xffffffffffffffff)
tx, err := wallet.TransferPayload0([]rpc.Transfer{rpc.Transfer{Amount: amount_to_transfer, Destination: a.String(), Payload_RPC: arguments}}, 0, false, rpc.Arguments{}, false) // empty SCDATA
if err != nil {
logger.Error(err, "Error while building Transaction")
break
}
if err = wallet.SendTransaction(tx); err != nil {
logger.Error(err, "Error while dispatching Transaction")
break
}
logger.Info("Dispatched tx", "txid", tx.GetHash().String())
//fmt.Printf("queued tx err %s\n")
}
case "12":
if !valid_registration_or_display_error(l, wallet) {
break
}
if !ValidateCurrentPassword(l, wallet) {
logger.Error(fmt.Errorf("Invalid password"), "")
break
}
logger.Error(err, "Not supported ")
/*
// a , amount_to_transfer, err := collect_transfer_info(l,wallet)
fmt.Printf("dest address %s\n", "deroi1qxqqkmaz8nhv4q07w3cjyt84kmrqnuw4nprpqfl9xmmvtvwa7cdykxq5dph4ufnx5ndq4ltraf (14686f5e2666a4da) dero1qxqqkmaz8nhv4q07w3cjyt84kmrqnuw4nprpqfl9xmmvtvwa7cdykxqpfpaes")
a, err := ReadAddress(l)
if err != nil {
globals.Logger.Warnf("Err :%s", err)
break
}
// if user provided an integrated address donot ask him payment id
if a.IsIntegratedAddress() {
globals.Logger.Infof("Payment ID is integrated in address ID:%x", a.PaymentID)
}
if ConfirmYesNoDefaultNo(l, "Confirm Transaction to send entire balance (y/N)") {
addr_list := []address.Address{*a}
amount_list := []uint64{0} // 0 means transfer entire balance
fees_per_kb := uint64(0) // fees must be calculated by walletapi
uid, err := wallet.PoolTransfer(addr_list, amount_list, fees_per_kb, 0, true)
_ = uid
if err != nil {
globals.Logger.Warnf("Error while building Transaction err %s\n", err)
break
}
}
*/
//PressAnyKey(l, wallet) // wait for a key press
case "7": // change password
if ConfirmYesNoDefaultNo(l, "Change wallet password (y/N)") &&
ValidateCurrentPassword(l, wallet) {
new_password := ReadConfirmedPassword(l, "Enter new password", "Confirm password")
err = wallet.Set_Encrypted_Wallet_Password(new_password)
if err == nil {
logger.Info("Wallet password successfully changed")
} else {
logger.Error(err, "Wallet password could not be changed")
}
}
case "8": // close and discard user key
wallet.Close_Encrypted_Wallet()
prompt_mutex.Lock()
wallet = nil // overwrite previous instance
prompt_mutex.Unlock()
fmt.Fprintf(l.Stderr(), color_yellow+"Wallet closed"+color_white)
case "9": // enable prompt mode
menu_mode = false
logger.Info("Prompt mode enabled, type \"menu\" command to start menu mode")
case "0", "bye", "exit", "quit":
wallet.Close_Encrypted_Wallet() // save the wallet
prompt_mutex.Lock()
wallet = nil
globals.Exit_In_Progress = true
prompt_mutex.Unlock()
fmt.Fprintf(l.Stderr(), color_yellow+"Wallet closed"+color_white)
fmt.Fprintf(l.Stderr(), color_yellow+"Exiting"+color_white)
case "13":
var zeroscid crypto.Hash
show_transfers(l, wallet, zeroscid, 100)
case "14":
logger.Info("Rescanning wallet history")
rescan_bc(wallet)
default:
processed = false // just loop
}
return
}
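Both transfer paths above read the amount as a decimal string and convert it with globals.ParseAmount. Below is a self-contained sketch of such a conversion, assuming 5 decimal places (the ".00001" fallback above suggests that is the smallest unit); parseAmount is a hypothetical stand-in, not the derohe implementation:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// parseAmount converts a decimal DERO amount string into atomic
// units, assuming 5 decimal places (so ".00001" is one atomic unit).
func parseAmount(s string) (uint64, error) {
	const decimals = 5
	parts := strings.SplitN(s, ".", 2)
	whole, frac := parts[0], ""
	if len(parts) == 2 {
		frac = parts[1]
	}
	if len(frac) > decimals {
		return 0, errors.New("too many decimal places")
	}
	frac += strings.Repeat("0", decimals-len(frac)) // right-pad the fraction
	var v uint64
	for _, c := range whole + frac {
		if c < '0' || c > '9' {
			return 0, errors.New("invalid character in amount")
		}
		v = v*10 + uint64(c-'0')
	}
	return v, nil
}

func main() {
	v, _ := parseAmount(".00001")
	fmt.Println(v) // 1
}
```

Under this assumption "1.5" parses to 150000 atomic units and anything with more than 5 fractional digits is rejected.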

View File

@ -0,0 +1,250 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
import "io"
import "fmt"
import "time"
import "strings"
import "encoding/hex"
import "github.com/chzyer/readline"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/globals"
import "github.com/deroproject/derohe/walletapi"
import "github.com/deroproject/derohe/walletapi/rpcserver"
// display menu before a wallet is opened
func display_easymenu_pre_open_command(l *readline.Instance) {
w := l.Stderr()
io.WriteString(w, "Menu:\n")
io.WriteString(w, "\t\033[1m1\033[0m\tOpen existing Wallet\n")
io.WriteString(w, "\t\033[1m2\033[0m\tCreate New Wallet\n")
io.WriteString(w, "\t\033[1m3\033[0m\tRecover Wallet using recovery seed (25 words)\n")
io.WriteString(w, "\t\033[1m4\033[0m\tRecover Wallet using recovery key (64 char private spend key hex)\n")
io.WriteString(w, "\n\t\033[1m9\033[0m\tExit menu and start prompt\n")
io.WriteString(w, "\t\033[1m0\033[0m\tExit Wallet\n")
}
// handle all commands
func handle_easymenu_pre_open_command(l *readline.Instance, line string) {
var err error
line = strings.TrimSpace(line)
line_parts := strings.Fields(line)
if len(line_parts) < 1 { // if no command return
return
}
command := ""
if len(line_parts) >= 1 {
command = strings.ToLower(line_parts[0])
}
var wallett *walletapi.Wallet_Disk
//account_state := account_valid
switch command {
case "1": // open existing wallet
filename := choose_file_name(l)
// ask user a password
for i := 0; i < 3; i++ {
wallett, err = walletapi.Open_Encrypted_Wallet(filename, ReadPassword(l, filename))
if err != nil {
logger.Error(err, "Error occurred while opening wallet file", "filename", filename)
wallet = nil
break
} else { // user knows the password and the db is valid
break
}
}
if wallett != nil {
wallet = wallett
wallett = nil
logger.Info("Successfully opened wallet")
common_processing(wallet)
}
case "2": // create a new random account
filename := choose_file_name(l)
password := ReadConfirmedPassword(l, "Enter password", "Confirm password")
wallett, err = walletapi.Create_Encrypted_Wallet_Random(filename, password)
if err != nil {
logger.Error(err, "Error occurred while creating wallet file", "filename", filename)
wallet = nil
break
}
err = wallett.Set_Encrypted_Wallet_Password(password)
if err != nil {
logger.Error(err, "Error changing password")
}
wallet = wallett
wallett = nil
seed_language := choose_seed_language(l)
wallet.SetSeedLanguage(seed_language)
logger.V(1).Info("Seed", "Language", seed_language)
display_seed(l, wallet)
common_processing(wallet)
case "3": // create wallet from recovery words
filename := choose_file_name(l)
password := ReadConfirmedPassword(l, "Enter password", "Confirm password")
electrum_words := read_line_with_prompt(l, "Enter seed (25 words) : ")
wallett, err = walletapi.Create_Encrypted_Wallet_From_Recovery_Words(filename, password, electrum_words)
if err != nil {
logger.Error(err, "Error while recovering wallet using seed.")
break
}
wallet = wallett
wallett = nil
//globals.Logger.Debugf("Seed Language %s", account.SeedLanguage)
logger.Info("Successfully recovered wallet from seed")
common_processing(wallet)
case "4": // create wallet from hex seed
filename := choose_file_name(l)
password := ReadConfirmedPassword(l, "Enter password", "Confirm password")
seed_key_string := read_line_with_prompt(l, "Please enter your seed ( hex 64 chars): ")
seed_raw, err := hex.DecodeString(seed_key_string) // hex decode
if len(seed_key_string) >= 65 || err != nil { // sanity check
logger.Error(err, "Seed must be at most 64 hexadecimal characters")
break
}
wallett, err = walletapi.Create_Encrypted_Wallet(filename, password, new(crypto.BNRed).SetBytes(seed_raw))
if err != nil {
logger.Error(err, "Error while recovering wallet using seed key")
break
}
logger.Info("Successfully recovered wallet from hex seed")
wallet = wallett
wallett = nil
seed_language := choose_seed_language(l)
wallet.SetSeedLanguage(seed_language)
logger.V(1).Info("Seed", "Language", seed_language)
display_seed(l, wallet)
common_processing(wallet)
/*
case "5": // create new view only wallet // TODO user providing wrong key is not being validated, do it ASAP
filename := choose_file_name(l)
view_key_string := read_line_with_prompt(l, "Please enter your View Only Key ( hex 128 chars): ")
password := ReadConfirmedPassword(l, "Enter password", "Confirm password")
wallet, err = walletapi.Create_Encrypted_Wallet_ViewOnly(filename, password, view_key_string)
if err != nil {
globals.Logger.Warnf("Error while reconstructing view only wallet using view key err %s\n", err)
break
}
if globals.Arguments["--offline"].(bool) == true {
//offline_mode = true
} else {
wallet.SetOnlineMode()
}
case "6": // create non deterministic wallet // TODO user providing wrong key is not being validated, do it ASAP
filename := choose_file_name(l)
spend_key_string := read_line_with_prompt(l, "Please enter your Secret spend key ( hex 64 chars): ")
view_key_string := read_line_with_prompt(l, "Please enter your Secret view key ( hex 64 chars): ")
password := ReadConfirmedPassword(l, "Enter password", "Confirm password")
wallet, err = walletapi.Create_Encrypted_Wallet_NonDeterministic(filename, password, spend_key_string,view_key_string)
if err != nil {
globals.Logger.Warnf("Error while reconstructing view only wallet using view key err %s\n", err)
break
}
if globals.Arguments["--offline"].(bool) == true {
//offline_mode = true
} else {
wallet.SetOnlineMode()
}
*/
case "9":
menu_mode = false
logger.Info("Prompt mode enabled")
case "0", "bye", "exit", "quit":
globals.Exit_In_Progress = true
default: // just loop
}
//_ = account_state
// NOTE: if we are in online mode, it is handled automatically
// user opened or created a new account
// rescan blockchain in offline mode
//if account_state == false && account_valid && offline_mode {
// go trigger_offline_data_scan()
//}
}
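Option 4 accepts the recovery key only if it decodes as hex and is at most 64 characters. That sanity check can be isolated as follows (decodeSeedKey is a hypothetical helper, not part of walletapi):

```go
package main

import (
	"encoding/hex"
	"errors"
	"fmt"
)

// decodeSeedKey validates a hex-encoded spend key: it must be at
// most 64 hex characters (32 bytes) and decode cleanly.
func decodeSeedKey(s string) ([]byte, error) {
	if len(s) >= 65 { // same bound the menu enforces
		return nil, errors.New("seed must be at most 64 hex characters")
	}
	raw, err := hex.DecodeString(s)
	if err != nil {
		return nil, err
	}
	return raw, nil
}

func main() {
	raw, err := decodeSeedKey("deadbeef")
	fmt.Println(len(raw), err) // 4 <nil>
}
```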
// sets online mode, starts RPC server etc
func common_processing(wallet *walletapi.Wallet_Disk) {
if globals.Arguments["--offline"].(bool) == true {
//offline_mode = true
} else {
wallet.SetOnlineMode()
}
wallet.SetNetwork(!globals.Arguments["--testnet"].(bool))
// start rpc server if requested
if globals.Arguments["--rpc-server"].(bool) == true {
rpc_address := "127.0.0.1:" + fmt.Sprintf("%d", config.Mainnet.Wallet_RPC_Default_Port)
if !globals.IsMainnet() {
rpc_address = "127.0.0.1:" + fmt.Sprintf("%d", config.Testnet.Wallet_RPC_Default_Port)
}
if globals.Arguments["--rpc-bind"] != nil {
rpc_address = globals.Arguments["--rpc-bind"].(string)
}
logger.Info("Starting RPC server", "address", rpc_address)
if _, err := rpcserver.RPCServer_Start(wallet, "walletrpc"); err != nil {
logger.Error(err, "Error starting rpc server")
}
}
time.Sleep(time.Second)
// init_script_engine(wallet) // init script engine
// init_plugins_engine(wallet) // init script engine
}
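common_processing picks the RPC bind address from the active network's default wallet port unless --rpc-bind overrides it. The same selection, isolated as a pure function (the ports are passed in as parameters; the values in the usage example are illustrative, not DERO's configured defaults):

```go
package main

import "fmt"

// rpcBindAddress mirrors the selection in common_processing: default
// to the network's wallet RPC port on localhost, unless an explicit
// --rpc-bind value was supplied.
func rpcBindAddress(mainnet bool, mainnetPort, testnetPort int, override string) string {
	if override != "" {
		return override
	}
	port := mainnetPort
	if !mainnet {
		port = testnetPort
	}
	return fmt.Sprintf("127.0.0.1:%d", port)
}

func main() {
	fmt.Println(rpcBindAddress(true, 10103, 40403, ""))             // mainnet default
	fmt.Println(rpcBindAddress(false, 10103, 40403, ""))            // testnet default
	fmt.Println(rpcBindAddress(true, 10103, 40403, "0.0.0.0:9999")) // explicit bind
}
```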

606
cmd/dero-wallet-cli/main.go Normal file
View File

@ -0,0 +1,606 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
// this file implements the cli wallet and the rpc wallet
import "io"
import "os"
import "fmt"
import "time"
import "sync"
import "strings"
import "strconv"
import "runtime"
import "sync/atomic"
//import "io/ioutil"
//import "bufio"
//import "bytes"
//import "net/http"
import "github.com/go-logr/logr"
import "github.com/chzyer/readline"
import "github.com/docopt/docopt-go"
//import "github.com/vmihailenco/msgpack"
//import "github.com/deroproject/derosuite/address"
import "github.com/deroproject/derohe/config"
//import "github.com/deroproject/derohe/crypto"
import "github.com/deroproject/derohe/globals"
import "github.com/deroproject/derohe/walletapi"
import "github.com/deroproject/derohe/walletapi/mnemonics"
//import "encoding/json"
var command_line string = `dero-wallet-cli
DERO : A secure, private blockchain with smart-contracts
Usage:
dero-wallet-cli [options]
dero-wallet-cli -h | --help
dero-wallet-cli --version
Options:
-h --help Show this screen.
--version Show version.
--wallet-file=<file> Use this file to restore or create new wallet
--password=<password> Use this password to unlock the wallet
--offline Run the wallet in completely offline mode
--offline_datafile=<file> Use the data in offline mode default ("getoutputs.bin") in current dir
--prompt Disable menu and display prompt
--testnet Run in testnet mode.
--debug Debug mode enabled, print log messages
--unlocked Keep wallet unlocked for cli commands (Does not confirm password before commands)
--generate-new-wallet Generate new wallet
--restore-deterministic-wallet Restore wallet from previously saved recovery seed
--electrum-seed=<recovery-seed> Seed to use while restoring wallet
--socks-proxy=<socks_ip:port> Use a proxy to connect to Daemon.
--remote use hard coded remote daemon https://rwallet.dero.live
--daemon-address=<host:port> Use daemon instance at <host>:<port> or https://domain
--rpc-server Run rpc server, so wallet is accessible using api
--rpc-bind=<127.0.0.1:20209> Wallet binds on this ip address and port
--rpc-login=<username:password> RPC server will grant access based on these credentials
`
var menu_mode bool = true // default display menu mode
//var account_valid bool = false // if an account has been opened, do not allow to create new account in this session
var offline_mode bool // whether we are in offline mode
var sync_in_progress int // whether sync is in progress with daemon
var wallet *walletapi.Wallet_Disk //= &walletapi.Account{} // all account data is available here
//var address string
var sync_time time.Time // used to update the prompt at suitable times
var default_offline_datafile string = "getoutputs.bin"
var logger logr.Logger = logr.Discard() // default discard all logs
var color_black = "\033[30m"
var color_red = "\033[31m"
var color_green = "\033[32m"
var color_yellow = "\033[33m"
var color_blue = "\033[34m"
var color_magenta = "\033[35m"
var color_cyan = "\033[36m"
var color_white = "\033[37m"
var color_extra_white = "\033[1m"
var color_normal = "\033[0m"
var prompt_mutex sync.Mutex // prompt lock
var prompt string = "\033[92mDERO Wallet:\033[32m>>>\033[0m "
var tablock uint32
func main() {
var err error
globals.Arguments, err = docopt.Parse(command_line, nil, true, "DERO atlantis wallet : work in progress", false)
if err != nil {
fmt.Printf("Error while parsing options err: %s\n", err)
return
}
// init the lookup table once; anyone importing walletapi should init this first, this will take around 1 sec on any recent system
walletapi.Initialize_LookupTable(1, 1<<17)
// We need to initialize readline first, so it changes stderr to ansi processor on windows
l, err := readline.NewEx(&readline.Config{
//Prompt: "\033[92mDERO:\033[32m»\033[0m",
Prompt: prompt,
HistoryFile: "", // wallet never saves any history file anywhere, to prevent any leakage
AutoComplete: completer,
InterruptPrompt: "^C",
EOFPrompt: "exit",
HistorySearchFold: true,
FuncFilterInputRune: filterInput,
})
if err != nil {
panic(err)
}
defer l.Close()
// get ready to grab passwords
setPasswordCfg := l.GenPasswordConfig()
setPasswordCfg.SetListener(func(line []rune, pos int, key rune) (newLine []rune, newPos int, ok bool) {
l.SetPrompt(fmt.Sprintf("Enter password(%v): ", len(line)))
l.Refresh()
return nil, 0, false
})
l.Refresh() // refresh the prompt
// parse arguments and setup logging , print basic information
exename, _ := os.Executable()
f, err := os.Create(exename + ".log")
if err != nil {
fmt.Printf("Error while opening log file err: %s filename %s\n", err, exename+".log")
return
}
globals.InitializeLog(l.Stdout(), f)
logger = globals.Logger.WithName("wallet")
logger.Info("DERO Wallet : This is an alpha version, use it for testing/evaluation purposes only.")
logger.Info("Copyright 2017-2021 DERO Project. All rights reserved.")
logger.Info("", "OS", runtime.GOOS, "ARCH", runtime.GOARCH, "GOMAXPROCS", runtime.GOMAXPROCS(0))
logger.Info("", "Version", config.Version.String())
logger.V(1).Info("", "Arguments", globals.Arguments)
globals.Initialize() // setup network and proxy
logger.V(0).Info("", "MODE", globals.Config.Name)
// disable menu mode if requested
if globals.Arguments["--prompt"] != nil && globals.Arguments["--prompt"].(bool) {
menu_mode = false
}
wallet_file := "wallet.db" //dero.wallet"
if globals.Arguments["--wallet-file"] != nil {
wallet_file = globals.Arguments["--wallet-file"].(string) // override with user specified settings
}
wallet_password := "" // default
if globals.Arguments["--password"] != nil {
wallet_password = globals.Arguments["--password"].(string) // override with user specified settings
}
// lets handle the arguments one by one
if globals.Arguments["--restore-deterministic-wallet"].(bool) {
// user wants to recover wallet, check whether seed is provided on command line, if not prompt now
seed := ""
if globals.Arguments["--electrum-seed"] != nil {
seed = globals.Arguments["--electrum-seed"].(string)
} else { // prompt user for seed
seed = read_line_with_prompt(l, "Enter your seed (25 words) : ")
}
account, err := walletapi.Generate_Account_From_Recovery_Words(seed)
if err != nil {
logger.Error(err, "Error while recovering seed.")
return
}
// ask the user for a password if not provided on the command line
password := ""
if wallet_password == "" {
password = ReadConfirmedPassword(l, "Enter password", "Confirm password")
}
wallet, err = walletapi.Create_Encrypted_Wallet(wallet_file, password, account.Keys.Secret)
if err != nil {
logger.Error(err, "Error occurred while restoring wallet")
return
}
logger.V(1).Info("Seed Language", "language", account.SeedLanguage)
logger.Info("Successfully recovered wallet from seed")
}
// generate a new random account if requested
if globals.Arguments["--generate-new-wallet"] != nil && globals.Arguments["--generate-new-wallet"].(bool) {
filename := choose_file_name(l)
// ask the user for a password if not provided on the command line
password := ""
if wallet_password == "" {
password = ReadConfirmedPassword(l, "Enter password", "Confirm password")
}
seed_language := choose_seed_language(l)
wallet, err = walletapi.Create_Encrypted_Wallet_Random(filename, password)
if err != nil {
logger.Error(err, "Error occurred while creating new wallet.")
wallet = nil
return
}
logger.V(1).Info("Seed Language", "language", seed_language)
display_seed(l, wallet)
}
if globals.Arguments["--rpc-login"] != nil {
userpass := globals.Arguments["--rpc-login"].(string)
parts := strings.SplitN(userpass, ":", 2)
if len(parts) != 2 {
logger.Error(fmt.Errorf("RPC user name or password invalid"), "cannot set password on wallet rpc")
return
}
logger.Info("Wallet RPC", "username", parts[0], "password", parts[1])
}
// if wallet is nil, check whether the file exists, if yes, request password
if wallet == nil {
if _, err = os.Stat(wallet_file); err == nil {
// if a wallet file and password have been provided, make sure the wallet opens on the first attempt, otherwise exit
if globals.Arguments["--password"] != nil {
wallet, err = walletapi.Open_Encrypted_Wallet(wallet_file, wallet_password)
if err != nil {
logger.Error(err, "Error occurred while opening wallet.")
os.Exit(-1)
}
} else { // request user the password
// ask user a password
for i := 0; i < 3; i++ {
wallet, err = walletapi.Open_Encrypted_Wallet(wallet_file, ReadPassword(l, wallet_file))
if err != nil {
logger.Error(err, "Error occurred while opening wallet.")
} else { // user knows the password and the db is valid
break
}
}
}
//globals.Logger.Debugf("Seed Language %s", account.SeedLanguage)
//globals.Logger.Infof("Successfully recovered wallet from seed")
}
}
// check if offline mode requested
if wallet != nil {
common_processing(wallet)
}
go walletapi.Keep_Connectivity() // maintain connectivity
//pipe_reader, pipe_writer = io.Pipe() // create pipes
// reader ready to parse any data from the file
//go blockchain_data_consumer()
// update prompt when required
prompt_mutex.Lock()
go update_prompt(l)
prompt_mutex.Unlock()
// if wallet has been opened in offline mode by commands supplied at command prompt
// trigger the offline scan
// go trigger_offline_data_scan()
// start infinite loop processing user commands
for {
prompt_mutex.Lock()
if globals.Exit_In_Progress { // exit if requested so
prompt_mutex.Unlock()
break
}
prompt_mutex.Unlock()
if menu_mode { // display menu if requested
if wallet != nil { // account is opened, display post menu
display_easymenu_post_open_command(l)
} else { // account has not been opened display pre open menu
display_easymenu_pre_open_command(l)
}
}
line, err := l.Readline()
if err == readline.ErrInterrupt {
if len(line) == 0 {
logger.Info("Ctrl-C received, Exit in progress")
globals.Exit_In_Progress = true
break
} else {
continue
}
} else if err == io.EOF {
// break
time.Sleep(time.Second)
}
// pass command to suitable handler
if menu_mode {
if wallet != nil {
if !handle_easymenu_post_open_command(l, line) { // if not processed , try processing as command
handle_prompt_command(l, line)
PressAnyKey(l, wallet)
}
} else {
handle_easymenu_pre_open_command(l, line)
}
} else {
handle_prompt_command(l, line)
}
}
prompt_mutex.Lock()
globals.Exit_In_Progress = true
prompt_mutex.Unlock()
}
// update prompt as and when necessary
// TODO: make this code simple, with clear direction
func update_prompt(l *readline.Instance) {
last_wallet_height := uint64(0)
last_daemon_height := int64(0)
daemon_online := false
last_update_time := int64(0)
for {
time.Sleep(30 * time.Millisecond) // give user a smooth running number
prompt_mutex.Lock()
if globals.Exit_In_Progress {
prompt_mutex.Unlock()
return
}
prompt_mutex.Unlock()
if atomic.LoadUint32(&tablock) > 0 { // tab key has been pressed, stop delivering updates to prompt
continue
}
prompt_mutex.Lock() // do not update if we can not lock the mutex
// show first 8 bytes of address
address_trim := ""
if wallet != nil {
tmp_addr := wallet.GetAddress().String()
address_trim = tmp_addr[0:8]
} else {
address_trim = "DERO Wallet"
}
if wallet == nil {
l.SetPrompt(fmt.Sprintf("\033[1m\033[32m%s \033[0m"+color_green+"0/%d \033[32m>>>\033[0m ", address_trim, walletapi.Get_Daemon_Height()))
l.Refresh()
prompt_mutex.Unlock()
continue
}
// only update prompt if needed, or update at least once every second
_ = daemon_online
//fmt.Printf("checking if update is required\n")
if last_wallet_height != wallet.Get_Height() || last_daemon_height != walletapi.Get_Daemon_Height() ||
/*daemon_online != wallet.IsDaemonOnlineCached() ||*/ (time.Now().Unix()-last_update_time) >= 1 {
// choose color based on urgency
color := "\033[32m" // default is green color
if wallet.Get_Height() < wallet.Get_Daemon_Height() {
color = "\033[33m" // make prompt yellow
}
//dheight := walletapi.Get_Daemon_Height()
/*if wallet.IsDaemonOnlineCached() == false {
color = "\033[33m" // make prompt yellow
dheight = 0
}*/
balance_string := ""
//balance_unlocked, locked_balance := wallet.Get_Balance_Rescan()// wallet.Get_Balance()
balance_unlocked, _ := wallet.Get_Balance()
balance_string = fmt.Sprintf(color_green+"%s "+color_white, globals.FormatMoney(balance_unlocked))
if wallet.Error != nil {
balance_string += fmt.Sprintf(color_red+" %s ", wallet.Error)
} /*else if wallet.PoolCount() > 0 {
balance_string += fmt.Sprintf(color_yellow+"(%d tx pending for -%s)", wallet.PoolCount(), globals.FormatMoney(wallet.PoolBalance()))
}*/
testnet_string := ""
if !globals.IsMainnet() {
testnet_string = "\033[31m TESTNET"
}
l.SetPrompt(fmt.Sprintf("\033[1m\033[32m%s \033[0m"+color+"%d/%d %s %s\033[32m>>>\033[0m ", address_trim, wallet.Get_Height(), walletapi.Get_Daemon_Height(), balance_string, testnet_string))
l.Refresh()
last_wallet_height = wallet.Get_Height()
last_daemon_height = walletapi.Get_Daemon_Height()
last_update_time = time.Now().Unix()
//daemon_online = wallet.IsDaemonOnlineCached()
_ = last_update_time
}
prompt_mutex.Unlock()
}
}
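The colour rule driving the prompt above can be isolated into a tiny helper for illustration; `promptColor` is a hypothetical name, not part of the wallet:

```go
package main

import "fmt"

// promptColor mirrors update_prompt's rule: yellow while the wallet
// height trails the daemon height (still syncing), green once caught up.
func promptColor(walletHeight, daemonHeight uint64) string {
	if walletHeight < daemonHeight {
		return "\033[33m" // yellow: syncing
	}
	return "\033[32m" // green: in sync
}

func main() {
	fmt.Printf("%q\n", promptColor(100, 200))
	fmt.Printf("%q\n", promptColor(200, 200))
}
```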
// create a new wallet from scratch from random numbers
func Create_New_Wallet(l *readline.Instance) (w *walletapi.Wallet_Disk, err error) {
// ask user a file name to store the data
walletpath := read_line_with_prompt(l, "Please enter wallet file name : ")
walletpassword := ""
account, _ := walletapi.Generate_Keys_From_Random()
account.SeedLanguage = choose_seed_language(l)
w, err = walletapi.Create_Encrypted_Wallet(walletpath, walletpassword, account.Keys.Secret)
if err != nil {
return
}
// set wallet seed language
// a new account has been created, append the seed to user home directory
//usr, err := user.Current()
/*if err != nil {
globals.Logger.Warnf("Cannot get current username to save recovery key and password")
}else{ // we have a user, get his home dir
}*/
return
}
/*
// create a new wallet from hex seed provided
func Create_New_Account_from_seed(l *readline.Instance) *walletapi.Account {
var account *walletapi.Account
var seedkey crypto.Key
seed := read_line_with_prompt(l, "Please enter your seed ( hex 64 chars): ")
seed = strings.TrimSpace(seed) // trim any extra space
seed_raw, err := hex.DecodeString(seed) // hex decode
if len(seed) != 64 || err != nil { //sanity check
globals.Logger.Warnf("Seed must be 64 chars hexadecimal chars")
return account
}
copy(seedkey[:], seed_raw[:32]) // copy bytes to seed
account, _ = walletapi.Generate_Account_From_Seed(seedkey) // create a new account
account.SeedLanguage = choose_seed_language(l) // ask user his seed preference and set it
account_valid = true
return account
}
// create a new wallet from viewable seed provided
// viewable seed consists of public spend key and private view key
func Create_New_Account_from_viewable_key(l *readline.Instance) *walletapi.Account {
var seedkey crypto.Key
var privateview crypto.Key
var account *walletapi.Account
seed := read_line_with_prompt(l, "Please enter your View Only Key ( hex 128 chars): ")
seed = strings.TrimSpace(seed) // trim any extra space
seed_raw, err := hex.DecodeString(seed)
if len(seed) != 128 || err != nil {
globals.Logger.Warnf("View Only key must be 128 chars hexadecimal chars")
return account
}
copy(seedkey[:], seed_raw[:32])
copy(privateview[:], seed_raw[32:64])
account, _ = walletapi.Generate_Account_View_Only(seedkey, privateview)
account_valid = true
return account
}
*/
// helper function to let the user choose a seed language
func choose_seed_language(l *readline.Instance) string {
languages := mnemonics.Language_List()
fmt.Printf("Language list for seeds, please enter a number (default English)\n")
for i := range languages {
fmt.Fprintf(l.Stderr(), "\033[1m%2d:\033[0m %s\n", i, languages[i])
}
language_number := read_line_with_prompt(l, "Please enter a choice: ")
choice := 0 // 0 for english
if s, err := strconv.Atoi(language_number); err == nil {
choice = s
}
for i := range languages { // if the user gave a wrong or out-of-range choice, choose English
if choice == i {
return languages[choice]
}
}
// if no match, return English
return "English"
}
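The fallback rule above (any unparsable or out-of-range choice maps to English) can be captured as a pure function; `pickLanguage` is an illustrative name, not the wallet's API:

```go
package main

import (
	"fmt"
	"strconv"
)

// pickLanguage returns languages[n] when the input parses to a valid
// index, and falls back to "English" otherwise — the same fallback rule
// choose_seed_language implements.
func pickLanguage(languages []string, input string) string {
	if n, err := strconv.Atoi(input); err == nil && n >= 0 && n < len(languages) {
		return languages[n]
	}
	return "English"
}

func main() {
	langs := []string{"English", "Deutsch", "Español"}
	fmt.Println(pickLanguage(langs, "2"))   // valid index
	fmt.Println(pickLanguage(langs, "99"))  // out of range -> English
	fmt.Println(pickLanguage(langs, "abc")) // not a number -> English
}
```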
// lets the user choose a filename or use default
func choose_file_name(l *readline.Instance) (filename string) {
default_filename := "wallet.db"
if globals.Arguments["--wallet-file"] != nil {
default_filename = globals.Arguments["--wallet-file"].(string) // override with user specified settings
}
filename = read_line_with_prompt(l, fmt.Sprintf("Enter wallet filename (default %s): ", default_filename))
if len(filename) < 1 {
filename = default_filename
}
return
}
// read a line from the prompt using a temporary prompt string
// since we cannot query the current prompt, we set it, read the line, and restore the default prompt afterwards
func read_line_with_prompt(l *readline.Instance, prompt_temporary string) string {
prompt_mutex.Lock()
defer prompt_mutex.Unlock()
l.SetPrompt(prompt_temporary)
line, err := l.Readline()
if err == readline.ErrInterrupt {
if len(line) == 0 {
logger.Info("Ctrl-C received, Exiting")
os.Exit(0)
}
} else if err == io.EOF {
os.Exit(0)
}
l.SetPrompt(prompt)
return line
}
// filter out specific inputs from input processing
// currently we only skip the CtrlZ background key
func filterInput(r rune) (rune, bool) {
switch r {
// block CtrlZ feature
case readline.CharCtrlZ:
return r, false
case readline.CharTab:
atomic.StoreUint32(&tablock, 1) // lock prompt update
case readline.CharEnter:
atomic.StoreUint32(&tablock, 0) // enable prompt update
}
return r, true
}

File diff suppressed because it is too large

90
cmd/derod/LICENSE Normal file

@ -0,0 +1,90 @@
RESEARCH LICENSE
Version 1.1.2
I. DEFINITIONS.
"Licensee" means You and any other party that has entered into and has in effect a version of this License.
"Licensor" means DERO PROJECT (GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8) and its successors and assignees.
"Modifications" means any (a) change or addition to the Technology or (b) new source or object code implementing any portion of the Technology.
"Research Use" means research, evaluation, or development for the purpose of advancing knowledge, teaching, learning, or customizing the Technology for personal use. Research Use expressly excludes use or distribution for direct or indirect commercial (including strategic) gain or advantage.
"Technology" means the source code, object code and specifications of the technology made available by Licensor pursuant to this License.
"Technology Site" means the website designated by Licensor for accessing the Technology.
"You" means the individual executing this License or the legal entity or entities represented by the individual executing this License.
II. PURPOSE.
Licensor is licensing the Technology under this Research License (the "License") to promote research, education, innovation, and development using the Technology.
COMMERCIAL USE AND DISTRIBUTION OF TECHNOLOGY AND MODIFICATIONS IS PERMITTED ONLY UNDER AN APPROPRIATE COMMERCIAL USE LICENSE AVAILABLE FROM LICENSOR AT <url>.
III. RESEARCH USE RIGHTS.
A. Subject to the conditions contained herein, Licensor grants to You a non-exclusive, non-transferable, worldwide, and royalty-free license to do the following for Your Research Use only:
1. reproduce, create Modifications of, and use the Technology alone, or with Modifications;
2. share source code of the Technology alone, or with Modifications, with other Licensees;
3. distribute object code of the Technology, alone, or with Modifications, to any third parties for Research Use only, under a license of Your choice that is consistent with this License; and
4. publish papers and books discussing the Technology which may include relevant excerpts that do not in the aggregate constitute a significant portion of the Technology.
B. Residual Rights. You may use any information in intangible form that you remember after accessing the Technology, except when such use violates Licensor's copyrights or patent rights.
C. No Implied Licenses. Other than the rights granted herein, Licensor retains all rights, title, and interest in Technology , and You retain all rights, title, and interest in Your Modifications and associated specifications, subject to the terms of this License.
D. Open Source Licenses. Portions of the Technology may be provided with notices and open source licenses from open source communities and third parties that govern the use of those portions, and any licenses granted hereunder do not alter any rights and obligations you may have under such open source licenses, however, the disclaimer of warranty and limitation of liability provisions in this License will apply to all Technology in this distribution.
IV. INTELLECTUAL PROPERTY REQUIREMENTS
As a condition to Your License, You agree to comply with the following restrictions and responsibilities:
A. License and Copyright Notices. You must include a copy of this License in a Readme file for any Technology or Modifications you distribute. You must also include the following statement, "Use and distribution of this technology is subject to the Java Research License included herein", (a) once prominently in the source code tree and/or specifications for Your source code distributions, and (b) once in the same file as Your copyright or proprietary notices for Your binary code distributions. You must cause any files containing Your Modification to carry prominent notice stating that You changed the files. You must not remove or alter any copyright or other proprietary notices in the Technology.
B. Licensee Exchanges. Any Technology and Modifications You receive from any Licensee are governed by this License.
V. GENERAL TERMS.
A. Disclaimer Of Warranties.
TECHNOLOGY IS PROVIDED "AS IS", WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, WARRANTIES THAT ANY SUCH TECHNOLOGY IS FREE OF DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR PURPOSE, OR NON-INFRINGING OF THIRD PARTY RIGHTS. YOU AGREE THAT YOU BEAR THE ENTIRE RISK IN CONNECTION WITH YOUR USE AND DISTRIBUTION OF ANY AND ALL TECHNOLOGY UNDER THIS LICENSE.
B. Infringement; Limitation Of Liability.
1. If any portion of, or functionality implemented by, the Technology becomes the subject of a claim or threatened claim of infringement ("Affected Materials"), Licensor may, in its unrestricted discretion, suspend Your rights to use and distribute the Affected Materials under this License. Such suspension of rights will be effective immediately upon Licensor's posting of notice of suspension on the Technology Site.
2. IN NO EVENT WILL LICENSOR BE LIABLE FOR ANY DIRECT, INDIRECT, PUNITIVE, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH OR ARISING OUT OF THIS LICENSE (INCLUDING, WITHOUT LIMITATION, LOSS OF PROFITS, USE, DATA, OR ECONOMIC ADVANTAGE OF ANY SORT), HOWEVER IT ARISES AND ON ANY THEORY OF LIABILITY (including negligence), WHETHER OR NOT LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. LIABILITY UNDER THIS SECTION V.B.2 SHALL BE SO LIMITED AND EXCLUDED, NOTWITHSTANDING FAILURE OF THE ESSENTIAL PURPOSE OF ANY REMEDY.
C. Termination.
1. You may terminate this License at any time by notifying Licensor in writing.
2. All Your rights will terminate under this License if You fail to comply with any of its material terms or conditions and do not cure such failure within thirty (30) days after becoming aware of such noncompliance.
3. Upon termination, You must discontinue all uses and distribution of the Technology , and all provisions of this Section V shall survive termination.
D. Miscellaneous.
1. Trademark. You agree to comply with Licensor's Trademark & Logo Usage Requirements, if any and as modified from time to time, available at the Technology Site. Except as expressly provided in this License, You are granted no rights in or to any Licensor's trademarks now or hereafter used or licensed by Licensor.
2. Integration. This License represents the complete agreement of the parties concerning the subject matter hereof.
3. Severability. If any provision of this License is held unenforceable, such provision shall be reformed to the extent necessary to make it enforceable unless to do so would defeat the intent of the parties, in which case, this License shall terminate.
4. Governing Law. This License is governed by the laws of the United States and the State of California, as applied to contracts entered into and performed in California between California residents. In no event shall this License be construed against the drafter.
5. Export Control. You agree to comply with the U.S. export controls and trade laws of other countries that apply to Technology and Modifications.
READ ALL THE TERMS OF THIS LICENSE CAREFULLY BEFORE ACCEPTING.
BY CLICKING ON THE YES BUTTON BELOW OR USING THE TECHNOLOGY, YOU ARE ACCEPTING AND AGREEING TO ABIDE BY THE TERMS AND CONDITIONS OF THIS LICENSE. YOU MUST BE AT LEAST 18 YEARS OF AGE AND OTHERWISE COMPETENT TO ENTER INTO CONTRACTS.
IF YOU DO NOT MEET THESE CRITERIA, OR YOU DO NOT AGREE TO ANY OF THE TERMS OF THIS LICENSE, DO NOT USE THIS SOFTWARE IN ANY FORM.

12
cmd/derod/dummy_test.go Normal file

@ -0,0 +1,12 @@
// Copyright 2017-2018 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
package main
import "testing"
func Test_Part1(t *testing.T) {
}

46
cmd/derod/fdlimits.go Normal file

@ -0,0 +1,46 @@
//go:build !windows
// +build !windows
package main
import "runtime"
import "golang.org/x/sys/unix"
// we omit the type; Go infers it automatically
const (
UnixMax = 999999
OSXMax = 24576 // see this https://github.com/golang/go/issues/30401
)
type Limits struct {
Current uint64
Max uint64
}
func init() {
switch runtime.GOOS {
case "darwin":
unix.Setrlimit(unix.RLIMIT_NOFILE, &unix.Rlimit{Max: OSXMax, Cur: OSXMax})
case "linux", "netbsd", "openbsd", "freebsd":
unix.Setrlimit(unix.RLIMIT_NOFILE, &unix.Rlimit{Max: UnixMax, Cur: UnixMax})
default: // nothing to do
}
}
func Get() (*Limits, error) {
var rLimit unix.Rlimit
if err := unix.Getrlimit(unix.RLIMIT_NOFILE, &rLimit); err != nil {
return nil, err
}
return &Limits{Current: uint64(rLimit.Cur), Max: uint64(rLimit.Max)}, nil
}
/*
func Set(maxLimit uint64) error {
rLimit := unix.Rlimit {Max:maxLimit, Cur:maxLimit}
if runtime.GOOS == "darwin" && rLimit.Cur > OSXMax { //https://github.com/golang/go/issues/30401
rLimit.Cur = OSXMax
}
return unix.Setrlimit(unix.RLIMIT_NOFILE, &rLimit)
}
*/
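The darwin clamp that the commented-out `Set` would apply can be expressed as a pure function; `clampNoFile` is a hypothetical helper that performs no syscall:

```go
package main

import "fmt"

// OSXMax is the darwin ceiling for RLIMIT_NOFILE, per golang/go#30401.
const OSXMax = 24576

// clampNoFile mirrors the commented-out Set helper: on darwin the
// requested soft limit is capped at OSXMax; elsewhere it passes through.
func clampNoFile(goos string, want uint64) uint64 {
	if goos == "darwin" && want > OSXMax {
		return OSXMax
	}
	return want
}

func main() {
	fmt.Println(clampNoFile("darwin", 999999)) // clamped
	fmt.Println(clampNoFile("linux", 999999))  // unchanged
}
```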

1026
cmd/derod/main.go Normal file

File diff suppressed because it is too large


@ -0,0 +1,58 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package rpc
//import "fmt"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/rpc"
import "github.com/deroproject/derohe/blockchain"
// this function is only used by the RPC, not by the core, and should be moved to the RPC interface
/* fill up the above structure from the blockchain */
func GetBlockHeader(chain *blockchain.Blockchain, hash crypto.Hash) (result rpc.BlockHeader_Print, err error) {
bl, err := chain.Load_BL_FROM_ID(hash)
if err != nil {
return
}
result.TopoHeight = -1
if chain.Is_Block_Topological_order(hash) {
result.TopoHeight = chain.Load_Block_Topological_order(hash)
}
result.Height = chain.Load_Height_for_BL_ID(hash)
result.Depth = chain.Get_Height() - result.Height
result.Difficulty = chain.Load_Block_Difficulty(hash).String()
result.Hash = hash.String()
result.Major_Version = uint64(bl.Major_Version)
result.Minor_Version = uint64(bl.Minor_Version)
result.Orphan_Status = chain.Is_Block_Orphan(hash)
if result.TopoHeight >= chain.LocatePruneTopo()+10 { // this result may/may not be valid at just above prune heights
result.SyncBlock = chain.IsBlockSyncBlockHeight(hash)
}
result.SideBlock = chain.Isblock_SideBlock(hash)
//result.Reward = chain.Load_Block_Total_Reward(dbtx, hash)
result.TXCount = int64(len(bl.Tx_hashes))
for i := range bl.Tips {
result.Tips = append(result.Tips, bl.Tips[i].String())
}
//result.Prev_Hash = bl.Prev_Hash.String()
result.Timestamp = bl.Timestamp
return
}


@ -0,0 +1,74 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package rpc
import "fmt"
import "context"
import "encoding/hex"
import "encoding/json"
import "runtime/debug"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/rpc"
//import "github.com/deroproject/derosuite/blockchain"
func GetBlock(ctx context.Context, p rpc.GetBlock_Params) (result rpc.GetBlock_Result, err error) {
defer func() { // safety so if anything wrong happens, we return error
if r := recover(); r != nil {
err = fmt.Errorf("panic occurred. stack trace %s", debug.Stack())
}
}()
var hash crypto.Hash
if crypto.HashHexToHash(p.Hash) == hash { // user requested using height
if int64(p.Height) > chain.Load_TOPO_HEIGHT() {
err = fmt.Errorf("user requested block at topoheight higher than chain topoheight")
return
}
hash, err = chain.Load_Block_Topological_order_at_index(int64(p.Height))
if err != nil { // if err return err
return result, fmt.Errorf("User requested %d height block, chain height %d but err occurred %s", p.Height, chain.Get_Height(), err)
}
} else {
hash = crypto.HashHexToHash(p.Hash)
}
block_header, err := GetBlockHeader(chain, hash)
if err != nil { // if err return err
return
}
bl, err := chain.Load_BL_FROM_ID(hash)
if err != nil { // if err return err
return
}
json_encoded_bytes, err := json.Marshal(bl)
if err != nil { // if err return err
return
}
return rpc.GetBlock_Result{ // return success
Block_Header: block_header,
Blob: hex.EncodeToString(bl.Serialize()),
Json: string(json_encoded_bytes),
Status: "OK",
}, nil
}
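The response assembly at the end of `GetBlock` — the same block returned both as a hex blob of its serialized bytes and as JSON — can be sketched with a toy block type. `toyBlock` and its placeholder `Serialize` are illustrative, not derohe types:

```go
package main

import (
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// toyBlock is a stand-in for a real block; field order fixes the JSON layout.
type toyBlock struct {
	Height    int64  `json:"height"`
	Timestamp uint64 `json:"timestamp"`
}

// Serialize is a placeholder for the block's binary encoding.
func (b toyBlock) Serialize() []byte {
	return []byte(fmt.Sprintf("%d:%d", b.Height, b.Timestamp))
}

// buildResult returns the block twice, as a hex blob and as JSON,
// mirroring the Blob/Json fields of GetBlock_Result.
func buildResult(b toyBlock) (blob, jsonStr string, err error) {
	raw, err := json.Marshal(b)
	if err != nil {
		return "", "", err
	}
	return hex.EncodeToString(b.Serialize()), string(raw), nil
}

func main() {
	blob, js, _ := buildResult(toyBlock{Height: 7, Timestamp: 42})
	fmt.Println(blob)
	fmt.Println(js)
}
```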


@ -0,0 +1,27 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package rpc
import "context"
import "github.com/deroproject/derohe/rpc"
func GetBlockCount(ctx context.Context) rpc.GetBlockCount_Result {
return rpc.GetBlockCount_Result{
Count: uint64(chain.Get_Height()),
Status: "OK",
}
}


@ -0,0 +1,39 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package rpc
import "fmt"
import "context"
import "runtime/debug"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/rpc"
func GetBlockHeaderByHash(ctx context.Context, p rpc.GetBlockHeaderByHash_Params) (result rpc.GetBlockHeaderByHash_Result, err error) {
defer func() { // safety so if anything wrong happens, we return error
if r := recover(); r != nil {
err = fmt.Errorf("panic occurred. stack trace %s", debug.Stack())
}
}()
hash := crypto.HashHexToHash(p.Hash)
if block_header, err := GetBlockHeader(chain, hash); err == nil { // if err return err
return rpc.GetBlockHeaderByHash_Result{ // return success
Block_Header: block_header,
Status: "OK",
}, nil
}
return
}


@ -0,0 +1,55 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package rpc
import "fmt"
import "context"
import "runtime/debug"
import "github.com/deroproject/derohe/rpc"
func GetBlockHeaderByTopoHeight(ctx context.Context, p rpc.GetBlockHeaderByTopoHeight_Params) (result rpc.GetBlockHeaderByHeight_Result, err error) {
defer func() { // safety so if anything wrong happens, we return error
if r := recover(); r != nil {
err = fmt.Errorf("panic occurred. stack trace %s", debug.Stack())
}
}()
if int64(p.TopoHeight) > chain.Load_TOPO_HEIGHT() {
err = fmt.Errorf("requested topoheight %d is greater than current chain topoheight %d", p.TopoHeight, chain.Load_TOPO_HEIGHT())
return
}
//return nil, &jsonrpc.Error{Code: -2, Message: fmt.Sprintf("NOT SUPPORTED height: %d, current blockchain height = %d", p.Height, chain.Get_Height())}
hash, err := chain.Load_Block_Topological_order_at_index(int64(p.TopoHeight))
if err != nil { // if err return err
err = fmt.Errorf("user requested block at topoheight %d, chain topoheight %d, but error occurred: %s", p.TopoHeight, chain.Load_TOPO_HEIGHT(), err)
return
}
block_header, err := GetBlockHeader(chain, hash)
if err != nil { // if err return err
err = fmt.Errorf("user requested block at topoheight %d, chain topoheight %d, but error occurred: %s", p.TopoHeight, chain.Load_TOPO_HEIGHT(), err)
return
}
return rpc.GetBlockHeaderByHeight_Result{ // return success
Block_Header: block_header,
Status: "OK",
}, nil
}


@ -0,0 +1,80 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package rpc
import "fmt"
import "context"
import "runtime/debug"
import "golang.org/x/time/rate"
import "github.com/deroproject/derohe/rpc"
// a rate limiter is deployed in case the RPC is exposed over the internet,
// so that fake requests cannot flood the node and delay chain syncing
var get_block_limiter = rate.NewLimiter(16.0, 8) // 16 req per sec, burst of 8 req is okay
func GetBlockTemplate(ctx context.Context, p rpc.GetBlockTemplate_Params) (result rpc.GetBlockTemplate_Result, err error) {
defer func() { // safety so if anything wrong happens, we return error
if r := recover(); r != nil {
err = fmt.Errorf("panic occurred. stack trace %s", debug.Stack())
}
}()
/*
if !get_block_limiter.Allow() { // if rate limiter allows, then add block to chain
logger.Warnf("Too many get block template requests per sec rejected by chain.")
return nil,&jsonrpc.Error{
Code: jsonrpc.ErrorCodeInvalidRequest,
Message: "Too many get block template requests per sec rejected by chain.",
}
}
*/
// validate address
miner_address, err := rpc.NewAddress(p.Wallet_Address)
if err != nil {
return result, fmt.Errorf("address could not be parsed, err: %s", err)
}
bl, mbl, mbl_hex, reserved_pos, err := chain.Create_new_block_template_mining(*miner_address)
_ = mbl
_ = reserved_pos
if err != nil {
return
}
prev_hash := ""
for i := range bl.Tips {
prev_hash = prev_hash + bl.Tips[i].String()
}
result.JobID = fmt.Sprintf("%d.%d.%s", bl.Timestamp, 0, p.Miner)
if p.Block {
result.Blocktemplate_blob = fmt.Sprintf("%x", bl.Serialize())
}
diff := chain.Get_Difficulty_At_Tips(bl.Tips)
result.Blockhashing_blob = mbl_hex
result.Height = bl.Height
result.Prev_Hash = prev_hash
result.Difficultyuint64 = diff.Uint64()
result.Difficulty = diff.String()
result.Status = "OK"
return result, nil
}


@ -0,0 +1,201 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package rpc
import "fmt"
import "math"
import "context"
import "runtime/debug"
import "golang.org/x/xerrors"
import "github.com/deroproject/graviton"
import "github.com/deroproject/derohe/globals"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/errormsg"
import "github.com/deroproject/derohe/rpc"
//import "github.com/deroproject/derohe/dvm"
//import "github.com/deroproject/derohe/cryptography/crypto"
func GetEncryptedBalance(ctx context.Context, p rpc.GetEncryptedBalance_Params) (result rpc.GetEncryptedBalance_Result, err error) {
defer func() { // safety so if anything wrong happens, we return error
if r := recover(); r != nil {
err = fmt.Errorf("panic occurred. stack trace %s", debug.Stack())
fmt.Printf("panic stack trace %s params %+v\n", debug.Stack(), p)
}
}()
uaddress, err := globals.ParseValidateAddress(p.Address)
if err != nil {
panic(err)
}
registration := LocatePointOfRegistration(uaddress)
topoheight := chain.Load_TOPO_HEIGHT()
if p.Merkle_Balance_TreeHash == "" && p.TopoHeight >= 0 && p.TopoHeight <= topoheight { // get balance tree at specific topoheight
topoheight = p.TopoHeight
}
toporecord, err := chain.Store.Topo_store.Read(topoheight)
if err != nil {
panic(err)
}
ss, err := chain.Store.Balance_store.LoadSnapshot(toporecord.State_Version)
if err != nil {
panic(err)
}
var balance_tree *graviton.Tree
treename := config.BALANCE_TREE
keyname := uaddress.Compressed()
if !p.SCID.IsZero() {
treename = string(p.SCID[:])
}
if balance_tree, err = ss.GetTree(treename); err != nil {
panic(err)
}
bits, _, balance_serialized, err := balance_tree.GetKeyValueFromKey(keyname)
//fmt.Printf("balance_serialized %x err %s, scid %s keyname %x treename %x\n", balance_serialized,err,p.SCID, keyname, treename)
if err != nil {
if xerrors.Is(err, graviton.ErrNotFound) { // address needs registration
return rpc.GetEncryptedBalance_Result{ // return success
Registration: registration,
Status: errormsg.ErrAccountUnregistered.Error(),
}, errormsg.ErrAccountUnregistered
} else {
panic(err)
}
}
version, err := chain.ReadBlockSnapshotVersion(toporecord.BLOCK_ID)
if err != nil {
panic(err)
}
merkle_hash, err := chain.Load_Merkle_Hash(version)
if err != nil {
panic(err)
}
// calculate top height merkle tree hash
//var dmerkle_hash crypto.Hash
version, err = chain.ReadBlockSnapshotVersion(chain.Get_Top_ID())
if err != nil {
panic(err)
}
dmerkle_hash, err := chain.Load_Merkle_Hash(version)
if err != nil {
panic(err)
}
return rpc.GetEncryptedBalance_Result{ // return success
SCID: p.SCID,
Data: fmt.Sprintf("%x", balance_serialized),
Registration: registration,
Bits: bits, // no. of bits required
Height: toporecord.Height,
Topoheight: topoheight,
BlockHash: toporecord.BLOCK_ID,
Merkle_Balance_TreeHash: fmt.Sprintf("%x", merkle_hash[:]),
DHeight: chain.Get_Height(),
DTopoheight: chain.Load_TOPO_HEIGHT(),
DMerkle_Balance_TreeHash: fmt.Sprintf("%x", dmerkle_hash[:]),
Status: "OK",
}, nil
}
// if address is unregistered, returns negative numbers
func LocatePointOfRegistration(uaddress *rpc.Address) int64 {
addr := uaddress.Compressed()
low := chain.LocatePruneTopo() // in case of purging DB, this should start from N
if low >= 1 {
low++
}
topoheight := chain.Load_TOPO_HEIGHT()
high := int64(topoheight)
if !IsRegisteredAtTopoHeight(addr, topoheight) {
return -1
}
if IsRegisteredAtTopoHeight(addr, low) {
return low
}
lowest := int64(math.MaxInt64)
for low <= high {
median := (low + high) / 2
if IsRegisteredAtTopoHeight(addr, median) {
if lowest > median {
lowest = median
}
high = median - 1
} else {
low = median + 1
}
}
//fmt.Printf("found point %d\n", lowest)
return lowest
}
func IsRegisteredAtTopoHeight(addr []byte, topoheight int64) bool {
toporecord, err := chain.Store.Topo_store.Read(topoheight)
if err != nil {
panic(err)
}
ss, err := chain.Store.Balance_store.LoadSnapshot(toporecord.State_Version)
if err != nil {
panic(err)
}
var balance_tree *graviton.Tree
balance_tree, err = ss.GetTree(config.BALANCE_TREE)
if err != nil {
panic(err)
}
_, err = balance_tree.Get(addr)
if err != nil {
if xerrors.Is(err, graviton.ErrNotFound) { // address needs registration
return false
} else {
panic(err)
}
}
return true
}
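LocatePointOfRegistration above relies on registration being monotone over topoheight: once an address is registered it stays registered, so the predicate IsRegisteredAtTopoHeight is false up to some point and true after it, and bisection finds that point in O(log n) store reads. A self-contained sketch of the same search over an arbitrary monotone predicate (the helper name `earliestTrue` is ours):

```go
package main

import "fmt"

// earliestTrue returns the smallest index in [low, high] for which pred is
// true, assuming pred is monotone (false...false true...true), or -1 if
// pred is false over the whole range. This is the same bisection that
// LocatePointOfRegistration performs, with IsRegisteredAtTopoHeight as pred.
func earliestTrue(low, high int64, pred func(int64) bool) int64 {
	lowest := int64(-1)
	for low <= high {
		median := (low + high) / 2
		if pred(median) {
			lowest = median // candidate; keep searching lower
			high = median - 1
		} else {
			low = median + 1
		}
	}
	return lowest
}

func main() {
	// address "registered" from topoheight 7000 onwards
	pred := func(t int64) bool { return t >= 7000 }
	fmt.Println(earliestTrue(0, 10000, pred)) // 7000
}
```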


@ -0,0 +1,29 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package rpc
import "context"
import "github.com/deroproject/derohe/rpc"
func GetHeight(ctx context.Context) rpc.Daemon_GetHeight_Result {
return rpc.Daemon_GetHeight_Result{
Height: uint64(chain.Get_Height()),
StableHeight: chain.Get_Stable_Height(),
TopoHeight: chain.Load_TOPO_HEIGHT(),
Status: "OK",
}
}


@ -0,0 +1,95 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package rpc
import "fmt"
import "context"
import "runtime/debug"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/globals"
import "github.com/deroproject/derohe/rpc"
//import "github.com/deroproject/derohe/blockchain"
func GetInfo(ctx context.Context) (result rpc.GetInfo_Result, err error) {
defer func() { // safety so if anything wrong happens, we return error
if r := recover(); r != nil {
err = fmt.Errorf("panic occurred. stack trace %s", debug.Stack())
}
}()
//result.Difficulty = chain.Get_Difficulty_At_Block(top_id)
result.Height = chain.Get_Height()
result.StableHeight = chain.Get_Stable_Height()
result.TopoHeight = chain.Load_TOPO_HEIGHT()
{
version, err := chain.ReadBlockSnapshotVersion(chain.Get_Top_ID())
if err != nil {
panic(err)
}
balance_merkle_hash, err := chain.Load_Merkle_Hash(version)
if err != nil {
panic(err)
}
result.Merkle_Balance_TreeHash = fmt.Sprintf("%X", balance_merkle_hash[:])
}
blid, err := chain.Load_Block_Topological_order_at_index(result.TopoHeight)
if err == nil {
result.Difficulty = chain.Get_Difficulty_At_Tips(chain.Get_TIPS()).Uint64()
}
result.Status = "OK"
result.Version = config.Version.String()
result.Top_block_hash = blid.String()
result.Target = chain.Get_Current_BlockTime()
if result.TopoHeight-chain.LocatePruneTopo() > 100 {
blid50, err := chain.Load_Block_Topological_order_at_index(result.TopoHeight - 50)
if err == nil {
now := chain.Load_Block_Timestamp(blid)
now50 := chain.Load_Block_Timestamp(blid50)
result.AverageBlockTime50 = float32(now-now50) / (50.0 * 1000)
}
}
//result.Target_Height = uint64(chain.Get_Height())
//result.Tx_pool_size = uint64(len(chain.Mempool.Mempool_List_TX()))
// get dynamic fees per kb, used by wallet for tx creation
//result.Dynamic_fee_per_kb = config.FEE_PER_KB
//result.Median_Block_Size = config.CRYPTONOTE_MAX_BLOCK_SIZE
//result.Total_Supply = chain.Load_Already_Generated_Coins_for_Topo_Index( result.TopoHeight)
result.Total_Supply = 0
if result.Total_Supply > (1000000 * 1000000000000) {
result.Total_Supply -= (1000000 * 1000000000000) // remove premine
}
result.Total_Supply = result.Total_Supply / 1000000000000
if globals.Config.Name != config.Mainnet.Name { // anything other than mainnet is testnet at this point in time
result.Testnet = true
}
if globals.IsSimulator() {
result.Network = "Simulator"
}
return result, nil
}


@ -0,0 +1,123 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package rpc
import "fmt"
import "context"
import "runtime/debug"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/globals"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/rpc"
//import "github.com/deroproject/derohe/blockchain"
func GetRandomAddress(ctx context.Context, p rpc.GetRandomAddress_Params) (result rpc.GetRandomAddress_Result, err error) {
defer func() { // safety so if anything wrong happens, we return error
if r := recover(); r != nil {
err = fmt.Errorf("panic occurred. stack trace %s", debug.Stack())
}
}()
topoheight := chain.Load_TOPO_HEIGHT()
if topoheight > 100 {
topoheight -= 5
}
var cursor_list []string
{
toporecord, err := chain.Store.Topo_store.Read(topoheight)
if err != nil {
panic(err)
}
ss, err := chain.Store.Balance_store.LoadSnapshot(toporecord.State_Version)
if err != nil {
panic(err)
}
treename := config.BALANCE_TREE
if !p.SCID.IsZero() {
treename = string(p.SCID[:])
}
balance_tree, err := ss.GetTree(treename)
if err != nil {
panic(err)
}
account_map := map[string]bool{}
for i := 0; i < 100; i++ {
k, _, err := balance_tree.Random()
if err != nil {
continue
}
var acckey crypto.Point
if err := acckey.DecodeCompressed(k[:]); err != nil {
continue
}
addr := rpc.NewAddressFromKeys(&acckey)
addr.Mainnet = true
if globals.Config.Name != config.Mainnet.Name { // anything other than mainnet is testnet at this point in time
addr.Mainnet = false
}
account_map[addr.String()] = true
if len(account_map) > 140 {
break
}
}
for k := range account_map {
cursor_list = append(cursor_list, k)
}
}
/*
c := balance_tree.Cursor()
for k, v, err := c.First(); err == nil; k, v, err = c.Next() {
_ = v
//fmt.Printf("key=%x, value=%x err %s\n", k, v, err)
var acckey crypto.Point
if err := acckey.DecodeCompressed(k[:]); err != nil {
panic(err)
}
addr := address.NewAddressFromKeys(&acckey)
if globals.Config.Name != config.Mainnet.Name { // anything other than mainnet is testnet at this point in time
addr.Network = globals.Config.Public_Address_Prefix
}
cursor_list = append(cursor_list, addr.String())
if len(cursor_list) >= 20 {
break
}
}
}
*/
result.Address = cursor_list
result.Status = "OK"
return result, nil
}
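GetRandomAddress above accumulates random tree samples in a `map[string]bool` and then flattens the keys into the result slice, so repeated draws of the same address collapse into one entry. A minimal sketch of that map-based dedup pattern (the helper name `dedupe` is ours):

```go
package main

import "fmt"

// dedupe mirrors how GetRandomAddress collects sampled addresses into
// account_map and then copies the map keys into cursor_list: duplicates
// from repeated random draws appear only once in the output.
func dedupe(samples []string) []string {
	seen := map[string]bool{}
	for _, s := range samples {
		seen[s] = true
	}
	out := make([]string, 0, len(seen))
	for k := range seen {
		out = append(out, k)
	}
	return out
}

func main() {
	fmt.Println(len(dedupe([]string{"a", "b", "a", "c", "b"}))) // 3
}
```

Note that map iteration order in Go is randomized, so like the RPC result the output order is unspecified.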


@ -0,0 +1,217 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package rpc
import "fmt"
import "context"
import "encoding/binary"
import "runtime/debug"
//import "github.com/romana/rlog"
import "github.com/deroproject/derohe/cryptography/crypto"
//import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/rpc"
import "github.com/deroproject/derohe/dvm"
//import "github.com/deroproject/derohe/transaction"
import "github.com/deroproject/derohe/blockchain"
import "github.com/deroproject/graviton"
func GetSC(ctx context.Context, p rpc.GetSC_Params) (result rpc.GetSC_Result, err error) {
defer func() { // safety so if anything wrong happens, we return error
if r := recover(); r != nil {
err = fmt.Errorf("panic occurred: %v. stack trace %s", r, debug.Stack())
}
}()
result.VariableStringKeys = map[string]interface{}{}
result.VariableUint64Keys = map[uint64]interface{}{}
result.Balances = map[string]uint64{}
scid := crypto.HashHexToHash(p.SCID)
topoheight := chain.Load_TOPO_HEIGHT()
if p.TopoHeight >= 1 {
topoheight = p.TopoHeight
}
toporecord, err := chain.Store.Topo_store.Read(topoheight)
// we must now fill in compressed ring members
if err == nil {
var ss *graviton.Snapshot
ss, err = chain.Store.Balance_store.LoadSnapshot(toporecord.State_Version)
if err == nil {
/*
var sc_meta_tree *graviton.Tree
if sc_meta_tree, err = ss.GetTree(config.SC_META); err == nil {
var meta_bytes []byte
if meta_bytes, err = sc_meta_tree.Get(blockchain.SC_Meta_Key(scid)); err == nil {
var meta blockchain.SC_META_DATA
if err = meta.UnmarshalBinary(meta_bytes); err == nil {
result.Balance = meta.Balance
}
}
} else {
return
}
*/
var sc_data_tree *graviton.Tree
sc_data_tree, err = ss.GetTree(string(scid[:]))
if err == nil {
var zerohash crypto.Hash
if balance_bytes, err := sc_data_tree.Get(zerohash[:]); err == nil {
if len(balance_bytes) == 8 {
result.Balance = binary.BigEndian.Uint64(balance_bytes[:])
}
}
if p.Code { // give SC code
var code_bytes []byte
var v dvm.Variable
if code_bytes, err = sc_data_tree.Get(blockchain.SC_Code_Key(scid)); err == nil {
if err = v.UnmarshalBinary(code_bytes); err != nil {
result.Code = "Unmarshal error"
} else {
result.Code = v.ValueString
}
}
}
if p.Variables { // user requested all variables
cursor := sc_data_tree.Cursor()
var k, v []byte
for k, v, err = cursor.First(); err == nil; k, v, err = cursor.Next() {
var vark, varv dvm.Variable
_ = vark
_ = varv
_ = k
_ = v
//fmt.Printf("key '%x' value '%x'\n", k, v)
if len(k) == 32 && len(v) == 8 { // it's SC balance
result.Balances[fmt.Sprintf("%x", k)] = binary.BigEndian.Uint64(v)
} else if k[len(k)-1] >= 0x3 && k[len(k)-1] < 0x80 && nil == vark.UnmarshalBinary(k) && nil == varv.UnmarshalBinary(v) {
switch vark.Type {
case dvm.Uint64:
if varv.Type == dvm.Uint64 {
result.VariableUint64Keys[vark.ValueUint64] = varv.ValueUint64
} else {
result.VariableUint64Keys[vark.ValueUint64] = fmt.Sprintf("%x", []byte(varv.ValueString))
}
case dvm.String:
if varv.Type == dvm.Uint64 {
result.VariableStringKeys[vark.ValueString] = varv.ValueUint64
} else {
result.VariableStringKeys[vark.ValueString] = fmt.Sprintf("%x", []byte(varv.ValueString))
}
default:
err = fmt.Errorf("UNKNOWN Data type")
return
}
}
}
}
// give any uint64 keys data if any
for _, value := range p.KeysUint64 {
var v dvm.Variable
key, _ := dvm.Variable{Type: dvm.Uint64, ValueUint64: value}.MarshalBinary()
var value_bytes []byte
if value_bytes, err = sc_data_tree.Get(key); err != nil {
result.ValuesUint64 = append(result.ValuesUint64, fmt.Sprintf("NOT AVAILABLE err: %s", err))
continue
}
if err = v.UnmarshalBinary(value_bytes); err != nil {
result.ValuesUint64 = append(result.ValuesUint64, "Unmarshal error")
continue
}
switch v.Type {
case dvm.Uint64:
result.ValuesUint64 = append(result.ValuesUint64, fmt.Sprintf("%d", v.ValueUint64))
case dvm.String:
result.ValuesUint64 = append(result.ValuesUint64, fmt.Sprintf("%x", []byte(v.ValueString)))
default:
result.ValuesUint64 = append(result.ValuesUint64, "UNKNOWN Data type")
}
}
for _, value := range p.KeysString {
var v dvm.Variable
key, _ := dvm.Variable{Type: dvm.String, ValueString: value}.MarshalBinary()
var value_bytes []byte
if value_bytes, err = sc_data_tree.Get(key); err != nil {
//fmt.Printf("Getting key %x\n", key)
result.ValuesString = append(result.ValuesString, fmt.Sprintf("NOT AVAILABLE err: %s", err))
continue
}
if err = v.UnmarshalBinary(value_bytes); err != nil {
result.ValuesString = append(result.ValuesString, "Unmarshal error")
continue
}
switch v.Type {
case dvm.Uint64:
result.ValuesString = append(result.ValuesString, fmt.Sprintf("%d", v.ValueUint64))
case dvm.String:
result.ValuesString = append(result.ValuesString, fmt.Sprintf("%x", []byte(v.ValueString)))
default:
result.ValuesString = append(result.ValuesString, "UNKNOWN Data type")
}
}
for _, value := range p.KeysBytes {
var v dvm.Variable
key, _ := dvm.Variable{Type: dvm.String, ValueString: string(value)}.MarshalBinary()
var value_bytes []byte
if value_bytes, err = sc_data_tree.Get(key); err != nil {
result.ValuesBytes = append(result.ValuesBytes, "NOT AVAILABLE")
continue
}
if err = v.UnmarshalBinary(value_bytes); err != nil {
result.ValuesBytes = append(result.ValuesBytes, "Unmarshal error")
continue
}
switch v.Type {
case dvm.Uint64:
result.ValuesBytes = append(result.ValuesBytes, fmt.Sprintf("%d", v.ValueUint64))
case dvm.String:
result.ValuesBytes = append(result.ValuesBytes, fmt.Sprintf("%s", v.ValueString))
default:
result.ValuesBytes = append(result.ValuesBytes, "UNKNOWN Data type")
}
}
}
}
}
result.Status = "OK"
err = nil
//logger.Debugf("result %+v\n", result);
return
}


@ -0,0 +1,173 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package rpc
import "fmt"
import "context"
import "encoding/hex"
import "encoding/binary"
import "runtime/debug"
//import "github.com/romana/rlog"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/rpc"
import "github.com/deroproject/derohe/globals"
import "github.com/deroproject/derohe/transaction"
import "github.com/deroproject/derohe/blockchain"
func GetTransaction(ctx context.Context, p rpc.GetTransaction_Params) (result rpc.GetTransaction_Result, err error) {
defer func() { // safety so if anything wrong happens, we return error
if r := recover(); r != nil {
err = fmt.Errorf("panic occurred. stack trace %s", debug.Stack())
}
}()
for i := 0; i < len(p.Tx_Hashes); i++ {
hash := crypto.HashHexToHash(p.Tx_Hashes[i])
{ // check if tx is from blockchain
var tx transaction.Transaction
var tx_bytes []byte
if tx_bytes, err = chain.Store.Block_tx_store.ReadTX(hash); err != nil { // if tx not found return empty rpc
// check whether we can get the tx from the pool
{
tx := chain.Mempool.Mempool_Get_TX(hash)
if tx != nil { // found the tx in the mempool
var related rpc.Tx_Related_Info
related.Block_Height = -1 // not mined
related.In_pool = true
result.Txs_as_hex = append(result.Txs_as_hex, hex.EncodeToString(tx.Serialize()))
result.Txs = append(result.Txs, related)
} else {
var related rpc.Tx_Related_Info
result.Txs_as_hex = append(result.Txs_as_hex, "") // a not found tx will return ""
result.Txs = append(result.Txs, related)
}
}
continue // no more processing required
} else {
//fmt.Printf("txhash %s loaded %d bytes\n", hash, len(tx_bytes))
if err = tx.Deserialize(tx_bytes); err != nil {
//logger.Warnf("rpc txhash %s could not be decoded, err %s\n", hash, err)
return
}
if err == nil {
var related rpc.Tx_Related_Info
// check whether tx is orphan
//if chain.Is_TX_Orphan(hash) {
// result.Txs_as_hex = append(result.Txs_as_hex, "") // given empty data
// result.Txs = append(result.Txs, related) // should we have an orphan tx marker
//} else
if tx.IsCoinbase() { // fill reward but only for coinbase
//blhash, err := chain.Load_Block_Topological_order_at_index(nil, int64(related.Block_Height))
//if err == nil { // if err return err
related.Reward = 999999 //chain.Load_Block_Total_Reward(nil, blhash)
//}
}
// also fill where the tx is found and in which block is valid and in which it is invalid
valid_blid, invalid_blid, valid := chain.IS_TX_Valid(hash)
//logger.Infof(" tx %s related info valid_blid %s invalid_blid %+v valid %v ",hash, valid_blid, invalid_blid, valid)
if valid {
related.ValidBlock = valid_blid.String()
// topo height at which it was mined
topo_height := int64(chain.Load_Block_Topological_order(valid_blid))
related.Block_Height = topo_height
if tx.TransactionType != transaction.REGISTRATION {
// we must now fill in compressed ring members
if toporecord, err := chain.Store.Topo_store.Read(topo_height); err == nil {
if ss, err := chain.Store.Balance_store.LoadSnapshot(toporecord.State_Version); err == nil {
if tx.TransactionType == transaction.SC_TX {
scid := tx.GetHash()
if tx.SCDATA.Has(rpc.SCACTION, rpc.DataUint64) && rpc.SC_INSTALL == rpc.SC_ACTION(tx.SCDATA.Value(rpc.SCACTION, rpc.DataUint64).(uint64)) {
if sc_data_tree, err := ss.GetTree(string(scid[:])); err == nil {
var code_bytes []byte
if code_bytes, err = sc_data_tree.Get(blockchain.SC_Code_Key(scid)); err == nil {
related.Code = string(code_bytes)
}
var zerohash crypto.Hash
if balance_bytes, err := sc_data_tree.Get(zerohash[:]); err == nil {
if len(balance_bytes) == 8 {
related.Balance = binary.BigEndian.Uint64(balance_bytes[:])
}
}
}
}
}
// expand the tx, no need to do proof checking
err = chain.Expand_Transaction_NonCoinbase(&tx)
if err != nil {
return result, err
}
for t := range tx.Payloads {
var ring []string
for j := 0; j < int(tx.Payloads[t].Statement.RingSize); j++ {
astring := rpc.NewAddressFromKeys((*crypto.Point)(tx.Payloads[t].Statement.Publickeylist[j]))
astring.Mainnet = globals.Config.Name == config.Mainnet.Name
ring = append(ring, astring.String())
}
related.Ring = append(related.Ring, ring)
}
}
}
}
}
for i := range invalid_blid {
related.InvalidBlock = append(related.InvalidBlock, invalid_blid[i].String())
}
result.Txs_as_hex = append(result.Txs_as_hex, hex.EncodeToString(tx.Serialize()))
result.Txs = append(result.Txs, related)
}
continue
}
}
}
result.Status = "OK"
err = nil
//logger.Debugf("result %+v\n", result)
return
}


@ -0,0 +1,31 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package rpc
import "fmt"
import "context"
import "github.com/deroproject/derohe/rpc"
func GetTxPool(ctx context.Context) (result rpc.GetTxPool_Result) {
result.Status = "OK"
pool_list := chain.Mempool.Mempool_List_TX()
for i := range pool_list {
result.Tx_list = append(result.Tx_list, fmt.Sprintf("%s", pool_list[i]))
}
return result
}


@ -0,0 +1,76 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package rpc
import "fmt"
import "context"
import "runtime/debug"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/rpc"
import "github.com/deroproject/derohe/globals"
import "github.com/deroproject/derohe/dvm"
//import "github.com/deroproject/derosuite/blockchain"
func NameToAddress(ctx context.Context, p rpc.NameToAddress_Params) (result rpc.NameToAddress_Result, err error) {
defer func() { // safety so if anything wrong happens, we return error
if r := recover(); r != nil {
err = fmt.Errorf("panic occurred. stack trace %s", debug.Stack())
}
}()
topoheight := chain.Load_TOPO_HEIGHT()
toporecord, err := chain.Store.Topo_store.Read(topoheight)
if err != nil {
panic(err)
}
ss, err := chain.Store.Balance_store.LoadSnapshot(toporecord.State_Version)
if err != nil {
panic(err)
}
var zerohash crypto.Hash
zerohash[31] = 1
treename := string(zerohash[:])
tree, err := ss.GetTree(treename)
if err != nil {
panic(err)
}
var value_bytes []byte
if value_bytes, err = tree.Get(dvm.Variable{Type: dvm.String, ValueString: p.Name}.MarshalBinaryPanic()); err == nil {
var v dvm.Variable
if err = v.UnmarshalBinary(value_bytes); err != nil {
return
}
var addr *rpc.Address
if addr, err = rpc.NewAddressFromCompressedKeys([]byte(v.ValueString)); err != nil {
return
}
addr.Mainnet = globals.IsMainnet()
result.Address = addr.String()
result.Name = p.Name
result.Status = "OK"
}
return
}


@ -0,0 +1,68 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package rpc
import "fmt"
import "context"
import "encoding/hex"
import "runtime/debug"
import "github.com/deroproject/derohe/rpc"
import "github.com/deroproject/derohe/p2p"
import "github.com/deroproject/derohe/transaction"
//NOTE: finally we have shifted to json api
func SendRawTransaction(ctx context.Context, p rpc.SendRawTransaction_Params) (result rpc.SendRawTransaction_Result, err error) {
defer func() { // safety so if anything wrong happens, we return error
if r := recover(); r != nil {
err = fmt.Errorf("panic occurred. stack trace %s", debug.Stack())
}
}()
var tx transaction.Transaction
// rlog.Debugf("Incoming TX from RPC Server")
//lets decode the tx from hex
tx_bytes, err := hex.DecodeString(p.Tx_as_hex)
if err != nil {
result.Status = "TX could not be hex decoded"
return
}
if len(tx_bytes) < 99 {
result.Status = "TX insufficient length"
return
}
// fmt.Printf("txbytes length %d data %s\n", len(p.Tx_as_hex), p.Tx_as_hex)
// lets add tx to pool; if we can do it, so can everyone else
err = tx.Deserialize(tx_bytes)
if err != nil {
return
}
// lets try to add it to pool
if err = chain.Add_TX_To_Pool(&tx); err == nil {
p2p.Broadcast_Tx(&tx, 0) // broadcast tx
result.Status = "OK"
} else {
err = fmt.Errorf("Transaction %s rejected by daemon err '%s'", tx.GetHash(), err)
}
return
}
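The input validation above can be exercised offline. A minimal sketch (the `precheck` helper and the sample inputs are illustrative, not part of the daemon; it mirrors only the hex-decode and 99-byte minimum-length checks performed before deserialization):

```go
package main

import (
	"encoding/hex"
	"fmt"
)

// precheck mirrors the handler's pre-validation: hex-decode the submitted
// transaction and reject anything shorter than 99 bytes before attempting
// a full deserialize.
func precheck(txHex string) error {
	txBytes, err := hex.DecodeString(txHex)
	if err != nil {
		return fmt.Errorf("TX could not be hex decoded: %w", err)
	}
	if len(txBytes) < 99 {
		return fmt.Errorf("TX insufficient length: %d bytes", len(txBytes))
	}
	return nil
}

func main() {
	fmt.Println(precheck("deadbeef")) // valid hex, but far too short
	fmt.Println(precheck("zz"))       // not hex at all
}
```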


@ -0,0 +1,64 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package rpc
import "fmt"
import "context"
import "encoding/hex"
import "runtime/debug"
import "github.com/deroproject/derohe/rpc"
func SubmitBlock(ctx context.Context, p rpc.SubmitBlock_Params) (result rpc.SubmitBlock_Result, err error) {
defer func() { // safety so if anything wrong happens, we return error
if r := recover(); r != nil {
err = fmt.Errorf("panic occurred. stack trace %s", debug.Stack())
}
}()
mbl_block_data_bytes, err := hex.DecodeString(p.MiniBlockhashing_blob)
if err != nil {
//logger.Info("Submitting block could not be decoded")
return result, fmt.Errorf("Submitted block could not be decoded. err: %s", err)
}
var tstamp, extra uint64
fmt.Sscanf(p.JobID, "%d.%d", &tstamp, &extra)
mblid, blid, sresult, err := chain.Accept_new_block(tstamp, mbl_block_data_bytes)
if sresult {
//logger.Infof("Submitted block %s accepted", blid)
result.JobID = p.JobID
result.Status = "OK"
result.MiniBlock = blid.IsZero()
result.MBLID = mblid.String()
if !result.MiniBlock {
result.BLID = blid.String()
}
return result, nil
}
logger.V(1).Error(err, "Submitting block", "jobid", p.JobID)
return rpc.SubmitBlock_Result{
Status: "REJECTED",
}, err
}


@ -0,0 +1,42 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package rpc
// get block template handler not implemented
import "fmt"
import "context"
import "runtime/debug"
import "github.com/deroproject/derohe/rpc"
func GetLastBlockHeader(ctx context.Context) (result rpc.GetLastBlockHeader_Result, err error) {
defer func() { // safety so if anything wrong happens, we return error
if r := recover(); r != nil {
err = fmt.Errorf("panic occurred. stack trace %s", debug.Stack())
}
}()
top_hash := chain.Get_Top_ID()
block_header, err := GetBlockHeader(chain, top_hash)
if err != nil {
return
}
return rpc.GetLastBlockHeader_Result{
Block_Header: block_header,
Status: "OK",
}, nil
}


@ -0,0 +1,341 @@
package rpc
import (
"flag"
"fmt"
"net/http"
"time"
"github.com/lesismal/llib/std/crypto/tls"
"github.com/lesismal/nbio/nbhttp"
"github.com/lesismal/nbio/nbhttp/websocket"
)
import "github.com/lesismal/nbio"
import "github.com/lesismal/nbio/logging"
import "net"
import "bytes"
import "encoding/hex"
import "encoding/json"
import "runtime"
import "strings"
import "math/big"
import "crypto/ecdsa"
import "crypto/elliptic"
//import "crypto/tls"
import "crypto/rand"
import "crypto/x509"
import "encoding/pem"
import "github.com/deroproject/derohe/globals"
import "github.com/deroproject/derohe/rpc"
import "github.com/deroproject/graviton"
import "github.com/go-logr/logr"
// this file implements the non-blocking job streamer
// its only job is to stream jobs to thousands of workers; if any is successful, accept the result and report back
import "sync"
var memPool = sync.Pool{
New: func() interface{} {
return make([]byte, 16*1024)
},
}
var logger_getwork logr.Logger
var (
svr *nbhttp.Server
print = flag.Bool("print", false, "stdout output of echoed data")
)
type user_session struct {
blocks uint64
miniblocks uint64
lasterr string
address rpc.Address
valid_address bool
address_sum [32]byte
}
var client_list_mutex sync.Mutex
var client_list = map[*websocket.Conn]*user_session{}
func CountMiners() int {
client_list_mutex.Lock()
defer client_list_mutex.Unlock()
return len(client_list)
}
func SendJob() {
var params rpc.GetBlockTemplate_Result
var buf bytes.Buffer
encoder := json.NewEncoder(&buf)
// get a block template, and then we will fill the address here as optimization
bl, mbl, _, _, err := chain.Create_new_block_template_mining(chain.IntegratorAddress())
if err != nil {
return
}
prev_hash := ""
for i := range bl.Tips {
prev_hash = prev_hash + bl.Tips[i].String()
}
params.JobID = fmt.Sprintf("%d.%d.%s", bl.Timestamp, 0, "notified")
diff := chain.Get_Difficulty_At_Tips(bl.Tips)
params.Height = bl.Height
params.Prev_Hash = prev_hash
params.Difficultyuint64 = diff.Uint64()
params.Difficulty = diff.String()
client_list_mutex.Lock()
defer client_list_mutex.Unlock()
for k, v := range client_list {
if !mbl.Final { // write miner's address only if possible
copy(mbl.KeyHash[:], v.address_sum[:])
}
for i := range mbl.Nonce { // give each user different work
mbl.Nonce[i] = globals.Global_Random.Uint32() // fill with randomness
}
if v.lasterr != "" {
params.LastError = v.lasterr
v.lasterr = ""
}
if !v.valid_address && !chain.IsAddressHashValid(false, v.address_sum) {
params.LastError = "unregistered miner or you need to wait 15 mins"
} else {
v.valid_address = true
}
params.Blockhashing_blob = fmt.Sprintf("%x", mbl.Serialize())
params.Blocks = v.blocks
params.MiniBlocks = v.miniblocks
encoder.Encode(params)
k.WriteMessage(websocket.TextMessage, buf.Bytes())
buf.Reset()
}
}
func newUpgrader() *websocket.Upgrader {
u := websocket.NewUpgrader()
u.OnMessage(func(c *websocket.Conn, messageType websocket.MessageType, data []byte) {
// echo
c.WriteMessage(messageType, data)
if messageType != websocket.TextMessage {
return
}
sess := c.Session().(*user_session)
client_list_mutex.Lock()
defer client_list_mutex.Unlock()
var p rpc.SubmitBlock_Params
if err := json.Unmarshal(data, &p); err != nil {
sess.lasterr = fmt.Sprintf("Submitted data could not be parsed. err: %s", err)
return
}
mbl_block_data_bytes, err := hex.DecodeString(p.MiniBlockhashing_blob)
if err != nil {
//logger.Info("Submitting block could not be decoded")
sess.lasterr = fmt.Sprintf("Submitted block could not be decoded. err: %s", err)
return
}
var tstamp, extra uint64
fmt.Sscanf(p.JobID, "%d.%d", &tstamp, &extra)
_, blid, sresult, err := chain.Accept_new_block(tstamp, mbl_block_data_bytes)
if sresult {
//logger.Infof("Submitted block %s accepted", blid)
if blid.IsZero() {
sess.miniblocks++
} else {
sess.blocks++
}
}
})
u.OnClose(func(c *websocket.Conn, err error) {
client_list_mutex.Lock()
delete(client_list, c)
client_list_mutex.Unlock()
})
return u
}
func onWebsocket(w http.ResponseWriter, r *http.Request) {
if !strings.HasPrefix(r.URL.Path, "/ws/") {
http.NotFound(w, r)
return
}
address := strings.TrimPrefix(r.URL.Path, "/ws/")
addr, err := globals.ParseValidateAddress(address)
if err != nil {
fmt.Fprintf(w, "err: %s\n", err)
return
}
addr_raw := addr.PublicKey.EncodeCompressed()
upgrader := newUpgrader()
conn, err := upgrader.Upgrade(w, r, nil)
if err != nil {
//panic(err)
return
}
wsConn := conn.(*websocket.Conn)
session := user_session{address: *addr, address_sum: graviton.Sum(addr_raw)}
wsConn.SetSession(&session)
client_list_mutex.Lock()
client_list[wsConn] = &session
client_list_mutex.Unlock()
}
func Getwork_server() {
var err error
logger_getwork = globals.Logger.WithName("GETWORK")
logging.SetLevel(logging.LevelNone) //LevelDebug)//LevelNone)
tlsConfig := &tls.Config{
Certificates: []tls.Certificate{generate_random_tls_cert()},
InsecureSkipVerify: true,
}
mux := &http.ServeMux{}
mux.HandleFunc("/", onWebsocket) // handle everything
default_address := fmt.Sprintf("0.0.0.0:%d", globals.Config.GETWORK_Default_Port)
if _, ok := globals.Arguments["--getwork-bind"]; ok && globals.Arguments["--getwork-bind"] != nil {
addr, err := net.ResolveTCPAddr("tcp", globals.Arguments["--getwork-bind"].(string))
if err != nil {
logger_getwork.Error(err, "--getwork-bind address is invalid")
return
} else {
if addr.Port == 0 {
logger_getwork.Info("GETWORK server is disabled, no ports will be opened for miners to get work")
return
} else {
default_address = addr.String()
}
}
}
logger_getwork.Info("GETWORK will listen", "address", default_address)
svr = nbhttp.NewServer(nbhttp.Config{
Name: "GETWORK",
Network: "tcp",
AddrsTLS: []string{default_address},
TLSConfig: tlsConfig,
Handler: mux,
MaxLoad: 10 * 1024,
MaxWriteBufferSize: 32 * 1024,
ReleaseWebsocketPayload: true,
KeepaliveTime: 240 * time.Hour, // we expect every miner to find a block within 10 days
NPoller: runtime.NumCPU(),
})
svr.OnReadBufferAlloc(func(c *nbio.Conn) []byte {
return memPool.Get().([]byte)
})
svr.OnReadBufferFree(func(c *nbio.Conn, b []byte) {
memPool.Put(b)
})
globals.Cron.AddFunc("@every 2s", SendJob) // if the daemon restarts, automatically send jobs
if err = svr.Start(); err != nil {
logger_getwork.Error(err, "nbio.Start failed.")
return
}
logger.Info("GETWORK/Websocket server started")
svr.Wait()
defer svr.Stop()
}
// generate default tls cert to encrypt everything
// NOTE: this does NOT protect from individual active man-in-the-middle attacks
func generate_random_tls_cert() tls.Certificate {
/* RSA can do only ~500 exchanges per second, we need to be faster
* reference https://github.com/golang/go/issues/20058
key, err := rsa.GenerateKey(rand.Reader, 512) // currently using minimum size
if err != nil {
log.Fatal("Private key cannot be created.", err.Error())
}
// Generate a pem block with the private key
keyPem := pem.EncodeToMemory(&pem.Block{
Type: "RSA PRIVATE KEY",
Bytes: x509.MarshalPKCS1PrivateKey(key),
})
*/
// EC256 does roughly 20000 exchanges per second
key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
if err != nil {
logger.Error(err, "Unable to generate ECDSA private key")
panic(err)
}
b, err := x509.MarshalECPrivateKey(key)
if err != nil {
logger.Error(err, "Unable to marshal ECDSA private key")
panic(err)
}
// Generate a pem block with the private key
keyPem := pem.EncodeToMemory(&pem.Block{Type: "EC PRIVATE KEY", Bytes: b})
tml := x509.Certificate{
SerialNumber: big.NewInt(int64(time.Now().UnixNano())),
// TODO do we need to add more parameters to make our certificate more authentic
// and thwart traffic identification at mass scale
// you can add any attr that you need
NotBefore: time.Now().AddDate(0, -1, 0),
NotAfter: time.Now().AddDate(1, 0, 0),
// you have to generate a different serial number each execution
/*
Subject: pkix.Name{
CommonName: "New Name",
Organization: []string{"New Org."},
},
BasicConstraintsValid: true, // even basic constraints are not required
*/
}
cert, err := x509.CreateCertificate(rand.Reader, &tml, &tml, &key.PublicKey, key)
if err != nil {
logger.Error(err, "Certificate cannot be created.")
panic(err)
}
// Generate a pem block with the certificate
certPem := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: cert})
tlsCert, err := tls.X509KeyPair(certPem, keyPem)
if err != nil {
logger.Error(err, "Certificate cannot be loaded.")
panic(err)
}
return tlsCert
}


@ -0,0 +1,368 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package rpc
import "io"
import "os"
import "net"
import "fmt"
import "net/http"
import "net/http/pprof"
import "time"
import "sort"
import "sync"
import "sync/atomic"
import "context"
import "strings"
import "runtime/debug"
import "encoding/json"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/globals"
import "github.com/deroproject/derohe/blockchain"
import "github.com/deroproject/derohe/glue/rwc"
import "github.com/deroproject/derohe/metrics"
import "github.com/go-logr/logr"
import "github.com/gorilla/websocket"
import "github.com/creachadair/jrpc2"
import "github.com/creachadair/jrpc2/handler"
import "github.com/creachadair/jrpc2/channel"
//import "github.com/creachadair/jrpc2/server"
import "github.com/creachadair/jrpc2/jhttp"
/* this file implements the rpcserver api, so as wallet and block explorer tools can work without migration */
// all components requiring access to the blockchain must use this struct to communicate
// this structure must be updated while holding the mutex
type RPCServer struct {
srv *http.Server
mux *http.ServeMux
Exit_Event chan bool // blockchain is shutting down and we must quit ASAP
sync.RWMutex
}
//var Exit_In_Progress bool
var chain *blockchain.Blockchain
var logger logr.Logger
var client_connections sync.Map
var options = &jrpc2.ServerOptions{AllowPush: true, RPCLog: metrics_generator{}, DecodeContext: func(ctx context.Context, method string, param json.RawMessage) (context.Context, json.RawMessage, error) {
t := time.Now()
return context.WithValue(ctx, "start_time", &t), param, nil
}}
type metrics_generator struct{}
func (metrics_generator) LogRequest(ctx context.Context, req *jrpc2.Request) {}
func (metrics_generator) LogResponse(ctx context.Context, resp *jrpc2.Response) {
req := jrpc2.InboundRequest(ctx) // we cannot do anything here
if req == nil {
return
}
start_time, ok := ctx.Value("start_time").(*time.Time)
if !ok {
return //panic("cannot find time in context")
}
method := req.Method()
metrics.Set.GetOrCreateHistogram(method + "_duration_histogram_seconds").UpdateDuration(*start_time)
metrics.Set.GetOrCreateCounter(method + "_total").Inc()
if output, err := resp.MarshalJSON(); err == nil {
metrics.Set.GetOrCreateCounter(method + "_total_out_bytes").Add(len(output))
}
}
// this function triggers notification to all clients that they should repoll
func Notify_Block_Addition() {
for {
chain.RPC_NotifyNewBlock.L.Lock()
chain.RPC_NotifyNewBlock.Wait()
chain.RPC_NotifyNewBlock.L.Unlock()
go func() {
defer globals.Recover(2)
client_connections.Range(func(key, value interface{}) bool {
key.(*jrpc2.Server).Notify(context.Background(), "Block", nil)
return true
})
}()
}
}
// this function triggers notification to all clients that they should repoll
func Notify_MiniBlock_Addition() {
for {
chain.RPC_NotifyNewMiniBlock.L.Lock()
chain.RPC_NotifyNewMiniBlock.Wait()
chain.RPC_NotifyNewMiniBlock.L.Unlock()
go func() {
defer globals.Recover(2)
SendJob()
}()
}
}
func Notify_Height_Changes() {
for {
chain.RPC_NotifyNewBlock.L.Lock()
chain.RPC_NotifyNewBlock.Wait()
chain.RPC_NotifyNewBlock.L.Unlock()
go func() {
defer globals.Recover(2)
client_connections.Range(func(key, value interface{}) bool {
key.(*jrpc2.Server).Notify(context.Background(), "Height", nil)
return true
})
}()
}
}
func RPCServer_Start(params map[string]interface{}) (*RPCServer, error) {
var r RPCServer
metrics.Set.GetOrCreateGauge("rpc_client_count", func() float64 { // set a new gauge
count := float64(0)
client_connections.Range(func(k, value interface{}) bool {
count++
return true
})
return count
})
r.Exit_Event = make(chan bool)
logger = globals.Logger.WithName("RPC") // all components must use this logger
chain = params["chain"].(*blockchain.Blockchain)
go r.Run()
logger.Info("RPC/Websocket server started")
atomic.AddUint32(&globals.Subsystem_Active, 1) // increment subsystem
return &r, nil
}
// shutdown the rpc server component
func (r *RPCServer) RPCServer_Stop() {
r.Lock()
defer r.Unlock()
close(r.Exit_Event) // send signal to all connections to exit
if r.srv != nil {
r.srv.Shutdown(context.Background()) // shutdown the server
}
// TODO we must wait for connections to kill themselves
time.Sleep(1 * time.Second)
logger.Info("RPC Shutdown")
atomic.AddUint32(&globals.Subsystem_Active, ^uint32(0)) // this decrements the subsystem count by 1
}
// setup handlers
func (r *RPCServer) Run() {
// create a new mux
r.mux = http.NewServeMux()
default_address := "127.0.0.1:" + fmt.Sprintf("%d", config.Mainnet.RPC_Default_Port)
if !globals.IsMainnet() {
default_address = "127.0.0.1:" + fmt.Sprintf("%d", config.Testnet.RPC_Default_Port)
}
if _, ok := globals.Arguments["--rpc-bind"]; ok && globals.Arguments["--rpc-bind"] != nil {
addr, err := net.ResolveTCPAddr("tcp", globals.Arguments["--rpc-bind"].(string))
if err != nil {
logger.Error(err, "--rpc-bind address is invalid")
} else {
if addr.Port == 0 {
logger.Info("RPC server is disabled, no ports will be opened for RPC")
return
} else {
default_address = addr.String()
}
}
}
logger.Info("RPC will listen", "address", default_address)
r.Lock()
r.srv = &http.Server{Addr: default_address, Handler: r.mux}
r.Unlock()
r.mux.HandleFunc("/json_rpc", translate_http_to_jsonrpc_and_vice_versa)
r.mux.HandleFunc("/ws", ws_handler)
r.mux.HandleFunc("/", hello)
r.mux.HandleFunc("/metrics", metrics.WritePrometheus) // register metrics handler
//if DEBUG_MODE {
// r.mux.HandleFunc("/debug/pprof/", pprof.Index)
// Register pprof handlers individually if required
// we should provide a way to disable these
if os.Getenv("DISABLE_RUNTIME_PROFILE") == "1" { // daemon must have been started with DISABLE_RUNTIME_PROFILE=1
logger.Info("runtime profiling is disabled")
} else { // Register pprof handlers individually if required
r.mux.HandleFunc("/debug/pprof/", pprof.Index)
r.mux.HandleFunc("/debug/pprof/cmdline", pprof.Cmdline)
r.mux.HandleFunc("/debug/pprof/profile", pprof.Profile)
r.mux.HandleFunc("/debug/pprof/symbol", pprof.Symbol)
r.mux.HandleFunc("/debug/pprof/trace", pprof.Trace)
}
go Notify_Block_Addition() // process all blocks
go Notify_MiniBlock_Addition() // process all blocks
go Notify_Height_Changes() // gives notification of changed height
if err := r.srv.ListenAndServe(); err != http.ErrServerClosed {
logger.Error(err, "ListenAndServe failed")
}
}
func hello(w http.ResponseWriter, r *http.Request) {
io.WriteString(w, "DERO BLOCKCHAIN Hello world!")
}
var upgrader = websocket.Upgrader{CheckOrigin: func(r *http.Request) bool { return true }} // use default options
func ws_handler(w http.ResponseWriter, r *http.Request) {
var ws_server *jrpc2.Server
defer func() {
// safety so if anything wrong happens, verification fails
if r := recover(); r != nil {
logger.V(2).Error(nil, "Recovered while processing websocket request", "r", r, "stack", debug.Stack())
}
if ws_server != nil {
client_connections.Delete(ws_server)
}
}()
c, err := upgrader.Upgrade(w, r, nil)
if err != nil {
return
}
defer c.Close()
input_output := rwc.New(c)
ws_server = jrpc2.NewServer(d, options).Start(channel.RawJSON(input_output, input_output))
client_connections.Store(ws_server, 1)
ws_server.Wait()
}
func DAEMON_Echo(ctx context.Context, args []string) string {
return "DAEMON " + strings.Join(args, " ")
}
// used to verify whether the connection is alive
func Ping(ctx context.Context) string {
return "Pong "
}
func Echo(ctx context.Context, args []string) string {
return "DERO " + strings.Join(args, " ")
}
/*
//var internal_server = server.NewLocal(assigner,nil) // Use DERO.GetInfo names
var internal_server = server.NewLocal(historical_apis, nil) // uses traditional "getinfo" for compatibility reasons
// Bridge HTTP to the JSON-RPC server.
var bridge = jhttp.NewBridge(internal_server.Client)
*/
var historical_apis = handler.Map{"getinfo": handler.New(GetInfo),
"get_info": handler.New(GetInfo), // this is just an alias to above
"getblock": handler.New(GetBlock),
"getblockheaderbytopoheight": handler.New(GetBlockHeaderByTopoHeight),
"getblockheaderbyhash": handler.New(GetBlockHeaderByHash),
"gettxpool": handler.New(GetTxPool),
"getrandomaddress": handler.New(GetRandomAddress),
"gettransactions": handler.New(GetTransaction),
"sendrawtransaction": handler.New(SendRawTransaction),
"submitblock": handler.New(SubmitBlock),
"getheight": handler.New(GetHeight),
"getblockcount": handler.New(GetBlockCount),
"getlastblockheader": handler.New(GetLastBlockHeader),
"getblocktemplate": handler.New(GetBlockTemplate),
"getencryptedbalance": handler.New(GetEncryptedBalance),
"getsc": handler.New(GetSC),
"nametoaddress": handler.New(NameToAddress)}
var servicemux = handler.ServiceMap{
"DERO": handler.Map{
"Echo": handler.New(Echo),
"Ping": handler.New(Ping),
"GetInfo": handler.New(GetInfo),
"GetBlock": handler.New(GetBlock),
"GetBlockHeaderByTopoHeight": handler.New(GetBlockHeaderByTopoHeight),
"GetBlockHeaderByHash": handler.New(GetBlockHeaderByHash),
"GetTxPool": handler.New(GetTxPool),
"GetRandomAddress": handler.New(GetRandomAddress),
"GetTransaction": handler.New(GetTransaction),
"SendRawTransaction": handler.New(SendRawTransaction),
"SubmitBlock": handler.New(SubmitBlock),
"GetHeight": handler.New(GetHeight),
"GetBlockCount": handler.New(GetBlockCount),
"GetLastBlockHeader": handler.New(GetLastBlockHeader),
"GetBlockTemplate": handler.New(GetBlockTemplate),
"GetEncryptedBalance": handler.New(GetEncryptedBalance),
"GetSC": handler.New(GetSC),
"NameToAddress": handler.New(NameToAddress),
},
"DAEMON": handler.Map{
"Echo": handler.New(DAEMON_Echo),
},
}
type dummyassigner int
var d dummyassigner
func (d dummyassigner) Assign(ctx context.Context, method string) (handler jrpc2.Handler) {
if handler = servicemux.Assign(ctx, method); handler != nil {
return
}
if handler = historical_apis.Assign(ctx, method); handler != nil {
return
}
return nil
}
func (d dummyassigner) Names() []string {
names := servicemux.Names()
hist_names := historical_apis.Names()
names = append(names, hist_names...)
sort.Strings(names)
return names
}
var bridge = jhttp.NewBridge(d, nil)
func translate_http_to_jsonrpc_and_vice_versa(w http.ResponseWriter, r *http.Request) {
bridge.ServeHTTP(w, r)
}
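The `dummyassigner` above resolves both the historical flat names and the `ServiceMap` names, so two request envelopes are equivalent on the wire. A small sketch of the JSON-RPC 2.0 envelope the `/json_rpc` bridge accepts (the `envelope` helper is illustrative, not part of the daemon):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// envelope builds a minimal JSON-RPC 2.0 request body; the same method can
// be addressed by its historical flat name ("getinfo") or its service-map
// form ("DERO.GetInfo") — the assigner tries the service map first, then
// falls back to the historical map.
func envelope(method string) string {
	b, _ := json.Marshal(map[string]interface{}{
		"jsonrpc": "2.0",
		"id":      "1",
		"method":  method,
	})
	return string(b)
}

func main() {
	fmt.Println(envelope("getinfo"))
	fmt.Println(envelope("DERO.GetInfo"))
}
```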

cmd/derod/update.go Normal file

@ -0,0 +1,314 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
import "fmt"
import "net"
import "time"
import "io"
//import "io/ioutil"
//import "net/http"
import "context"
import "strings"
import "math/rand"
import "encoding/base64"
import "encoding/json"
import "runtime/debug"
import "encoding/binary"
//import "crypto/tls"
import "github.com/blang/semver/v4"
import "github.com/miekg/dns"
import "github.com/deroproject/derohe/config"
import "github.com/deroproject/derohe/globals"
/* this needs to be set on update.dero.io as a TXT record, in base64-encoded form
*
{ "version" : "1.0.2",
"message" : "\n\n\u001b[32m This is a mandatory update\u001b[0m",
"critical" : ""
}
base64 eyAidmVyc2lvbiIgOiAiMS4wLjIiLAogIm1lc3NhZ2UiIDogIlxuXG5cdTAwMWJbMzJtIFRoaXMgaXMgYSBtYW5kYXRvcnkgdXBkYXRlXHUwMDFiWzBtIiwgCiJjcml0aWNhbCIgOiAiIiAKfQ==
TXT record should be set as update=eyAidmVyc2lvbiIgOiAiMS4wLjIiLAogIm1lc3NhZ2UiIDogIlxuXG5cdTAwMWJbMzJtIFRoaXMgaXMgYSBtYW5kYXRvcnkgdXBkYXRlXHUwMDFiWzBtIiwgCiJjcml0aWNhbCIgOiAiIiAKfQ==
*/
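// A minimal, self-contained sketch of producing the TXT value described above
// (the helper and type names here are illustrative, not part of this package):
//
// ```go
// package main
//
// import (
// 	"encoding/base64"
// 	"encoding/json"
// 	"fmt"
// )
//
// // updateMessage mirrors the JSON payload published in the TXT record.
// type updateMessage struct {
// 	Version  string `json:"version"`
// 	Message  string `json:"message"`
// 	Critical string `json:"critical"`
// }
//
// // buildTXTRecord encodes the message as base64 JSON and prefixes "update=",
// // producing the exact string to set on the TXT record.
// func buildTXTRecord(u updateMessage) string {
// 	j, _ := json.Marshal(u)
// 	return "update=" + base64.StdEncoding.EncodeToString(j)
// }
//
// func main() {
// 	fmt.Println(buildTXTRecord(updateMessage{Version: "1.0.2"}))
// }
// ```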
func check_update_loop() {
for {
if config.DNS_NOTIFICATION_ENABLED {
globals.Logger.V(2).Info("Checking update..")
check_update()
}
time.Sleep(2 * time.Hour) // check every 2 hours
}
}
// wrapper to make requests using proxy
func dialContextwrapper(ctx context.Context, network, address string) (net.Conn, error) {
return globals.Dialer.Dial(network, address)
}
type socks_dialer net.Dialer
func (d *socks_dialer) Dial(network, address string) (net.Conn, error) {
return globals.Dialer.Dial(network, address)
}
func (d *socks_dialer) DialContext(ctx context.Context, network, address string) (net.Conn, error) {
return globals.Dialer.Dial(network, address)
}
func dial_random_read_response(in []byte) (out []byte, err error) {
defer func() {
if r := recover(); r != nil {
logger.V(2).Error(nil, "Recovered while checking updates", "r", r, "stack", debug.Stack())
}
}()
// since we may be connecting through socks, grab the remote ip for our purpose right now
//conn, err := globals.Dialer.Dial("tcp", "208.67.222.222:53")
//conn, err := net.Dial("tcp", "8.8.8.8:53")
random_feeder := rand.New(globals.NewCryptoRandSource()) // use a cryptographically secure source
server_address := config.DNS_servers[random_feeder.Intn(len(config.DNS_servers))] // choose a random server
conn, err := net.Dial("tcp", server_address)
//conn, err := tls.Dial("tcp", remote_ip.String(),&tls.Config{InsecureSkipVerify: true})
if err != nil {
logger.V(2).Error(err, "Dial failed ")
return
}
defer conn.Close() // close connection at end
// upgrade connection TO TLS ( tls.Dial does NOT support proxy)
//conn = tls.Client(conn, &tls.Config{InsecureSkipVerify: true})
//rlog.Tracef(1, "Sending %d bytes", len(in))
var buf [2]byte
binary.BigEndian.PutUint16(buf[:], uint16(len(in)))
conn.Write(buf[:]) // write length in bigendian format
conn.Write(in) // write data
// now we must wait for response to arrive
var frame_length_buf [2]byte
conn.SetReadDeadline(time.Now().Add(20 * time.Second))
nbyte, err := io.ReadFull(conn, frame_length_buf[:])
if err != nil || nbyte != 2 {
// error while reading from connection we must disconnect it
logger.V(2).Error(err, "Could not read DNS length prefix")
return
}
frame_length := binary.BigEndian.Uint16(frame_length_buf[:])
if frame_length == 0 {
// most probably memory DDOS attack, kill the connection
logger.V(2).Error(nil, "Frame length is too small")
return
}
out = make([]byte, frame_length)
conn.SetReadDeadline(time.Now().Add(20 * time.Second))
data_size, err := io.ReadFull(conn, out)
if err != nil || data_size <= 0 || uint16(data_size) != frame_length {
// error while reading from connection we must kill it
//rlog.Warnf("Could not read DNS data size read %d, frame length %d err %s", data_size, frame_length, err)
logger.V(2).Error(err, "Could not read DNS data")
return
}
out = out[:frame_length]
return
}
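// The 2-byte big-endian length prefix used above is standard DNS-over-TCP
// framing; a standalone sketch over any io.Reader/io.Writer (the function
// names here are illustrative, not part of this package):
//
// ```go
// package main
//
// import (
// 	"bytes"
// 	"encoding/binary"
// 	"fmt"
// 	"io"
// )
//
// // writeFrame sends a 2-byte big-endian length followed by the payload,
// // matching how dial_random_read_response writes its DNS query.
// func writeFrame(w io.Writer, payload []byte) error {
// 	var hdr [2]byte
// 	binary.BigEndian.PutUint16(hdr[:], uint16(len(payload)))
// 	if _, err := w.Write(hdr[:]); err != nil {
// 		return err
// 	}
// 	_, err := w.Write(payload)
// 	return err
// }
//
// // readFrame reads the length prefix, then exactly that many payload bytes.
// func readFrame(r io.Reader) ([]byte, error) {
// 	var hdr [2]byte
// 	if _, err := io.ReadFull(r, hdr[:]); err != nil {
// 		return nil, err
// 	}
// 	out := make([]byte, binary.BigEndian.Uint16(hdr[:]))
// 	_, err := io.ReadFull(r, out)
// 	return out, err
// }
//
// func main() {
// 	var buf bytes.Buffer
// 	writeFrame(&buf, []byte("query"))
// 	out, _ := readFrame(&buf)
// 	fmt.Printf("%s\n", out)
// }
// ```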
func check_update() {
// add panic handler, in case DNS acts rogue and tries to attack
defer func() {
if r := recover(); r != nil {
logger.V(2).Error(nil, "Recovered while checking updates", "r", r, "stack", debug.Stack())
}
}()
if !config.DNS_NOTIFICATION_ENABLED { // if DNS notifications are disabled bail out
return
}
/* var u update_message
u.Version = "2.0.0"
u.Message = "critical msg txt\x1b[35m should \n be in RED"
globals.Logger.Infof("testing %s",u.Message)
j,err := json.Marshal(u)
globals.Logger.Infof("json format %s err %s",j,err)
*/
/*extract_parse_version("update=eyAidmVyc2lvbiIgOiAiMS4xLjAiLCAibWVzc2FnZSIgOiAiXG5cblx1MDAxYlszMm0gVGhpcyBpcyBhIG1hbmRhdG9yeSB1cGdyYWRlIHBsZWFzZSB1cGdyYWRlIGZyb20geHl6IFx1MDAxYlswbSIsICJjcml0aWNhbCIgOiAiIiB9")
return
*/
m1 := new(dns.Msg)
// m1.SetEdns0(65000, true) is disabled until further investigation, as dnssec probably leaks the current timestamp
m1.Id = dns.Id()
m1.RecursionDesired = true
m1.Question = make([]dns.Question, 1)
m1.Question[0] = dns.Question{Name: config.DNS_UPDATE_CHECK, Qtype: dns.TypeTXT, Qclass: dns.ClassINET}
packed, err := m1.Pack()
if err != nil {
globals.Logger.V(2).Error(err, "Error while packing DNS query for program update")
return
}
/*
// setup a http client
httpTransport := &http.Transport{}
httpClient := &http.Client{Transport: httpTransport}
// set our socks5 as the dialer
httpTransport.Dial = globals.Dialer.Dial
packed_base64:= base64.RawURLEncoding.EncodeToString(packed)
response, err := httpClient.Get("https://1.1.1.1/dns-query?ct=application/dns-udpwireformat&dns="+packed_base64)
_ = packed_base64
if err != nil {
rlog.Warnf("error making DOH request err %s",err)
return
}
defer response.Body.Close()
contents, err := ioutil.ReadAll(response.Body)
if err != nil {
rlog.Warnf("error reading DOH response err %s",err)
return
}
*/
contents, err := dial_random_read_response(packed)
if err != nil {
logger.V(2).Error(err, "error reading response from DNS server")
return
}
//rlog.Debugf("DNS response length from DNS server %d bytes", len(contents))
err = m1.Unpack(contents)
if err != nil {
logger.V(2).Error(err, "error decoding DOH response")
return
}
for i := range m1.Answer {
if t, ok := m1.Answer[i].(*dns.TXT); ok {
// TXT records may arrive split into multiple strings, so join them before parsing
logger.V(2).Info("Processing record ", "record", t.Txt)
joined := strings.Join(t.Txt, "")
extract_parse_version(joined)
}
}
//globals.Logger.Infof("response %+v err ",m1,err)
}
type update_message struct {
Version string `json:"version"`
Message string `json:"message"`
Critical string `json:"critical"` // always broadcasted, without checks for version
}
// our update messages are TXT records of the following format
// update=base64 encoded json
func extract_parse_version(str string) {
strl := strings.ToLower(str)
if !strings.HasPrefix(strl, "update=") {
logger.V(2).Info("Skipping record", "record", str)
return
}
parts := strings.SplitN(str, "=", 2)
if len(parts) != 2 {
return
}
data, err := base64.StdEncoding.DecodeString(parts[1])
if err != nil {
logger.V(2).Error(err, "Could NOT decode base64 update message", "data", parts[1])
return
}
var u update_message
err = json.Unmarshal(data, &u)
//globals.Logger.Infof("data %+v", u)
if err != nil {
logger.V(2).Error(err, "Could NOT decode json update message")
return
}
uversion, err := semver.ParseTolerant(u.Version)
if err != nil {
logger.V(2).Error(err, "Could NOT parse update version")
}
current_version := config.Version
current_version.Pre = current_version.Pre[:0]
current_version.Build = current_version.Build[:0]
// give warning to update the daemon
if u.Message != "" && err == nil { // check semver
if current_version.LT(uversion) {
if current_version.Major != uversion.Major { // if major version is different give an extra warning
logger.Info("\033[31m CRITICAL MAJOR update, please upgrade ASAP.\033[0m")
}
logger.Info(fmt.Sprintf("%s", u.Message)) // give the version upgrade message
logger.Info(fmt.Sprintf("\033[33mCurrent Version %s \033[32m-> Upgrade Version %s\033[0m ", current_version.String(), uversion.String()))
}
}
if u.Critical != "" { // give the critical upgrade message
logger.Info(fmt.Sprintf("%s", u.Critical))
}
}

90
cmd/explorer/LICENSE Normal file

@ -0,0 +1,90 @@
RESEARCH LICENSE
Version 1.1.2
I. DEFINITIONS.
"Licensee" means You and any other party that has entered into and has in effect a version of this License.
"Licensor" means DERO PROJECT (GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8) and its successors and assignees.
"Modifications" means any (a) change or addition to the Technology or (b) new source or object code implementing any portion of the Technology.
"Research Use" means research, evaluation, or development for the purpose of advancing knowledge, teaching, learning, or customizing the Technology for personal use. Research Use expressly excludes use or distribution for direct or indirect commercial (including strategic) gain or advantage.
"Technology" means the source code, object code and specifications of the technology made available by Licensor pursuant to this License.
"Technology Site" means the website designated by Licensor for accessing the Technology.
"You" means the individual executing this License or the legal entity or entities represented by the individual executing this License.
II. PURPOSE.
Licensor is licensing the Technology under this Research License (the "License") to promote research, education, innovation, and development using the Technology.
COMMERCIAL USE AND DISTRIBUTION OF TECHNOLOGY AND MODIFICATIONS IS PERMITTED ONLY UNDER AN APPROPRIATE COMMERCIAL USE LICENSE AVAILABLE FROM LICENSOR AT <url>.
III. RESEARCH USE RIGHTS.
A. Subject to the conditions contained herein, Licensor grants to You a non-exclusive, non-transferable, worldwide, and royalty-free license to do the following for Your Research Use only:
1. reproduce, create Modifications of, and use the Technology alone, or with Modifications;
2. share source code of the Technology alone, or with Modifications, with other Licensees;
3. distribute object code of the Technology, alone, or with Modifications, to any third parties for Research Use only, under a license of Your choice that is consistent with this License; and
4. publish papers and books discussing the Technology which may include relevant excerpts that do not in the aggregate constitute a significant portion of the Technology.
B. Residual Rights. You may use any information in intangible form that you remember after accessing the Technology, except when such use violates Licensor's copyrights or patent rights.
C. No Implied Licenses. Other than the rights granted herein, Licensor retains all rights, title, and interest in Technology, and You retain all rights, title, and interest in Your Modifications and associated specifications, subject to the terms of this License.
D. Open Source Licenses. Portions of the Technology may be provided with notices and open source licenses from open source communities and third parties that govern the use of those portions, and any licenses granted hereunder do not alter any rights and obligations you may have under such open source licenses, however, the disclaimer of warranty and limitation of liability provisions in this License will apply to all Technology in this distribution.
IV. INTELLECTUAL PROPERTY REQUIREMENTS
As a condition to Your License, You agree to comply with the following restrictions and responsibilities:
A. License and Copyright Notices. You must include a copy of this License in a Readme file for any Technology or Modifications you distribute. You must also include the following statement, "Use and distribution of this technology is subject to the Java Research License included herein", (a) once prominently in the source code tree and/or specifications for Your source code distributions, and (b) once in the same file as Your copyright or proprietary notices for Your binary code distributions. You must cause any files containing Your Modification to carry prominent notice stating that You changed the files. You must not remove or alter any copyright or other proprietary notices in the Technology.
B. Licensee Exchanges. Any Technology and Modifications You receive from any Licensee are governed by this License.
V. GENERAL TERMS.
A. Disclaimer Of Warranties.
TECHNOLOGY IS PROVIDED "AS IS", WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, WARRANTIES THAT ANY SUCH TECHNOLOGY IS FREE OF DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR PURPOSE, OR NON-INFRINGING OF THIRD PARTY RIGHTS. YOU AGREE THAT YOU BEAR THE ENTIRE RISK IN CONNECTION WITH YOUR USE AND DISTRIBUTION OF ANY AND ALL TECHNOLOGY UNDER THIS LICENSE.
B. Infringement; Limitation Of Liability.
1. If any portion of, or functionality implemented by, the Technology becomes the subject of a claim or threatened claim of infringement ("Affected Materials"), Licensor may, in its unrestricted discretion, suspend Your rights to use and distribute the Affected Materials under this License. Such suspension of rights will be effective immediately upon Licensor's posting of notice of suspension on the Technology Site.
2. IN NO EVENT WILL LICENSOR BE LIABLE FOR ANY DIRECT, INDIRECT, PUNITIVE, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH OR ARISING OUT OF THIS LICENSE (INCLUDING, WITHOUT LIMITATION, LOSS OF PROFITS, USE, DATA, OR ECONOMIC ADVANTAGE OF ANY SORT), HOWEVER IT ARISES AND ON ANY THEORY OF LIABILITY (including negligence), WHETHER OR NOT LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. LIABILITY UNDER THIS SECTION V.B.2 SHALL BE SO LIMITED AND EXCLUDED, NOTWITHSTANDING FAILURE OF THE ESSENTIAL PURPOSE OF ANY REMEDY.
C. Termination.
1. You may terminate this License at any time by notifying Licensor in writing.
2. All Your rights will terminate under this License if You fail to comply with any of its material terms or conditions and do not cure such failure within thirty (30) days after becoming aware of such noncompliance.
3. Upon termination, You must discontinue all uses and distribution of the Technology, and all provisions of this Section V shall survive termination.
D. Miscellaneous.
1. Trademark. You agree to comply with Licensor's Trademark & Logo Usage Requirements, if any and as modified from time to time, available at the Technology Site. Except as expressly provided in this License, You are granted no rights in or to any Licensor's trademarks now or hereafter used or licensed by Licensor.
2. Integration. This License represents the complete agreement of the parties concerning the subject matter hereof.
3. Severability. If any provision of this License is held unenforceable, such provision shall be reformed to the extent necessary to make it enforceable unless to do so would defeat the intent of the parties, in which case, this License shall terminate.
4. Governing Law. This License is governed by the laws of the United States and the State of California, as applied to contracts entered into and performed in California between California residents. In no event shall this License be construed against the drafter.
5. Export Control. You agree to comply with the U.S. export controls and trade laws of other countries that apply to Technology and Modifications.
READ ALL THE TERMS OF THIS LICENSE CAREFULLY BEFORE ACCEPTING.
BY CLICKING ON THE YES BUTTON BELOW OR USING THE TECHNOLOGY, YOU ARE ACCEPTING AND AGREEING TO ABIDE BY THE TERMS AND CONDITIONS OF THIS LICENSE. YOU MUST BE AT LEAST 18 YEARS OF AGE AND OTHERWISE COMPETENT TO ENTER INTO CONTRACTS.
IF YOU DO NOT MEET THESE CRITERIA, OR YOU DO NOT AGREE TO ANY OF THE TERMS OF THIS LICENSE, DO NOT USE THIS SOFTWARE IN ANY FORM.


@ -0,0 +1,12 @@
// Copyright 2017-2018 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
package main
import "testing"
func Test_Part1(t *testing.T) {
}

96
cmd/explorer/explorer.go Normal file

@ -0,0 +1,96 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package main
// this file implements the explorer for DERO blockchain
// this needs only RPC access
// NOTE: Only use data exported from within the RPC interface, do not directly use exported variables from packages
// NOTE: we can use structs defined within the RPCserver package
// TODO: error handling is non-existent (as this was built up in hours). Add proper error handling
//
import "time"
import "fmt"
import "os"
import "runtime"
import "github.com/docopt/docopt-go"
import "github.com/go-logr/logr"
import "github.com/deroproject/derohe/cmd/explorer/explorerlib"
import "github.com/deroproject/derohe/globals"
var command_line string = `dero_explorer
DERO HE Explorer: A secure, private blockchain with smart-contracts
Usage:
dero_explorer [--help] [--version] [--debug] [--daemon-address=<127.0.0.1:18091>] [--http-address=<0.0.0.0:8080>]
dero_explorer -h | --help
dero_explorer --version
Options:
-h --help Show this screen.
--version Show version.
--debug Debug mode enabled, print log messages
--daemon-address=<127.0.0.1:10102> connect to this daemon port as client
--http-address=<0.0.0.0:8080> explorer listens on this port to serve user requests`
var logger logr.Logger
func main() {
var err error
globals.Arguments, err = docopt.Parse(command_line, nil, true, "DERO Explorer : work in progress", false)
if err != nil {
fmt.Printf("Error while parsing options err: %s\n", err)
return
}
exename, _ := os.Executable()
f, err := os.Create(exename + ".log")
if err != nil {
fmt.Printf("Error while opening log file err: %s filename %s\n", err, exename+".log")
return
}
globals.InitializeLog(os.Stdout, f)
logger = globals.Logger.WithName("explorer")
logger.Info("DERO HE explorer : This is an alpha version, use it for testing/evaluation purposes only.")
logger.Info("Copyright 2017-2021 DERO Project. All rights reserved.")
logger.Info("", "OS", runtime.GOOS, "ARCH", runtime.GOARCH, "GOMAXPROCS", runtime.GOMAXPROCS(0))
//logger.Info("","Version", config.Version.String())
logger.V(1).Info("", "Arguments", globals.Arguments)
endpoint := "127.0.0.1:8080"
if globals.Arguments["--daemon-address"] != nil {
endpoint = globals.Arguments["--daemon-address"].(string)
}
listen_address := "0.0.0.0:8081"
if globals.Arguments["--http-address"] != nil {
listen_address = globals.Arguments["--http-address"].(string)
}
if err = explorerlib.StartServer(logger, endpoint, listen_address); err == nil {
for {
time.Sleep(time.Second)
}
}
}


@ -0,0 +1,988 @@
// Copyright 2017-2021 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license.
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
//
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
package explorerlib
// this file implements the explorer for DERO blockchain
// this needs only RPC access
// NOTE: Only use data exported from within the RPC interface, do not directly use exported variables from packages
// NOTE: we can use structs defined within the RPCserver package
// TODO: error handling is non-existent (as this was built up in hours). Add proper error handling
//
import "time"
import "fmt"
import "embed"
import "bytes"
import "unicode"
import "unsafe" // need to avoid this, but only used by byteviewer
import "strings"
import "strconv"
import "context"
import "encoding/hex"
import "net/http"
import "html/template"
//import "encoding/json"
//import "io/ioutil"
import "github.com/go-logr/logr"
import "github.com/deroproject/derohe/block"
import "github.com/deroproject/derohe/cryptography/crypto"
import "github.com/deroproject/derohe/globals"
import "github.com/deroproject/derohe/transaction"
import "github.com/deroproject/derohe/rpc"
import "github.com/deroproject/derohe/proof"
import "github.com/deroproject/derohe/glue/rwc"
import "github.com/creachadair/jrpc2"
import "github.com/creachadair/jrpc2/channel"
import "github.com/gorilla/websocket"
//go:embed templates/*.tmpl
var tpls embed.FS
//go:embed static/*
var static embed.FS
type Client struct {
WS *websocket.Conn
RPC *jrpc2.Client
}
var rpc_client = &Client{}
var Connected bool = false
var mainnet = true
var endpoint string
var replacer = strings.NewReplacer("h", ":", "m", ":", "s", "")
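// The replacer above rewrites Go duration strings into compact clock-style
// ages (e.g. "1h2m3s" -> "1:2:3"); a standalone sketch (the variable name
// here is illustrative):
//
// ```go
// package main
//
// import (
// 	"fmt"
// 	"strings"
// 	"time"
// )
//
// // ageReplacer turns time.Duration's "1h2m3s" form into "1:2:3" for display.
// var ageReplacer = strings.NewReplacer("h", ":", "m", ":", "s", "")
//
// func main() {
// 	d := time.Hour + 2*time.Minute + 3*time.Second
// 	fmt.Println(ageReplacer.Replace(d.String())) // "1h2m3s" -> "1:2:3"
// }
// ```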
var logger logr.Logger
func (cli *Client) Call(method string, params interface{}, result interface{}) error {
try := 0
try_again:
if cli == nil || !cli.IsDaemonOnline() {
go Connect()
time.Sleep(time.Second)
try++
if try < 3 {
goto try_again
}
return fmt.Errorf("client is offline or not connected")
}
return cli.RPC.CallResult(context.Background(), method, params, result)
}
// this is as simple as it gets
// single threaded communication to get the daemon status and height
// this will tell whether the wallet can connect successfully to the daemon or not
func (cli *Client) IsDaemonOnline() bool {
if cli.WS == nil || cli.RPC == nil {
return false
}
return true
}
func (cli *Client) onlinecheck_and_get_online() {
for {
if cli.IsDaemonOnline() {
var result string
if err := cli.Call("DERO.Ping", nil, &result); err != nil {
logger.V(1).Error(err, "Ping failed:")
cli.RPC.Close()
cli.WS = nil
cli.RPC = nil
Connect() // try to connect again
} else {
//fmt.Printf("Ping Received %s\n", result)
}
}
time.Sleep(time.Second)
}
}
// this is as simple as it gets
// single threaded communication to get the daemon status and height
// this will tell whether the wallet can connect successfully to the daemon or not
func Connect() (err error) {
// TODO enable socks support here
//rpc_conn, err = rpcc.Dial("ws://"+ w.Daemon_Endpoint + "/ws")
daemon_endpoint := endpoint
rpc_client.WS, _, err = websocket.DefaultDialer.Dial("ws://"+daemon_endpoint+"/ws", nil)
// notify user of any state change
// if daemon connection breaks or comes live again
if err == nil {
if !Connected {
logger.V(1).Info("Connection to RPC server successful", "daemon_endpoint", "ws://"+daemon_endpoint+"/ws")
Connected = true
}
} else {
logger.Error(err, "Error connecting to daemon")
if Connected {
logger.Error(err, "Connection to RPC server Failed ", "daemon_endpoint", "ws://"+daemon_endpoint+"/ws")
}
Connected = false
return
}
input_output := rwc.New(rpc_client.WS)
rpc_client.RPC = jrpc2.NewClient(channel.RawJSON(input_output, input_output), nil)
var result string
if err := rpc_client.Call("DERO.Ping", nil, &result); err != nil {
logger.V(1).Error(err, "Ping failed:")
} else {
// fmt.Printf("Ping Received %s\n", result)
}
var info rpc.GetInfo_Result
// collect all the data afresh, execute rpc to service
if err = rpc_client.Call("DERO.GetInfo", nil, &info); err != nil {
logger.V(1).Error(err, "GetInfo failed:")
} else {
mainnet = !info.Testnet // inverse of testnet is mainnet
}
return nil
}
func StartServer(loggerb logr.Logger, daemon_endpoint string, listen_address string) (err error) {
logger = globals.Logger.WithName("explorer")
endpoint = daemon_endpoint
logger.Info("Daemon RPC endpoint ", "endpoint", endpoint)
logger.Info("Will listen ", "address", listen_address)
// execute rpc to service
err = Connect()
if err == nil {
logger.Info("Connection to RPC server successful")
} else {
logger.Error(err, "Connection to RPC server Failed")
return
}
go rpc_client.onlinecheck_and_get_online() // keep connecting to the server
all_templates, err = template.ParseFS(tpls, "templates/*.tmpl")
if err != nil {
logger.Error(err, "error parsing templates.")
return
}
http.Handle("/static/", http.StripPrefix("/static/", http.FileServer(http.FS(static)))) // include all static assets
http.HandleFunc("/search", search_handler)
http.HandleFunc("/page/", page_handler)
http.HandleFunc("/block/", block_handler)
http.HandleFunc("/txpool/", txpool_handler)
http.HandleFunc("/tx/", tx_handler)
http.HandleFunc("/", root_handler)
go func() {
logger.Info("Listening for requests")
err = http.ListenAndServe(listen_address, nil)
logger.Error(err, "ListenAndServe failed")
}()
time.Sleep(50 * time.Millisecond)
return err
}
// all the tx info which ever needs to be printed
type txinfo struct {
Hex string // raw tx
Height string // height at which tx was mined
HeightBuilt uint64 // height at which tx was built
RootHash string // roothash which forms the basis for balance tree
TransactionType string // transaction type
Depth int64
Timestamp uint64 // timestamp
Age string // time diff from current time
Block_time string // UTC time from block header
Epoch uint64 // Epoch time
In_Pool bool // whether tx was in pool
Hash string // hash for hash
PrefixHash string // prefix hash
Version int // version of tx
Size string // size of tx in KB
Sizeuint64 uint64 // size of tx in bytes
Burn_Value string // value of burned amount
Fee string // fee in TX
Feeuint64 uint64 // fee in atomic units
In int // inputs counts
Out int // outputs counts
Amount string
CoinBase bool // is tx coin base
Extra string // extra within tx
Keyimages []string // key images within tx
OutAddress []string // contains output secret key
OutOffset []uint64 // contains index offsets
Type string // ringct or ruffct ( bulletproof)
ValidBlock string // the tx is valid in which block
InvalidBlock []string // the tx is invalid in which block
Skipped bool // this is only valid, when a block is being listed
Ring_size int
Ring [][]string // contains entire ring in string form
TXpublickey string
PayID32 string // 32 byte payment ID
PayID8 string // 8 byte encrypted payment ID
Proof_address string // address against which the proving ran
Proof_index int64 // proof satisfied for which index
Proof_amount string // decoded amount
Proof_Payload_raw string // payload raw bytes
Proof_Payload string // if proof decoded, decoded , else decode error
Proof_error string // error if any while decoding proof
SC_TX_Available string //bool // whether this contains an SC TX
SC_Signer string // whether SC signer
SC_Signer_verified string // whether SC signer can be verified successfully
SC_Balance uint64 // SC SC_Balance in atomic units
SC_Balance_string string // SC_Balance in DERO
SC_Keys map[string]string // SC key value of
SC_Args rpc.Arguments // rpc.Arguments
SC_Code string // install SC
SC_State rpc.GetSC_Result // current SC state
SC_Install bool
Assets []Asset
}
type Asset struct {
SCID string
Fees string
Burn string
Ring []string
Ring_size int
}
// any information for block which needs to be printed
type block_info struct {
Block block.Block
Major_Version uint64
Minor_Version uint64
Height int64
TopoHeight int64
Depth int64
Timestamp uint64
Hash string
Tips []string
Nonce uint64
Fees string
Reward string
Size string
Age string // time diff from current time
Block_time string // UTC time from block header
Epoch uint64 // Epoch time
Outputs string
Mtx txinfo
Txs []txinfo
Orphan_Status bool
SyncBlock bool // whether the block is sync block
Tx_Count int
}
var all_templates *template.Template
// load and setup block_info from rpc
// if hash is less than 64 characters then it is considered a height parameter
func load_block_from_rpc(info *block_info, block_hash string, recursive bool) (err error) {
var bl block.Block
var bresult rpc.GetBlock_Result
var block_height int
var block_bin []byte
if len(block_hash) != 64 { // parameter is a height
fmt.Sscanf(block_hash, "%d", &block_height)
// user requested block height
logger.V(1).Info("User requested block", "topoheight", block_height, "user input", block_hash)
if err = rpc_client.Call("DERO.GetBlock", rpc.GetBlock_Params{Height: uint64(block_height)}, &bresult); err != nil {
return fmt.Errorf("getblock rpc failed. err %s", err)
}
} else { // parameter is the block hash
logger.V(1).Info("User requested block using hash", "block_hash", block_hash)
if err = rpc_client.Call("DERO.GetBlock", rpc.GetBlock_Params{Hash: block_hash}, &bresult); err != nil {
return fmt.Errorf("getblock rpc failed")
}
}
// fmt.Printf("block %d %+v\n",i, bresult)
info.TopoHeight = bresult.Block_Header.TopoHeight
info.Height = bresult.Block_Header.Height
info.Depth = bresult.Block_Header.Depth
duration_millisecond := (uint64(time.Now().UTC().UnixMilli()) - bresult.Block_Header.Timestamp)
info.Age = replacer.Replace((time.Duration(duration_millisecond) * time.Millisecond).String())
info.Block_time = time.Unix(0, int64(bresult.Block_Header.Timestamp*uint64(time.Millisecond))).Format("2006-01-02 15:04:05")
info.Epoch = bresult.Block_Header.Timestamp
info.Outputs = fmt.Sprintf("%.03f", float32(bresult.Block_Header.Reward)/1000000000000.0)
info.Size = "N/A"
info.Hash = bresult.Block_Header.Hash
//info.Prev_Hash = bresult.Block_Header.Prev_Hash
info.Tips = bresult.Block_Header.Tips
info.Orphan_Status = bresult.Block_Header.Orphan_Status
info.SyncBlock = bresult.Block_Header.SyncBlock
info.Nonce = bresult.Block_Header.Nonce
info.Major_Version = bresult.Block_Header.Major_Version
info.Minor_Version = bresult.Block_Header.Minor_Version
info.Reward = fmt.Sprintf("%.03f", float32(bresult.Block_Header.Reward)/1000000000000.0)
block_bin, _ = hex.DecodeString(bresult.Blob)
//log.Infof("block %+v bresult %+v ", bl, bresult)
bl.Deserialize(block_bin)
info.Block = bl
if recursive {
// fill in miner tx info
//err = load_tx_from_rpc(&info.Mtx, bl.Miner_TX.GetHash().String()) //TODO handle error
load_tx_info_from_tx(&info.Mtx, &bl.Miner_TX)
// miner tx reward is calculated at runtime due to client protocol reasons in dero atlantis
// feed what is calculated by the daemon
reward := uint64(0)
if bl.Miner_TX.TransactionType == transaction.PREMINE {
reward += bl.Miner_TX.Value
}
info.Mtx.Amount = fmt.Sprintf("%.05f", float64(reward+bresult.Block_Header.Reward)/100000)
//logger.Error(err,"loading miner tx from rpc ", "txid", bl.Miner_TX.GetHash().String())
info.Tx_Count = len(bl.Tx_hashes)
fees := uint64(0)
size := uint64(len(bl.Serialize()))
// if we have any other tx load them also
for i := 0; i < len(bl.Tx_hashes); i++ {
var tx txinfo
err = load_tx_from_rpc(&tx, bl.Tx_hashes[i].String()) //TODO handle error
if err != nil {
logger.V(1).Error(err, "loading tx ", "txid", bl.Tx_hashes[i].String())
}
if tx.ValidBlock != bresult.Block_Header.Hash { // track skipped status
tx.Skipped = true
}
info.Txs = append(info.Txs, tx)
fees += tx.Feeuint64
size += tx.Sizeuint64
}
info.Fees = fmt.Sprintf("%.03f", float32(fees)/100000.0)
info.Size = fmt.Sprintf("%.03f", float32(size)/1024)
}
return
}
// this will fill up the info struct from the tx
func load_tx_info_from_tx(info *txinfo, tx *transaction.Transaction) (err error) {
info.Hash = tx.GetHash().String()
//info.PrefixHash = tx.GetPrefixHash().String()
info.TransactionType = tx.TransactionType.String()
info.Size = fmt.Sprintf("%.03f", float32(len(tx.Serialize()))/1024)
info.Sizeuint64 = uint64(len(tx.Serialize()))
info.Version = int(tx.Version)
//info.Extra = fmt.Sprintf("%x", tx.Extra)
if len(tx.Payloads) >= 1 {
info.RootHash = fmt.Sprintf("%x", tx.Payloads[0].Statement.Roothash[:])
}
info.HeightBuilt = tx.Height
//info.In = len(tx.Vin)
//info.Out = len(tx.Vout)
if tx.TransactionType == transaction.BURN_TX {
info.Burn_Value = fmt.Sprintf(" %.05f", float64(tx.Value)/100000)
}
switch tx.TransactionType {
case transaction.PREMINE:
var acckey crypto.Point
if err := acckey.DecodeCompressed(tx.MinerAddress[:]); err != nil {
panic(err)
}
astring := rpc.NewAddressFromKeys(&acckey)
astring.Mainnet = mainnet
info.OutAddress = append(info.OutAddress, astring.String())
info.Amount = globals.FormatMoney(tx.Value)
case transaction.REGISTRATION:
var acckey crypto.Point
if err := acckey.DecodeCompressed(tx.MinerAddress[:]); err != nil {
panic(err)
}
astring := rpc.NewAddressFromKeys(&acckey)
astring.Mainnet = mainnet
info.OutAddress = append(info.OutAddress, astring.String())
case transaction.COINBASE:
info.CoinBase = true
info.In = 0
var acckey crypto.Point
if err := acckey.DecodeCompressed(tx.MinerAddress[:]); err != nil {
panic(err)
}
astring := rpc.NewAddressFromKeys(&acckey)
astring.Mainnet = mainnet
info.OutAddress = append(info.OutAddress, astring.String())
case transaction.NORMAL, transaction.BURN_TX, transaction.SC_TX:
info.Fee = globals.FormatMoney(tx.Fees())
info.Ring_size = int(tx.Payloads[0].Statement.RingSize)
}
if tx.TransactionType == transaction.SC_TX {
info.SC_Args = tx.SCDATA
}
// if outputs cannot be located, do not panic
// this will be the case for pool transactions
if len(info.OutAddress) != len(info.OutOffset) {
info.OutOffset = make([]uint64, len(info.OutAddress))
}
info.Type = "DERO_HOMOMORPHIC"
if !info.In_Pool && !info.CoinBase && (tx.TransactionType == transaction.NORMAL || tx.TransactionType == transaction.BURN_TX || tx.TransactionType == transaction.SC_TX) { // find the age of block and other meta
var blinfo block_info
err := load_block_from_rpc(&blinfo, info.Height, false) // we only need block data and not data of txs
if err != nil {
return err
}
// fmt.Printf("Blinfo %+v height %d", blinfo, info.Height);
info.Age = blinfo.Age
info.Block_time = blinfo.Block_time
info.Epoch = blinfo.Epoch
info.Timestamp = blinfo.Epoch
info.Depth = blinfo.Depth
}
return nil
}
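The fixed-point formatting used throughout these functions keeps amounts as uint64 atomic units and prints them with five decimal places via float64 division. A minimal standalone sketch of that convention (formatAtomic is a hypothetical helper written only for illustration, assuming 1 DERO = 100000 atomic units as in the Sprintf calls above):

```go
package main

import "fmt"

// formatAtomic renders an amount held in atomic units with five decimal
// places, mirroring the fmt.Sprintf("%.05f", float64(v)/100000) pattern
// used by the explorer. Hypothetical helper, not part of the codebase.
func formatAtomic(v uint64) string {
	return fmt.Sprintf("%.05f", float64(v)/100000)
}

func main() {
	fmt.Println(formatAtomic(250000)) // 2.50000
	fmt.Println(formatAtomic(1))      // 0.00001
}
```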
// load and setup txinfo from rpc
func load_tx_from_rpc(info *txinfo, txhash string) (err error) {
var tx_params rpc.GetTransaction_Params
var tx_result rpc.GetTransaction_Result
//fmt.Printf("Requesting tx data %s", txhash);
tx_params.Tx_Hashes = append(tx_params.Tx_Hashes, txhash)
if err = rpc_client.Call("DERO.GetTransaction", tx_params, &tx_result); err != nil {
return fmt.Errorf("gettransaction rpc failed err %s", err)
}
//fmt.Printf("TX response %+v", tx_result)
if tx_result.Status != "OK" {
return fmt.Errorf("No Such TX RPC error status %s", tx_result.Status)
}
var tx transaction.Transaction
if len(tx_result.Txs_as_hex) == 0 || len(tx_result.Txs_as_hex[0]) < 50 { // guard against empty or placeholder responses
return
}
info.Hex = tx_result.Txs_as_hex[0]
tx_bin, _ := hex.DecodeString(tx_result.Txs_as_hex[0])
tx.Deserialize(tx_bin)
// fill as much info required from headers
if tx_result.Txs[0].In_pool {
info.In_Pool = true
} else {
info.Height = fmt.Sprintf("%d", tx_result.Txs[0].Block_Height)
}
for x := range tx_result.Txs[0].Output_Indices {
info.OutOffset = append(info.OutOffset, tx_result.Txs[0].Output_Indices[x])
}
if tx.IsCoinbase() { // fill miner tx reward from what the chain tells us
info.Amount = fmt.Sprintf("%.05f", float64(uint64(tx_result.Txs[0].Reward))/100000)
}
info.ValidBlock = tx_result.Txs[0].ValidBlock
info.InvalidBlock = tx_result.Txs[0].InvalidBlock
info.Ring = tx_result.Txs[0].Ring
if tx.TransactionType == transaction.NORMAL || tx.TransactionType == transaction.BURN_TX || tx.TransactionType == transaction.SC_TX {
for t := range tx.Payloads {
var a Asset
a.SCID = tx.Payloads[t].SCID.String()
a.Fees = fmt.Sprintf("%.05f", float64(tx.Payloads[t].Statement.Fees)/100000)
a.Burn = fmt.Sprintf("%.05f", float64(tx.Payloads[t].BurnValue)/100000)
if len(tx_result.Txs[0].Ring) == 0 {
continue
}
a.Ring_size = len(tx_result.Txs[0].Ring[t])
a.Ring = tx_result.Txs[0].Ring[t]
info.Assets = append(info.Assets, a)
}
//fmt.Printf("assets now %+v\n", info.Assets)
}
info.SC_Balance = tx_result.Txs[0].Balance
info.SC_Balance_string = fmt.Sprintf("%.05f", float64(info.SC_Balance)/100000)
info.SC_Code = tx_result.Txs[0].Code
if tx.TransactionType == transaction.SC_TX && len(info.SC_Code) >= 1 {
info.SC_Install = true
var p = rpc.GetSC_Params{SCID: txhash, Variables: true}
var r rpc.GetSC_Result
if err = rpc_client.Call("DERO.GetSC", p, &r); err != nil {
logger.V(1).Error(err, "DERO.GetSC failed")
} else {
info.SC_State = r
}
}
//info.Ring = strings.Join(info.OutAddress, " ")
//fmt.Printf("tx_result %+v\n",tx_result.Txs)
// fmt.Printf("response contained tx %s \n", tx.GetHash())
return load_tx_info_from_tx(info, &tx)
}
func block_handler(w http.ResponseWriter, r *http.Request) {
param := ""
fmt.Sscanf(r.URL.EscapedPath(), "/block/%s", &param)
var blinfo block_info
err := load_block_from_rpc(&blinfo, param, true)
_ = err
// execute template now
data := map[string]interface{}{}
fill_common_info(data, false)
data["block"] = blinfo
err = all_templates.ExecuteTemplate(w, "block", data)
if err != nil {
return
}
return
// fmt.Fprint(w, "This is a valid block")
}
func tx_handler(w http.ResponseWriter, r *http.Request) {
var info txinfo
tx_hex := ""
fmt.Sscanf(r.URL.EscapedPath(), "/tx/%s", &tx_hex)
txhash := crypto.HashHexToHash(tx_hex)
logger.V(1).Info("user requested ", "txid", tx_hex)
err := load_tx_from_rpc(&info, txhash.String()) //TODO handle error
_ = err
// check whether user requested proof
tx_proof := r.PostFormValue("txproof")
raw_tx_data := r.PostFormValue("raw_tx_data")
if raw_tx_data != "" { // gives ability to prove transactions not in the blockchain
info.Hex = raw_tx_data
}
if tx_proof != "" {
logger.V(1).Info("Proving TX", "proof", tx_proof, "tx_hex", info.Hex, "ring", info.Ring)
// there may be more than 1 amounts, only first one is shown
addresses, amounts, raw, decoded, err := proof.Prove(tx_proof, info.Hex, info.Ring, mainnet)
if err == nil { //&& len(amounts) > 0 && len(indexes) > 0{
logger.V(1).Info("Successfully proved transaction", "txid", tx_hex, "payload_count", len(decoded))
info.Proof_address = addresses[0]
info.Proof_amount = globals.FormatMoney(amounts[0])
info.Proof_Payload_raw = BytesViewer(raw[0]).String() // raw payload
info.Proof_Payload = decoded[0]
} else {
logger.V(1).Error(err, "err while proving")
info.Proof_error = err.Error()
}
}
// execute template now
data := map[string]interface{}{}
fill_common_info(data, false)
data["info"] = info
err = all_templates.ExecuteTemplate(w, "tx", data)
if err != nil {
return
}
return
}
func pool_handler(w http.ResponseWriter, r *http.Request) {
fmt.Fprint(w, "This is a valid pool")
}
// if there is any error, we return empty data
// if pos is invalid we simply return
// pos is in descending order
func fill_tx_structure(pos int, size_in_blocks int) (data []block_info) {
i := pos
for ; i > (pos-size_in_blocks) && i >= 0; i-- { // query blocks by topo height
var blinfo block_info
if err := load_block_from_rpc(&blinfo, fmt.Sprintf("%d", i), true); err == nil {
data = append(data, blinfo)
} else {
logger.V(2).Error(err, "error loading block", "i", i)
}
}
if i == 0 {
var blinfo block_info
if err := load_block_from_rpc(&blinfo, fmt.Sprintf("%d", i), true); err == nil {
data = append(data, blinfo)
} else {
logger.V(2).Error(err, "error loading block", "i", i)
}
}
return
}
func show_page(w http.ResponseWriter, page int) {
data := map[string]interface{}{}
var info rpc.GetInfo_Result
var err error
if err = rpc_client.Call("DERO.GetInfo", nil, &info); err != nil {
goto exit_error
}
fill_common_info(data, true)
if page == 0 { // user requested invalid page, give current page
page = int(info.TopoHeight) / 10
}
data["previous_page"] = page - 1
if page <= 1 {
data["previous_page"] = 1
}
data["current_page"] = page
data["total_page"] = int(info.TopoHeight) / 10
data["next_page"] = page + 1
if (page + 1) > data["total_page"].(int) {
data["next_page"] = page
}
fill_tx_pool_info(data, 25)
if page == 1 { // page 1 has 11 blocks, it does not show genesis block
data["block_array"] = fill_tx_structure(int(page*10), 12)
} else {
if int(info.TopoHeight)-int(page*10) > 10 {
data["block_array"] = fill_tx_structure(int(page*10), 10)
} else {
data["block_array"] = fill_tx_structure(int(info.TopoHeight), int(info.TopoHeight)-int((page-1)*10))
}
}
//fmt.Printf("page %+v\n", data)
err = all_templates.ExecuteTemplate(w, "main", data)
if err != nil {
goto exit_error
}
return
exit_error:
fmt.Fprintf(w, "Error occurred: %s", err)
}
func txpool_handler(w http.ResponseWriter, r *http.Request) {
data := map[string]interface{}{}
fill_common_info(data, true)
fill_tx_pool_info(data, 500) // show only 500 txs
var err error
if err = all_templates.ExecuteTemplate(w, "txpool_page", data); err != nil {
goto exit_error
}
return
exit_error:
fmt.Fprintf(w, "Error occurred: %s", err)
}
// shows a page
func page_handler(w http.ResponseWriter, r *http.Request) {
page := 0
page_string := r.URL.EscapedPath()
fmt.Sscanf(page_string, "/page/%d", &page)
logger.V(1).Info("user requested page", "page", page)
show_page(w, page)
}
// root shows page 0
func root_handler(w http.ResponseWriter, r *http.Request) {
logger.V(1).Info("Showing main page")
show_page(w, 0)
}
// search handler, finds the items using rpc bruteforce
func search_handler(w http.ResponseWriter, r *http.Request) {
var info rpc.GetInfo_Result
var err error
logger.V(1).Info("Showing search page")
values, ok := r.URL.Query()["value"]
if !ok || len(values) < 1 {
show_page(w, 0)
return
}
// Query()["key"] will return an array of items,
// we only want the single item.
value := strings.TrimSpace(values[0])
good := false
// collect all the data afresh, execute rpc to service
if err = rpc_client.Call("DERO.GetInfo", nil, &info); err != nil {
goto exit_error
}
if len(value) != 64 {
if s, err := strconv.ParseInt(value, 10, 64); err == nil && s >= 0 && s <= info.TopoHeight {
good = true
}
} else { // check whether the string can be hex decoded
if t, err := hex.DecodeString(value); err == nil && len(t) == 32 {
good = true
}
}
// value should be either 64 hex chars or a topoheight which should be less than current topoheight
if good {
// check whether the page is block or tx or height
var blinfo block_info
var tx txinfo
err := load_block_from_rpc(&blinfo, value, false)
if err == nil {
logger.V(1).Info("Redirecting user to block page")
http.Redirect(w, r, "/block/"+value, 302)
return
}
err = load_tx_from_rpc(&tx, value) //TODO handle error
if err == nil {
logger.V(1).Info("Redirecting user to tx page")
http.Redirect(w, r, "/tx/"+value, 302)
return
}
}
{
data := map[string]interface{}{}
fill_common_info(data, true)
if err = all_templates.ExecuteTemplate(w, "notfound_page", data); err == nil {
return
}
}
exit_error:
show_page(w, 0)
return
}
func fill_common_info(data map[string]interface{}, extra_data bool) error {
var info rpc.GetInfo_Result
data["title"] = "DERO HE BlockChain Explorer(v1)"
data["servertime"] = time.Now().UTC().Format("2006-01-02 15:04:05")
if !extra_data {
return nil
}
// collect all the data afresh, execute rpc to service
if err := rpc_client.Call("DERO.GetInfo", nil, &info); err != nil {
return err
}
//fmt.Printf("get info %+v", info)
data["Network_Difficulty"] = info.Difficulty
data["hash_rate"] = fmt.Sprintf("%.03f", float32(info.Difficulty)/(float32(info.Target)*1000000.0))
data["txpool_size"] = info.Tx_pool_size
data["testnet"] = info.Testnet
data["network"] = info.Network
data["fee_per_kb"] = float64(info.Dynamic_fee_per_kb) / 1000000000000
data["median_block_size"] = fmt.Sprintf("%.02f", float32(info.Median_Block_Size)/1024)
data["total_supply"] = info.Total_Supply
data["averageblocktime50"] = info.AverageBlockTime50
return nil
}
// fill all the tx pool info as per requested
func fill_tx_pool_info(data map[string]interface{}, max_count int) error {
var err error
var txs []txinfo
var txpool rpc.GetTxPool_Result
data["mempool"] = txs // initialize with empty data
if err = rpc_client.Call("DERO.GetTxPool", nil, &txpool); err != nil {
return fmt.Errorf("gettxpool rpc failed err %s", err)
}
for i := range txpool.Tx_list {
var info txinfo
err := load_tx_from_rpc(&info, txpool.Tx_list[i]) //TODO handle error
if err != nil {
continue
}
txs = append(txs, info)
if len(txs) >= max_count {
break
}
}
data["mempool"] = txs
return nil
}
// BytesViewer bytes viewer
type BytesViewer []byte
// String returns view in hexadecimal
func (b BytesViewer) String() string {
if len(b) == 0 {
return "invalid string"
}
const head = `
| Address | Hex | Text |
| -------: | :---------------------------------------------- | :--------------- |
`
const row = 16
result := make([]byte, 0, len(head)/2*(len(b)/16+3))
result = append(result, head...)
for i := 0; i < len(b); i += row {
result = append(result, "| "...)
result = append(result, fmt.Sprintf("%08x", i)...)
result = append(result, " | "...)
k := i + row
more := 0
if k >= len(b) {
more = k - len(b)
k = len(b)
}
for j := i; j != k; j++ {
if b[j] < 16 {
result = append(result, '0')
}
result = strconv.AppendUint(result, uint64(b[j]), 16)
result = append(result, ' ')
}
for j := 0; j != more; j++ {
result = append(result, " "...)
}
result = append(result, "| "...)
buf := bytes.Map(func(r rune) rune {
if unicode.IsSpace(r) {
return ' '
}
return r
}, b[i:k])
result = append(result, buf...)
for j := 0; j != more; j++ {
result = append(result, ' ')
}
result = append(result, " |\n"...)
}
return *(*string)(unsafe.Pointer(&result))
}
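The row layout BytesViewer emits (offset column, two-digit hex bytes, text column) can be illustrated with a simplified standalone sketch. hexRow is hypothetical and differs from BytesViewer in one detail: it blanks all non-printable bytes, whereas BytesViewer only maps whitespace runes to spaces:

```go
package main

import (
	"fmt"
	"strings"
)

// hexRow builds one row in the same shape as BytesViewer's output:
// "| <8-digit hex offset> | <hex bytes> | <text> |". Simplified sketch.
func hexRow(offset int, b []byte) string {
	var sb strings.Builder
	fmt.Fprintf(&sb, "| %08x | ", offset)
	for _, c := range b {
		fmt.Fprintf(&sb, "%02x ", c) // two hex digits per byte, space-separated
	}
	sb.WriteString("| ")
	for _, c := range b {
		if c < 32 || c > 126 { // blank non-printable bytes in the text column
			c = ' '
		}
		sb.WriteByte(c)
	}
	sb.WriteString(" |")
	return sb.String()
}

func main() {
	fmt.Println(hexRow(0, []byte("Hi"))) // | 00000000 | 48 69 | Hi |
}
```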

{{define "block"}}
{{ template "header" . }}
<div>
<H4>Block Topo height (unique): {{.block.TopoHeight}} Block height: ({{.block.Height}})</H4>
<H4>Block hash: {{.block.Hash}}</H4>
{{range $i, $a := .block.Tips}}
<H5>Previous blocks: <a href="/block/{{$a}}">{{$a}}</a></H5>
{{end}}
<!--
<H5>Next block: <a href="/block/a8ade20d5cad5e23105cfc25687beb2498844a984b1450330c67705b6c720596">a8ade20d5cad5e23105cfc25687beb2498844a984b1450330c67705b6c720596</a></H5>
-->
<table class="center">
<tr>
<td>Timestamp [UTC] (epoch millisec):</td><td>{{.block.Block_time}} ({{.block.Epoch}})</td>
<td>Age [h:m:s.ms]:</td><td>{{.block.Age}}</td>
<td>Δ [h:m:s.ms]:</td><td></td>
</tr>
<tr>
<td>Major.minor version:</td><td>{{.block.Major_Version}}.{{.block.Minor_Version}}</td>
<td>Block reward:</td><td>{{.block.Reward}}</td>
<td>Block size [kB]:</td><td>{{.block.Size}}</td>
</tr>
<tr>
<td>nonce:</td><td>{{.block.Nonce}}</td>
<td>Total fees:</td><td>{{.block.Fees}}</td>
<td>No of txs:</td><td>{{.block.Tx_Count}}</td>
</tr>
</table>
<h3>Miner reward for this block</h3>
<table class="center">
<tr>
<td>Miner Address</td>
<td>outputs</td>
<td>size [kB]</td>
<td>version</td>
</tr>
<tr>
<td>{{index .block.Mtx.OutAddress 0}}</td>
<td>{{.block.Mtx.Amount}}</td>
<td>{{.block.Mtx.Size}}</td>
<td>{{.block.Mtx.Version}}</td>
</tr>
</table>
<h3>Transactions ({{.block.Tx_Count}})</h3>
<table class="center" style="width:80%">
<tr>
<td>hash</td>
<td>type</td>
<td>fee</td>
<td>ring size</td>
<td>version</td>
<td>size [kB]</td>
</tr>
{{range .block.Txs}}
<tr>
{{if .Skipped }}<td><a href="/tx/{{.Hash}}"><font color="indianred">{{.Hash}}</font> </a></td>
{{else}}
<td><a href="/tx/{{.Hash}}">{{.Hash}}</a></td>
{{end}}
<td>{{.TransactionType}}</td>
<td>{{.Fee}}</td>
<td>{{.Ring_size}}</td>
<td>{{.Version}}</td>
<td>{{.Size}}</td>
</tr>
{{end}}
</table>
</div>
{{ template "footer" . }}
{{end}}

{{define "footer"}}
<div class="center">
<h6 style="margin-top:10px">
<a href="https://github.com/deroproject/">DERO explorer source code</a>
| explorer version (api): under development (1.0)
| dero version: golang pre-alpha
| Copyright 2017-2021 Dero Project
</h6>
</div>
</body>
</html>
{{end}}

{{define "header"}}
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<META HTTP-EQUIV="CACHE-CONTROL" CONTENT="NO-CACHE">
<title>{{ .title }}</title>
<!--<link rel="stylesheet" type="text/css" href="/css/style.css">-->
<style type="text/css">
body {
margin: 0;
padding: 0;
color: green;
background-color: white;
}
h1, h2, h3, h4, h5, h6 {
text-align: center;
}
.center {
margin: auto;
width: 96%;
/*border: 1px solid #73AD21;
padding: 10px;*/
}
tr, li, #pages, .info {
font-family: "Lucida Console", Monaco, monospace;
font-size : 12px;
height: 22px;
}
#pages
{
margin-top: 15px;
}
td {
text-align: center;
}
a:link {
text-decoration: none;
color: blue;
}
a:visited {
text-decoration: none;
color: blue;
}
a:hover {
text-decoration: underline;
color: blue;
}
a:active {
text-decoration: none;
color: blue;
}
form {
display: inline-block;
text-align: center;
}
.style-1 input[type="text"] {
padding: 2px;
border: solid 1px #dcdcdc;
transition: box-shadow 0.3s, border 0.3s;
}
.style-1 input[type="text"]:focus,
.style-1 input[type="text"].focus {
border: solid 1px #707070;
box-shadow: 0 0 5px 1px #969696;
}
.tabs {
position: relative;
min-height: 220px; /* This part sucks */
clear: both;
margin: 25px 0;
}
.tab {
float: left;
}
.tab label {
background: white;
padding: 10px;
border: 1px solid #ccc;
margin-left: -1px;
position: relative;
left: 1px;
}
.tab [type=radio] {
display: none;
}
.content {
position: absolute;
top: 28px;
left: 0;
background: white;
right: 0;
bottom: 0;
padding: 20px;
border: 1px solid #ccc;
}
[type=radio]:checked ~ label {
background: #505050 ;
border-bottom: 1px solid green;
z-index: 2;
}
[type=radio]:checked ~ label ~ .content {
z-index: 1;
}
input#toggle-1[type=checkbox] {
position: absolute;
/*top: -9999px;*/
left: -9999px;
}
label#show-decoded-inputs {
/*-webkit-appearance: push-button;*/
/*-moz-appearance: button;*/
display: inline-block;
/*margin: 60px 0 10px 0;*/
cursor: pointer;
background-color: white;
color: green;
width: 100%;
text-align: center;
}
div#decoded-inputs{
display: none;
}
/* Toggled State */
input#toggle-1[type=checkbox]:checked ~ div#decoded-inputs {
display: block;
}
</style>
</head>
<body>
<div>
<div class="center">
<h1 class="center">
<img alt="logo" style="vertical-align:middle" height="64" width="64" src="/static/static/logo.png" />
<a href="/">{{ .title }} {{if .testnet}} TestNet {{else}} Mainnet {{end}} {{.network}}</a></h1>
<!-- <h4 style="font-size: 15px; margin: 0px">(no javascript - no cookies - no web analytics trackers - no images - open sourced)</h4> -->
</div>
<div class="center">
<form action="/search" method="get" style="width:100%; margin-top:15px" class="style-1">
<input type="text" name="value" size="120"
placeholder="block height, block hash, transaction hash">
<input type="submit" value="Search">
</form>
</div>
</div>
{{if .Network_Difficulty}}
<div class="center">
<h3 style="font-size: 12px; margin-top: 20px">
Server time: {{ .servertime }} | <a href="/txpool">Transaction pool</a>
</h3>
<h3 style="font-size: 12px; margin-top: 5px; margin-bottom: 3px">
Network difficulty: {{ .Network_Difficulty }}
| Hash rate: {{ .hash_rate }} KH&#x2F;s
| Average Block Time(50) {{.averageblocktime50}} sec
| Total supply : {{ .total_supply }}
| Mempool size : {{ .txpool_size }}
| Fee per kb: {{.fee_per_kb}}
| Median block size limit: {{.median_block_size}} kB
</h3>
</div>
{{end}}
{{end}}

{{define "main"}}
{{ template "header" . }}
{{ template "txpool" . }}
<h2 style="margin-bottom: 0px">Transactions in the last 11 blocks</h2>
<h4 style="font-size: 14px; margin-top: 0px">(Median size of these blocks: 0.09 kB)</h4>
{{ template "paging" . }}
<div class="center">
<table class="center">
<tr>
<td>height</td>
<td>topo height</td>
<td>age [h:m:s.ms]<!--(Δm)--></td>
<td>miniblocks</td>
<td>size [kiB]<!--(Δm)--></td>
<td>tx hash</td>
<td>type</td>
<td>fees</td>
<td>ring size</td>
<td>tx size [kB]</td>
</tr>
{{range .block_array}}
<tr>
<td> {{if .SyncBlock }} <strong>{{.Height}}</strong> {{else}} <font color="purple">{{.Height}}</font> {{end}} </td>
<td><a href="/block/{{.TopoHeight}}">{{.TopoHeight}}</a></td>
<td>{{.Age}}</td>
<td>{{len .Block.MiniBlocks}}</td>
<td>{{.Size}}</td>
<td>block <a href="/block/{{.Hash}}">{{.Hash}} </a></td>
<td>N/A</td>
<td>{{.Mtx.Amount}}</td>
<td></td>
<td>{{.Mtx.Size}}</td>
</tr>
{{range .Txs}}
<tr>
<td></td>
<td></td>
<td></td>
<td></td>
<td></td>
{{if .Skipped }}<td><a href="/tx/{{.Hash}}"><font color="indianred">{{.Hash}}</font> </a></td>
{{else}}
<td><a href="/tx/{{.Hash}}">{{.Hash}}</a></td>
{{end}}
<td>{{.TransactionType}}</td>
<td>{{.Fee}}</td>
<td>{{.Ring_size}}</td>
<td>{{.Size}}</td>
</tr>
{{end}}
{{end}}
</table>
{{ template "paging" . }}
</div>
{{ template "footer" . }}
{{end}}

{{define "notfound_page"}}
{{ template "header" . }}
<h2 style="margin-bottom: 0px"><font color="red">No details found in database</font></h2>
{{ template "footer" . }}
{{end}}

{{ define "paging"}}
<div id="pages" class="center" style="text-align: center;">
<a href="/page/{{.previous_page}}">previous page</a> |
<a href="/page/1">first page</a> |
current page: {{.current_page}}/<a href="/page/{{.total_page}}">{{.total_page}}</a>
| <a href="/page/{{.next_page}}">next page</a> | <a href="/">last page</a>
</div>
{{end}}

{{define "tx"}}
{{ template "header" . }}
<div>
<H4 style="margin:5px">Tx hash: {{.info.Hash}} Type {{.info.TransactionType }}</H4>
{{if eq .info.TransactionType "BURN" }}
<H4 style="margin:5px; color: red">Burns: {{.info.Burn_Value }} DERO</H4>
{{end}}
<H5>Block: <a href="/block/{{.info.ValidBlock}}">{{.info.ValidBlock}}</a> (VALID) </H5>
{{range $i, $e := .info.InvalidBlock}}
<H5>Block: <a href="/block/{{$e}}">{{$e}}</a></H5>
{{end}}
{{if eq .info.TransactionType "PREMINE"}}
<table class="center" style="width: 80%; margin-top:10px">
<tr>
<td>{{index .info.OutAddress 0}} Registered with funds {{.info.Amount}}</td>
</tr>
</table>
{{end}}
{{if eq .info.TransactionType "REGISTRATION"}}
<table class="center" style="width: 80%; margin-top:10px">
<tr>
<td>{{index .info.OutAddress 0}} Registered </td>
</tr>
</table>
{{end}}
{{if .info.SC_Install }}
<div class="center" style="border: 1px;width: 100%;overflow: hidden; text-overflow: ellipsis;">
<H5 style="margin:5px">SCID current reserves </H5>
<table class="center" style="width: 80%; margin-top:10px;border: 1px">
<tr>
<td>SCID</td> <td style="width: 20%">Amount(in atomic units)</td>
</tr>
{{range $k, $v := .info.SC_State.Balances}}
<tr>
<td>{{ $k }}</td> <td> {{ $v }} </td>
</tr>
{{end}}
</table>
<H5 style="margin:5px">SCID string variables </H5>
<table class="center" style="border: 1px;width: 80%; margin-top:10px;overflow: hidden; text-overflow: ellipsis;">
<tr>
<td>key</td> <td style="width: 20%;text-align:left">value</td>
</tr>
{{range $k, $v := .info.SC_State.VariableStringKeys}}
<tr>
<td>{{ $k }}</td> <td style="width: 20%;text-align:left;overflow: hidden; text-overflow: ellipsis;"> {{ $v }} </td>
</tr>
{{end}}
</table>
<H5 style="margin:5px">SCID uint64 variables </H5>
<table class="center" style="border: 1px;width: 80%; margin-top:10px">
<tr>
<td>key</td> <td style="width: 20%;text-align:left">value</td>
</tr>
{{range $k, $v := .info.SC_State.VariableUint64Keys}}
<tr>
<td>{{ $k }}</td> <td style="width: 20%;text-align:left;overflow: hidden; text-overflow: ellipsis;"> {{ $v }} </td>
</tr>
{{end}}
</table>
</div>
{{end}}
{{if or (eq .info.TransactionType "NORMAL") (eq .info.TransactionType "BURN") (eq .info.TransactionType "SC") }}
<H5 style="margin:5px">Tx RootHash: {{.info.RootHash}} built height : {{.info.HeightBuilt}} </H5>
<table class="center" style="width: 80%; margin-top:10px">
<tr>
<td>Timestamp: {{.info.Timestamp}} </td>
<td>Timestamp [UTC]: {{.info.Block_time}}</td>
<td>Age [y:d:h:m:s]: {{.info.Age}} </td>
</tr>
<tr>
<td>Block: <a href="/block/{{.info.Height}}">{{.info.Height}}</a></td>
<td>Fee: {{.info.Fee}}</td>
<td>Tx size: {{.info.Size}} kB</td>
</tr>
<tr>
<td>Tx version: {{.info.Version}}</td>
<td>No of confirmations: {{.info.Depth}}</td>
<td>Signature type: {{.info.Type}}</td>
</tr>
<tr>
<td colspan="3">Extra: {{.info.Extra}}</td>
</tr>
</table>
{{range $ii, $ee := .info.Assets}}
{{if eq $ee.SCID "0000000000000000000000000000000000000000000000000000000000000000" }}
<H5>DERO : {{$ee.Ring_size}} inputs/outputs (RING size) Fees {{$ee.Fees}}
{{if eq $.info.TransactionType "SC"}}
Deposited to SC {{$ee.Burn}}
{{else}}
Burned {{$ee.Burn}}
{{end}}
</H5>
{{else}}
<H5>Token: {{$ee.SCID}} {{$ee.Ring_size}} inputs/outputs (RING size) Fees {{$ee.Fees}} {{if eq $.info.TransactionType "SC"}}
Deposited Tokens to SC {{$ee.Burn}}
{{else}}
Burned {{$ee.Burn}}
{{end}}
</H5>
{{end}}
<div class="center">
<table class="center">
<tr>
<td>address</td>
</tr>
{{range $i, $e := $ee.Ring}}
<tr>
<td>{{ $e }}</td>
</tr>
{{end}}
</table>
</div>
{{end}}
{{if eq .info.TransactionType "SC"}}
<table class="center" style="width: 80%; margin-top:10px">
<tr>
<td>SC Balance: {{ .info.SC_Balance_string }} DERO</td>
</tr>
<tr>
<td>SC CODE:<pre style="text-align: left;"> {{ .info.SC_Code }}</pre></td>
</tr>
<tr>
<td>SC Arguments: {{ .info.SC_Args }}</td>
</tr>
</table>
{{end}}
<!-- TODO currently we do not enable users to prove or decode anything -->
<br/>
<br/>
<div class="center" style="border: 1px">
<table class="center" border="1">
<tr>
<td> <h3>Prove to someone that you have sent them DERO in this transaction</h3> </td>
</tr>
<tr>
<td>
proof can be obtained using wallet
command in <i>dero-wallet-cli</i> or from the statement
<br>
Note: proof is sent to the server, as the calculations are done on the server side
</td>
</tr>
<tr>
<td>
<form method="post" style="width:100%;margin-top:2px" class="style-1">
<input name="txproof" size="120" placeholder="Tx Proof here" type="text"><br>
<input name="raw_tx_data" value="" type="hidden">
<!--above raw_tx_data field only used when checking raw tx data through tx pusher NOTE: comment should be closed -->
<input value="Prove sending" style="margin-top:5px" type="submit">
</form>
</td>
</tr>
{{if .info.Proof_amount }}
<tr>
<td><h2><font color="blue">{{.info.Proof_address}} Received {{.info.Proof_amount}} DERO
{{if .info.Proof_Payload}}
<br/> Decoded Data {{ .info.Proof_Payload}}
<br/> Raw Data
<br/><pre>{{ .info.Proof_Payload_raw}}</pre>
{{end}}
</font> </h2>
</td>
</tr>
{{end}}
{{if .info.Proof_error }}
<tr>
<td> <font color="red">{{.info.Proof_error}}</font>
</td>
</tr>
{{end}}
</table>
</div>
{{end}}
<div class="center" style="border: 1px;width: 100%;overflow: hidden; text-overflow: ellipsis;">
<table class="center" border="1">
<tr>
<td>TX hex bytes
<br/>{{ .info.Hex}}</td>
</tr>
</table>
</div>
{{if eq .info.CoinBase false}}
<!-- <h3>{{.info.In}} input(s) for total of ? dero</h3>
<div class="center">
<table class="center">
<tr>
<td>
</table>
</div>
-->
{{end}}
</div>
{{ template "footer" . }}
{{end}}

{{define "txpool"}}
<h2 style="margin-bottom: 0px">
Transaction pool
</h2>
<h4 style="font-size: 12px; margin-top: 0px">(no of txs: {{ .txpool_size }}, size: 0.00 kB, updated every 5 seconds)</h4>
<div class="center">
<table class="center" style="width:80%">
<tr>
<td>height built</td>
<td>transaction hash</td>
<td>fee</td>
<td>ring size</td>
<td>tx size [kB]</td>
</tr>
{{range .mempool}}
<tr>
<td>{{.HeightBuilt}}</td>
<td><a href="/tx/{{.Hash}}">{{.Hash}}</a></td>
<td>{{.Fee}}</td>
<td>{{.Ring_size}}</td>
<td>{{.Size}}</td>
</tr>
{{end}}
</table>
</div>
{{end}}

{{define "txpool_page"}}
{{ template "header" . }}
{{ template "txpool" . }}
{{ template "footer" . }}
{{end}}

// Copyright 2017-2018 DERO Project. All rights reserved.
// Use of this source code in any form is governed by RESEARCH license
// license can be found in the LICENSE file.
// GPG: 0F39 E425 8C65 3947 702A 8234 08B2 0360 A03A 9DE8
package main
import "testing"
func Test_Part1(t *testing.T) {
}
