Yet another Hive mirror instance
I’ll use this post for on-chain coordination on the progress of its deployment.
It will evolve for the next few days as the new instance of the mirror is brought to life.
Three months ago I started the first public mirrornet instance for Hive. Since then, three more instances have been running, and it was of great help to all the core devs involved in making Hive better. Now I'm in the process of updating it to the release candidate version for HF26.
As you can see in my posts… I mean, by the lack of them, there’s a lot of work going on.
You can find some more info on the mirrornet (a.k.a. fakenet) in my
I strongly recommend you read it before doing anything that involves Hive Mirrornet.
Logo reveal video that I’ve featured in my previous post converted to a fancy animated GIF
The "Recipe for Copy&Paste Experts" will currently not work
- The conversion process is time-consuming and resource-hungry (like everything that runs on such a huge amount of data); fortunately, it needs to be done only once for the whole mirror. Other participants will just download the converted one.
- To have a better idea about the amount of data we are processing: it's `629GB`; just getting an md5 checksum of the input file could take a couple of minutes (on a regular HDD it might take more than an hour). With a 1Gbps network you will need roughly two hours just to download it.
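As a back-of-the-envelope check on those numbers, here's a small Python sketch. The `629GB` and 1Gbps figures come from this post; the streaming md5 helper is just an illustration of why checksumming takes minutes, not part of the Hive tooling:

```python
import hashlib

# Transfer-time arithmetic: 629 GB over an (assumed ideal) 1 Gbps link.
size_gb = 629
link_gbps = 1
seconds = size_gb * 8 / link_gbps               # gigabits / (gigabits per second)
print(f"~{seconds / 3600:.1f} h at line rate")  # ~1.4 h; protocol overhead and
                                                # disk speed push it toward two hours

def md5_of(path, chunk_size=1 << 20):
    """Checksum a file in 1 MiB chunks, so even a 629 GB block_log fits in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Note that even this streaming read is bound by disk throughput, which is why the post's HDD estimate runs to over an hour.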
- I've used a trick to speed things up: knowing that there were no significant changes in the blockchain converter itself, I'm reusing the converted `block_log` that was used for the previous instance of the mirror, along with the resume feature, so I only had to convert blocks in the range `66000000-66755355`. That saves us two days if the replay succeeds.
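The saving is easy to see from the numbers alone. A quick sketch (block numbers taken from this post; the "two days" figure is the author's estimate for a full conversion):

```python
# Incremental conversion: only the blocks after the previously converted
# block_log need to be processed.
head_block = 66_755_355   # target head block of the new mirrornet instance
resume_from = 66_000_000  # the reused block_log already covers up to here

remaining = head_block - resume_from
print(remaining)                                     # 755355
print(f"{remaining / head_block:.1%} of the chain")  # ~1.1% of the blocks left to convert
```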
- It wasn't possible to just replay the previous instance, because the gap between the blocks was too big and caused unexpected issues. (In a real-world scenario it is very unlikely for Hive to be stopped for more than 7 days!)
- Sleeping is such a waste of time.
- The replay on my node took an unexpectedly long time, so I used plan B (that is, I borrowed @blocktrades' resources): before my node reached 50M blocks, I was able to move the data there, do the replay, take a snapshot, get the snapshot back to my infrastructure, load it, and start production.
- The mirrornet (converted) `block_log` and binaries are already uploaded to my server (the usual place where you can get the useful Hive stuff from), so those who are willing to run their own nodes can start downloading it. By the time you've downloaded it, I will have a snapshot ready.
- The "Recipe for Copy&Paste Experts" should work again (see my
- The original "Recipe for Copy&Paste Experts" used the `block_log.index` file instead of the new fancy `block_log.artifacts`, so those who used it would have to recreate the artifacts on their own. With the updated recipe it's downloaded instead, which saves some time (assuming a fast Internet connection): an (IO+CPU) vs (bandwidth) trade-off.
- Using current latest
- Using the current state of mainnet as the input for the converter.
- Mainnet uncompressed
- Mainnet uncompressed
- Conversion started (incremental, see Notes above)
- Conversion finished
- Mirrornet compressed
- Production on Mirrornet started
Generated block #66755356 with timestamp 2022-08-06T06:39:09 at time 2022-08-06T06:39:09
- `block_log` and binaries uploaded to https://gtg.openhive.network/get/testnet/mirror/
- `mirror-consensus-bootstrap` snapshot is now available, along with a `config.ini` file that's compatible with the snapshot.
- Updated the original "Recipe for Copy&Paste Experts" to include the `block_log.artifacts` file instead of the obsolete `block_log.index`.
Work in progress...
This post will be updated over the next few days until the mirror consensus is fully functional and other participants can join it. That will of course include a "starter pack" to download and bootstrap your nodes. So please pay attention, and then, once it's up and running, please participate.
(That's a mirror so you will participate to some extent even if you don't know it ;-) )