Intended for Hive API node operators, witnesses, and developers.
At the time of the Eclipse release I made a similar post that saved many (hours of) lives, so I’m creating an updated one for the upcoming Hard Fork 25.
Our core development efforts take place in a community-hosted GitLab repository (thanks @blocktrades). It hosts Hive core itself, but also many other Hive-related software repositories.
We use GitHub as a push mirror for the GitLab repositories, mostly for visibility and decentralization. If you have an account on GitHub, please fork at least hive and hivemind and star them if you haven’t done so yet. We haven't paid much attention to GitHub, but apparently it's important for some outside metrics.
Please click both buttons
Soon it will be switched to v1.25.0, but because it’s heavily used in Hive-related R&D it might not be your best choice if you are looking for a fast API node without any rate limiting. During maintenance it will fall back to https://api.hive.blog
A seed node running v1.25.0 listens on gtg.openhive.network:2001. To use it, just add this line to your config.ini file:
p2p-seed-node = gtg.openhive.network:2001
If you don't have any p2p-seed-node = entries in your config file, the built-in defaults will be used (which include my node too).
Stuff for download
cli_wallet binaries built on Ubuntu 18.04 LTS, which should also run fine on Ubuntu 20.04 LTS.
As usual, the block_log file, roughly 350GB and counting.
For testing needs there's also block_log.5M, which is limited to the first 5 million blocks.
./get/snapshot/api/ contains a relatively recent snapshot of the API node with all the fancy plugins.
There’s a snapshot for the upcoming version v1.25.0, but also one for the old version, v1.24.8, in case you need to switch back.
An uncompressed snapshot takes roughly 480GB.
There’s also an example-api-config.ini file out there that contains settings compatible with the snapshot.
To decompress, you can simply run the archive through something like:
lbzip2 -dc | tar xv
(Using parallel bzip2 on multi-threaded systems might save you a lot of time.)
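For the curious, the whole pack/unpack flow can be sketched as a tiny round trip. This demo uses plain bzip2 (lbzip2 is a drop-in parallel replacement) and made-up file names, not the actual snapshot:

```shell
# Package some demo data the same way the snapshot is packaged.
mkdir -p snapshot-demo
echo "state" > snapshot-demo/contents.txt
tar c snapshot-demo | bzip2 > snapshot-demo.tar.bz2
rm -r snapshot-demo

# Decompress and unpack, exactly as with the real archive:
bzip2 -dc snapshot-demo.tar.bz2 | tar xv

cat snapshot-demo/contents.txt
```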
To use the snapshot you need:
- a block_log file, not smaller than the one used when the snapshot was made
- a config.ini file compatible with the snapshot (see above), adjusted to your needs, without changes that could affect the state
- a hived binary compatible with the snapshot
All of that you can find above.
Then run hived with --load-snapshot name, assuming the snapshot is stored where hived expects to find it (under your data directory).
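For illustration, a launch could look like this; the data directory path and snapshot name below are placeholders, not values from this post:

```shell
# Placeholders: adjust the data dir and the snapshot name to your setup.
./hived --data-dir=/home/hived/datadir --load-snapshot=name
```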
A hived API node currently takes 823GB at runtime (incl. 19GB shm, excl. snapshot).
There’s also a snapshot meant for exchanges in ./get/snapshot/exchange/ that allows them to get up and running quickly. It requires a compatible configuration, and the exchange account has to be one of those tracked by my node. If you run an exchange and want to be on that list so you can use the snapshot, please let me know.
Hivemind database dump
./get/hivemind/ contains a relatively recent dump of the Hivemind database.
I use self-describing file names that include the date when the dump was taken and the revision of hivemind that was running it.
You need at least that version of hivemind. Remember to run pg_restore with at least -j 6 to run long-running tasks in parallel.
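As a sketch, a restore invocation might look like this; the target database name and the dump file name are hypothetical:

```shell
# Hypothetical names: a fresh "hive" database and the dump file you downloaded.
createdb hive
pg_restore -j 6 -d hive hivemind-dump.dump
```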
After restoring the database, make sure to run the
Even though the database size easily peaks over 750GB during full sync, when restored from the dump it takes roughly 500GB. The dump file itself is just 53GB.