[Docker compose] How to run a hived node / seed node / witness node

in HiveDevs · 3 days ago (edited)


*(Cover image: network-hive.jpg)*

<p dir="auto">There are multiple ways of running a hived node. In this post we use docker compose for an easier setup. By default everything will be contained in one directory for easier management. <p dir="auto"><br /> <a href="https://peakd.com/hive-160391/@gtg/hive-node-setup-for-the-smart-the-dumb-and-the-lazy" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">Hive Node Setup for the Smart, the Dumb, and the Lazy.<span>Recently <a href="/@gtg">@gtg also published a good post. You might want to check that out. It is a good post specially for the exchanges: <hr /> <h3>Requirements <ul> <li>Ubuntu 22 <li>Storage: <ul> <li>1TB: for running a proper node with all the blockchain data <li><ul> <li>recommended for seed nodes and witness nodes <li>50GB: for pruned node <li><ul> <li>This node won't keep any blockchain data <li><ul> <li>e.g. use case would be having trusted source of state data like balances <li><ul> <li>To run "pruned" node you must add <code>block-log-split = 0 (or <code>block-log-split = 1 for one million blocks in storage) in config.ini <li>RAM: 4-64GB <ul> <li>You can get away with less ram by putting shared_memory on disk <li>I would recommend at least 8GB of RAM - Feel free to experiment <li>Otherwise you need at least 24GB of RAM dedicated for SHM (shared_memory) <li>32GB "should" be fine in this case - have not tested <li>With more plugins your SHM might grow and you might need more RAM <li>SHM on disk is totally fine (NVME/SSD) <hr /> <h3>Docker <p dir="auto">Docker installation is simple: <pre><code>curl -fsSL https://get.docker.com -o get-docker.sh sudo sh get-docker.sh <hr /> <p dir="auto">Optional security step:<br /> You might want to run hived and docker under a non-root user which is recommended. <pre><code># Create a non-root user - I name it "myuser" adduser myuser # Allow the user to run docker addgroup myuser docker # Switch to the user su - myuser <p dir="auto">While on the security topic, you might want to disable the password login on your server and use ssh keys and also install fail2ban for additional security. You can do that after setting up hived with the help of internet. <hr /> <h3>Hived <pre><code>git clone https://gitlab.com/mahdiyari/hived_docker cd hived_docker cp .env.example .env <p dir="auto">Now edit .env file accordingly. You set the hived version and the hived arguments there. <pre><code>nano .env <h3>P2P sync <p dir="auto">For a p2p sync set <code>ARGUMENTS="" in the .env file (which is done by default). <pre><code># Start hived in the background docker compose up -d <hr /> <h3>Replay <p dir="auto">The p2p sync should be fast enough for most people and I would recommend just doing that but if you already have a block_log you can try replaying. <p dir="auto">The newer hived (not released yet - v1.27.7) by default will use splitted block logs instead of the legacy single block_log file. So even if you put one block_log, it will split it first and you should pay attention to your storage space in this case as 1TB might not be enough. To keep using the single block_log, you have to edit the config.ini before replaying.<br /> By default the config.ini will be in the following location: <pre><code>nano datadir/config.ini <p dir="auto">Add <code>block-log-split = -1 to keep block_log a single file. <p dir="auto">You need <code>ARGUMENTS="--replay" in the .env file. 
Then:

```
docker compose up -d
```

I recommend the P2P sync if you don't already have a block_log, as the download speed of the block_log will probably be too slow to justify it over the P2P sync.

---

### Docker commands

Docker commands that might be useful:

```
# see the last 100 lines of logs
docker compose logs -f --tail 100

# stop and remove the container
# DO NOT force shut down hived
docker compose down
```

---

### Witness node

To run as a witness you need a few additional steps. Generate a pair of keys, put the private key in your config.ini, and add the public key to your hive account.

The secure option for generating keys would be something offline like the cli_wallet or some other wallet. But you could also use something like https://hivetasks.com/key-generator to generate random keys, then copy one of the corresponding private/public key pairs. The website is safe at the time of writing. For this example I picked the generated posting key. It doesn't matter; you just need a pair of matching keys:

![image.png](https://files.peakd.com/file/peakd-hive/mahdiyari/EoEwLVDHHiGH74aMeqEDy2TRfhMjZjKwTEUgVcFG1gKEa7KaKYkmnnRSA8AuKTYL8CG.png)

`datadir/config.ini`:

```
witness="username"
private-key=5Jfv7EK8VtnnTgwCpmwvkWsqhKVeNKmgtcYQFeWH3zzjA1Y5qaG
```

Then you can use https://hive.ausbit.dev/witness to register a new witness or update an already existing one. You would need to log in to the website first, then refresh.
<p dir="auto"><center><img src="https://images.hive.blog/768x0/https://files.peakd.com/file/peakd-hive/mahdiyari/23uEyuAfwADeBUDXANmtHRrVrauCJsyi8fb6vVEBSHN455Pvg7dpfqQ3hbkdqpqRvfkBf.png" alt="image.png" srcset="https://images.hive.blog/768x0/https://files.peakd.com/file/peakd-hive/mahdiyari/23uEyuAfwADeBUDXANmtHRrVrauCJsyi8fb6vVEBSHN455Pvg7dpfqQ3hbkdqpqRvfkBf.png 1x, https://images.hive.blog/1536x0/https://files.peakd.com/file/peakd-hive/mahdiyari/23uEyuAfwADeBUDXANmtHRrVrauCJsyi8fb6vVEBSHN455Pvg7dpfqQ3hbkdqpqRvfkBf.png 2x" /> <p dir="auto">Then we put the public pair of our signing key in there and broadcast the the transaction: <p dir="auto"><center><img src="https://images.hive.blog/768x0/https://files.peakd.com/file/peakd-hive/mahdiyari/Eo8L56KsT6XqvLQtFvSaUGWbhDyE8DEQDKQf6B89HdywttbrZqDUe8J1JdqJfbLkzzX.png" alt="image.png" srcset="https://images.hive.blog/768x0/https://files.peakd.com/file/peakd-hive/mahdiyari/Eo8L56KsT6XqvLQtFvSaUGWbhDyE8DEQDKQf6B89HdywttbrZqDUe8J1JdqJfbLkzzX.png 1x, https://images.hive.blog/1536x0/https://files.peakd.com/file/peakd-hive/mahdiyari/Eo8L56KsT6XqvLQtFvSaUGWbhDyE8DEQDKQf6B89HdywttbrZqDUe8J1JdqJfbLkzzX.png 2x" /> <p dir="auto">Scroll down and: <p dir="auto"><center><img src="https://images.hive.blog/768x0/https://files.peakd.com/file/peakd-hive/mahdiyari/23tbMTMG2SQd3Vee9L9orPiJzY14gyyZyFegf5EK4beMyj35pYWJ5iWtfevJwCmut34dY.png" alt="image.png" srcset="https://images.hive.blog/768x0/https://files.peakd.com/file/peakd-hive/mahdiyari/23tbMTMG2SQd3Vee9L9orPiJzY14gyyZyFegf5EK4beMyj35pYWJ5iWtfevJwCmut34dY.png 1x, https://images.hive.blog/1536x0/https://files.peakd.com/file/peakd-hive/mahdiyari/23tbMTMG2SQd3Vee9L9orPiJzY14gyyZyFegf5EK4beMyj35pYWJ5iWtfevJwCmut34dY.png 2x" /> <hr /> <p dir="auto">You can ask your technical questions in the official hive discord (which you can find on hive.io bottom of the page) or <code>#witness or <code>#dev<span> channel on <a href="https://openhive.chat/" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://openhive.chat/ <p dir="auto">For future hived updates, you would just need to edit the hived version in the .env file and be good to go. <p dir="auto"><sub>*The first image is taken from pixabay.com

About system requirements: not sure how much RAM is needed, but I run a node on a machine with 4 gigabytes (maybe 32 gigabytes is for building hive from source). Also, if you have limited disk space, you can run something like a "pruned" node in bitcoin - don't store the block log at all, or store only the last 1 million blocks. To run a "pruned" node you must add "block-log-split = 0" (or "block-log-split = 1" for one million blocks in storage) in config.ini. A massive sync from 0 takes about 1-2 days nowadays. It should be possible to run a node on a Raspberry Pi with an external SSD of around 50 gigs; haven't tested this though - Raspberry Pi is ARM, not sure if it will work smoothly. But for the full block_log it takes about 1TB.
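(For reference, those options as they would go in `datadir/config.ini`:)

```
# datadir/config.ini - pruned node: keep no block history at all
block-log-split = 0
# or keep only the most recent one million blocks:
# block-log-split = 1
```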

 2 days ago  

You are right. It should be possible to run hived on lower end machines with pruned block_log.

I should have probably mentioned that as well. Thanks.

I remember not being able to run hived with 8GB of RAM, but I think that was when building from source. I didn't think about it that much, as with this one I was aiming for seed nodes, and I don't think running a public seed node without the block_log would be that beneficial. But for other uses, it is valid.

 2 days ago  

I edited the requirements with more info.

What internet speed does one need to have to run a node?

 3 days ago  

Speed generally doesn't matter that much, other than for the initial sync where you basically need to download the blockchain (~500GB). I did check one of my nodes and the data usage is around 250GB a month after the initial sync. The maximum block size is 64KB and a block is produced every 3 seconds. So you should be fine on any speed.
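(A rough upper bound on the block data alone, assuming every 3-second block hits the 64KB maximum - the observed ~250GB/month also includes p2p overhead and serving peers:)

```
# blocks per month: 30 * 24 * 3600 / 3 = 864,000
# worst-case block data: 864,000 * 64 KB ≈ 55 GB (≈ 53 GiB)
echo "$(( 30 * 24 * 3600 / 3 * 64 / 1024 / 1024 )) GiB"   # -> 52 GiB with integer division
```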


Definitely don't generate keys using an online service. Maybe it does the right thing and never sends the keys to the server where it's hosted, but that kind of behavior might change over time, i.e. the server might get hacked, etc. Instead, the CLI wallet can be used to generate the keys locally.
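(A minimal sketch of doing that with cli_wallet, assuming a recent build - exact startup flags vary between versions, but `suggest_brain_key` is the relevant wallet command:)

```
# start cli_wallet with a local wallet file
# (depending on the version/build it may also want a node endpoint via -s)
./cli_wallet -w wallet.json
# then, at the wallet prompt:
#   suggest_brain_key
# this prints a brain key plus the matching WIF private key and public key;
# the private key never leaves your machine
```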

> So even if you provide one block_log, it will split it first, and you should pay attention to your storage space in this case as 1TB might not be enough.

Hmm, does this mean that when block_log is split into parts, it takes up much more space?

 3 days ago  

It doesn't consume the original block_log when generating the parts. So you end up with 2 full block_logs. Hence more space usage while splitting.

With docker, setting up a witness node looks pretty easy. But which hosting would you recommend for running a witness node on a VPS?
Also, is there any option to run a hive-engine witness node in docker?

 3 days ago  

Any hosting that you can find that is cheaper and meets the requirements. I have seen people use Hetzner (you'll get in trouble if they find out you are running anything crypto related), OVH, and Scaleway.

I don't know about hive-engine. Try asking in their discord or check their repositories.

Hmm, maybe you are right - the hive-engine discord will probably have the right answer about their witness node.


That's the reason some hosts are against using their infrastructure for crypto-related things 🤷🏼‍♂️
And I have another problem here: if I'd like to order a VPS, I need to be able to pay for it from my country (due to troubles with Visa, MasterCard, etc.), or the provider needs to accept crypto payments. But that's more my problem, of course.


Also, can you explain how to install the cli_wallet and how to use it?
I think a link to it or a short description would be enough.

 3 days ago  

You can try ordering from the providers in your own country.
The binaries are available here for cli_wallet https://gtg.openhive.network/get/bin/ or you can build it yourself from the hive repository.

Thank you, I'll try to look at how to work with cli_wallet in more detail!

Thank you for sharing such valuable content.

Is it only one terabyte of storage? I thought it had to be more?

Also, is an Azure VM a viable option?

 2 days ago  

I would imagine Azure being very expensive but sure if it meets the requirements.

I used to run a node on AWS. Cost about $300/mo, so yes possible but VERY expensive.

I have a $150 credit every month from VS subscription, so was thinking of using that...

It might be possible to get it to fit within that price range using reserved pricing: https://azure.microsoft.com/en-us/pricing/calculator/. I tend to avoid hyperscalers unless I need something that can scale quickly, deploy to a bunch of regions easily, or I'm using a cloud-native feature (aka lambdas). But it's a whole lot cheaper to grab a dedicated server from one of the many, many providers that are out there, or colocate your own server with a provider.

Few comments/corrections:

<p dir="auto"><code>docker compose is unnecessary when you run just one application container. <p dir="auto">Docker is pretty much deprecated nowadays as it have a better, mature successor: <strong>Podman<span> (a.k.a. "libpod") <a href="https://podman.io/" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://podman.io/ <p dir="auto"><em>Ubuntu (redundant Debian derivative) is unnecessary and unjustifiable. Since it have no technical reasons to exist, it is better to use <a href="https://www.debian.org/" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link"><strong>Debian instead. <p dir="auto"><code>hived <code>v1.27.7<span> have been (already) tagged/released on 2024-12-14: <a href="https://github.com/openhive-network/hive/releases" target="_blank" rel="noreferrer noopener" title="This link will take you away from hive.blog" class="external_link">https://github.com/openhive-network/hive/releases
 2 days ago  

hived requires certain parameters to be applied when you run the container and with the help of compose yaml files you get to just start the container with a very short command. There is nothing wrong with other ways of running a container.
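(As a rough illustration of the convenience argument - the image name, ports, and mount paths below are placeholders, not the actual values from the repository's compose file:)

```
# roughly what a bare `docker run` would have to spell out every time
# (placeholder image/ports/paths - the compose file carries the real ones)
docker run -d --name hived \
  -v "$(pwd)/datadir:/home/hived/datadir" \
  -p 2001:2001 -p 8090:8090 \
  example/hived:latest --replay

# versus, with compose:
docker compose up -d
```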

Debian should be fine but Ubuntu 22 is the recommended OS for Hive apps.

That's just bad release naming. The latest unstable version of hived currently is 1.27.7rc16, so the logical next stable release should have been 1.27.7. Once there is a stable release I can edit the post.

What do you mean by "bad release naming"? The project obviously follows semantic versioning: https://semver.org/ https://en.wikipedia.org/wiki/Software_versioning

 2 days ago (edited) 

In the same source you sent:

> Example: 1.0.0-alpha < 1.0.0-alpha.1 < 1.0.0-alpha.beta < 1.0.0-beta < 1.0.0-beta.2 < 1.0.0-beta.11 < 1.0.0-rc.1 < 1.0.0

RC comes before the stable release
You can't have 1.0.0 then release 1.0.0-rc.1
The latest release of hived is 1.27.7rc16 (https://gitlab.syncad.com/hive/hive/-/tags) and you can't release that after releasing 1.27.7

RC: Release Candidate
1.27.7rc16 means it is a candidate to become 1.27.7

And no. hived does not follow semver.

!PIZZA

Thank you for sharing this

PIZZA!
Hive.Pizza upvoted this post.

$PIZZA slices delivered:
(8/10) @danzocal tipped @mahdiyari

Please vote for pizza.witness!