One-block Irreversibility (OBI) is a proposed protocol change whereby a block will be considered irreversible as soon as a sufficiently large super majority of the currently scheduled block producers vote that the block in question should become the next valid block in the blockchain. The goal of this protocol change is to enable most Hive transactions to become irreversible within a few hundred milliseconds (before a second block has even been produced, hence the name “One-block Irreversibility”).
Despite the name, the OBI protocol doesn’t guarantee that EVERY block will become irreversible in one block. Indeed, in the general case, it might take N blocks to achieve irreversibility (an “OBI-N” case), but in the overwhelmingly common case we can expect irreversibility to happen in one block (i.e. OBI-one), because of the force inherent in this protocol.
As a side note, my understanding is that there have been proposals for somewhat-related, but not yet implemented, protocols to speed up block finality by Ethereum devs (Casper?) and EOS devs (DPOS 3.0+BFT), but for various reasons I didn’t review either of those proposed protocols in sufficient depth, so I won’t be comparing and contrasting them to this protocol.
Before going into detail on the proposed protocol change, let me first explain what is meant by an irreversible block, and why it is an important concept for DPOS-based blockchains such as Hive.
Similarities between Irreversible (DPOS terminology) and Fully Confirmed (Proof-of-work terminology)
An irreversible block in DPOS is similar to a fully confirmed block in a proof-of-work blockchain.
Both concepts are used as a way to be confident that a crypto transaction (for example, a money transfer to your wallet) has been accepted by the network and that enough block-producing nodes have agreed that this transaction happened that it is safe to assume the transaction (in the example case, your payment) can’t be reversed by a fork.
For example, if you operate a store that accepts bitcoin payments, you might not want to let your customer leave the store with their items until their bitcoin transaction has fully confirmed. For bitcoin, a block is generally considered fully confirmed when 6 further blocks have been built on top of the block (each subsequent block can be viewed as “vote of confidence” in the original block). With an average block production time of 10 minutes, this means you could be waiting about an hour (6 * 10 minutes) to be sure of your payment.
Obviously waiting one hour for a payment wouldn’t be practical for most retail stores, and this has led to many workarounds (including perhaps most famously, the Bitcoin Lightning Network).
An important difference: irreversible blocks cannot be automatically reversed
Theoretically, even “fully confirmed” blocks can automatically be reverted by bitcoin nodes, but it is generally assumed that such an eventuality is so extremely unlikely in practice that it is safe to rely on the payment.
So, while they are similar concepts, there is an important difference between DPOS irreversible blocks and POW’s fully confirmed blocks: irreversible blocks cannot be reversed automatically, but fully confirmed blocks can be.
In other words, unlike transactions on the bitcoin network, the transactions in irreversible blocks are irreversibly confirmed: they can no longer be reverted from a node’s internal financial ledger due to a fork unless the node operator manually intervenes by popping the most recent blocks from the node’s block history and then replaying the blocks in the blockchain.
So if two nodes in a DPOS network end up on different forks with irreversible blocks in the two forks, those two nodes can never switch to a common fork (an irreversible split in the blockchain) without manual intervention by at least one of the node operators. This is undesirable, so it is best to be conservative when choosing a heuristic for determining when a node should treat a block as irreversible.
Irreversibility under current Hive DPOS protocol
Right now, a block becomes irreversible in Hive once 3/4ths of the witnesses (currently scheduled block producers) have “voted” on including the block into the blockchain.
Under the current DPOS protocol, a block producer votes for a previous block by linking to it when it produces its own block. For example, if block producer 1 (bp1) produces block A, the next block producer (i.e. bp2) can create a block B that links back to A (by including the hash of block A in block B). This can be viewed as bp2 voting for the fork that includes block A. If the next block producer (bp3) builds off block B, this is yet another vote for the fork that includes block A.
Once 15 of the 21 block producers (3/4*21=15.75 rounded down to 15) have built off a block, it becomes irreversible. The basic idea behind this is that if ¾ of the block producers are on the same fork, it would be extremely unlikely that the remaining ¼ of the block producers could create a longer chain.
Another thing that further makes such a possibility unlikely is that Hive block producers, by default, are configured to not generate blocks if the “participation rate” drops below 33%. The participation rate is a metric used by block producers to see how many other block-producing nodes they are directly or indirectly in contact with via the peer-to-peer network (they measure this by tracking if they receive the most recent blocks produced by these block producers).
For example, imagine a network split happens between North America and Europe due to an ocean cable being cut, with 3/4ths of the block producers connected via the European side of the split, and the remaining 1/4th connected on the North American side of the split. The block producers on the European side would continue to produce blocks (because the participation rate would be ¾ = 75%) but the North American block producers would stop producing blocks entirely after the participation dropped below 33% (it would rapidly drop to ¼ = 25%) and only the chain fork on the European side would continue to add new blocks. This is generally beneficial, because it makes it difficult to launch a double-spend attack during the time the network is split.
So how does irreversibility play out in such a case? If all 3/4ths of the nodes on the European side were successfully producing blocks, blocks would still become irreversible, because a block would eventually get 75% of the block producers to build off of it. But if one of these block producers stopped producing because of some computer outage, even the European fork would no longer have enough block producers to mark the new blocks as irreversible.
Since the current irreversibility algorithm requires 15 blocks to be built off a block before that block becomes irreversible, the fastest time a block can become irreversible in Hive now is 45 seconds (15 blocks * 3 seconds/block).
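As a rough sketch, the arithmetic above can be expressed as follows (the function names are illustrative, not taken from the Hive codebase):

```python
BLOCK_INTERVAL_SECONDS = 3  # Hive produces one block every 3 seconds

def irreversibility_threshold(num_producers: int) -> int:
    """3/4 of the scheduled producers, rounded down (15 for 21 producers)."""
    return (3 * num_producers) // 4

def fastest_irreversibility_seconds(num_producers: int) -> int:
    """Best case under the current protocol: one implicit vote per block,
    so the threshold must be reached one block at a time."""
    return irreversibility_threshold(num_producers) * BLOCK_INTERVAL_SECONDS
```

For Hive’s 21 scheduled producers this gives the 15-block threshold and the 45-second best case described above.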
We can see that the current delay in finalizing a block occurs because each block producer can only “vote” by creating his scheduled block, and these blocks are produced sequentially, once every 3 seconds. But what if all the scheduled block producers could vote immediately after they receive a block, instead of having to wait their turn to vote?
The OBI protocol in action
The distinguishing feature of the OBI protocol is that each block producer will broadcast a “valid vote” to the p2p network for each block it receives, immediately after it has successfully validated the block and made it the new head block for its local copy of the blockchain (instead of just waiting its turn to implicitly vote for the block when it produces its own block).
This new mechanism allows for block producers to reach consensus on the validity of a block much faster than the existing mechanism (in a well-connected network, a block should typically become irreversible before the next block is even produced).
Here’s a simple example of how this works in practice:
- Block producer 1 (the block producer scheduled to produce the next block) generates and broadcasts a block to the p2p network.
- Other nodes receive this block and temporarily apply it as the next block in their local copy of the blockchain to test if the block is valid. If the block is valid, the node’s local state will be updated with the transactions contained in the block. If the block is invalid, the node will roll back the changes made by the block to their local state. So far, this is how the DPOS protocol currently works.
- New OBI step: If the node is one of the scheduled block producers, the node signs and broadcasts a new type of p2p message called a block_validity_vote if it considered the block valid and made it the new head block for its local copy of the blockchain. This message, signed with the block producer’s signature, contains the block producer’s name and the block id of the newly applied block.
- New OBI step: Each node will keep a temporary buffer of the valid block_validity_votes it receives (and also propagate these votes to their peers using the normal p2p rules for message propagation). If a node receives the required ¾ majority of distinct block producer votes for a block, that block can be marked as irreversible and written to its block_log.
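The vote-collection step above can be sketched roughly as follows. All names here (BlockValidityVote, VoteTracker, etc.) are invented stand-ins, not the actual identifiers in hived, and signature verification is elided:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BlockValidityVote:
    producer: str   # block producer's name
    block_id: str   # id of the block being voted valid
    # a real vote would also carry the producer's signature

class VoteTracker:
    def __init__(self, scheduled_producers: set, required_majority: int):
        self.scheduled = scheduled_producers
        self.required = required_majority
        self.votes = {}  # block_id -> set of producers who voted for it

    def on_vote(self, vote: BlockValidityVote) -> bool:
        """Record a vote; return True once the block can be marked irreversible."""
        if vote.producer not in self.scheduled:
            return False  # only scheduled producers cast legal votes
        voters = self.votes.setdefault(vote.block_id, set())
        voters.add(vote.producer)  # a set makes duplicate votes harmless
        return len(voters) >= self.required

# 15-of-21 threshold, matching the figure used earlier in the post
tracker = VoteTracker({f"bp{i}" for i in range(1, 22)}, required_majority=15)
```

A node would call on_vote for each vote arriving from the p2p network and write the block to its block_log the first time it returns True.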
In a normally well-connected Hive p2p network, this should result in most blocks becoming irreversible on a node within a second or less after they are produced. The exact time required depends on the message latency between nodes and the number of network hops between the node and the block producers.
As a side note, recent optimizations to the p2p network code have reduced the time for messages to traverse hops between nodes (and also made it easier for nodes to cheaply maintain direct connectivity to more peers and thus reduce the number of hops between nodes, but I think the current default of 20 peers will be more than sufficient for most use-cases).
Faster irreversibility without blockchain bloat
At the inception of the idea, the design behind One-block Irreversibility included storing the approval vote messages into the next block as a means of proving irreversibility of the prior block. But this adds unnecessary bloat to the size of the blockchain, because the existing mechanism for proving a block is irreversible already works well for all but the most recent blocks.
Instead, to prove that recent blocks are irreversible, nodes can keep around the block_validity_votes that they receive to mark a block as irreversible until they have received a sufficient number of follow-on blocks that build off the block. At that point, the block_validity_votes for that block can be discarded.
So one of the nice aspects of the OBI protocol is that it doesn’t increase the amount of blockchain storage, because the block_validity_votes are only kept temporarily in memory (and only a small amount of memory is required).
New votes by a block producer override its old votes at a given block number
Nodes employing the OBI protocol only track the most recent vote cast by each block producer. So if a block producer switches to a different fork, all the votes it cast for blocks that will be discarded during the fork switch will be “overwritten” by the votes it casts for the new blocks at those block positions. In other words, at any given time, every node will only consider one vote by a specific block producer at a specific block position.
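A minimal sketch of this “newest vote wins” bookkeeping, under the simplifying assumption that votes are kept in a per-(producer, height) map (the names and data layout are invented for illustration):

```python
class LatestVotes:
    def __init__(self):
        # (producer, block_num) -> block_id of the fork they last voted for
        self.latest = {}

    def record(self, producer: str, block_num: int, block_id: str) -> None:
        # Overwrites any earlier vote by this producer at this height,
        # e.g. when the producer switched forks.
        self.latest[(producer, block_num)] = block_id

    def votes_for(self, block_num: int, block_id: str) -> int:
        """Count producers whose current vote at this height is this block."""
        return sum(1 for (p, n), bid in self.latest.items()
                   if n == block_num and bid == block_id)
```

A producer that switches forks and re-votes at the same height simply replaces its map entry, so at any moment only one vote per producer per height is counted.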
Better monitoring of the state of the P2P network and blockchain
Another interesting aspect of the OBI protocol is that it allows for much better monitoring of the status of the Hive P2P network when it is experiencing connectivity problems than was previously possible. Every node in the network (block producer or regular hived node) tracks the current head block of every block producer it is connected to, so it effectively knows how many block producers are reachable from its network and which forks they are on.
Improving irreversibility by counting blocks as implicit votes
Blocks that build off a block also serve as votes for that block’s irreversibility. So rather than simply counting block_validity_votes, nodes will also count the number of subsequent blocks that build off the block (to the extent that these new “block votes” don’t overlap with block producer votes they have received). This is basically how DPOS 1.0 treated subsequent blocks as votes for previous blocks.
In a well-connected network, this should not result in any speedup in the time it takes for a block to become irreversible, because votes will be cast much more rapidly than new blocks will be built off the block, but it should improve irreversibility time when the network is experiencing outages that cause some votes to be missed.
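One way to sketch the combined count (illustrative only): explicit voters and the producers of descendant blocks are merged as a set, so a producer who both voted and built on the block is counted once:

```python
def total_voters(explicit_voters: set, builders_on_top: set) -> int:
    """Number of distinct producers supporting a block, either via an
    explicit block_validity_vote or implicitly by building a block off it."""
    # Set union avoids double-counting a producer who did both.
    return len(explicit_voters | builders_on_top)
```

For example, if a node saw explicit votes from bp1 and bp2 and later received blocks produced by bp2 and bp3, the block has three supporters, not four.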
Longer witness scheduling to be able to determine the next block producers
For optimal performance, the OBI protocol should maintain constant knowledge of the next 21 scheduled block producers, so that nodes know which block producers are casting legal votes.
In the current code, the next set of block producers is selected once every 21 blocks. So, for example, after 18 blocks have been produced in the round, only the next 3 block producers are known. If a node only knows the next 3 block producers, it can’t get a ¾ majority of the next 21 block producers.
To enable all blocks to be potentially irreversible without waiting for more blocks to be generated, the OBI code also modifies the witness scheduling algorithm by scheduling a 2nd round beyond the round that is currently producing blocks. This means that nodes will know at least the next 21 block producers at all times, and as many as the next 42 block producers, so any given block can potentially receive a sufficient number of votes to become irreversible without waiting for another block to be generated.
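A sketch of why the extra scheduled round helps, with invented names and the simplifying assumption that schedules are plain lists (real witness scheduling shuffles producers pseudo-randomly each round):

```python
ROUND_SIZE = 21

def known_future_producers(position_in_round: int,
                           current_round: list,
                           next_round: list) -> list:
    """Producers known to be scheduled after `position_in_round` blocks of
    the current round have been produced (0 <= position_in_round < 21)."""
    remaining = current_round[position_in_round:]
    # With a second round pre-scheduled, at least a full round's worth of
    # future producers is always known.
    return remaining + next_round
```

With only one round scheduled (the old behavior), 18 blocks into a round just 3 future producers would be known, too few to assemble a ¾-of-21 majority; with two rounds, the same point in the round still leaves 24 known producers.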
Longer schedules to reduce chance for irreversible forks during a big voting slate change
Another benefit of the above change is that votes to replace the current set of block producers will take longer to take effect, which helps reduce the chance of an irreversible fork.
In the current protocol, it can take between 1 and 21 blocks before a witness vote change affects which block producers are elected to produce a block. For example, let’s assume that the nodes have just scheduled the next 21 witnesses to produce blocks. New witnesses can be voted on, but the currently scheduled witnesses will still be the ones to produce the next 21 blocks. Only after the round is finished will any new witness be able to produce a block. So, in the shortest case, a bunch of witness votes could be included in the last block of the round, and the entire slate of top 20 witnesses could be replaced in the next block. This could potentially lead to problems for One-block Irreversibility if such votes were cast in the last block of a round and the network forked during this time, leaving a split network where some nodes received the vote transaction(s) that changed the witness slate (and therefore start using a different set of witnesses to determine a ¾ majority than the other side of the fork). In such a case, both forks could get a ¾ majority of “their” witnesses, and the two sides of the split wouldn’t be able to regain consensus without manual intervention (an irreversible split).
Fortunately, to address problems of this type, the OBI protocol has a longer set of scheduled witnesses, so there will always be a guarantee of at least 21 blocks produced by the currently scheduled witnesses before newly-voted-in witnesses that were elected in the current head block can produce blocks.
Parting thoughts
As far as I’m aware, the One-block Irreversibility protocol will put Hive at the forefront in terms of transaction confirmation time.
Hive already had one of the fastest average blockchain confirmation times at 45 seconds, so you may be wondering why I think it’s so important to speed it up further. The biggest benefit comes for 2nd layer apps:
First, 2nd layer apps can now be more interactive with their users, since they have faster guarantees of irreversibility.
And second, HAF-based apps will benefit in terms of better performance, because HAF table views stitch together two types of tables (tables for irreversible data and reversible data). When most of the data is in the irreversible table, a HAF server will operate faster (because there’s more overhead required to maintain reversible tables).
Indeed, with OBI in play, it would not be surprising if many HAF apps elect to rely strictly on the irreversible data, since blocks will normally become irreversible within one second or less. And this will really speed up the performance of SQL queries for such apps, because stitching together data from two different tables is no longer required.
So, all-in-all, I believe the incorporation of One-block Irreversibility will have profound benefits for the scalability of Hive apps (and the potential growth rate of the entire ecosystem).
Blocktrades and I had a nice discussion on this; and I wanted to also present publicly here my suggestion for achieving both his goal of "high confidence" on the latest block, as well as my goal of conclusive BFT and finality.
In essence, the issues that arise with treating a single phase of votes on a given block as evidence of global finality (not exactly what this proposal's goal is, mind you, but what my goal would be) are resolved by having a 2-phase approach: a pre-commit phase and a commit phase. During the pre-commit phase, votes for a block are collected. If enough pre-commit votes are collected, the commit phase begins, and if enough votes are again collected, the block can be treated as final and irreversible (by BFT constraints). This 2-phase approach is well known in BFT strategies and is what is employed by methods like PBFT and Paxos.
This two-phase goal could also be achieved without adding extra messages, by appending the commit-phase messages for block N-1 (assuming you have enough pre-commit messages) to the pre-commit phase votes for block N.
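One reading of that piggybacking idea can be sketched as follows; all names here are invented, and this is only a sketch of the message shape, not an actual protocol implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class PreCommitVote:
    producer: str
    block_id: str               # block N being pre-committed
    commit_prev: Optional[str]  # block N-1's id, if also committing it

def make_vote(producer: str, block_id: str, prev_id: str,
              prev_precommits: int, threshold: int) -> PreCommitVote:
    """Build a pre-commit vote for block N that piggybacks a commit-phase
    attestation for block N-1, but only if enough pre-commits for N-1
    were observed."""
    commit_prev = prev_id if prev_precommits >= threshold else None
    return PreCommitVote(producer, block_id, commit_prev)
```

So in the good case, each single vote message advances block N through its pre-commit phase while simultaneously finalizing block N-1, which is how the two phases avoid doubling the message count.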
In this way, the pre-commit votes for block N would serve as a presentation to the network of high confidence (this is Blocktrades' goal) in one block. But further, in the best (and average!) case, we can also conclude and present true finality for block N-1. If consensus used this true finality, we would avoid any potential soft-lock issues or split consensus on irreversibility that would come with treating the high-confidence single phase as truth. dApps could choose whether to use the high-confidence metric or the finality metric for their own use-case.
Granted, this is more work and could be done afterwards, but I think achieving proper BFT and finality should be a goal for Hive, and is definitely achievable without extra p2p messages compared to this proposal.
I have some other work I need to do tonight, but I'll think about this proposal tomorrow.
I very much like this approach. I would add a soft/marketing benefit of this is that in crypto projects it is extremely important to avoid FUD vectors when possible (even if the FUD is out of context or nonsense, it can be harmful, or at best a distraction). Something that probably will work in practice can still be very vulnerable to FUD, but something that is feasible to show always works is less susceptible.
Although I support the general direction, the devil is in the details. I can't help but worry that OBI might expose problems we never knew we already had.
For example, in the current code a witness can only change his vote once per schedule (he can produce a block B whose chain does not include his previously produced block A, because he switched forks in the meantime). This can happen only in a very specific situation, and it still depends on the other witnesses whether they accept that change of heart (they can still be on a fork that includes A but not B). With OBI, the same witness can potentially change his vote a couple of times per block, so if there is any problem that would be exposed by a changed vote, OBI will make it that much more likely to actually happen.
Then there is also the problem of the witness signature. A witness can change his signing key. With the current code, such a change only matters at his next scheduled block. What about OBI? Note that a changed signature can still be a pending change (but already in the state)... hmm... I actually wonder whether it isn't a problem today (a witness changing his signature and producing a block signed with the new one, that block being accepted even though the transaction that made the change was not yet included in any block, and possibly never becoming part of one, due to expiration for example). I did not check whether we could actually have such a problem, I'm just thinking aloud; maybe we actually have something that prevents such situations.
Double producing. Whether it is intentional or the result of a misconfigured backup node, and whether it uses the same or different witness signatures, it can currently only happen once per schedule. With OBI, the same node can legitimately vote for some block but revoke that vote immediately after receiving a new block on a different fork. Depending on network propagation, some nodes might think the old vote is still valid and act on it - they won't be able to switch forks even though they might find themselves in the minority.
This is irrelevant for OBI. It doesn't matter whether the BP votes with the old signature and the vote gets accepted, or votes with the new signature and it gets accepted. The BP still only gets one vote, and that's all that matters. And all such votes aren't even retained, so there's no issue for replay either. Further, note that it is not a big deal if a BP's vote doesn't get counted because some node might not have applied the pending signature change; it's just one less vote towards irreversibility.
As to votes cast "by block linkage", all pending transactions are rolled back before applying a block, so no pending signature change will allow a BP to sign a block with the pending signature. So if a BP changed their signature just before producing a block and then used that new signature (not sure they would; I'd need to check the code to see if they do), their upcoming block would just be rejected.
Yeah, I shouldn't have been writing so late at night it seems. I realized the pending being undone when I woke up, but you already pointed that out :o)
One block producer double voting isn't going to make anything irreversible, because a 3/4ths majority is needed. The problem case here would be if a LOT of witnesses were double producing. This is part of the reason we don't just use something like a simple majority for irreversibility.
There is one scenario where that could happen: if a lot of witnesses configured a backup producer to "fail-over" produce if the backup loses contact with their primary producer node. Of course, such a node configuration has always been a bad idea and could always result in irreversible forks along the split line if it managed to split backup nodes on one side and primary nodes on the other. But the proper setup for a backup node is to have it ready to run, but require manual intervention to enable block production, so that a network split doesn't result in two irreversible forks.
I always thought it worked the following way: when the backup node detects that the main node missed a block and there is no contact with it, it creates a transaction changing the signing key to its own (signed with the main node's witness key) and starts producing. This way, if the main node really died, the backup starts producing blocks in the next schedule; if the main node is fine and the backup just lost contact with the network, the backup will temporarily be on a fork. In the worst case, backup and main simply swap roles. On top of that, if a key change only became effective for the network once the block containing it became irreversible, swapping the producing node would still be possible in normal cases, but in the case of a massive network split, the network as a whole would rather stop entirely (having too few valid producers on either side of the split) than make the split irreversible.
But ok, if manual intervention on backup is required each time, that works too.
Block producer? Boy you talkin like this is EOSIO wheres @dan larimer and elon musk to get you hired at twitter lol
I think block producer is a better name than witness, so when discussing the concept in a generic sense (as opposed to one where the duties are specific to block producers on Hive, such as voting for HBD interest rates, block size, etc) I prefer it over witness. Of course, DanL is the guy who came up with the term "witness" to start with, so I don't think use of either term is "sucking up" exactly.
If I had to make a guess, block producer at EOS was probably coined by someone who felt that the previous term "witness" was obscure and vaguely threatening, same as me.
imagine when we have a blocktrades eosio main net chain just for handling transactions for blocktrades.us and you have people setup their own blocktrades block producers and use alcor.exchange style dex and yeah have a dex with some system with tokenized stake in communities set up to hold bitcoin and litecoin and ethereum etc
While the above statement is literally true, it's too vague to address unless you have some more specific potential problem. For example, I could make the same statement that if there's any problem in generating transactions, allowing more transactions in a block increases the chance for a problem. It's true, but without pointing to a concrete problem that is exacerbated, there's nothing to really discuss.
I don't have a specific problem yet, I'm just worried about the potential of having one.
When it comes to transactions, allowing bigger blocks means allowing bigger transactions (I don't know why it is so, but transactions are limited by block size, not by their own dedicated constant, possibly to allow bigger custom operations). If there were an operation that slows drastically with size (e.g. some linear search in the handling code, or a bug in the JSON parser, etc.), then as long as blocks and therefore transactions are small, we won't see the problem. Once we allow big ones, the problem we already had, but hidden, suddenly appears, potentially endangering the network.

I see similar potential with OBI, especially since, at least for the moment, it seems to me that it will make irreversibility a forking choice. Currently, when a new block is accepted, the block 15 signatures below it becomes irreversible. Since it already had 14 signatures, it is highly unlikely that it will not become irreversible on other forks, even if the recently accepted block ends up being dropped. With OBI, two different head blocks could become irreversible (I hope not, but I'm not yet convinced they can't), because they might each receive enough votes in different parts of the network, with some of those votes having been revoked in the meantime.
If witnesses are changing their votes then yes you can get two different irreversible head blocks. I would suggest that witnesses not ever change their votes. Of course, you can't literally prevent it, but if it is discouraged, you can probably rely on "enough" witnesses not changing their votes to prevent the problem.
I'm not sure this is actually sufficient when there are forks.
Yes, he's talking about the case of a fork switch. And originally I was also thinking that we had implemented it so that a block producer would cast a new vote for a block during a fork switch (which I think would be ok).
But I was wrong, instead each node keeps track of the last block number it voted for, so it won't cast another vote at the same or earlier block height. While this latter optimization isn't IMO necessary to prevent an irreversible fork, it does reduce the number of votes that get generated during a fork.
I'm not sure it would be okay if multiple votes were cast. You can't be sure that every producer sees every vote. So half may see one vote as part of a 3/4 majority (therefore declaring one fork as immutable) and another half may see the other vote as part of a different 3/4 majority (declaring the other fork as such). This will never resolve because each half will be permanently stuck on their own "immutable" fork.
Yes, it's true that it makes it "worse" if multiple votes are cast, and that was one of the considerations for avoiding it. But it would still take 50% of them doing so to get a 3/4ths majority on both sides.
In the general case of intentionally bad nodes, they can always cast different votes to different nodes and 50% bad nodes can cause an irreversible fork (that's the term I'm using for what you're calling an immutable one, figured it's best if we stick to just one term for it).
But avoiding two votes by good nodes prevents them from ever inadvertently behaving like a bad node, so practically speaking it is much better if they don't.
When we did the initial napkin design, we wanted a protocol that would allow for anything less than 1/2 of the witness to be behaving badly. So, the obvious worst case here would be 1/2 bad witnesses voting on two sides of a fork. In such a case, both forks would have 1/4 good witnesses + 1/2 bad witnesses and a 3/4ths majority would be achieved on both sides of the fork.
I wasn't necessarily viewing it as "bad" unless it is defined that way, which wasn't clear to me. The comment about changing votes many times suggested it might be considered as legitimate behavior, but if not then it seems fine that "some" threshold of bad witnesses can break it. That's essentially unavoidable.
This will be a huge help for me and @v4vapp. Those 45s between first seeing a transaction on Head and then knowing it is confirmed make a huge difference for any user interaction.
Thanks for the very clear explanation of an important though often overlooked concept in the world of crypto transactions.
I will pass this along to my students.
This is a wet dream for all the Dapps out there.
I couldn't think of any flaws in this design. This is perfectly designed and is a huge achievement for Hive and I'm really happy to see this happening. I will certainly give it more thought later.
We will need to use all the marketing we can get to claim this achievement in the crypto space.
This would really help Honeycomb, we run irreversible for consensus and head for API.
From my experience most dapps just use head and hope no reversals actually affect them. So for most dapps this won't be a performance altering change but will put them at less risk if they do decide to switch to irreversible instead of head.
It will be a big performance change for dapps using HAF, and I hope that before long most new dapps will be written using HAF (the performance benefits are substantial for any dapp that wants to scale).
But for existing dapps that just directly process the head block, you're correct: it won't speed them up, it'll make them more resistant to failures in the face of forks (arguably more important than making them faster, of course).
It also wouldn't surprise me if some existing apps wait a block or two (IIRC, hivemind does, for example).
This is very cool! One thing that worries me is that if we discard the votes, we lose the record of who cast the votes that let a block become irreversible. We retrospectively know who the 21 allowed to vote on it were, but not who the actual 3/4 who did vote were. Although I guess at that point, if 3/4 managed to vote a "bad" block in, governance has shifted anyway. But it may be important for legal action should there be a double spend.
We'll know who produced the block, which would be the most important issue if a block producer was actually trying to break the protocol by allowing a double spend. And, yes, if 3/4 of the block producers then signed off on such a block, you can assume governance is gone.
Further, note that any node (not just block producing ones) CAN keep a record of all those votes, and they would serve as proof of who voted on a bad block (because they would have valid signatures).
I guess it wouldn't be too hard to make a plugin for it yeah
failtoban plugin =)
Interesting, at the end of my reading I didn't need paracetamol which seems to me to show that you are very good at this writing exercise.
I hope this short comment will motivate you to supervise the addition of what you wrote about the OBI protocol to the whitepaper, and to take the opportunity to do the same for what you wrote about the last change to the voting windows, along with any other elements that would be appropriate to include (such as the expiration of governance votes).
ps: If you have a link explaining as well how the scheduled block producers work in detail I would be interested as I'm having a hard time understanding its exact process.
ps: why can't we use the same options for the rewards on comments as on posts (e.g. declining the reward)? What logic led to this differentiation? It has nothing to do with the subject of your post, but it bothers me every time I would have used it.
I'm actually not one of the main maintainers of the whitepaper, although I've suggested ideas for it and helped proofread it. But I'll definitely answer any questions that may come up about this when it gets updated (I guess it would probably be a good idea to do it soon, with the upcoming hardfork).
I don't have such a link unfortunately: we had to read thru the existing code to figure out how it worked and how to update it. We try to add comments as we make changes to the code, but for the most part the original code didn't come with many comments.
AFAIK, you can decline the rewards for a comment (not just for posts). I think it's just a matter of if the frontends expose the functionality. But I could be wrong, and it's probably best to ask in frontend devs discord for more/better info on how to do it.
I read the latest version of it this morning on GitLab, and it seems to be maintained by @guiltyparties? I am glad to hear that you are available to help update it, since you are the person with the most knowledge about HIVE, given your skills and your involvement in its development since the beginning, 6 years ago.
I would have tried, hahaha. It's regrettable that it's so time-consuming to learn HIVE when you're trying to get a little deeper into its processes. I know you are working on solutions like HAF to facilitate the development of new dApps, but for people like me who love to learn and deepen their knowledge, it is somewhat frustrating.
I didn't expect this answer, you've taught me something, I'll look into it more seriously to understand why no frontend offers it, thanks.
I think we will have to end up releasing an update to the original whitepaper that is distinctly marked as a different document. The reason is that the current whitepaper is listed in a lot of places, and trying to update all those versions would lead to confusion.
You give me too much credit. Best I can do is influence who listed or partnered with Hive.
i need a TLDR for dummies explanation of this.
Right now it takes 15 blocks before transactions in a past block are irreversible (transactions inside can't be canceled even if the computers processing hive transactions have network problems). This is a proposed change to how these computers process transactions, which would allow transactions to generally be irreversible after just one block. In other words, right now it takes at least 45 seconds to be sure a hive transaction has definitively been accepted. With the change, this will normally happen in 3 seconds or less (most commonly in less than 1 second).
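The arithmetic in this TLDR can be sketched out directly. This is just the timing math from the explanation above (3-second blocks, 15-block depth today vs. 1 block with OBI), not code from hived:

```python
BLOCK_INTERVAL_S = 3  # Hive produces one block every 3 seconds

def time_to_irreversibility(depth_in_blocks: int) -> int:
    """Worst-case seconds until a transaction's block becomes irreversible,
    given how many blocks must be built before irreversibility is reached."""
    return depth_in_blocks * BLOCK_INTERVAL_S

# Current rule: a block is irreversible once 15 more blocks confirm it.
print(time_to_irreversibility(15))  # → 45 seconds today

# With OBI, irreversibility normally arrives within a single block interval.
print(time_to_irreversibility(1))   # → 3 seconds or less
```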
So... make Hive even faster i get it now i think LOL, thanks for taking the time to explain!
Thank you for the update! I appreciate all your efforts @blocktrades !PIZZA
That's the latest update, it's nice sharing it with people. Keep up the good work 👍 @blocktrades
This is a great explanation. Thank you Dan.
I dont understand all of the technical side, but I do like that HAF apps would be able to streamline their processes and speed up a bit. And I am guessing at scale, this all becomes far more important.
I assume this would not be included in the next fork, but I would be interested to hear what some of the app devs think.
It's already implemented and just waiting further testing, so I hope we can include it, unless we find some unexpected problem. But the code changes were actually quite localized, so I don't expect there to be any problems.
🤯 🤯 🤯
This is so rare. Build first, talk later.
Thank you - I missed the reply.
I am looking forward to this fork :)
OBI One FTW!
I'm glad someone acknowledged my nerd joke :-)
So it was intentional :)
The "force inherent in this protocol" was a broad hint...
It is inevitable that something like this will be required, so why shouldn't we be the first to get there? I want Hive to be around 50 years from now, so if 45-second irreversible block times would be laughable then, let's go ahead and address it now if we can. The sooner, the better. Great work.
Leaps and bounds, really looking forward to this.
This is really awesome and I can see great benefits for App builders.
May I ask whether lower-than-3-second block production will be possible in the future, for scaling purposes?
It is definitely possible, based on things we've done to latency times in the p2p layer. But I suspect there would be a fair amount of work on the blockchain side of the code and it would need further investigation.
I didn't realise that HIVE needs 15 blocks before a transaction is 100% accepted. I always thought this was 3 seconds. I can only say about this change: this is what is needed for sure. Maybe not that important for most of the dApps in our ecosystem, but waiting 45 seconds before a payment transaction is finalised with 100% certainty (i.e. no risk to the parties involved in the payment) is too long for use cases such as in-store payments and webshop payments. Therefore I do hope the tests will be flawless and we will see this implemented asap. This will make HIVE an even better general-purpose blockchain, which is the direction we should go in, in my honest opinion.

Someone else mentioned the 1-second block time, and I also read your response. Though maybe not straightforward or 'easy' to implement, for use cases like webshop payments, 1 second is by far preferred over 3 seconds. Maybe also for in-store payments, though the user experience in a store may be ok with slightly more than a 1-second delay on payment confirmation. Anyway, once we have the 1 second for webshop payments, we also have it for in-store payments :)
I would say 1s block time is much more important for social media kind of apps rather than for the financial. 1s is just much more responsive.
True regarding more responsive social media. Maybe I've already gotten accustomed to how our UIs work for social media and have my way of working to deal with the delay 🙃
It's not a blocker per se, as the smart contract system would work without it, but it is very important because it will reduce the CPU and IO resources required to execute the contracts (there will be fewer of the more-expensive reversible records to process).
No, it's still sitting in a dev's local repo right now. I wanted testing to be completed on all the other locking and P2P changes we're making before introducing a new variable, just so it is easier for us to isolate the source of any problem we find and be sure that we're testing OBI on a solid base. All the tests we're developing for those previous changes will also be very useful for testing OBI, because they generally revolve around creating forks.
I think it's a great idea. Two questions:
1-Isn't OBI basically what tendermint does?
2-Doesn't OBI make HAF redundant, since the whole point was to enable devs not to deal with fork handling?
Don't know, haven't read about tendermint, although heard of it in passing. But several of the implementation details of OBI are pretty specific to the way Hive does DPOS, so to be the same would require very similar DPOS functionality. But some commonality wouldn't be surprising in any case.
No, but it makes HAF less critical. Why HAF is still important:
OneBlock.ai
What if for example, 10 witnesses don't vote at all? What will happen?
I think the only solution is to vote new witnesses in. Or wait for enough rounds to get the votes from backup witnesses.
I think it's not just related to OBI; it's the same for the current design too. Assume 10 witnesses miss their blocks and don't produce anything for a while, since irreversibility currently requires 15 different witnesses to vote by producing blocks.
Do you think a built-in failover mechanism is needed here?
If witnesses don't vote, then things just proceed as they do now (slower irreversibility, after a minute or so). Such witnesses should likely get voted out.
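The fallback described in this exchange can be illustrated with a toy check. This is not actual hived code; the 3/4 supermajority threshold comes from earlier comments in this thread, and the fallback is the existing depth-based rule:

```python
SCHEDULED_WITNESSES = 21
# 3/4 supermajority of the 21 scheduled witnesses, rounded up: 16 votes.
OBI_THRESHOLD = -(-SCHEDULED_WITNESSES * 3 // 4)

def obi_irreversible(votes_received: int) -> bool:
    """True if enough scheduled witnesses voted for the block under OBI."""
    return votes_received >= OBI_THRESHOLD

# If 10 of 21 witnesses never vote, only 11 votes arrive:
print(obi_irreversible(11))  # False → falls back to the slower depth-based rule
print(obi_irreversible(16))  # True  → irreversible within one block
```

So a round with too many silent witnesses doesn't break anything: the block simply becomes irreversible the old, slower way.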
Thank you for the post. Very insightful
Some questions.
Wouldn't backup witnesses come into play if 15 of the top 20 are gone (no matter the reason)?
Or do backup witnesses always need to be voted in first?
Another 2 questions (offtopic):
TerraUSD opinions?
and what do you think about liquid assets on hive becoming private coins (if wanted),
like p-hbd and p-hive? I would really like the idea for many reasons. Especially since not everyone wants to show how much they hold.
For Hive Power it doesn't work, but for liquid assets? Would you support it?
Backup witnesses have to get voted in unless the higher voted witnesses are disabled (signing key set to null).
ok. That's good to know. I mean it makes sense.
But what if it's not 25% that disappear but 35%? At that point, no blocks would be produced, right?
Default minimum participation is 33% so if 35% disappear then it keeps going but if 70% disappear then most witnesses would stop producing. Meeting would have to be called and chain restarted (by resetting minimum participation lower temporarily until issue resolved some other way).
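The participation floor mentioned above can be modeled with a toy calculation. The 33% figure is from the reply; the idea of measuring participation over a recent window of block slots is an assumption for illustration, not the exact hived mechanism:

```python
REQUIRED_PARTICIPATION = 0.33  # default minimum participation from the reply

def participation_rate(produced_flags: list) -> float:
    """Fraction of recent block slots that actually got a block
    (1 = block produced in that slot, 0 = slot missed)."""
    return sum(produced_flags) / len(produced_flags)

def should_keep_producing(produced_flags: list) -> bool:
    """Witnesses keep producing while participation stays at or above the floor."""
    return participation_rate(produced_flags) >= REQUIRED_PARTICIPATION

# 35% of witnesses gone → ~65% of slots filled → chain keeps going:
print(should_keep_producing([1] * 65 + [0] * 35))  # True

# 70% gone → ~30% filled → below the 33% floor, most witnesses stop:
print(should_keep_producing([1] * 30 + [0] * 70))  # False
```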
Thank you!
Learning new things these days :)
Everything that benefits the Hive cryptocurrency ecosystem and brings speed and security to our transactions is positive and beneficial for the hive.
I like this change and am curious what type of overhead this has for the p2p network. Do the p2p improvements you mention cover this additional overhead?
There has been talk of speeding up block times to allow for better interactivity for users, potentially 2 or even 1 second block times. The general feedback is that it is definitely possible, at least in theory. Would 2 or even 1 second block times be possible with this additional overhead?
Like I mentioned in another comment, in my experience most dapps use HEAD, so they accept the risk (knowingly or not) of reversible blocks in favor of speed. Exchanges, I believe, are the main ones using IRREVERSIBLE. Most dapps would see near-identical speeds to HEAD but with the protection of IRREVERSIBLE, which is a great best-of-both-worlds scenario.
The overhead on the p2p network is very small: it's one small transaction generated by each of the producing witnesses. So in the normal case, it's 21 small additional transactions that travel over the network, but don't get recorded into the blockchain.
The p2p improvements vastly outweigh the additional overhead of these 21 transactions (it's not even close). Here's an example that only node operators are likely to understand of how substantial the improvements are with the new p2p code: on the mirrornet, with 2 nodes in the US and one overseas in Europe, the block offset times of all the witnesses are actually negative now (handling the same traffic as the mainnet).
Yes, it won't have any impact at all. The latency improvements I mentioned above would probably allow 1 second block times, especially if we were willing to accept a few more missed blocks occasionally. But switching to 1 second block times now that the chain is launched still wouldn't be trivial, because there's a lot of code that assumes unchanging block times.
Yes, that's another of the driving reasons for the change.
Would switching from 3 to 1 second block times also scale the network by 3X? I assume the block sizes would stay the same as they are now. I believe I remember reading a post somewhere about the eventual risk that blocks would fill up if we saw mass adoption. I am not sure if that post had much validity though or if that is even much of a limitation.
Well, if block size remained the same, you could store more data, but the easy option if we want to scale to process more data would just be to increase the block size.
This latter option doesn't even require a code change: if enough witnesses vote to increase the block size, it will increase automatically. The "con" to increasing the block size is just that it means the blockchain can grow faster and right now the block size puts an upper limit on that growth, so it can be viewed as a "safety net" versus a spam attack.
But decreasing the block time offers another benefit that can't be gained by increasing the block size: decreased latency between when a transaction is broadcast and when it is accepted into the blockchain.
This can be interesting for things like games. A game will wait until a transaction gets accepted into a block before it will fully process it. So with the block time being 3s, the game will process the transaction about 1.5s (on average = 3/2 s) after the transaction is broadcast, with a worst-case time of 3s (that's a slightly simplified model, but let's go with it). With a block time of 1s, the game will respond in about 0.5s (on average = 1/2 s), with a worst-case time of 1s.
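The simplified latency model above (uniformly random arrival within a block interval) can be written out as a small helper. This is just the comment's model, not any Hive library API:

```python
def expected_wait(block_interval_s: float) -> tuple:
    """Return (average, worst-case) seconds between broadcasting a transaction
    and seeing it included in a block, assuming the transaction arrives at a
    uniformly random point within the block interval."""
    return block_interval_s / 2, block_interval_s

print(expected_wait(3.0))  # → (1.5, 3.0): today's 3-second blocks
print(expected_wait(1.0))  # → (0.5, 1.0): hypothetical 1-second blocks
```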
Changing block time is an absolute nightmare, I wouldn't dare to go that route. I think faster block times for interactive games can be achieved more easily, even with some second layer consensus with multiple "application specific block producers". Each ASBP can accept incoming transactions in zero time, reacting immediately on a temporary side chain (broadcasting accepted transactions to other BPs of the same app), and then makes that side chain permanent by including it in a custom op on HIVE.

Of course that solution opens a lot of issues: how to broadcast the side chain between blocks (maybe that is not needed if different ASBPs were working independently, sort of like different instance servers of some MMORPG, since they only need to communicate the final outcome), will the side chain fit in a custom op (who knows), or more importantly, why does that app need HIVE in the first place (to keep a record on a proven chain with many nodes and an established economy, to make reaching second layer consensus easier, stuff like that).
This topic is being discussed from time to time and I would love to read the longer explanation from your point of view why that's the wrong path.
Ok, a bit more but still briefly.
It is a high cost, high risk, low reward endeavor. First it might seem like all it takes is to change the constant that governs block interval. But there are plenty of places in the code where author(s) directly state that it is not prepared for different block times (guarded with static assertions) and even more places, where such assumption was made silently. Especially parts where stuff happens every block would need to be carefully reviewed. There is really no shortage of work that needs to be done instead of that.
Second, shorter blocks pose more problems with network communication (it is not an accident that with half-second blocks, EOS BPs produce 6 consecutive blocks each during their scheduled time). There is also more overhead.
Finally, there really are not that many applications that can't work with 3s blocks but would be fine with 1s blocks (those that exist should really be based on EOS, which is one of the benefits of having many different chains in the crypto space). If anything, there is a higher chance that an application actually needs communication that is as fast as possible, where even 1s blocks won't suffice. In such cases there are two possibilities.

First case: the app does not really need to store interactive data, like the previously mentioned MMORPG. When you are e.g. engaged in a competitive PvP battle, the server has to take over all communication; it can't go through the blockchain (it only needs to be the same server for the battle participants, different matches can be handled by different servers). But in the end, data on your exact position in every millisecond of the match does not need to be permanently recorded. The server just needs to send a transaction with the final outcome, number of points gained, consumed or exchanged items, rewards given, etc., so other servers of the same app can properly update their state.

Second case: the app really needs to record everything, e.g. due to regulatory requirements, like if you wanted to make a stock exchange. In such a case the most reasonable approach would be to split the work between independent servers (at most one server per market pair) and either record everything on separate public side-chain(s) linking to HIVE with hashes only and records of finalized trades, or (most likely after filtering bloated HFT activity) put the content of a temporary side-chain inside custom op transactions (assuming it would fit).
To sum it up: a lot of effort that is better utilized elsewhere, risk of bugs, technical difficulties and in the end it does not really help.
Oh, cool. I didn’t realize the block size was so easily modified. I thought there were more technical/resource restrictions that were in play.
Awesome, thanks for the breakdown. I'm totally onboard with it.
We can estimate the overhead somewhat by observing that the signatures are going to be maybe 100 bytes roughly, so 2k bytes for all of them. Blocks are currently limited to 64k so this is around 3%. If block size goes up it would be less. (Ignores various details about how transactions and blocks are transmitted around, but close enough for discussion.)
That was a pretty good estimate, actually. It's done as a regular transaction (eg. has TAPOS data, etc) and it looks like the transaction will vary between 98 and 115 bytes in size (depending on size of account name of the witness).
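Putting the two replies' numbers together (64 KiB block size limit, 21 witnesses, up to 115 bytes per vote transaction) gives a quick sanity check on the estimate. Note the votes travel over the p2p network rather than being stored in blocks, so the block size here is just a yardstick for scale, as in the original estimate:

```python
BLOCK_SIZE_LIMIT = 65536  # current 64 KiB block size limit mentioned above
WITNESSES = 21            # scheduled block producers per round
VOTE_TX_BYTES = 115       # upper bound per the reply (98-115 bytes per vote)

# Total bytes for a full round of OBI votes:
overhead = WITNESSES * VOTE_TX_BYTES
print(overhead)  # 2415 bytes

# As a fraction of a maximum-size block (upper bound; ~3% with 100-byte votes):
print(round(100 * overhead / BLOCK_SIZE_LIMIT, 1))  # ≈ 3.7
```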