Fieldza555

Reputation: 13

How does DAG-based consensus like Narwhal prevent two validators from including identical transactions in their blocks?

I am reading about DAG-based consensus and BFT, but I can't understand how validators interpret their DAG as a list of blocks. In leader-based BFT, like Tendermint, only the leader proposes a block at each round, so the other validators can agree on that block and store it without worrying that it contains duplicate transactions. On the other hand, for a DAG like Narwhal, they say about the normal blockchain mempool:

A transaction submitted to one validator is gossiped to all others. This leads to fine-grained double transactions: most transactions are shared first by the mempool, and then the miner/leader creates a block that re-shares them.

To prevent this, Narwhal reduces the need for double transmission when leaders propose blocks: validators broadcast blocks instead of individual transactions, and the leader proposes only a hash of a block, relying on the mempool layer to provide its integrity-protected content.
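To make that concrete, here is a minimal sketch (in Python, with all names hypothetical and the structures heavily simplified) of the separation Narwhal describes: the full transaction batch is broadcast once by the mempool layer, and the consensus proposal carries only the batch's digest, so the leader never re-transmits transaction bytes.

```python
import hashlib
import json

def batch_digest(transactions):
    """Hash a serialized batch; the proposal references only this digest."""
    payload = json.dumps(transactions, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

# The mempool/worker layer broadcasts the full batch to peers once...
txs = ["tx-a", "tx-b", "tx-c"]
digest = batch_digest(txs)

# ...while the leader's proposal includes only the digest (metadata),
# trusting the mempool layer to have disseminated the matching content.
proposal = {"round": 1, "batch_digests": [digest]}
```

Any validator that already holds the batch can check it against the digest in the proposal, which is what "integrity-protected content" refers to.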

Here, I am confused like hell. It makes sense if we have one leader: when a leader is chosen, we can just take the leader's block for that round and discard the other blocks in the same round. However, I don't think they do that. For example, in Bullshark the leader block is an anchor, and they also commit the other blocks besides the anchor using some deterministic rule.

Bullshark https://www.youtube.com/watch?v=aW1-XcGzJ8M

My questions are:

  1. If all the blocks in the same round get ordered with some deterministic rule, how can they ensure that there are no duplicate transactions across different blocks in that round? A malicious actor can simply send the same transaction to many validators, and each validator shares blocks of transactions instead of the transactions themselves. Even if duplicates can be ignored by some rule once these blocks are committed, the space in those blocks is wasted.
  2. Can anyone give an example of the deterministic rule that is used to order blocks between two anchors?
  3. Does this mean that the system achieves higher throughput as more validators join the consensus, because more blocks are proposed at each round?

Upvotes: 1

Views: 149

Answers (1)

whoopty

Reputation: 1

I know this is 6 months later, and I'm not an expert on the Narwhal internals, but I can give it a shot.

  1. "Blocks" in Narwhal are the minimum unit of transaction dissemination in the protocol. In other words, there's never an instance of a Narwhal worker sharing a single transaction. The execution layer is responsible for verifying transactions and handling replays, since each transaction should include a nonce. If a validator were replaying transactions, it would be handled there.
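A minimal sketch of that idea (all names and structures are illustrative, not Narwhal's actual code): once blocks are committed and flattened into one ordered list, execution skips any (sender, nonce) pair it has already applied, so a transaction duplicated across several validators' blocks is only executed once.

```python
def execute(ordered_txs):
    """Apply committed transactions in order, skipping replays by nonce."""
    applied = []
    seen = set()  # (sender, nonce) pairs already executed
    for tx in ordered_txs:
        key = (tx["sender"], tx["nonce"])
        if key in seen:
            continue  # same transaction landed in another validator's block: skip
        seen.add(key)
        applied.append(tx)
    return applied

txs = [
    {"sender": "alice", "nonce": 0, "op": "pay bob 5"},
    {"sender": "alice", "nonce": 0, "op": "pay bob 5"},  # duplicate copy
    {"sender": "bob", "nonce": 0, "op": "pay carol 2"},
]
# Only two transactions are actually applied.
```

Note this addresses correctness, not the wasted-bandwidth concern from the question: the duplicate bytes were still disseminated and committed.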

  2. Assign an index to each validator; when you can commit a leader, go back and take transactions from the validators by index, least to greatest, per slot.
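As a hedged sketch of one such deterministic rule (the data layout here is made up for illustration; real implementations traverse the DAG's causal history): after committing an anchor, collect the blocks in its history that have not been ordered yet and sort them by round, then by the proposing validator's index. Every honest validator computes the same order.

```python
def order_between_anchors(blocks, already_ordered):
    """`blocks`: causal history of the new anchor, as (round, validator_index, block_id) tuples."""
    fresh = [b for b in blocks if b[2] not in already_ordered]
    # Sorting by (round, validator index) is deterministic, so all
    # honest validators produce the same sequence.
    return sorted(fresh, key=lambda b: (b[0], b[1]))

history = [(2, 1, "B21"), (1, 0, "B10"), (1, 2, "B12"), (2, 0, "B20")]
ordered = order_between_anchors(history, already_ordered={"B10"})
# ordered: [(1, 2, "B12"), (2, 0, "B20"), (2, 1, "B21")]
```

Any total, deterministic tie-break over (round, validator) works here; the index-based rule the answer describes is just the simplest choice.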

  3. It feels like it, but I'm not sure whether there are tradeoffs in other ways.

Upvotes: 0

Related Questions