I started writing a GitHub issue about this, thinking we might need to extend the send-rate-limit, but halfway through I realized we don’t really need to, so I am posting a write-up about it here instead.
What’s this send-rate-limit about?
The send-rate-limit prevents a node from sending more than 2 transactions per block. It is in place only until block 25000, which should occur mid-year 2025, about 5.6 months after the genesis block assuming 10-minute block intervals. Think of it as training wheels for neptune-core.
The limit was put in place due to concerns about the possibility of a transaction-sending amplification attack against the entire network, which I/we feared might result in a complete denial of service for legitimate users, thus rendering the network unfit for purpose.
spoiler: that doesn’t appear to be the case (entirely).
The rate-limit is set to expire at block 25000 because it was assumed we would have more information and mitigations by then, and could possibly extend it if need be.
What’s this about an amplification attack?
Each neptune transaction requires a very computationally expensive proof called a `SingleProof`. Typically when a transaction is initiated, the sending node does NOT compute the `SingleProof`, but rather provides a much easier to compute proof called a `ProofCollection`. This is then broadcast to other peers in the network to store in their mempool. Any of these peers can then voluntarily compute the `SingleProof`, thereby upgrading the transaction so it can be included in a block. In so doing, the node that provides the proof collects a fee. (some details omitted.)
Ok, so already we can see there is an amplification occurring here. It takes relatively little work for a sender to initiate a transaction, but a lot of work for a peer to upgrade it. The optimal scenario would be that for each transaction initiated, exactly one peer performs the work to upgrade that transaction (leaving all other peers available to upgrade the remaining transactions). Even in this optimal case, though, since it takes minutes for a peer to upgrade a transaction and only seconds for the sender to initiate one, the sender can easily flood the network. The mempools of the proving nodes will fill up with non-upgraded transactions.
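To put rough numbers on that, here is a back-of-envelope sketch; both timings are assumptions for illustration, not measurements of neptune-core.

```rust
// Back-of-envelope amplification estimate. Both timings are assumed
// placeholder values, not measured neptune-core numbers.
fn main() {
    let initiate_secs = 5.0_f64;    // assumed: time to build a ProofCollection-backed tx
    let upgrade_secs = 15.0 * 60.0; // assumed: time to upgrade it to a SingleProof
    let amplification = upgrade_secs / initiate_secs;
    println!("one sender can keep roughly {amplification:.0} provers busy at once");
}
```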
we don’t have the optimal scenario
Presently neptune-core does not provide any mechanism for peers to coordinate on upgrading proofs (to avoid duplicating work). This means that multiple peers may select the same proof to work on, and probably will, as the selection algorithm is deterministic, not random. Thus (needless) duplication of work will almost certainly be occurring at some points in time, and possibly most of the time.
A simple improvement would be to add a message so that a prover can tell other peers “hey, I’m working on transaction X”, so they can ignore transaction X for some time, perhaps something like 10 minutes or 2 more blocks. After that, if the transaction is still in their mempool (has not been included in a block), they might consider it again.
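As a rough illustration of what the receiving side of such a message might look like, here is a minimal sketch; the message name, fields, and timeout are hypothetical and are not neptune-core's actual peer protocol types.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Hypothetical peer message: "I am working on transaction `tx_id`."
struct UpgradeClaim {
    tx_id: [u8; 32],
}

/// Tracks recent claims so this node can skip those transactions for a while.
struct ClaimTracker {
    claims: HashMap<[u8; 32], Instant>,
    ttl: Duration, // e.g. ~10 minutes / roughly 2 blocks
}

impl ClaimTracker {
    fn new(ttl: Duration) -> Self {
        Self { claims: HashMap::new(), ttl }
    }

    /// Called when a peer announces it has started upgrading a transaction.
    fn record(&mut self, claim: UpgradeClaim) {
        self.claims.insert(claim.tx_id, Instant::now());
    }

    /// True if another peer claimed this transaction recently enough.
    fn should_skip(&self, tx_id: &[u8; 32]) -> bool {
        self.claims
            .get(tx_id)
            .map(|claimed_at| claimed_at.elapsed() < self.ttl)
            .unwrap_or(false)
    }
}

fn main() {
    let mut tracker = ClaimTracker::new(Duration::from_secs(10 * 60));
    tracker.record(UpgradeClaim { tx_id: [7u8; 32] });
    assert!(tracker.should_skip(&[7u8; 32]));  // recently claimed: skip it for now
    assert!(!tracker.should_skip(&[9u8; 32])); // unclaimed: fair game
}
```

If the claimed transaction is still unconfirmed after the timeout, it simply becomes eligible again, matching the fallback described above.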
Is this attack a serious problem, or not?
short answer: maybe, but not a complete denial of service.
Such an attack will make transaction fees rise for all users, but honest users should still be able to get transactions confirmed, provided they are willing to pay a higher transaction fee than the attacker’s highest fee(s).
Why?
In the flood attack scenario, mempools of all nodes will begin to fill and may well become completely full, at which point the lowest fee-per-byte transactions start being ejected.
Proof-upgrading nodes can only upgrade one transaction at a time, and the remaining transactions just wait in the mempool. When an upgrading node finishes, it looks in the mempool for the transaction with the highest fee to upgrade next.
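Those two fee-based policies, sketched with a stand-in mempool-entry type (not neptune-core's actual data structures), look roughly like this:

```rust
// Sketch of the two fee policies described above, using a stand-in mempool
// entry type (not neptune-core's actual data structures).
#[derive(Debug)]
struct MempoolTx {
    id: u32,
    fee: u64,        // total fee, in smallest units
    size_bytes: u64, // serialized size
}

/// When the mempool is full, the lowest fee-per-byte transaction is ejected first.
fn eviction_candidate(mempool: &[MempoolTx]) -> Option<&MempoolTx> {
    // Compare fee/size ratios by cross-multiplying to avoid floating point.
    mempool
        .iter()
        .min_by(|a, b| (a.fee * b.size_bytes).cmp(&(b.fee * a.size_bytes)))
}

/// An upgrading node picks the highest-fee transaction to work on next.
fn next_to_upgrade(mempool: &[MempoolTx]) -> Option<&MempoolTx> {
    mempool.iter().max_by_key(|tx| tx.fee)
}

fn main() {
    let mempool = vec![
        MempoolTx { id: 1, fee: 100, size_bytes: 4_000 },
        MempoolTx { id: 2, fee: 500, size_bytes: 12_000 },
        MempoolTx { id: 3, fee: 50, size_bytes: 1_000 },
    ];
    println!("evict first:  {:?}", eviction_candidate(&mempool));
    println!("upgrade next: {:?}", next_to_upgrade(&mempool));
}
```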
For an attacker Mallory to successfully deny service to honest participants, Mallory must always have a transaction in the mempool with the highest fee. But then Mallory’s transactions keep getting upgraded, which costs Mallory more money.
It’s important to note, though, that Mallory only needs one highest-fee transaction in the mempool at any given time. So all the rest of her transactions could carry very low, or perhaps zero, fees. In practice, Mallory would likely try to keep a number of high-fee transactions in the mempool, perhaps 5 or 10, since proof upgraders finish work at different times and will not all be selecting the same proof at once.
Thus, if Mallory is well funded she can make the network cost-prohibitive for honest users to use. It is not a true denial of service because an honest user can always pay a higher fee to get their transaction upgraded and included in a block.
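As a very rough illustration of that cost, here is a back-of-envelope sketch; the fee numbers and confirmation rate are assumptions, not measured values.

```rust
// Back-of-envelope attacker cost. All numbers here are assumptions for
// illustration only.
fn main() {
    let honest_top_fee: u64 = 1_000;      // assumed: best fee any honest user is paying
    let mallory_fee = honest_top_fee + 1; // Mallory must outbid it to stay on top
    let mined_per_block: u64 = 2;         // assumed: her txs confirmed per block on average
    let blocks_per_hour: u64 = 6;         // ~10-minute blocks

    let cost_per_hour = mallory_fee * mined_per_block * blocks_per_hour;
    println!("Mallory burns roughly {cost_per_hour} fee units per hour to keep honest users priced out");
}
```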
how can we mitigate or improve this?
- A good first step will be to reduce or eliminate duplicate work between upgrade nodes. This will increase the network’s overall capacity, so it can process more transactions in parallel. That in turn requires Mallory to keep more high-fee transactions in the mempool at any given time, making the attack more expensive.
- Each transaction sender can generate the `SingleProof` themself rather than relying on an upgrade node to do it. When a transaction is broadcast with a `SingleProof` it can immediately be included in the next block and effectively bypasses the upgrade bottleneck. Recently new APIs have been added that enable neptune-core RPC clients to generate a proof outside neptune-core, perhaps even on another device.
- Proof generation times will come down over time. Hardware is continuously getting faster and more powerful. Further, there is a path towards generating proofs with GPUs and perhaps eventually with dedicated devices, i.e. ASICs.
- Anyone, perhaps a community member reading this, could create a mempool viewer website (or app) that makes it easy to see what the highest-fee Tx in the mempool presently is. This can help users get a Tx upgraded and confirmed faster when time is of the essence. (This could also be integrated into wallet apps, such as neptune-dashboard.) See the small sketch after this list.
- ??? Ideas welcome!
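For the mempool-viewer idea above, the core calculation could be as small as the following; `MempoolEntry` is a stand-in type defined here for illustration, not an actual neptune-core API.

```rust
/// Stand-in for whatever per-transaction fee info a viewer would fetch.
struct MempoolEntry {
    fee: u64, // total fee, in smallest units
}

/// Suggest a fee that beats everything currently waiting to be upgraded.
fn fee_to_beat(mempool: &[MempoolEntry]) -> u64 {
    mempool.iter().map(|e| e.fee).max().map_or(0, |top| top + 1)
}

fn main() {
    let mempool = vec![MempoolEntry { fee: 750 }, MempoolEntry { fee: 1_200 }];
    println!("pay at least {} to be first in line", fee_to_beat(&mempool));
}
```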
should we extend the send-rate-limit?
short answer: probably not.
This limit can be considered problematic for services such as exchanges that need to perform higher volumes of payments.
Note however that a transaction can include many outputs to different recipients, so such services could batch all outgoing payments into one or two transactions per block.
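For example (the types below are placeholders for illustration, not neptune-core's RPC API), an exchange could collapse a whole withdrawal queue into a single transaction:

```rust
/// Placeholder for a payment output; real code would use neptune-core's
/// address and amount types via its RPC interface.
struct Output {
    recipient: String, // placeholder address
    amount: u64,       // smallest units
}

fn main() {
    // The exchange's pending withdrawals for this block interval...
    let withdrawals = vec![
        Output { recipient: "alice-address".into(), amount: 25_000 },
        Output { recipient: "bob-address".into(), amount: 4_000 },
        Output { recipient: "carol-address".into(), amount: 990_000 },
    ];

    // ...all become outputs of one transaction, keeping the exchange
    // well under two sends per block.
    println!(
        "1 transaction, {} outputs, {} total",
        withdrawals.len(),
        withdrawals.iter().map(|o| o.amount).sum::<u64>()
    );
}
```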
Most importantly, the attack does not appear to enable the attacker to perform a complete denial of service to honest participants, and is thus not an existential threat to the network.
what do you think?
Let’s hear your thoughts, q’s, ideas, corrections, etc.