A Composer who uses the 2% minimum Composer fee to outbid his competition will win over a Composer who does not, so Composers will tend to converge on that minimum fee. In the limit of that tendency, the following is the space of strategies at play.
World 4: 2% minimum Composer fee; the dominant block proposal is not yours and promises the Guesser 125.44 NPT; and a transaction was broadcast paying 0.01 NPT in fees.
strategy A: idle. Gain: 0.
strategy B: produce a new proposal confirming only this transaction. Regardless of the Guesser split, the resulting proposal fails to outbid the currently dominant proposal, so no Guessers will adopt it. Gain: 0.
strategy C: produce a new proposal confirming this transaction plus a transaction spending 2.56 NPT from your own purse on fees. The new block proposal promises the Guesser 125.445 NPT in fees, which is more than the dominant proposal's 125.44, so the Guessers adopt it. Gain: 0.005 NPT; and if you discount the cost of having some of those gained coins locked up for 3 years, the true gain is less than 0.005 NPT.
So it looks like this proposal gives Composers less incentive than World 1 (the current situation) does to include transactions in block proposals once the Guesser fee has reached its maximum.
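The comparison driving the strategies above can be sketched in a few lines. This is a simplified model of my own, not the protocol's exact accounting: I assume Guessers adopt whichever proposal promises them strictly more, and that a Composer can promise at most 98% of the block total (the 2% minimum Composer fee). The function names are hypothetical.

```python
def max_guesser_promise(block_total: float, min_composer_pct: float = 0.02) -> float:
    """Largest fee a Composer can promise the Guesser while still
    retaining the minimum Composer fee (simplified model)."""
    return block_total * (1.0 - min_composer_pct)

def outbids(new_promise: float, dominant_promise: float) -> bool:
    """Guessers switch only to a proposal that strictly outbids the dominant one."""
    return new_promise > dominant_promise

dominant = 125.44                     # promise of the dominant proposal in World 4
assert not outbids(125.44, dominant)  # matching the bid is not enough
assert outbids(125.445, dominant)     # a marginal overbid wins the Guessers
```

The point the model makes explicit: outbidding by the smallest possible margin is always sufficient, so a Composer never has an incentive to promise more than a hair above the current dominant bid.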
In my observation, the main problem with the current incentives is that the dominant Composer is incentivized to recompose multiple times during a block to outbid his competitors while completely ignoring the mempool. The only way to stay competitive against someone re-composing several times per block is to mimic that behavior and discard the mempool as well. I am not against re-composing, but I think doing so to outbid competitors without including any transactions is damaging for the network and should be penalized.
My proposal: penalizing TX fee downgrades.
While the shift toward a three-stage mining pipeline (with distinct roles for Upgraders, Composers, and Guessers) is promising, I believe it does not fully address the incentives that are currently producing these empty blocks.
Nodes should consider rejecting composition proposals whose total fee contribution is lower than that of an earlier observed composition proposal for the same block height. Moreover, nodes propagating such “regressive” compositions could have their reputation locally downgraded.
This approach doesn’t require enforcing specific transaction content or changing the minimum merge requirement. Instead, it focuses purely on fee-based regressions, which can be objectively compared and used to guide propagation behavior.
This could be seen as a soft reputation mechanism rather than a consensus rule — but one that helps align incentives and encourages composers to improve on prior proposals, rather than undercut them with minimal merges and lower fees.
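To make the soft reputation mechanism concrete, here is a rough illustration of the node-side check. This is a sketch under my own assumptions; `CompositionFilter` and its interface are hypothetical names, not neptune-core APIs.

```python
class CompositionFilter:
    """Per-node filter that drops 'regressive' block proposals: ones whose
    total fee is lower than an earlier proposal seen at the same height."""

    def __init__(self) -> None:
        self.best_fee_at_height: dict[int, float] = {}

    def should_relay(self, height: int, total_fee: float) -> bool:
        best = self.best_fee_at_height.get(height)
        if best is not None and total_fee < best:
            # Regressive composition: don't relay. A node could also
            # downgrade the sending peer's local reputation here.
            return False
        self.best_fee_at_height[height] = total_fee
        return True

f = CompositionFilter()
assert f.should_relay(100, 2.5)      # first proposal seen at this height
assert not f.should_relay(100, 1.0)  # lower total fee: regressive, dropped
assert f.should_relay(100, 3.0)      # improvement: relayed
```

Since the state is purely local and never enters consensus, two nodes can disagree about which proposal they saw first without forking; the filter only shapes propagation.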
Would appreciate any thoughts on feasibility or unintended consequences of such an approach.
iiuc, so far the discussion in this thread has focused on incentive mechanisms to encourage Composers to include mempool txs in block proposals. That is fine and good, but let me ask a simple question: is there some way we could modify the consensus rules to enforce that a minimum percentage of mempool txs are included in each block?
I can think of several reasons why that doesn’t seem to be workable, but I ask the question in case someone sees a way it can work.
I want to add a piece of context that made it click for me. Ignoring the mempool does not allow you to produce a composition faster[1], but it does allow you to start earlier. If the objective is to outbid as quickly as possible, it is worthwhile to sit on already-made proposals and broadcast them only when you need to. To do that, you need to take advantage of that early start.
The only information you have to go by is the fee on the one transaction in the block. I don’t know what you mean by “total fee contribution”, but it sounds like something different (and hence unattainable).
You could make a distinction based on the number of inputs or outputs, but a malicious Composer can always add dummy inputs and outputs, so that does not help distinguish good behavior from bad.
I doubt it. The consensus rules cannot distinguish inputs and outputs originating from genuine transactions from inputs and outputs originating from dummy transactions. And if the criterion for distinguishing genuine from dummy is whether a transaction was in the mempool, then you are solving the immediate problem by introducing a more profound one: how do you establish consensus about what lives in “the” mempool?
Or at least, I really don’t think so. Feel free to prove me wrong. ↩︎
Nodes prefer block proposals that confirm transactions in their (local) mempools. So for every incoming block proposal, the node computes a score measuring how well the block proposal absorbs transactions from the mempool. Block proposals that regress on this metric are ignored (and their senders punished). Block proposals that improve on this metric are relayed.
Fees on the matched transactions in the mempool might also be a relevant factor. Maybe zero-fee transactions that are in the mempool don’t add positive weight to a block proposal.
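Such a score might look like the following. This is only a sketch: the names are mine, and fee amounts are in integer units to sidestep rounding questions.

```python
def absorption_score(proposal_txids: set[str], mempool_fees: dict[str, int]) -> int:
    """Total fee of the node's mempool transactions that the proposal
    confirms; zero-fee matches add no positive weight."""
    return sum(fee for txid, fee in mempool_fees.items()
               if txid in proposal_txids and fee > 0)

mempool = {"tx_a": 10, "tx_b": 0, "tx_c": 500}
assert absorption_score({"tx_a", "tx_b"}, mempool) == 10   # tx_b pays no fee
assert absorption_score({"tx_a", "tx_c"}, mempool) == 510
assert absorption_score(set(), mempool) == 0               # empty proposal scores zero
```

A node would then relay only proposals whose score improves on the best score it has already seen for that height, analogous to the fee-regression rule proposed earlier in the thread, but measured against the node's local mempool.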