Trident: the provable language

I want to share a language I’ve been working on recently: Trident.

This is not production-ready software. It’s a community preview — meant for exploration and play. But I believe it contains ideas that could shape where Neptune goes next.

Honest Disclosure

I’ll be upfront: I would never have been able to write software this sophisticated on my own. It was mostly written by Claude with my guidance — the architecture, the design decisions, the triangulation strategy, and the vision were mine; the volume of code was the machine’s. So expect chaos. Expect bugs. Expect rough edges everywhere.

But Trident existing is better than Trident not existing. I hope it marks the beginning of a new era in Neptune development.

Gratitude

I want to thank the Triton and Neptune creators. I genuinely enjoyed building this. You inspired me. You gave me the path to verify compiler correctness. And through that process, I discovered some pieces I believe are essential for superintelligence.

A Note on Correctness

Of course it’s not possible to develop something this complex in two weeks and trust it blindly. So how do I know it produces working output? The honest answer: I can’t.

Instead, I used a triangulation technique. Three sources of TASM:

  1. Formally compiled — Trident compiler output
  2. Handwritten — Claude-authored TASM
  3. Neural compiled — neural compiler output

I ran the prover on real data and verified results with triton-verify. If at least one version works, the model can fix the other two. By the end I got results that are good enough to play with. I hope you enjoy them.
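The cross-checking step can be sketched as follows — an illustrative sketch of the triangulation idea (the names are mine, not Trident tooling): run all three TASM variants on shared inputs and group them by the outputs they produce. Agreeing variants corroborate each other; a lone dissenter is the repair candidate.

```python
def triangulate(sources: dict, run, inputs: list) -> dict:
    """Group program variants by the output vectors they produce."""
    results = {label: tuple(run(prog, x) for x in inputs)
               for label, prog in sources.items()}
    groups: dict = {}
    for label, outs in results.items():
        groups.setdefault(outs, []).append(label)
    return groups

# Toy usage: pretend the neural-compiled variant has an off-by-one bug.
progs = {"compiled":    lambda x: x * x,
         "handwritten": lambda x: x * x,
         "neural":      lambda x: x * x + 1}
groups = triangulate(progs, lambda p, x: p(x), inputs=[1, 2, 3])
majority = max(groups.values(), key=len)
assert sorted(majority) == ["compiled", "handwritten"]
```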

The Gold Standard

I’m proposing something here that I believe could define the future of Neptune: the Gold Standard.

This comes from a decade of my pain with tokens — going all the way back to Mastercoin in 2013. Every standard I’ve seen treats tokens as dumb ledger entries. The Gold Standard treats them as capability-bearing proof objects.

The core of it is the PLUMB framework — and this was a genuine aha moment for me: everything can be expressed as a token with a capability, because proofs compose. A token isn’t just a balance. It’s a proof that you can do something — own, stake, govern, access, compute, verify. And because STARK proofs compose, these capabilities compose. Tokens become living things.
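As a toy model of the "token = capability-bearing proof object" idea — my own sketch, not the actual PLUMB or Gold Standard types — a capability pairs a claim with a stand-in proof object, and composing two capabilities yields one object carrying both claims, mirroring how a STARK proof of A and a proof of B compose into one proof of (A and B):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    claims: frozenset   # e.g. {"own", "govern"}
    proof: tuple        # stand-in for a (recursively composable) STARK proof

    def compose(self, other: "Capability") -> "Capability":
        # Proofs compose, so capabilities compose: claims accumulate.
        return Capability(self.claims | other.claims,
                          ("compose", self.proof, other.proof))

own = Capability(frozenset({"own"}), ("leaf", "own"))
govern = Capability(frozenset({"govern"}), ("leaf", "govern"))
token = own.compose(govern)
assert token.claims == {"own", "govern"}
```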

But here’s the hard part, and I won’t sugarcoat it: proving really sophisticated programs on Neptune will eventually demand serious quantum computing capabilities. No shortcut possible. Understanding proving complexity will be essential to designing anything non-trivial on top of the Gold Standard. This is not a limitation to hide from — it’s a design constraint to embrace. And I hope Trident can help reason about that complexity before you hit it.

Trinity

I also built something weird — the thing that really drove me.

Trinity is, I believe, the first example of a provable program demonstrating that FHE, neural inference, LUT-based cryptographic hashing, Poseidon2, programmable bootstrapping, and quantum circuits can all execute inside one STARK trace — with data-dependent coupling between phases.

Basically, I managed to blend every piece of rocket science I’ve been researching into a single provable execution.

I won’t disclose the benchmark results here — I’ll leave that discovery for you. But I can reveal that using a GPU-based prover I built (not ready for release yet), the Trinity test demonstrates feasibility of private quantum neural networks today.

Where Trident Is Going

One more honest disclosure: Trident does not belong to Triton VM.

Triton VM is the first target, yes — and a great one. But Trident is built on an intermediate representation, an architecture designed specifically to compile to other provable ecosystems and even to more traditional blockchain and non-VM targets. Self-hosting will eventually happen on a more compact VM, optimized for collective computational graphs, that I’m working on. Trident is designed to be a language for verifiable computation broadly, not a language for one specific VM. Neptune is where it starts. It’s not where it ends.

I share this not to diminish the relationship with Neptune, but to be transparent about the trajectory. The work I’m doing here is real, the contribution is real, and I want this community to benefit from it fully — while knowing the full picture.


The legend says it’s a weapon from the future.
And this weapon can’t be held by those who can’t hold it.

```shell
cargo install trident-lang
trident --help
```

Very cool project! I’m one of the founders of Neptune Cash, and I have written a big part of the consensus programs and underlying helper functions found in our “standard library” for Triton VM, tasm-lib.

The last months I’ve been busy fine-tuning Neptune Cash to make it much more performant for parties that manage thousands of UTXOs. The performance issues have IMO been solved, and I’m now working on exchange-related endpoints.

This mundane work has distracted, and continues to distract, me from more visionary features like succinctness, smart contract integration/compiler/standardization, etc. — some of the work you might be lifting with Trident. I’m especially interested in a TASM compiler (since writing TASM by hand is slow and hard) and in your standardization efforts with respect to fungible token contracts.

What can we neptune-core developers do for you to make Trident more useful? Do you need new endpoints for smart contract interactions, or for publishing new smart contracts?

In case you’re up for a big task: I helped write a compiler for a declarative smart contract language for financial contracts some years ago. The current version of the compiler only targets Ethereum, as it compiles to EVM assembler. It would be cool if that could run on Neptune Cash, as I think that might be one of the places where Neptune could really shine.

See also: Announcing the Sword compiler - DEV Community


Two GPU prover projects have been built for Triton VM. I believe this is the best one:

I’m not sure it’s been upgraded to Triton VM 2.0, though, so look out for that. The rewrite from 1.0 to 2.0 should be fairly easy.


I had a read of the Gold Standard, the PLUMB framework, and the coin standard (tsp1). You are building on a lot of internalized knowledge that I’m sure was hard-won for you, but I’m afraid it is also opaque to me. The end result is that I don’t understand most of it. The good news is that the things I do understand, I think, are exactly right.

To help with our understanding process, how about you explain what goes on mechanically in a toy thought experiment scenario? For example:

  • A company, which is incorporated on the blockchain and not in any jurisdiction, holds a meeting. Holders of voting shares can cast votes on certain proposals.
  • The vote is cast in favor of a proposal to issue dividends proportionally to dividend share holders.
  • The dividends are paid out in NPT.

Which transactions are broadcast and by whom?

How do users track the state?

If I understand correctly, all token allocations live on one UTXO. Why not use native UTXOs with a specialized token type and use the mutator set for privacy?

Why would a quantum computer help to produce proofs faster or more cheaply?


This is a foundational question. Let me answer from first principles.

The short answer: classical proving has polynomial overhead. Some computations have exponential classical complexity. Quantum provides the only known path to tractable proving of those computations.

The longer picture:

A STARK proof says “this computation was done correctly.” But the prover must actually DO the computation first, then prove it. For a token transfer (a few thousand cycles), this is fine. For complex skills — auctions with combinatorial optimization, AMMs with multi-asset rebalancing, DAOs with privacy-preserving voting over large sets — the computation itself becomes the bottleneck.

Three classes where classical hits a wall:

  1. Search/optimization — Vickrey auctions with combinatorial bids, optimal routing across liquidity pools. Grover’s quadratic speedup: O(sqrt(N)) vs O(N).

  2. Simulation — pricing derivatives, modeling economic equilibria, verifiable AI inference at scale. Quantum simulation is exponentially faster for certain linear algebra (HHL algorithm).

  3. Privacy at scale — FHE bootstrapping is the most expensive operation in provable privacy. Quantum circuits over the same field could accelerate the bootstrap, making private computation over large state practical.
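To put rough numbers on class 1: unstructured search over N candidates takes about N/2 classical probes on average, while Grover needs roughly floor(pi/4 * sqrt(N)) oracle queries. A quick back-of-envelope check:

```python
import math

def grover_iterations(n: int) -> int:
    # Optimal number of Grover iterations for a single marked item.
    return math.floor(math.pi / 4 * math.sqrt(n))

for n in (10**4, 10**6, 10**8):
    print(n, n // 2, grover_iterations(n))
# 10000 5000 78
# 1000000 500000 785
# 100000000 50000000 7853
```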

Why the field connection matters:

Trident operates over Goldilocks (p = 2^64 - 2^32 + 1). If quantum circuits operate over the same prime field, then:

  • quantum computation is natively provable (same STARK trace format)
  • no translation layer between “quantum result” and “proof”
  • the bounded execution model (no recursion, explicit bounds) maps directly to quantum circuits (both are finite, both are reversible)

This is the Rosetta Stone from the README — one field, four readings: cryptographic S-box, neural activation, FHE bootstrap, quantum gate. The lookup table that makes Tip5 secure also makes quantum circuits expressible in the same algebraic framework.
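For concreteness, here is the quoted Goldilocks prime with basic field operations in a few lines (my own sketch, not Trident’s runtime; the x^7 power map shown is the Poseidon-style S-box over this field, whereas Tip5’s own S-box is lookup-based):

```python
P = 2**64 - 2**32 + 1           # Goldilocks prime = 18446744069414584321

def fadd(a: int, b: int) -> int:
    return (a + b) % P

def fmul(a: int, b: int) -> int:
    return (a * b) % P

def finv(a: int) -> int:
    return pow(a, P - 2, P)     # Fermat inverse, requires a != 0

def sbox(x: int) -> int:
    # x^7 is a permutation of the field since gcd(7, P - 1) = 1.
    return pow(x, 7, P)

assert P == 18446744069414584321
x = 123456789
assert fmul(x, finv(x)) == 1
```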

The practical timeline:

Today’s token programs (coin, card, auction, DAO) don’t need quantum. They’re provable classically in milliseconds. But as programs compose — a DAO that governs an AMM that prices derivatives that depend on private ML inference — the proving cost compounds. At some point the classical prover becomes the bottleneck, not the verifier. That’s when quantum proving matters.

The architecture decision is: build the field arithmetic now so that quantum acceleration slots in later without changing what a Trident program IS. The constraints that make programs provable today (bounded, field-native, no heap) are exactly the constraints that make them quantum-compilable tomorrow.


Thanks for the elaboration. I think what you’re saying makes sense. I do foresee technical limitations but there is no reason they cannot be overcome.

In the long run we want the benefit of solutions to massive computational problems, and at some point the cheapest way to get there is through a quantum computer. At that point, we do not want to sacrifice objective verifiability of the result, meaning that the quantum computer must produce a proof in tandem with the computation.

The limitations I see come from Triton VM which is not made for quantum proving. Two problems in particular:

  1. Triton VM’s instruction set architecture contains irreversible instructions. So to run the VM, you need to delete information. Quantumly you cannot delete information without destroying your fragile state and so you will end up paying a computational overhead when you want to simulate Triton VM on a quantum computer. The good news: switch out the ISA for one that is invertible and you don’t have this overhead any more.

  2. ALI/DEEP-ALI (the protocol underlying Triton VM) does not support superposition states. In fact, I am not sure it needs to – you’re welcome to keep the proof-in-the-making in a superposition state and collapse it to a valid classical proof simultaneously with reading out the result of the computation. That’s a huge overhead, but theoretically possible. Alternatively, you could modify ALI/DEEP-ALI to admit proofs over superposition states. In this case, the proving overhead could be entirely classical.
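Point 1 can be made concrete with a toy model of an irreversible instruction versus a reversible counterpart that moves the discarded value to an explicit garbage register (a sketch of the standard Bennett construction; nothing here is Triton VM’s actual ISA):

```python
def pop_irreversible(stack: tuple) -> tuple:
    return stack[:-1]                            # top value is destroyed

def pop_reversible(stack: tuple, garbage: tuple):
    return stack[:-1], garbage + (stack[-1],)    # information preserved

# Irreversible: two distinct states collide, so no inverse map exists.
assert pop_irreversible((1, 2, 7)) == pop_irreversible((1, 2, 9))

# Reversible: the pre-state is recoverable from the post-state.
stack, garbage = pop_reversible((1, 2, 7), ())
assert stack + (garbage[-1],) == (1, 2, 7)
```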
