Trident: the provable language

I want to share a language I’ve been working on recently: Trident.

This is not production-ready software. It’s a community preview — meant for exploration and play. But I believe it contains ideas that could shape where Neptune goes next.

Honest Disclosure

I’ll be upfront: I would never have been able to write software this sophisticated on my own. It was mostly written by Claude with my guidance — the architecture, the design decisions, the triangulation strategy, and the vision were mine; the volume of code was the machine’s. So expect chaos. Expect bugs. Expect rough edges everywhere.

But Trident existing is better than Trident not existing. I hope it marks the beginning of a new era in Neptune development.

Gratitude

I want to thank the Triton and Neptune creators. I genuinely enjoyed building this. You inspired me. You gave me the path to verify compiler correctness. And through that process, I discovered some pieces I believe are essential for superintelligence.

A Note on Correctness

Of course it’s not possible to develop something this complex in two weeks and trust it blindly. So how do I know it produces working output? The honest answer: I can’t.

Instead, I used a triangulation technique. Three sources of TASM:

  1. Formally compiled — Trident compiler output
  2. Handwritten — Claude-authored TASM
  3. Neural compiled — neural compiler output

I ran the prover on real data and verified results with triton-verify. If at least one version works, the model can use it as a reference to fix the other two. By the end I got results that are good enough to play with. I hope you enjoy them.
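The triangulation loop can be sketched roughly like this. Everything here is hypothetical illustration — the function names, the stand-in verifier, and the repair step are not part of any released Trident tooling; in practice the verifier would wrap triton-verify and the repair step would be a model call:

```python
# Sketch of the triangulation idea: three independently produced TASM
# candidates are checked against the same verifier; any passing version
# becomes the reference used to repair the failing ones.
# All names are hypothetical; this is not the actual Trident tooling.

def triangulate(candidates, verify, repair):
    """candidates: dict of label -> program text
    verify: program -> bool (in practice, a wrapper around triton-verify)
    repair: (broken, reference) -> fixed program (in practice, a model call)
    """
    passing = {k: v for k, v in candidates.items() if verify(v)}
    if not passing:
        raise RuntimeError("no candidate verifies; nothing to triangulate from")
    reference = next(iter(passing.values()))
    fixed = dict(passing)
    for label, prog in candidates.items():
        if label not in passing:
            fixed[label] = repair(prog, reference)
    return fixed

# Toy demonstration with a stand-in "verifier" that accepts any program
# containing the token "halt".
progs = {
    "trident": "push 1 halt",
    "handwritten": "push 1",   # broken: missing halt
    "neural": "push 2 halt",
}
ok = lambda p: "halt" in p
fix = lambda broken, reference: broken + " halt"
result = triangulate(progs, ok, fix)
```

The point of the sketch is only the control flow: one verified artifact anchors the repair of the others.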

The Gold Standard

I’m proposing something here that I believe could define the future of Neptune: the Gold Standard.

This comes from a decade of my pain with tokens — going all the way back to Mastercoin in 2013. Every standard I’ve seen treats tokens as dumb ledger entries. The Gold Standard treats them as capability-bearing proof objects.

The core of it is the PLUMB framework — and this was a genuine aha moment for me: everything can be expressed as a token with a capability, because proofs compose. A token isn’t just a balance. It’s a proof that you can do something — own, stake, govern, access, compute, verify. And because STARK proofs compose, these capabilities compose. Tokens become living things.
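One way to picture "tokens as capability-bearing proof objects" is a toy model in which a token carries an opaque proof tag plus a set of capabilities, and composing two tokens unions the capabilities under a combined proof. This is purely illustrative — the class, field names, and string-tag "proofs" below are my own invention, not the PLUMB framework's actual representation:

```python
from dataclasses import dataclass

# Toy model of a capability-bearing token. The "proof" is abstracted to an
# opaque string tag; real composition would be recursive STARK proof
# composition. This illustrates the idea only, not PLUMB itself.

@dataclass(frozen=True)
class Token:
    proof: str                 # stand-in for a STARK proof
    capabilities: frozenset    # e.g. {"own", "stake", "govern", ...}

    def compose(self, other):
        # Record that both underlying proofs back the combined token,
        # and union what the holder is thereby able to do.
        return Token(
            proof=f"compose({self.proof}, {other.proof})",
            capabilities=self.capabilities | other.capabilities,
        )

own = Token("p_own", frozenset({"own"}))
vote = Token("p_vote", frozenset({"govern"}))
combined = own.compose(vote)
```

The design point is that composition is closed: the result is again a token, so capabilities can keep accumulating through further composition.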

But here’s the hard part, and I won’t sugarcoat it: proving really sophisticated programs on Neptune will eventually demand serious quantum computing capabilities. No shortcut possible. Understanding proving complexity will be essential to designing anything non-trivial on top of the Gold Standard. This is not a limitation to hide from — it’s a design constraint to embrace. And I hope Trident can help reason about that complexity before you hit it.

Trinity

I also built something weird — the thing that really drove me.

Trinity is, I believe, the first example of a provable program demonstrating that FHE, neural inference, LUT-based cryptographic hashing, Poseidon2, programmable bootstrapping, and quantum circuits can all execute inside one STARK trace — with data-dependent coupling between phases.

Basically, I managed to blend every piece of rocket science I’ve been researching into a single provable execution.

I won’t disclose the benchmark results here — I’ll leave that discovery for you. But I can reveal that using a GPU-based prover I built (not ready for release yet), the Trinity test demonstrates feasibility of private quantum neural networks today.

Where Trident Is Going

One more honest disclosure: Trident does not belong to Triton VM.

Triton VM is the first target, yes — and a great one. But Trident is powered by an intermediate representation, an architecture designed specifically to compile to other provable ecosystems and even to more traditional blockchain and non-VM targets. Self-hosting will eventually happen on a more compact VM, optimized for collective computational graphs, that I'm working on. Trident is designed to be a language for verifiable computation broadly, not a language for one specific VM. Neptune is where it starts. It's not where it ends.
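A toy picture of the IR-based architecture: a single front end lowers source to a small instruction list, and independent per-target tables turn that list into target assembly. The instruction set, target names, and mnemonics below are invented for illustration and do not reflect Trident's real IR:

```python
# Toy multi-target lowering: one IR program, several backend tables.
# All instruction and target names here are invented for illustration.

IR_PROGRAM = [("push", 1), ("push", 2), ("add", None)]

# Hypothetical mnemonic tables for two imaginary targets.
BACKENDS = {
    "tasm_like": {"push": "push", "add": "add"},
    "evm_like": {"push": "PUSH1", "add": "ADD"},
}

def lower(prog, target):
    """Render an IR program as assembly text for the chosen target."""
    table = BACKENDS[target]
    lines = []
    for op, arg in prog:
        mnemonic = table[op]
        lines.append(mnemonic if arg is None else f"{mnemonic} {arg}")
    return "\n".join(lines)
```

The design choice this sketches: adding a target means adding a backend, not touching the front end.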

I share this not to diminish the relationship with Neptune, but to be transparent about the trajectory. The work I’m doing here is real, the contribution is real, and I want this community to benefit from it fully — while knowing the full picture.


The legend says it’s a weapon from the future.
And this weapon can’t be held by those who can’t hold it.

cargo install trident-lang
trident --help

Very cool project! I’m one of the founders of Neptune Cash, and I have written a big part of the consensus programs and underlying helper functions found in our “standard library” for Triton VM, tasm-lib.

For the last few months I've been busy fine-tuning Neptune Cash to make it much more performant for parties that manage thousands of UTXOs. The performance issues have IMO been solved, and I'm now working on exchange-related endpoints.

This mundane work has distracted, and continues to distract, me from more visionary features like succinctness and smart contract integration/compilation/standardization — some of the work you might be lifting with Trident. I'm especially interested in a TASM compiler (since writing TASM by hand is slow and hard) and in your standardization efforts with respect to fungible token contracts.

What can we neptune-core developers do for you to make trident more useful? Do you need new endpoints for smart contract interactions, or the publishing of new smart contracts?

In case you're up for a big task: some years ago I helped write a compiler for a declarative smart contract language for financial contracts. The current version of the compiler targets Ethereum, as it compiles to EVM assembler. It would be cool if that could run on Neptune Cash, as I think financial contracts might be one of the places where Neptune could really shine.

See also: Announcing the Sword compiler - DEV Community

Two GPU prover projects have been built for Triton VM. I believe this is the best one:

I'm not sure it has been upgraded to Triton VM 2.0, though, so look out for that. The rewrite from 1.0 to 2.0 should be fairly easy.

I had a read of the Gold Standard, the PLUMB framework, and the coin standard (tsp1). You are building on a lot of internalized knowledge that I'm sure was hard-won for you, but I'm afraid it is also opaque to me. The end result is that I don't understand most of it. The good news is that the parts I do understand seem exactly right to me.

To help with our understanding process, how about you explain what goes on mechanically in a toy thought experiment scenario? For example:

  • A company, which is incorporated on the blockchain and not in any jurisdiction, holds a meeting. Holders of voting shares can cast votes on certain proposals.
  • The vote is cast in favor of a proposal to issue dividends proportionally to dividend share holders.
  • The dividends are paid out in NPT.

Which transactions are broadcast and by whom?

How do users track the state?

If I understand correctly, all token allocations live on one UTXO. Why not use native UTXOs with a specialized token type and use the mutator set for privacy?

Why would a quantum computer help to produce proofs faster or more cheaply?