JAM

Conformance Performance

Important Note

This leaderboard highlights performance differences between JAM implementations. All implementations are works in progress and none are fully conformant yet. The rankings serve to track relative performance improvements over time.

Performance Comparison

All implementations relative to PolkaJam (aggregate weighted scores, lower is better; see methodology below).

Data snapshot: Jan 26, 11:08 AM | Commit 87a46a9
Rank  Team                    Language    Relative       Time (ms)
1     PolkaJam (Recompiler)   Rust        1.5x faster    1.83
2     SpaceJam                Rust        1.2x faster    2.34
3     PolkaJam                Rust        baseline       2.74
4     JAM DUNA                Go          2.1x slower    5.86
5     Jamzilla                Go          2.1x slower    5.88
6     Vinwolf                 Rust        2.2x slower    5.98
7     FastRoll                Rust        2.6x slower    7.03
8     Strawberry              Go          3.1x slower    8.37
9     JamZig                  Zig         3.7x slower    10.13
10    Jamixir                 Elixir      4.5x slower    12.38
11    JavaJAM                 Java        4.8x slower    13.30
12    Boka                    Swift       8.7x slower    23.91
13    TSJam                   TypeScript  10.7x slower   29.23
14    Typeberry               TypeScript  13.8x slower   37.86
15    PyJAMaz                 Python      13.4x slower   36.74
16    GrayMatter              Elixir      16.6x slower   45.44
17    JamPy                   Python      19.8x slower   54.29
18    New JAMneration         Go          24.7x slower   67.81
19    JAM Forge               Scala       26.0x slower   71.38
20    Gossamer JAM            Go          32.1x slower   87.96
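
Based on the numbers above, each relative multiplier is simply the ratio between an implementation's aggregate time and the PolkaJam baseline time (2.74 ms in this snapshot). A minimal sketch of that calculation (illustrative only; the helper name is ours, not part of the dashboard):

    # Illustrative sketch: reproduce the "Nx faster / Nx slower" labels from aggregate times.
    BASELINE_MS = 2.74  # PolkaJam (interpreted) aggregate time in this snapshot

    def relative_label(aggregate_ms: float, baseline_ms: float = BASELINE_MS) -> str:
        """Express an aggregate time as a multiplier relative to the baseline."""
        if aggregate_ms <= baseline_ms:
            return f"{baseline_ms / aggregate_ms:.1f}x faster"
        return f"{aggregate_ms / baseline_ms:.1f}x slower"

    print(relative_label(1.83))   # 1.5x faster  (PolkaJam Recompiler)
    print(relative_label(87.96))  # 32.1x slower (Gossamer JAM)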

Performance Rankings

Baseline: PolkaJam (score: 3.5)

Rank  Team                    Language    Score   P50 (ms)  P90 (ms)  Relative
1     PolkaJam (Recompiler)   Rust        2.1     1.88      2.57      1.5x faster
2     SpaceJam                Rust        2.6     2.36      3.15      1.2x faster
3     PolkaJam                Rust        3.5     2.71      4.78      baseline
4     JAM DUNA                Go          6.8     5.26      8.17      2.1x slower
5     Jamzilla                Go          7.9     5.45      10.31     2.1x slower
6     Vinwolf                 Rust        8.1     5.37      10.54     2.2x slower
7     FastRoll                Rust        8.8     6.22      11.07     2.6x slower
8     Strawberry              Go          10.1    7.73      13.81     3.1x slower
9     JamZig                  Zig         14.6    4.76      11.42     3.7x slower
10    Jamixir                 Elixir      17.2    7.50      15.73     4.5x slower
11    JavaJAM                 Java        20.2    9.54      25.21     4.8x slower
12    Boka                    Swift       32.7    15.95     33.80     8.7x slower
13    TSJam                   TypeScript  43.0    19.63     51.10     10.7x slower
14    Typeberry               TypeScript  46.8    31.75     70.09     13.8x slower
15    PyJAMaz                 Python      47.8    39.16     49.18     13.4x slower
16    GrayMatter              Elixir      65.1    22.39     50.74     16.6x slower
17    JamPy                   Python      75.7    41.65     99.15     19.8x slower
18    New JAMneration         Go          91.3    51.23     119.22    24.7x slower
19    JAM Forge               Scala       119.3   42.41     165.44    26.0x slower
20    Gossamer JAM            Go          149.7   60.50     175.28    32.1x slower

Audit Time Calculator

Estimated time for each implementation to complete an audit (PolkaJam baseline: 3.0 days)

Rank  Team                    Audit time
1     PolkaJam (Recompiler)   2.0 d
2     SpaceJam                2.6 d
3     PolkaJam                3.0 d
4     JAM DUNA                6.4 d
5     Jamzilla                6.4 d
6     Vinwolf                 6.5 d
7     FastRoll                7.7 d
8     Strawberry              9.2 d
9     JamZig                  11.1 d
10    Jamixir                 13.5 d
11    JavaJAM                 14.5 d
12    Boka                    26.2 d
13    TSJam                   32.0 d
14    Typeberry               41.4 d
15    PyJAMaz                 40.2 d
16    GrayMatter              49.7 d
17    JamPy                   59.4 d
18    New JAMneration         74.2 d
19    JAM Forge               78.1 d
20    Gossamer JAM            96.2 d

Note: These calculations show the real-world impact of performance differences on audit requirements.
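
The day counts are consistent with scaling a fixed 3.0-day PolkaJam audit by each implementation's relative time, so this table can be reproduced from the comparison data above. A sketch under that assumption (the constant and function names are illustrative):

    # Illustrative sketch: scale a baseline audit duration by relative performance.
    POLKAJAM_AUDIT_DAYS = 3.0  # audit time shown for the PolkaJam baseline above
    POLKAJAM_TIME_MS = 2.74    # PolkaJam aggregate time from the comparison table

    def estimated_audit_days(aggregate_ms: float) -> float:
        """Estimated audit duration for an implementation, in days."""
        return POLKAJAM_AUDIT_DAYS * aggregate_ms / POLKAJAM_TIME_MS

    print(f"{estimated_audit_days(5.86):.1f}d")   # ~6.4d for JAM DUNA
    print(f"{estimated_audit_days(87.96):.1f}d")  # ~96.3d for Gossamer JAM (96.2d above, rounding)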

Scoring Methodology

The score is a weighted combination of statistical metrics over the full performance distribution, prioritizing consistent, predictable performance:

Metric           Weight  Captures
Median (P50)     35%     Typical performance
90th percentile  25%     Consistency
Mean             20%     Average
99th percentile  10%     Worst case
Consistency      10%     Lower variance

How it works:

  1. Performance measurements are based on the public W3F test vector traces.
  2. For each benchmark, we calculate a weighted score using the metrics above.
  3. We aggregate the per-benchmark scores with a geometric mean (see the sketch below).
  4. Teams are ranked by their final weighted score (lower is better).
  5. PolkaJam (interpreted) serves as the baseline (1.0x) for relative comparisons.
Note: Only teams with data for all four benchmarks (Safrole, Fallback, Storage, Storage Light) are included in the overview. Zero values are excluded from calculations as they likely represent measurement errors.
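
A minimal sketch of steps 2 and 3, assuming the weights from the table above, millisecond-valued metrics, and a standard-deviation consistency penalty; the exact consistency definition and score normalization are not specified here, so those parts are assumptions:

    import statistics

    # Weights from the methodology table above.
    WEIGHTS = {"p50": 0.35, "p90": 0.25, "mean": 0.20, "p99": 0.10, "consistency": 0.10}

    def benchmark_score(samples_ms: list[float]) -> float:
        """Weighted score for a single benchmark (lower is better).

        Assumption: the consistency term is the standard deviation of the samples.
        """
        samples = sorted(s for s in samples_ms if s > 0)  # drop zero values (likely measurement errors)
        cuts = statistics.quantiles(samples, n=100)       # percentile cut points 1..99
        metrics = {
            "p50": statistics.median(samples),
            "p90": cuts[89],
            "p99": cuts[98],
            "mean": statistics.fmean(samples),
            "consistency": statistics.pstdev(samples),
        }
        return sum(WEIGHTS[name] * value for name, value in metrics.items())

    def overall_score(per_benchmark_samples: dict[str, list[float]]) -> float:
        """Geometric mean of the weighted scores across all benchmarks."""
        scores = [benchmark_score(samples) for samples in per_benchmark_samples.values()]
        return statistics.geometric_mean(scores)

    # Example input shape: one list of per-block times (ms) per benchmark, e.g.
    # overall_score({"Safrole": [...], "Fallback": [...], "Storage": [...], "Storage Light": [...]})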

Performance data updated regularly. Version: 0.7.2 | Last updated: Jan 26, 2026, 11:08 AM | Source data from: Jan 25, 2026

Testing protocol conformance at scale. Learn more at jam-conformance | Commit 87a46a9