JAM

Conformance Performance

Important Note

This leaderboard highlights performance differences between JAM implementations. All implementations are works in progress and none are fully conformant yet. The rankings serve to track relative performance improvements over time.

Performance Comparison

All implementations relative to PolkaJam (aggregate weighted scores - see methodology below)

Snapshot: Oct 8, 12:11 PM | Commit bd2c401
| Rank | Team | Language | Relative Performance | Time (ms) |
|------|------|----------|----------------------|-----------|
| 1 | PolkaJam (Recompiler) | Rust | 1.4x faster | 2.20 |
| 2 | SpaceJam | Rust | 1.1x slower | 3.43 |
| 3 | PolkaJam | Rust | baseline | 3.18 |
| 4 | TurboJam | C++ | 1.4x slower | 4.56 |
| 5 | JavaJAM | Java | 1.9x slower | 6.00 |
| 6 | JAM DUNA | Go | 2.0x slower | 6.26 |
| 7 | Jamzilla | Go | 4.1x slower | 13.15 |
| 8 | FastRoll | Rust | 5.6x slower | 17.95 |
| 9 | Vinwolf | Rust | 5.4x slower | 17.22 |
| 10 | JamZig | Zig | 5.6x slower | 17.90 |
| 11 | Boka | Swift | 10.9x slower | 34.50 |
| 12 | TSJam | TypeScript | 11.6x slower | 36.92 |
| 13 | Jamixir | Elixir | 14.5x slower | 46.02 |
| 14 | JamPy | Python | 25.1x slower | 79.68 |
| 15 | Tessera | Python | 33.3x slower | 105.81 |
| 16 | PyJAMaz | Python | 41.1x slower | 130.46 |
| 17 | Typeberry | TypeScript | 93.5x slower | 297.08 |
Chart: logarithmic scale, lower is better. Percentiles shown: P50, P90, P99. Color bands: <1.2x, <2x, <10x, >50x relative to baseline.
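
The relative factors above are consistent with the ratio of each client's aggregate time to the PolkaJam baseline (3.18 ms). A minimal sketch of that calculation follows; the function name and output formatting are illustrative, and the dashboard presumably divides unrounded values, so a displayed factor can differ from this sketch by about 0.1x.

```python
def relative_performance(time_ms: float, baseline_ms: float = 3.18) -> str:
    """Format an aggregate time as a factor relative to the baseline client."""
    if time_ms < baseline_ms:
        return f"{baseline_ms / time_ms:.1f}x faster"
    return f"{time_ms / baseline_ms:.1f}x slower"

print(relative_performance(2.20))    # "1.4x faster" (PolkaJam Recompiler)
print(relative_performance(13.15))   # "4.1x slower" (Jamzilla)
print(relative_performance(297.08))  # "93.4x slower" (table shows 93.5x: rounding)
```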

Performance Rankings

Baseline: PolkaJam (Score: 4.0)

| Rank | Team | Language | Score | P50 (ms) | P90 (ms) | Relative Performance |
|------|------|----------|-------|----------|----------|----------------------|
| 1 | PolkaJam (Recompiler) | Rust | 2.5 | 2.10 | 3.23 | 1.4x faster |
| 2 | SpaceJam | Rust | 3.8 | 3.27 | 4.91 | 1.1x slower |
| 3 | PolkaJam | Rust | 4.0 | 2.94 | 5.44 | baseline |
| 4 | TurboJam | C++ | 5.9 | 3.84 | 8.54 | 1.4x slower |
| 5 | JavaJAM | Java | 8.0 | 4.86 | 10.75 | 1.9x slower |
| 6 | JAM DUNA | Go | 8.1 | 5.15 | 11.20 | 2.0x slower |
| 7 | Jamzilla | Go | 17.9 | 8.71 | 18.08 | 4.1x slower |
| 8 | FastRoll | Rust | 23.1 | 12.30 | 24.96 | 5.6x slower |
| 9 | Vinwolf | Rust | 24.5 | 7.57 | 16.15 | 5.4x slower |
| 10 | JamZig | Zig | 26.3 | 6.77 | 15.10 | 5.6x slower |
| 11 | Boka | Swift | 42.6 | 28.37 | 58.12 | 10.9x slower |
| 12 | TSJam | TypeScript | 52.1 | 25.31 | 66.92 | 11.6x slower |
| 13 | Jamixir | Elixir | 74.4 | 30.15 | 86.83 | 14.5x slower |
| 14 | JamPy | Python | 106.9 | 53.95 | 146.59 | 25.1x slower |
| 15 | Tessera | Python | 145.8 | 80.33 | 184.73 | 33.3x slower |
| 16 | PyJAMaz | Python | 179.9 | 87.87 | 192.24 | 41.1x slower |
| 17 | Typeberry | TypeScript | 355.2 | 127.17 | 245.58 | 93.5x slower |

Audit Time Calculator

Estimated time for each implementation to complete an audit

| Rank | Team | Audit Time |
|------|------|------------|
| 1 | PolkaJam (Recompiler) | 2.1d |
| 2 | SpaceJam | 3.2d |
| 3 | PolkaJam | 3.0d |
| 4 | TurboJam | 4.3d |
| 5 | JavaJAM | 5.7d |
| 6 | JAM DUNA | 5.9d |
| 7 | Jamzilla | 12.4d |
| 8 | FastRoll | 16.9d |
| 9 | Vinwolf | 16.3d |
| 10 | JamZig | 16.9d |
| 11 | Boka | 32.6d |
| 12 | TSJam | 34.9d |
| 13 | Jamixir | 43.5d |
| 14 | JamPy | 75.2d |
| 15 | Tessera | 99.9d |
| 16 | PyJAMaz | 123.2d |
| 17 | Typeberry | 280.5d |

Note: These calculations show the real-world impact of performance differences on audit requirements.
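
The durations appear to scale a baseline audit time linearly by each client's relative factor. A minimal sketch under the assumption of a 3.0-day baseline (matching the PolkaJam row above); small deviations, such as Jamzilla's 12.4d versus 3.0 × 4.1 = 12.3d, come from rounding in the displayed factors:

```python
BASELINE_AUDIT_DAYS = 3.0  # assumed: the PolkaJam (interpreted) row above

def audit_days(relative_factor: float, faster: bool = False) -> float:
    """Scale the baseline audit duration by a client's relative factor."""
    if faster:
        return BASELINE_AUDIT_DAYS / relative_factor
    return BASELINE_AUDIT_DAYS * relative_factor

print(f"{audit_days(10.9):.1f}d")              # Boka: 32.7d (table: 32.6d)
print(f"{audit_days(1.4, faster=True):.1f}d")  # Recompiler: 2.1d
```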

Scoring Methodology

The weighted scoring system considers the full performance distribution. It prioritizes consistent, predictable performance by weighting multiple statistical metrics:

| Metric | Weight | Measures |
|--------|--------|----------|
| Median (P50) | 35% | Typical performance |
| 90th Percentile | 25% | Consistency |
| Mean | 20% | Average |
| 99th Percentile | 10% | Worst case |
| Consistency | 10% | Lower variance |
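
A per-benchmark score consistent with these weights could be computed as in the sketch below. The page does not define the consistency metric, so the coefficient-of-variation term is an assumption, and the function and variable names are illustrative:

```python
import statistics

# Weights from the methodology table above.
WEIGHTS = {"p50": 0.35, "p90": 0.25, "mean": 0.20, "p99": 0.10, "consistency": 0.10}

def benchmark_score(samples_ms: list[float]) -> float:
    """Weighted score for a single benchmark (lower is better)."""
    q = statistics.quantiles(samples_ms, n=100)  # 99 cut points: q[49]=P50, q[89]=P90, q[98]=P99
    mean = statistics.fmean(samples_ms)
    metrics = {
        "p50": q[49],
        "p90": q[89],
        "p99": q[98],
        "mean": mean,
        # Assumed: "consistency" as coefficient of variation (stdev / mean);
        # the dashboard does not specify how this metric is defined.
        "consistency": statistics.stdev(samples_ms) / mean,
    }
    return sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)
```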

How it works:

  1. Performance measurements are based on the public W3F test vector traces.
  2. For each benchmark, we calculate a weighted score using the metrics above.
  3. We aggregate metrics across all benchmarks using a geometric mean (see the sketch below the note).
  4. Teams are ranked by their final weighted score (lower is better).
  5. PolkaJam (interpreted) serves as the baseline (1.0x) for relative comparisons.

Note: Only teams with data for all four benchmarks (Safrole, Fallback, Storage, Storage Light) are included in the overview. Zero values are excluded from calculations, as they likely represent measurement errors.
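
Combining steps 2 to 5 with the note above, the aggregation could look like the following sketch. Only the geometric mean, the four-benchmark requirement, and the zero-value exclusion are taken from the page; the function shape and exclusion behavior are assumptions:

```python
import math

# Benchmark set from the note above.
REQUIRED = {"Safrole", "Fallback", "Storage", "Storage Light"}

def aggregate_score(per_benchmark: dict[str, float]) -> float | None:
    """Geometric mean of per-benchmark weighted scores (lower is better)."""
    if set(per_benchmark) != REQUIRED:
        return None  # teams missing any benchmark are excluded from the overview
    scores = [s for s in per_benchmark.values() if s > 0]  # drop zeros: likely measurement errors
    if not scores:
        return None
    return math.prod(scores) ** (1 / len(scores))
```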

Performance data updated regularly. Version: 0.7.0 | Last updated: Oct 8, 2025, 12:11 PM | Source data from: Oct 8, 2025

Testing protocol conformance at scale. Learn more at jam-conformance | Commit bd2c401