Scope

This document provides an interpretation of the UOS format used by Polkadot Vault. The upstream version of the published format has diverged significantly from the actual implementation, so this document represents the current state of the UOS format that is compatible with Polkadot Vault. It only applies to networks compatible with Polkadot Vault, i.e. Substrate-based networks. The document also describes special payloads used to maintain a Polkadot Vault instance.

Therefore, this document effectively describes the input and output format for QR codes used by Polkadot Vault.

Terminology

The Vault receives information over an air-gap as QR codes. These codes are read as u8 vectors and must always be parsed by the Vault before use.

QR codes can contain information that a user wants to sign with one of the Vault keys, or they may contain update information to ensure smooth operation of the Vault without the need for a reset or connection to the network.

QR code content types

  1. Transaction/extrinsic: a single transaction to be signed
  2. Bulk transactions: a set of transactions to be signed in a single session
  3. Message: a message to be signed with a key
  4. Chain metadata: up-to-date metadata allows the Vault to read transaction content
  5. Chain specs: adds a new network to the Vault
  6. Metadata types: used to update older versions of runtime metadata (V13 and below)
  7. Key derivations: used to import and export Vault key paths

QR code structure

QR code envelope has the following structure:

| QR code prefix | content | ending spacer | padding |
| :- | :- | :- | :- |
| 4 bits | byte-aligned content | 4 bits | remainder |

The QR code prefix always starts with the 0x4 symbol indicating "raw" encoding.

The subsequent 2 bytes encode the content length. Using this number, the QR code parser can immediately extract the content and disregard the rest of the QR code.

The actual content is shifted by half a byte; otherwise it is a normal byte sequence.
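For illustration, unpacking the envelope could look like this (a minimal sketch in Rust, assuming the scanner hands over the raw frame bytes; names are illustrative):

```rust
/// Sketch of unpacking the raw QR envelope described above.
/// Assumes `frame` is the raw byte vector produced by the QR reader.
fn unpack_envelope(frame: &[u8]) -> Result<Vec<u8>, &'static str> {
    if frame.len() < 3 {
        return Err("frame too short");
    }
    // First nibble must be 0x4, the "raw" (byte mode) encoding.
    if (frame[0] >> 4) != 0x4 {
        return Err("not raw encoding");
    }
    // The next 16 bits (straddling bytes 0..3) encode the content length.
    let len = (((frame[0] & 0x0f) as usize) << 12)
        | ((frame[1] as usize) << 4)
        | ((frame[2] >> 4) as usize);
    if frame.len() < len + 3 {
        return Err("declared length exceeds frame");
    }
    // Content is shifted by half a byte: realign nibbles into whole bytes.
    let mut content = Vec::with_capacity(len);
    for i in 0..len {
        content.push(((frame[2 + i] & 0x0f) << 4) | (frame[3 + i] >> 4));
    }
    Ok(content)
}
```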

Multiframe QR

The information transferred through the QR channel into the Vault is always enveloped in multiframe packages (although the minimal number of frames is 1). There are two standards for the multiframe: RaptorQ erasure coding and legacy non-erasure multiframe. The type of envelope is determined by the first bit of the QR code data: 0 indicates legacy multiframe, 1 indicates RaptorQ.

RaptorQ multipart payload

RaptorQ (RFC 6330) is a variable rate (fountain) erasure code protocol with a reference implementation in Rust.

Wrapping content in RaptorQ protocol allows for arbitrary amounts of data to be transferred reliably within reasonable time. It is recommended to wrap all payloads into this type of envelope.

Each QR code in a RaptorQ encoded multipart payload contains the following parts:

| bytes [0..4] | bytes [4..] |
| :- | :- |
| 0x80000000 \|\| payload_size | RaptorQ serialized packet |
  • payload_size MUST contain payload size in bytes, represented as big-endian 32-bit unsigned integer.
  • payload_size MUST NOT exceed 0x7FFFFFFF
  • payload_size MUST be identical in all codes encoding the payload
  • payload_size and RaptorQ serialized packet MUST be stored by the Cold Vault, in no particular order, until their amount is sufficient to decode the payload.
  • Hot Wallet MUST continuously loop through all the frames showing each frame for at least 1/30 seconds (recommended frame rate: 4 FPS).
  • Cold Vault MUST be able to start scanning the Multipart Payload at any frame.
  • Cold Vault MUST NOT expect the frames to come in any particular order.
  • Cold Vault SHOULD show a progress indicator of how many frames it has successfully scanned out of the estimated minimum required amount.
  • Hot Wallet SHOULD generate sufficient number of recovery frames (recommended overhead: 100%; minimal reasonable overhead: square root of number of packets).
  • Payloads fitting in 1 frame SHOULD be shown without recovery frames as static image.

Once a sufficient number of frames is collected, they can be processed into a single payload and treated as a data vector ("QR code content").
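A minimal sketch of splitting a single RaptorQ frame according to the layout above; the packet part would then be fed, as-is, to a RaptorQ decoder such as the raptorq crate:

```rust
/// Sketch: split one RaptorQ frame into the declared payload size and the
/// serialized packet. `data` is the QR code content of a single frame.
fn split_raptorq_frame(data: &[u8]) -> Result<(u32, &[u8]), &'static str> {
    if data.len() < 4 {
        return Err("frame too short");
    }
    let head = u32::from_be_bytes([data[0], data[1], data[2], data[3]]);
    if (head & 0x8000_0000) == 0 {
        return Err("not a RaptorQ frame");
    }
    let payload_size = head & 0x7fff_ffff; // total payload size, bytes
    Ok((payload_size, &data[4..]))
}
```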

Legacy Multipart Payload

In the real implementation, the Polkadot Vault ecosystem generalizes all payloads as multipart messages.

| bytes position | [0] | [1..3] | [3..5] | [5..] |
| :- | :- | :- | :- | :- |
| content | 00 | frame_count | frame_index | data |
  • frame_index MUST be the number of the current frame, starting from 0000, represented as a big-endian 16-bit unsigned integer.
  • frame_count MUST be the total number of frames, represented as a big-endian 16-bit unsigned integer.
  • part_data MUST be stored by the Cold Vault, ordered by frame number, until all frames are scanned.
  • Hot Wallet MUST continuously loop through all the frames showing each frame for about 2 seconds.
  • Cold Vault MUST be able to start scanning the Multipart Payload at any frame.
  • Cold Vault MUST NOT expect the frames to come in any particular order.
  • Cold Vault SHOULD show a progress indicator of how many frames it has successfully scanned out of the total count.

Once all frames are combined, the part_data must be concatenated into a single binary blob and treated as a data vector ("QR code content").
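A matching sketch for a single legacy frame (names illustrative):

```rust
/// Sketch: split one legacy multipart frame per the layout above.
fn split_legacy_frame(data: &[u8]) -> Result<(u16, u16, &[u8]), &'static str> {
    if data.len() < 5 || data[0] != 0x00 {
        return Err("not a legacy multipart frame");
    }
    let frame_count = u16::from_be_bytes([data[1], data[2]]);
    let frame_index = u16::from_be_bytes([data[3], data[4]]);
    if frame_index >= frame_count {
        return Err("frame index out of range");
    }
    Ok((frame_count, frame_index, &data[5..]))
}
```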

Informative content of QR code

Every QR code content starts with a prelude [0x53, 0x<encryption code>, 0x<payload code>].

0x53 is always expected and indicates Substrate-related content.

<encryption code> for signables indicates encryption algorithm that will be used to generate the signature:

0x00 Ed25519
0x01 Sr25519
0x02 Ecdsa

<encryption code> for updates indicates encryption algorithm that was used to sign the update:

0x00 Ed25519
0x01 Sr25519
0x02 Ecdsa
0xff unsigned

Derivations import and testing are always unsigned, with <encryption code> always 0xff.

Vault supports the following <payload code> variants:

0x00 legacy mortal transaction
0x02 transaction (both mortal and immortal)
0x03 message
0x04 bulk transactions
0x80 load metadata update
0x81 load types update
0xc1 add specs update
0xde derivations import

Note: old UOS specified 0x00 as mortal transaction and 0x02 as immortal one, but currently both mortal and immortal transactions from polkadot-js are 0x02.

Shared QR code processing sequence:

  1. Read the QR code, try interpreting it, and get the hexadecimal string from it into Rust (the hexadecimal string is being changed to raw bytes soon). If the QR code is not processable, nothing happens and the scanner keeps trying to catch a processable one.
  2. Analyze the prelude: is it Substrate? Is it a known payload type? If not, the Vault always produces an error and suggests scanning a supported payload.

Further processing is done based on the payload type.
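A sketch of this prelude analysis (the enum and function are illustrative, not Vault's actual API):

```rust
/// Sketch of prelude analysis on the assembled QR code content.
enum Encryption {
    Ed25519,
    Sr25519,
    Ecdsa,
    Unsigned, // 0xff, used for unsigned updates and derivations import
}

fn parse_prelude(content: &[u8]) -> Result<(Encryption, u8), &'static str> {
    match content {
        [0x53, encryption_code, payload_code, ..] => {
            let encryption = match *encryption_code {
                0x00 => Encryption::Ed25519,
                0x01 => Encryption::Sr25519,
                0x02 => Encryption::Ecdsa,
                0xff => Encryption::Unsigned,
                _ => return Err("unknown encryption code"),
            };
            Ok((encryption, *payload_code))
        }
        _ => Err("not Substrate-related content"),
    }
}
```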

Transaction

Transaction has the following structure:

| prelude | public key | SCALE-encoded call data | SCALE-encoded extensions | network genesis hash |

Public key is the key that can sign the transaction. Its length depends on the <encryption code> declared in the transaction prelude:

| Encryption | Public key length, bytes |
| :- | :- |
| Ed25519 | 32 |
| Sr25519 | 32 |
| Ecdsa | 33 |

Call data is the Vec<u8> representation of the transaction content. Call data must be parsed by the Vault prior to signature generation and becomes a part of the signed blob. Within the transaction, the call data is SCALE-encoded, i.e. effectively prefixed with the compact of its length in bytes.

Extensions contain data additional to the call data, and are also part of the signed blob. Typical extensions are Era, Nonce, metadata version, etc. Extensions content and order, in principle, can vary between networks and metadata versions.

Network genesis hash determines the network in which the transaction is created. At the moment the genesis hash is fixed-length, 32 bytes.

Thus, the transaction structure could also be represented as:

| prelude | public key | compact of call data length | **call data** | **SCALE-encoded extensions** | network genesis hash |

Bold-marked transaction pieces are used in the blob for which the signature is produced. If the blob is short, 257 bytes or fewer, the signature is produced for it as is. For blobs longer than 257 bytes, a 32-byte hash (blake2_256) is signed instead. This is inherited from earlier Vault versions and is currently compatible with polkadot-js.
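A sketch of this rule, assuming the blake2_rfc crate for hashing:

```rust
use blake2_rfc::blake2b::blake2b;

/// Sketch: produce the blob that actually gets signed. Blobs over 257 bytes
/// are replaced by their 32-byte blake2b hash, as described above.
fn signable_blob(call_data: &[u8], extensions: &[u8]) -> Vec<u8> {
    let blob = [call_data, extensions].concat();
    if blob.len() > 257 {
        blake2b(32, &[], &blob).as_bytes().to_vec()
    } else {
        blob
    }
}
```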

Transaction parsing sequence

  1. Cut the QR data and get:

    • encryption (single u8 from prelude)
    • transaction author public key, its length matching the encryption (32 or 33 u8 immediately after the prelude)
    • network genesis hash (32 u8 at the end)
    • SCALE-encoded call data and SCALE-encoded extensions as a combined blob (everything that remains in between the transaction author public key and the network genesis hash)

    If the data length is insufficient, Vault produces an error and suggests loading an undamaged transaction.

  2. Search the Vault database for the network specs (from the network genesis hash and encryption).

    If the network specs are not found, Vault shows:

    • public key and encryption of the transaction author key
    • an error message suggesting to add the network with the found genesis hash
  3. Search the Vault database for the address key (from the transaction author public key and encryption). Vault will try to interpret and display the transaction in any case. Signing will be possible only if the parsing is successful and the address key is known to Vault and is extended to the network in question.

    • Address key not found. Signing not possible. Output shows:

      • public key and encryption of the transaction author key
      • call and extensions parsing result
      • a warning message suggesting to add the address into the Vault
    • Address key is found, but it is not extended to the network used. Signing not possible. Output shows:

      • detailed author key information (base58 representation, identicon, address details such as whether the address is passworded, etc.)
      • call and extensions parsing result
      • a warning message suggesting to extend the address to the network used
    • Address key is found and is extended to the network used. Vault will proceed to try and interpret the call and extensions. Detailed author information will be shown regardless of the parsing outcome. The signing will be allowed only if the parsing is successful.

  4. Separate the call and extensions. The call is prefixed by its length compact; the compact is cut off, the part whose length was indicated in the compact goes into the call data, and the remainder goes into the extensions data.

    If no compact is found or the length is insufficient, Vault produces an error that the call and extensions could not be separated.
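    A sketch of this separation step, assuming the parity-scale-codec crate:

    ```rust
    use parity_scale_codec::{Compact, Decode};

    /// Sketch: read the length compact, then split the remaining blob into
    /// call data and extensions data.
    fn separate(mut blob: &[u8]) -> Result<(&[u8], &[u8]), &'static str> {
        let call_length = Compact::<u32>::decode(&mut blob)
            .map_err(|_| "no length compact found")?
            .0 as usize;
        if blob.len() < call_length {
            return Err("declared call length exceeds remaining data");
        }
        Ok(blob.split_at(call_length)) // (call data, extensions data)
    }
    ```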

  5. Get the metadata set from the Vault database, by the network name from the network specs. Metadata is used to interpret extensions and then the call itself.

    If there are no metadata entries for the network at all, Vault produces an error and asks to load the metadata.

    RuntimeMetadata versions supported by the Vault are V12, V13, and V14. The crucial feature of V14 is that the metadata contains the description of the types used in producing the call and extensions. V12 and V13 are legacy versions providing only text identifiers for the types; in order to use them, supplemental types information is needed.

  6. Process the extensions.

    Vault already knows in which network the transaction was made, but does not yet know the metadata version. Metadata version must be one of the signable extensions. At the same time, the extensions and their order are recorded in the network metadata. Thus, all metadata entries from the set are checked, from newest to oldest, in an attempt to find metadata that both decodes the extensions and has a version that matches the metadata version decoded from the extensions.

    If processing the extensions with a single metadata entry results in an error, the next metadata entry is tried. The errors are displayed to the user only if all attempts with existing metadata have failed.

    Typically, the extensions are quite stable between metadata versions and between networks; however, they can be and sometimes are different.

    In legacy metadata (RuntimeMetadata versions V12 and V13) extensions have identifiers only, and in the Vault the extensions for V12 and V13 are hardcoded as:

    • Era era
    • Compact(u64) nonce
    • Compact(u128) tip
    • u32 metadata version
    • u32 tx version
    • H256 genesis hash
    • H256 block hash

    If the extensions could not be decoded as the standard set, or if not all of the extensions blob is used, the Vault rejects this metadata version and adds an error into the error set.
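    For illustration, the hardcoded V12/V13 extension set corresponds to the following SCALE layout (a sketch assuming parity-scale-codec's derive and sp_runtime's Era; the Vault's actual decoder is not necessarily written this way):

    ```rust
    use parity_scale_codec::{Compact, Decode};
    use sp_runtime::generic::Era;

    /// Sketch of the hardcoded V12/V13 extensions, in decoding order.
    #[derive(Decode)]
    struct LegacyExtensions {
        era: Era,
        nonce: Compact<u64>,
        tip: Compact<u128>,
        metadata_version: u32, // must match the version of the metadata used
        tx_version: u32,
        genesis_hash: [u8; 32],
        block_hash: [u8; 32], // equals genesis hash for immortal transactions
    }
    ```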

    Metadata V14 has extensions with both identifiers and properly described types, and the Vault decodes extensions as they are recorded in the metadata. For this, the ExtrinsicMetadata part of the RuntimeMetadataV14 metadata is used. The vector signed_extensions in ExtrinsicMetadata is scanned twice, first for types in ty of the SignedExtensionMetadata and then for types in additional_signed of the SignedExtensionMetadata. The types, when resolved through the types database from the metadata, allow cutting blobs of the correct length from the whole SCALE-encoded extensions blob and decoding them properly.

    If any of these small decodings fails, the metadata version gets rejected by the Vault and an error is added to the error set. Same happens if after all extensions are scanned, some part of extensions blob remains unused.

    There are some special extensions that must be treated separately. The identifier in SignedExtensionMetadata and ident segment of the type Path are used to trigger types interpretation as specially treated extensions. Each identifier is encountered twice, once for ty scan, and once for additional_signed scan. In some cases only one of those types has non-empty content, in some cases it is both. To distinguish the two, the type-associated path is used, which points to where the type is defined in Substrate code. Type-associated path has priority over the identifier.

    Path triggers:

    | Path | Type is interpreted as |
    | :- | :- |
    | Era | Era |
    | CheckNonce | Nonce |
    | ChargeTransactionPayment | tip, gets displayed as balance with decimals and unit corresponding to the network specs |

    Identifier triggers are used if the path trigger was not activated:

    | Identifier | Type, if not empty and if there is no path trigger, is interpreted as | Note |
    | :- | :- | :- |
    | CheckSpecVersion | metadata version | gets checked against the metadata version of the metadata used |
    | CheckTxVersion | tx version | |
    | CheckGenesis | network genesis hash | must match the genesis hash that was cut from the tail of the transaction |
    | CheckMortality | block hash | must match the genesis hash if the transaction is immortal; Era has the same identifier, but is distinguished by the path |
    | CheckNonce | nonce | |
    | ChargeTransactionPayment | tip, gets displayed as balance with decimals and unit corresponding to the network specs | |

    If the extension is not a special case, it is displayed as normal parser output and does not participate in deciding if the transaction could be signed.

    After all extensions are processed, the decoding must yield the following extensions:

    • exactly one Era
    • exactly one Nonce <- this is not so currently, fix it
    • exactly one BlockHash
    • exactly one GenesisHash <- this is not so currently, fix it
    • exactly one metadata version

    If the extension set is different, this results in a Vault error for this particular metadata version; the error goes into the error set.

    The extensions in the metadata are checked at the metadata loading step, long before any transactions are even produced. Metadata with incomplete extensions causes a warning at the load_metadata update generation step, and another one when an update with such metadata gets loaded into the Vault. Nevertheless, loading such metadata into the Vault is allowed, as there could be other uses for metadata besides the signing of signable transactions. Probably.

    If the metadata version in the extensions does not match the metadata version of the metadata used, this results in a Vault error for this particular metadata version; the error goes into the error set.

    If the extensions are completely decoded, with the correct set of the special extensions, and the metadata version from the extensions matches the metadata version of the metadata used, the extensions are considered correctly parsed, and the Vault can proceed to the call decoding.

    If all metadata entries from the Vault database were tested and no suitable solution is found, Vault produces an error stating that all attempts to decode the extensions have failed. This could be caused by a variety of reasons (see above), but so far the most common one observed is that the metadata in the Vault is not up-to-date with the metadata on chain. Thus, the error must carry a recommendation to update the metadata first.

  7. Process the call data.

    After the metadata with the correct version is established, it is used to parse the call data itself. Each call begins with a u8 pallet index; this is the decoding entry point.

    For V14 metadata the correct pallet is found in the set of available ones in pallets field of RuntimeMetadataV14, by index field in corresponding PalletMetadata. The calls field of this PalletMetadata, if it is Some(_), contains PalletCallMetadata that provides the available calls enum described in types registry of the RuntimeMetadataV14. For each type in the registry, including this calls enum, encoded data size is determined, and the decoding is done according to the type.

    For V12 and V13 metadata the correct pallet is also found by scanning the available pallets and searching for the correct pallet index. Then the call is found using the call index (second u8 of the call data). Each call has an associated set of argument names and argument types; however, the argument type is just a text identifier. The type definitions are not in the metadata, and transaction decoding requires supplemental types information. By default, the Vault contains types information that was constructed for Westend when Westend was still using V13 metadata; it has so far been reasonably sufficient for simple transactions parsing. If the Vault does not find the types information in the database and has to decode the transaction using V12 or V13 metadata, an error is produced, indicating that there are no types. Otherwise, for each encountered argument type the encoded data size is determined, and the decoding is done according to the argument type.

    There are types requiring special display:

    • calls (for cases when a call contains other calls)
    • numbers that are processed as the balances

    Calls in V14 parsing are distinguished by Call in ident segment of the type Path. Calls in V12 and V13 metadata are distinguished by any element of the set of calls type identifiers in string argument type.

    At the moment the numbers that should be displayed as balance in transactions with V14 metadata are determined by the type name type_name of the corresponding Field being:

    • Balance
    • T::Balance
    • BalanceOf<T>
    • ExtendedBalance
    • BalanceOf<T, I>
    • DepositBalance
    • PalletBalanceOf<T>

    Similar identifiers are used in V12 and V13; there, the checked value is the string argument type itself.

    There could be other instances when a number should be displayed as balance. However, sometimes the balance is not the balance in the units of the network specs, for example in the assets pallet. See issue #1050 and comments there for details.
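    Where the plain display does apply, the conversion itself is simple; a minimal sketch with decimals and unit taken from the network specs (the Vault additionally rescales into convenient sub-units such as mWND or pWND, which this sketch omits):

    ```rust
    /// Sketch: render a raw balance using decimals and unit from network specs.
    fn show_balance(raw: u128, decimals: u32, unit: &str) -> String {
        let divisor = 10u128.pow(decimals);
        let integer = raw / divisor;
        let fractional = raw % divisor;
        format!("{integer}.{fractional:0width$} {unit}", width = decimals as usize)
    }

    // show_balance(100000000000, 12, "WND") == "0.100000000000 WND",
    // which the Vault would show as 100.000000000 mWND.
    ```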

    If no errors were encountered while parsing and all call data was used in the process, the transaction is considered parsed and is displayed to the user, either ready for signing (if all other checks have passed) or as read-only.

  8. If the user chooses to sign the transaction, the Vault produces a QR code with the signature, which should be read back into the hot side. As soon as the signature QR code is generated, the Vault considers the transaction signed.

    All signed transactions are entered into the history log, and can be seen and decoded again from the history log. Transactions not signed by the user do not go into the history log.

    If the key used for the transaction is passworded, the user has three attempts to enter the password correctly. Each incorrect password entry is reflected in the history.

    In the time interval between the Vault displaying the parsed transaction and the user approving it, the transaction details needed to generate the signature and the history log details are temporarily stored in the database. The temporary storage gets cleared each time before and after use. Vault extracts the stored transaction data only if the database checksum stored in the navigator state is the same as the current checksum of the database. If the password is entered incorrectly, the database is updated with a "wrong password" history entry, and the checksum in the state gets updated accordingly. Eventually, all transaction info can and will be moved into the state itself, and the temporary storage will not be used.

Example

Alice makes a transfer to Bob on the Westend network.

Transaction:

530102d43593c715fdd31c61141abd04a99fd6822c8558854ccde39a5684e7a56da27da40403008eaf04151687736326c9fea17e25fc5287613693c912909cb226aa4794f26a480700e8764817b501b8003223000005000000e143f23803ac50e8f6f8e62695d1ce9e4e1d68aa36c1cd2cfd15340213f3423e538a7d7a0ac17eb6dd004578cb8e238c384a10f57c999a3fa1200409cd9b3f33e143f23803ac50e8f6f8e62695d1ce9e4e1d68aa36c1cd2cfd15340213f3423e

| Part | Meaning | Byte position |
| :- | :- | :- |
| 53 | Substrate-related content | 0 |
| 01 | Sr25519 encryption algorithm | 1 |
| 02 | Transaction | 2 |
| d435..a27d¹ | Alice public key | 3..=34 |
| a404..4817² | SCALE-encoded call data | 35..=76 |
| a4 | Compact call data length, 41 | 35 |
| 0403..4817³ | Call data | 36..=76 |
| 04 | Pallet index 4 in metadata, entry point for decoding | 36 |
| b501..3f33⁴ | Extensions | 77..=153 |
| e143..423e⁵ | Westend genesis hash | 154..=185 |

¹ d43593c715fdd31c61141abd04a99fd6822c8558854ccde39a5684e7a56da27d

² a40403008eaf04151687736326c9fea17e25fc5287613693c912909cb226aa4794f26a480700e8764817

³ 0403008eaf04151687736326c9fea17e25fc5287613693c912909cb226aa4794f26a480700e8764817

⁴ b501b8003223000005000000e143f23803ac50e8f6f8e62695d1ce9e4e1d68aa36c1cd2cfd15340213f3423e538a7d7a0ac17eb6dd004578cb8e238c384a10f57c999a3fa1200409cd9b3f33

⁵ e143f23803ac50e8f6f8e62695d1ce9e4e1d68aa36c1cd2cfd15340213f3423e

Call content is parsed using Westend metadata, in this particular case westend9010.

| Call part | Meaning |
| :- | :- |
| 04 | Pallet index 4 (Balances) in metadata, entry point for decoding |
| 03 | Method index 3 in pallet 4 (transfer_keep_alive), search in metadata what the method contains. Here it is MultiAddress for transfer destination and Compact(u128) balance. |
| 00 | Enum variant in MultiAddress, AccountId |
| 8eaf..6a48⁶ | Associated AccountId data, Bob public key |
| 0700e8764817 | Compact(u128) balance. Amount paid: 100000000000 or, with Westend decimals and unit, 100.000000000 mWND. |

⁶ 8eaf04151687736326c9fea17e25fc5287613693c912909cb226aa4794f26a48

Extensions content

| Extensions part | Meaning |
| :- | :- |
| b501 | Era: phase 27, period 64 |
| b8 | Nonce: 46 |
| 00 | Tip: 0 pWND |
| 32230000 | Metadata version: 9010 |
| 05000000 | Tx version: 5 |
| e143..423e⁷ | Westend genesis hash |
| 538a..3f33⁸ | Block hash |

⁷ e143f23803ac50e8f6f8e62695d1ce9e4e1d68aa36c1cd2cfd15340213f3423e

⁸ 538a7d7a0ac17eb6dd004578cb8e238c384a10f57c999a3fa1200409cd9b3f33
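For reference, the Era bytes b501 decode according to sp_runtime's mortal era rules:

```rust
fn main() {
    // The two Era bytes form a little-endian u16: 0x01b5 = 437.
    let encoded = u16::from_le_bytes([0xb5, 0x01]);
    // Low 4 bits give the period as a power of two: 2 << 5 = 64.
    let period = 2u64 << (encoded % (1 << 4));
    // High 12 bits, scaled by the quantize factor, give the phase.
    let quantize_factor = (period >> 12).max(1); // 1 for short periods
    let phase = u64::from(encoded >> 4) * quantize_factor; // 27
    assert_eq!((period, phase), (64, 27));
}
```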

Message

Message has the following structure:

| prelude | public key | [u8] slice | network genesis hash |

The [u8] slice is represented as a String if all bytes are valid UTF-8; if not, the Vault produces an error.

It is critical that message payloads are always clearly distinguishable from transaction payloads, i.e. it must never be possible to trick a user into signing a transaction posing as a message.

The current proposal is to enable message signing only with the Sr25519 encryption algorithm, with a designated signing context different from the signing context used for transaction signing.

Bulk transactions

A bulk transactions payload is a SCALE-encoded TransactionBulk structure that consists of concatenated Vec<u8> transactions.

Bulking is a way to sign multiple transactions at once and reduce the number of QR codes to scan.

Bulk transactions are processed in exactly the same way as single transactions.
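The shape of the bulk payload can be sketched as follows (variant and field names are assumptions mirroring the description above, not verified definitions):

```rust
use parity_scale_codec::Decode;

/// Sketch of the bulk payload: a versioned wrapper around a list of
/// individually SCALE-encoded transactions. Names are assumptions.
#[derive(Decode)]
enum TransactionBulk {
    V1(TransactionBulkV1),
}

#[derive(Decode)]
struct TransactionBulkV1 {
    encoded_transactions: Vec<Vec<u8>>,
}
```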

Update

Update has the following general structure:

| prelude | verifier public key (if signed) | update payload | signature (if signed) | reserved tail |

Note that the verifier public key and signature parts appear only in signed updates. In unsigned updates, the prelude [0x53, 0xff, 0x<payload code>] is followed directly by the update payload.

Every time the user receives an unsigned update, the Vault displays a warning that the update is not verified. Generally, the use of unsigned updates is discouraged.

For update signing it is recommended to use a dedicated key not used for transactions. This way, if the signed data was not really update data but something else posing as it, the signature produced could not do any damage.

| Encryption | Public key length, bytes | Signature length, bytes |
| :- | :- | :- |
| Ed25519 | 32 | 64 |
| Sr25519 | 32 | 64 |
| Ecdsa | 33 | 65 |
| no encryption | 0 | 0 |

The reserved tail is currently not used and is expected to be empty. It could be used later if multisignatures are introduced for the updates. Accounting for the reserved tail in update processing keeps code continuity in case the introduction of multisignatures ever happens.

Because of the reserved tail, the update payload length must always be declared exactly, so that the update payload part can be cut correctly from the update.

A detailed description of the update payloads, and of the form in which they are used in the update itself and for generating the update signature, can be found in the Rust module definitions::qr_transfers.

add_specs update payload, payload code c1

Introduces a new network to Vault, i.e. adds network specs to the Vault database.

Update payload is ContentAddSpecs in to_transfer() form, i.e. double SCALE-encoded NetworkSpecsToSend (the second SCALE is to have the exact payload length).

Payload signature is generated for SCALE-encoded NetworkSpecsToSend.
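"Double SCALE encoding" can be sketched generically as follows (a stand-in for what to_transfer() does; the outer encoding of the resulting Vec<u8> adds a compact length prefix, making the payload length explicit):

```rust
use parity_scale_codec::Encode;

/// Sketch: encode a value, then encode the resulting byte vector again,
/// which prefixes it with a compact of its length.
fn to_transfer<T: Encode>(value: &T) -> Vec<u8> {
    value.encode().encode()
}
```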

Network specs are stored in the dedicated SPECSTREE tree of the Vault database. Network specs identifier is NetworkSpecsKey, a key built from the encryption used by the network and the network genesis hash. There could be networks with multiple supported encryption algorithms, thus the encryption is part of the key.

Some elements of the network specs could be slightly different for networks with the same genesis hash and different encryptions. These are:

  • Invariant specs, identical between all different encryptions:

    • name (network name as it appears in metadata)
    • base58 prefix

    The reason is that the network name is, and the base58 prefix can be, a part of the network metadata, and the network metadata is not encryption-specific.

  • Specs static for given encryption, that should not change over time once set:

    • decimals
    • unit

    To replace these, the user would need to remove the network and add it again, i.e. it won't be possible to do by accident.

  • Flexible display-related and convenience specs, that can change and could be changed by simply loading new ones over the old ones:

    • color and secondary color (both currently not used, but historically are there and may return at some point)
    • logo
    • path (default derivation path for network, //<network_name>)
    • title (network title as it gets displayed in the Vault)

load_metadata update payload, payload code 80

Loads metadata for a network already known to Vault, i.e. for a network with network specs in the Vault database.

Update payload is ContentLoadMeta in to_transfer() form, and consists of concatenated SCALE-encoded metadata Vec<u8> and network genesis hash (H256, always 32 bytes).

Same blob is used to generate the signature.

Network metadata is stored in the dedicated METATREE tree of the Vault database. Network metadata identifier is MetaKey, a key built from the network name and the network metadata version.

Metadata suitable for Vault

Network metadata can get into the Vault and be used by the Vault only if it complies with the following requirements:

  • metadata vector starts with b"meta" prelude
  • part of the metadata vector after b"meta" prelude is decodable as RuntimeMetadata
  • RuntimeMetadata version of the metadata is V12, V13 or V14
  • Metadata has System pallet
  • There is Version constant in System pallet
  • Version is decodable as RuntimeVersion
  • If the metadata contains base58 prefix, it must be decodable as u16 or u8

Additionally, if the metadata V14 is received, its associated extensions will be scanned and user will be warned if the extensions are incompatible with transactions signing.

Also, in the case of V14 metadata, the type of the encoded data stored in the Version constant is itself stored in the metadata types registry and in principle could be different from RuntimeVersion above. At the moment the type of Version is hardcoded, and any other type would not be processed and would get rejected with an error.
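The first three requirements can be sketched as follows, assuming the frame-metadata crate (newer releases gate the V12 and V13 variants behind a legacy feature):

```rust
use frame_metadata::RuntimeMetadata;
use parity_scale_codec::Decode;

/// Sketch of the first checks on received metadata: the b"meta" prelude,
/// decodability as RuntimeMetadata, and a supported version.
fn check_metadata(meta: &[u8]) -> Result<RuntimeMetadata, &'static str> {
    let stripped = meta.strip_prefix(b"meta").ok_or("no `meta` prelude")?;
    let runtime_metadata = RuntimeMetadata::decode(&mut &*stripped)
        .map_err(|_| "not decodable as RuntimeMetadata")?;
    match runtime_metadata {
        RuntimeMetadata::V12(_) | RuntimeMetadata::V13(_) | RuntimeMetadata::V14(_) => {
            // System pallet and Version constant checks would follow here.
            Ok(runtime_metadata)
        }
        _ => Err("unsupported RuntimeMetadata version"),
    }
}
```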

load_types update payload, payload code 81

Load types information.

Type information is needed to decode transactions made in networks with metadata RuntimeMetadata version V12 or V13.

Most of the networks are already using RuntimeMetadata version V14, which has types information incorporated in the metadata itself.

The load_types update is expected to become obsolete soon.

Update payload is ContentLoadTypes in to_transfer(), i.e. double SCALE-encoded Vec<TypeEntry> (second SCALE is to have the exact payload length).

Payload signature is generated for SCALE-encoded Vec<TypeEntry>.

Types information is stored in SETTREE tree of the Vault database, under key TYPES.

Verifiers

Vault can accept both verified and non-verified updates; however, information once verified cannot be replaced or updated by a weaker verifier without a full Vault reset.

A verifier could be Some(_) with corresponding public key inside or None. All verifiers for the data follow trust on first use principle.

Vault uses:

  • a single general verifier
  • a network verifier for each of the networks introduced to the Vault

General verifier information is stored in SETTREE tree of the Vault database, under key GENERALVERIFIER. General verifier is always set to a value, be it Some(_) or None. Removing the general verifier means setting it to None. If no general verifier entry is found in the database, the database is considered corrupted and the Vault must be reset.

Network verifier information is stored in the dedicated VERIFIERS tree of the Vault database. Network verifier identifier is VerifierKey, a key built from the network genesis hash. The same network verifier is used for network specs with any encryption algorithm and for network metadata. A network verifier could be valid or invalid. A valid network verifier could be general or custom. Verifiers installed as a result of an update are always valid. An invalid network verifier blocks the use of the network unless the Vault is reset; it appears if the user marks a custom verifier as no longer trusted.

Updating verifier could cause some data verified by the old verifier to be removed, to avoid confusion regarding which verifier has signed the data currently stored in the database. The data removed is called "hold", and user receives a warning if accepting new update would cause hold data to be removed.

General verifier

General verifier is the strongest and the most reliable verifier known to the Vault. General verifier could sign all kinds of updates. By default the Vault uses Parity-associated key as general verifier, but users can remove it and set their own. There could be only one general verifier at any time.

General verifier could be removed only by a complete wipe of the Vault, through the Remove general certificate button in the Settings. This will reset the Vault database to the default content and set the general verifier to None, which will then be updated to the first verifier encountered by the Vault.

Expected usage for this is that the user removes old general verifier and immediately afterwards loads an update from the preferred source, thus setting the general verifier to the user-preferred value.

General verifier can be updated from None to Some(_) by accepting a verified update. This would result in removing "general hold", i.e.:

  • all network data (network specs and metadata) for the networks for which the verifier is set to the general one
  • types information

General verifier cannot be changed from Some(_) to another, different Some(_) by simply accepting updates.

Note that if the general verifier is None, none of the custom verifiers could be Some(_). Similarly, if a verifier is recorded as custom in the database, its value cannot be the same as the value of the general verifier. If found, those situations indicate database corruption.

Custom verifiers

Custom verifiers could be used for network information that was verified, but not with the general verifier. There could be as many custom verifiers as needed at any time. A custom verifier is considered weaker than the general verifier.

Custom verifier set to None could be updated to:

  • Another custom verifier set to Some(_)
  • General verifier

Custom verifier set to Some(_) could be updated to general verifier.

These verifier updates can be done by accepting an update signed by a new verifier.

Any of the custom network verifier updates would result in removing "hold", i.e. all network specs entries (for all encryption algorithms on file) and all network metadata entries.

Common update processing sequence:

  1. Cut the QR data and get:

    • encryption used by verifier (single u8 from prelude)
    • (only if the update is signed, i.e. the encryption is not 0xff) update verifier public key, its length matching the encryption (32 or 33 u8 immediately after the prelude)
    • concatenated update payload, verifier signature (only if the update is signed) and reserved tail.

    If the data length is insufficient, Vault produces an error and suggests loading an undamaged update.

  2. Using the payload type from the prelude, determine the update payload length and cut payload from the concatenated verifier signature and reserved tail.

    If the data length is insufficient, Vault produces an error and suggests loading an undamaged update.

  3. (only if the update is signed, i.e. the encryption is not 0xff) Cut verifier signature, its length matching the encryption (64 or 65 u8 immediately after the update payload). Remaining data is reserved tail, currently it is not used.

    If the data length is insufficient, Vault produces an error and suggests loading an undamaged update.

  4. Verify the signature for the payload. If this fails, Vault produces an error indicating that the update has invalid signature.
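A condensed sketch of these four steps for an Sr25519-signed update (encryption code 0x01), assuming sp_core for the signature check; offsets follow the structure table above:

```rust
use sp_core::{sr25519, Pair as _};

/// Sketch: cut a signed Sr25519 update and verify its signature.
/// `payload_len` is the exact payload length known from the payload type.
fn cut_signed_update(data: &[u8], payload_len: usize) -> Result<(), &'static str> {
    // Step 1: 32-byte public key immediately after the 3-byte prelude.
    let public = sr25519::Public::try_from(data.get(3..35).ok_or("update too short")?)
        .map_err(|_| "bad public key")?;
    // Step 2: update payload of exactly the declared length.
    let payload = data.get(35..35 + payload_len).ok_or("update too short")?;
    // Step 3: 64-byte signature; anything after it is the reserved tail.
    let sig = sr25519::Signature::try_from(
        data.get(35 + payload_len..99 + payload_len).ok_or("update too short")?,
    )
    .map_err(|_| "bad signature")?;
    // Step 4: verify the signature over the payload.
    if sr25519::Pair::verify(&sig, payload, &public) {
        Ok(())
    } else {
        Err("invalid signature")
    }
}
```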

add_specs processing sequence

  1. Update payload is transformed into ContentAddSpecs and the incoming NetworkSpecsToSend are retrieved, or the Vault produces an error indicating that the add_specs payload is damaged.

  2. Vault checks that there is no change in invariant specs occurring.

    If there are entries in the SPECSTREE of the Vault database with the same genesis hash as in the newly received specs (the encryption does not necessarily match), the Vault checks that the name and base58 prefix in the received specs are the same as in the specs already in the Vault database.

  3. Vault checks the verifier entry for the received genesis hash.

    If there are no entries, i.e. the network is altogether new to the Vault, the specs could be added into the database. During the same database transaction the network verifier is set up:

    | add_specs update verification | General verifier in Vault database | Action |
    | :- | :- | :- |
    | unverified, 0xff update encryption code | None or Some(_) | (1) set network verifier to custom, None (regardless of the general verifier); (2) add specs |
    | verified by a | None | (1) set network verifier to general; (2) set general verifier to Some(a), process the general hold; (3) add specs |
    | verified by a | Some(b) | (1) set network verifier to custom, Some(a); (2) add specs |
    | verified by a | Some(a) | (1) set network verifier to general; (2) add specs |

    If there are entries, i.e. the network was known to the Vault at some point after the last Vault reset, the network verifier in the database and the verifier of the update are compared. The specs could be added to the database if:

    1. there are no verifier mismatches encountered (i.e. verifier same or stronger)
    2. received data causes no change in specs static for encryption
    3. the specs are not yet in the database in exactly same form

    Note that if the exactly same specs as already in the database are received with updated verifier and the user accepts the update, the verifier will get updated and the specs will stay in the database.

    | add_specs update verification | Network verifier in Vault database | General verifier in Vault database | Action |
    | :- | :- | :- | :- |
    | unverified, 0xff update encryption code | custom, None | None | accept specs if good |
    | unverified, 0xff update encryption code | custom, None | Some(a) | accept specs if good |
    | unverified, 0xff update encryption code | general | None | accept specs if good |
    | unverified, 0xff update encryption code | general | Some(a) | error: update should have been signed by a |
    | verified by a | custom, None | None | (1) change network verifier to general, process the network hold; (2) set general verifier to Some(a), process the general hold; (3) accept specs if good |
    | verified by a | custom, None | Some(a) | (1) change network verifier to general, process the network hold; (2) accept specs if good |
    | verified by a | custom, None | Some(b) | (1) change network verifier to custom, Some(a), process the network hold; (2) accept specs if good |
    | verified by a | custom, Some(a) | Some(b) | accept specs if good |
    | verified by a | custom, Some(b) | Some(a) | (1) change network verifier to general, process the network hold; (2) accept specs if good |
    | verified by a | custom, Some(b) | Some(c) | error: update should have been signed by b or c |

    Before the NetworkSpecsToSend are added into SPECSTREE, they are transformed into NetworkSpecs, and the order field (display order in Vault network lists) is added. Each new network specs entry is added at the end of the list.

load_metadata processing sequence

  1. Update payload is transformed into ContentLoadMeta, from which the metadata and the genesis hash are retrieved, or the Vault produces an error indicating that the load_metadata payload is damaged.

  2. Vault checks that the received metadata fulfills all Vault metadata requirements outlined above. Otherwise an error is produced indicating that the received metadata is invalid.

    Incoming MetaValues are produced, containing the network name, the network metadata version, and an optional base58 prefix (if it is recorded in the metadata).

  3. Network genesis hash is used to generate VerifierKey and check if the network has an established network verifier in the Vault database. If there is no network verifier associated with the genesis hash, an error is produced, indicating that network metadata can be loaded only for networks already introduced to the Vault.

  4. SPECSTREE tree of the Vault database is scanned in search of entries with genesis hash matching the one received in payload.

    Vault accepts load_metadata updates only for the networks that have at least one network specs entry in the database.

    Note that if the verifier in step (3) above is found, it does not necessarily mean that the specs are found (for example, if a network verified by the general verifier was removed by the user).

    If the specs are found, the Vault checks that the network name and, if present, base58 prefix from the received metadata match the ones in network specs from the database. If the values do not match, the Vault produces an error.

  5. Vault compares the verifier of the received update and the verifier for the network from the database. The update verifier must be exactly the same as the verifier already in the database. If there is a mismatch, Vault produces an error, indicating that the load_metadata update for the network must be signed by the specified verifier (general or custom) or unsigned.

  6. If the update has passed all checks above, the Vault searches for the metadata entry in the METATREE of the Vault database, using network name and version from update to produce MetaKey.

    If the key is not found in the database, the metadata could be added.

    If the key is found in the database and the metadata is exactly the same, the Vault produces an error indicating that the metadata is already in the database. This is expected to be quite a common outcome.

    If the key is found in the database and the metadata is different, the Vault produces an error: such metadata is not acceptable. This situation can occur if there was a silent metadata update or if the metadata is corrupted.

load_types processing sequence

  1. Update payload is transformed into ContentLoadTypes, from which the types description vector Vec<TypeEntry> is retrieved, or the Vault produces an error indicating that the load_types payload is damaged.

  2. load_types updates must be signed by the general verifier.

    | load_types update verification | General verifier in Vault database | Action |
    | :- | :- | :- |
    | unverified, 0xff update encryption code | None | load types if the types are not yet in the database |
    | verified by a | None | (1) set general verifier to Some(a), process the general hold; (2) load types, warn if the types are the same as before |
    | verified by a | Some(b) | reject types, error indicates that load_types requires general verifier signature |
    | verified by a | Some(a) | load types if the types are not yet in the database |

    If the load_types verifier is same as the general verifier in the database and the types are same as the types in the database, the Vault produces an error indicating that the types are already known.

    Each time the types are loaded, the Vault produces a warning. load_types is a rare and quite unexpected operation.

Derivations import, payload code de

Derivations import has the following structure:

| prelude | derivations import payload |

Derivations import payload is a SCALE-encoded ExportAddrs structure.

It does not contain any private keys or seed phrases.

The ExportAddrs structure holds the following information about each key (see the sketch after this list):

  • name and public key of the seed the derived key belongs to
  • ss58 address of the derived key (h160 for ethereum-based chains)
  • derivation path
  • encryption type
  • genesis hash of the network the key is used in
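A field-level sketch of this structure (names are assumptions mirroring the list above, not Vault's exact definitions):

```rust
use parity_scale_codec::Decode;

/// Sketch of the derivations import payload shape. Names are assumptions.
#[derive(Decode)]
enum ExportAddrs {
    V1(ExportAddrsV1),
}

#[derive(Decode)]
struct ExportAddrsV1 {
    addrs: Vec<SeedInfo>,
}

#[derive(Decode)]
struct SeedInfo {
    name: String,               // seed name
    public_key: Vec<u8>,        // public key of the seed
    derived_keys: Vec<AddrInfo>,
}

#[derive(Decode)]
struct AddrInfo {
    address: String,                 // ss58 (or h160 for ethereum-based chains)
    derivation_path: Option<String>,
    encryption_code: u8,             // encryption type
    genesis_hash: [u8; 32],          // network the key is used in
}
```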

When processing the derivations import, all data after the prelude is transformed into ExportAddrs. The network genesis hash, encryption, and derivations set are derived from it, or the Vault produces a warning indicating that the derivations import payload is corrupted.

Vault checks that the network for which the derivations are imported has network specs in the Vault database. If not, a warning is produced.

Vault checks that the derivation set contains only valid derivations. If any derivation is unsuitable, a warning is produced indicating this.

If the user accepts the derivations import, Vault generates a key for each valid derivation.

If one of the derived keys already exists, it gets ignored, i.e. no error is produced.

If there are two derivations with identical path within the payload, only one derived key is created.