referrerpolicy=no-referrer-when-downgrade

polkadot_node_core_prospective_parachains/fragment_chain/
mod.rs

1// Copyright (C) Parity Technologies (UK) Ltd.
2// This file is part of Polkadot.
3
4// Polkadot is free software: you can redistribute it and/or modify
5// it under the terms of the GNU General Public License as published by
6// the Free Software Foundation, either version 3 of the License, or
7// (at your option) any later version.
8
9// Polkadot is distributed in the hope that it will be useful,
10// but WITHOUT ANY WARRANTY; without even the implied warranty of
11// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
12// GNU General Public License for more details.
13
14// You should have received a copy of the GNU General Public License
15// along with Polkadot.  If not, see <http://www.gnu.org/licenses/>.
16
17//! Utility for managing parachain fragments not referenced by the relay-chain.
18//!
19//! # Overview
20//!
21//! The main type exposed by this module is the [`FragmentChain`].
22//!
23//! Each fragment chain is associated with a particular relay-parent (an active leaf) and has a
24//! [`Scope`], which contains the allowed relay parents (up to `allowed_ancestry_len`), the pending
25//! availability candidates and base constraints derived from the latest included candidate. Each
26//! parachain has a single `FragmentChain` for each active leaf where it's scheduled.
27//!
28//! A fragment chain consists mainly of the current best backable chain (we'll call this the best
29//! chain) and a storage of unconnected potential candidates (we'll call this the unconnected
30//! storage).
31//!
32//! The best chain contains all the candidates pending availability and a subsequent chain
33//! of candidates that have reached the backing quorum and are better than any other backable forks
34//! according to the fork selection rule (more on this rule later). It has a length of size at most
35//! `num_of_pending_candidates + num_of_assigned_cores_for_para`.
36//!
37//! The unconnected storage keeps a record of seconded/backable candidates that may be
38//! added to the best chain in the future.
39//!	Once a candidate is seconded, it becomes part of this unconnected storage.
40//! Only after it is backed it may be added to the best chain (but not necessarily). It's only
41//! added if it builds on the latest candidate in the chain and if there isn't a better backable
42//! candidate according to the fork selection rule.
43//!
44//! An important thing to note is that the candidates present in the unconnected storage may have
45//! any/no relationship between them. In other words, they may form N trees and may even form
46//! cycles. This is needed so that we may begin validating candidates for which we don't yet know
47//! their parent (so we may parallelize the backing process across different groups for elastic
48//! scaling) and so that we accept parachain forks.
49//!
50//! We accept parachain forks only if the fork selection rule allows for it. In other words, if we
51//! have a backed candidate, we begin seconding/validating a fork only if it has a lower candidate
52//! hash. Once both forks are backed, we discard the one with the higher candidate hash.
53//! We assume all validators pick the same fork according to the fork selection rule. If we decided
54//! to not accept parachain forks, candidates could end up getting only half of the backing votes or
55//! even less (for forks of larger arity). This would affect the validator rewards. Still, we don't
56//! guarantee that a fork-producing parachains will be able to fully use elastic scaling.
57//!
58//! Once a candidate is backed and becomes part of the best chain, we can trim from the
59//! unconnected storage candidates which constitute forks on the best chain and no longer have
60//! potential.
61//!
62//! This module also makes use of types provided by the Inclusion Emulator module, such as
63//! [`Fragment`] and [`Constraints`]. These perform the actual job of checking for validity of
64//! prospective fragments.
65//!
66//! # Fork choice rule
67//!
68//! The motivation for the fork choice rule is described in the previous chapter.
69//!
70//! The current rule is: choose the candidate with the lower candidate hash.
71//! The candidate hash is quite random and finding a candidate with a lower hash in order to favour
72//! it would essentially mean solving a proof of work problem.
73//!
74//! # Parachain cycles
75//!
76//! Parachains can create cycles, because:
77//!   1. There's no requirement that head-data is unique for a parachain. Furthermore, a parachain
78//!      is under no obligation to be acyclic, and this is mostly just because it's totally
79//!      inefficient to enforce it. Practical use-cases are acyclic, but there is still more than
80//!      one way to reach the same head-data.
81//!   2. and candidates only refer to their parent by its head-data. This whole issue could be
82//!      resolved by having candidates reference their parent by candidate hash.
83//!
84//! However, dealing with cycles increases complexity during the backing/inclusion process for no
85//! practical reason.
86//! These cycles may be accepted by fragment chains while candidates are part of the unconnected
87//! storage, but they will definitely not make it to the best chain.
88//!
89//! On the other hand, enforcing that a parachain will NEVER be acyclic would be very complicated
90//! (looping through the entire parachain's history on every new candidate or changing the candidate
91//! receipt to reference the parent's candidate hash).
92//!
93//! Therefore, we don't provide a guarantee that a cycle-producing parachain will work (although in
94//! practice they probably will if the cycle length is larger than the number of assigned cores
95//! multiplied by two).
96//!
97//! # Spam protection
98//!
99//! As long as the supplied number of candidates is bounded, [`FragmentChain`] complexity is
100//! bounded. This means that higher-level code needs to be selective about limiting the amount of
101//! candidates that are considered.
102//!
103//! Practically speaking, the collator-protocol will limit the number of fetched collations per
104//! core, to the number of claim queue assignments for the paraid on that core.
105//! Statement-distribution will not allow more than `scheduler_params.lookahead` seconded candidates
106//! at a relay parent per each validator in the backing group.
107//!
108//! The code in this module is not designed for speed or efficiency, but conceptual simplicity.
109//! Our assumption is that the amount of candidates and parachains we consider will be reasonably
110//! bounded and in practice will not exceed a few thousand at any time. This naive implementation
111//! will still perform fairly well under these conditions, despite being somewhat wasteful of
112//! memory.
113//!
114//! Still, the expensive candidate data (CandidateCommitments) are wrapped in an `Arc` and shared
115//! across fragment chains of the same para on different active leaves.
116
117#[cfg(test)]
118mod tests;
119
120use std::{
121	cmp::{min, Ordering},
122	collections::{
123		hash_map::{Entry, HashMap},
124		BTreeMap, HashSet, VecDeque,
125	},
126	sync::Arc,
127};
128
129use super::LOG_TARGET;
130use polkadot_node_subsystem::messages::Ancestors;
131use polkadot_node_subsystem_util::inclusion_emulator::{
132	self, validate_commitments, ConstraintModifications, Constraints, Fragment,
133	HypotheticalOrConcreteCandidate, ProspectiveCandidate, RelayChainBlockInfo,
134};
135use polkadot_primitives::{
136	BlockNumber, CandidateCommitments, CandidateHash,
137	CommittedCandidateReceiptV2 as CommittedCandidateReceipt, Hash, HeadData,
138	PersistedValidationData, ValidationCodeHash,
139};
140use thiserror::Error;
141
142/// Fragment chain related errors.
143#[derive(Debug, Clone, PartialEq, Error)]
144pub(crate) enum Error {
145	#[error("Candidate already known")]
146	CandidateAlreadyKnown,
147	#[error("Candidate's parent head is equal to its output head. Would introduce a cycle.")]
148	ZeroLengthCycle,
149	#[error("Candidate would introduce a cycle")]
150	Cycle,
151	#[error("Candidate would introduce two paths to the same output state")]
152	MultiplePaths,
153	#[error("Attempting to directly introduce a Backed candidate. It should first be introduced as Seconded")]
154	IntroduceBackedCandidate,
155	#[error("Relay parent {0:?} of the candidate precedes the relay parent {1:?} of a pending availability candidate")]
156	RelayParentPrecedesCandidatePendingAvailability(Hash, Hash),
157	#[error("Candidate would introduce a fork with a pending availability candidate: {0:?}")]
158	ForkWithCandidatePendingAvailability(CandidateHash),
159	#[error("Fork selection rule favours another candidate: {0:?}")]
160	ForkChoiceRule(CandidateHash),
161	#[error("Could not find parent of the candidate")]
162	ParentCandidateNotFound,
163	#[error("Could not compute candidate constraints: {0:?}")]
164	ComputeConstraints(inclusion_emulator::ModificationError),
165	#[error("Candidate violates constraints: {0:?}")]
166	CheckAgainstConstraints(inclusion_emulator::FragmentValidityError),
167	#[error("Relay parent would move backwards from the latest candidate in the chain")]
168	RelayParentMovedBackwards,
169	#[error(transparent)]
170	CandidateEntry(#[from] CandidateEntryError),
171	#[error("Relay parent {0:?} not in scope. Earliest relay parent allowed {1:?}")]
172	RelayParentNotInScope(Hash, Hash),
173}
174
175/// The rule for selecting between two backed candidate forks, when adding to the chain.
176/// All validators should adhere to this rule, in order to not lose out on rewards in case of
177/// forking parachains.
178fn fork_selection_rule(hash1: &CandidateHash, hash2: &CandidateHash) -> Ordering {
179	hash1.cmp(hash2)
180}
181
182/// Utility for storing candidates and information about them such as their relay-parents and their
183/// backing states. This does not assume any restriction on whether or not the candidates form a
184/// chain. Useful for storing all kinds of candidates.
185#[derive(Clone, Default)]
186pub(crate) struct CandidateStorage {
187	// Index from head data hash to candidate hashes with that head data as a parent. Useful for
188	// efficiency when responding to `ProspectiveValidationDataRequest`s or when trying to find a
189	// new candidate to push to a chain.
190	by_parent_head: HashMap<Hash, HashSet<CandidateHash>>,
191
192	// Index from head data hash to candidate hashes outputting that head data. For
193	// efficiency when responding to `ProspectiveValidationDataRequest`s.
194	by_output_head: HashMap<Hash, HashSet<CandidateHash>>,
195
196	// Index from candidate hash to fragment node.
197	by_candidate_hash: HashMap<CandidateHash, CandidateEntry>,
198}
199
200impl CandidateStorage {
201	/// Introduce a new pending availability candidate.
202	pub fn add_pending_availability_candidate(
203		&mut self,
204		candidate_hash: CandidateHash,
205		candidate: CommittedCandidateReceipt,
206		persisted_validation_data: PersistedValidationData,
207	) -> Result<(), Error> {
208		let entry = CandidateEntry::new(
209			candidate_hash,
210			candidate,
211			persisted_validation_data,
212			CandidateState::Backed,
213		)?;
214
215		self.add_candidate_entry(entry)
216	}
217
218	/// Return the number of stored candidates.
219	pub fn len(&self) -> usize {
220		self.by_candidate_hash.len()
221	}
222
223	/// Introduce a new candidate entry.
224	fn add_candidate_entry(&mut self, candidate: CandidateEntry) -> Result<(), Error> {
225		let candidate_hash = candidate.candidate_hash;
226		if self.by_candidate_hash.contains_key(&candidate_hash) {
227			return Err(Error::CandidateAlreadyKnown)
228		}
229
230		self.by_parent_head
231			.entry(candidate.parent_head_data_hash)
232			.or_default()
233			.insert(candidate_hash);
234		self.by_output_head
235			.entry(candidate.output_head_data_hash)
236			.or_default()
237			.insert(candidate_hash);
238		self.by_candidate_hash.insert(candidate_hash, candidate);
239
240		Ok(())
241	}
242
243	/// Remove a candidate from the store.
244	fn remove_candidate(&mut self, candidate_hash: &CandidateHash) {
245		if let Some(entry) = self.by_candidate_hash.remove(candidate_hash) {
246			if let Entry::Occupied(mut e) = self.by_parent_head.entry(entry.parent_head_data_hash) {
247				e.get_mut().remove(&candidate_hash);
248				if e.get().is_empty() {
249					e.remove();
250				}
251			}
252
253			if let Entry::Occupied(mut e) = self.by_output_head.entry(entry.output_head_data_hash) {
254				e.get_mut().remove(&candidate_hash);
255				if e.get().is_empty() {
256					e.remove();
257				}
258			}
259		}
260	}
261
262	/// Note that an existing candidate has been backed.
263	fn mark_backed(&mut self, candidate_hash: &CandidateHash) {
264		if let Some(entry) = self.by_candidate_hash.get_mut(candidate_hash) {
265			gum::trace!(target: LOG_TARGET, ?candidate_hash, "Candidate marked as backed");
266			entry.state = CandidateState::Backed;
267		} else {
268			gum::trace!(target: LOG_TARGET, ?candidate_hash, "Candidate not found while marking as backed");
269		}
270	}
271
272	/// Whether a candidate is contained within the storage already.
273	fn contains(&self, candidate_hash: &CandidateHash) -> bool {
274		self.by_candidate_hash.contains_key(candidate_hash)
275	}
276
277	/// Return an iterator over references to the stored candidates, in arbitrary order.
278	fn candidates(&self) -> impl Iterator<Item = &CandidateEntry> {
279		self.by_candidate_hash.values()
280	}
281
282	/// Try getting head-data by hash.
283	fn head_data_by_hash(&self, hash: &Hash) -> Option<&HeadData> {
284		// First, search for candidates outputting this head data and extract the head data
285		// from their commitments if they exist.
286		//
287		// Otherwise, search for candidates building upon this head data and extract the head data
288		// from their persisted validation data if they exist.
289		self.by_output_head
290			.get(hash)
291			.and_then(|m| m.iter().next())
292			.and_then(|a_candidate| self.by_candidate_hash.get(a_candidate))
293			.map(|e| &e.candidate.commitments.head_data)
294			.or_else(|| {
295				self.by_parent_head
296					.get(hash)
297					.and_then(|m| m.iter().next())
298					.and_then(|a_candidate| self.by_candidate_hash.get(a_candidate))
299					.map(|e| &e.candidate.persisted_validation_data.parent_head)
300			})
301	}
302
303	/// Returns the backed candidates which have the given head data hash as parent.
304	fn possible_backed_para_children<'a>(
305		&'a self,
306		parent_head_hash: &'a Hash,
307	) -> impl Iterator<Item = &'a CandidateEntry> + 'a {
308		let by_candidate_hash = &self.by_candidate_hash;
309		self.by_parent_head
310			.get(parent_head_hash)
311			.into_iter()
312			.flat_map(|hashes| hashes.iter())
313			.filter_map(move |h| {
314				by_candidate_hash.get(h).and_then(|candidate| {
315					(candidate.state == CandidateState::Backed).then_some(candidate)
316				})
317			})
318	}
319}
320
321/// The state of a candidate.
322///
323/// Candidates aren't even considered until they've at least been seconded.
324#[derive(Debug, PartialEq, Clone)]
325enum CandidateState {
326	/// The candidate has been seconded.
327	Seconded,
328	/// The candidate has been completely backed by the group.
329	Backed,
330}
331
332#[derive(Debug, Clone, PartialEq, Error)]
333/// Possible errors when construcing a candidate entry.
334pub enum CandidateEntryError {
335	#[error("Candidate does not match the persisted validation data provided alongside it")]
336	PersistedValidationDataMismatch,
337	#[error("Candidate's parent head is equal to its output head. Would introduce a cycle")]
338	ZeroLengthCycle,
339}
340
341#[derive(Debug, Clone)]
342/// Representation of a candidate into the [`CandidateStorage`].
343pub(crate) struct CandidateEntry {
344	candidate_hash: CandidateHash,
345	parent_head_data_hash: Hash,
346	output_head_data_hash: Hash,
347	relay_parent: Hash,
348	candidate: Arc<ProspectiveCandidate>,
349	state: CandidateState,
350}
351
352impl CandidateEntry {
353	/// Create a new seconded candidate entry.
354	pub fn new_seconded(
355		candidate_hash: CandidateHash,
356		candidate: CommittedCandidateReceipt,
357		persisted_validation_data: PersistedValidationData,
358	) -> Result<Self, CandidateEntryError> {
359		Self::new(candidate_hash, candidate, persisted_validation_data, CandidateState::Seconded)
360	}
361
362	pub fn hash(&self) -> CandidateHash {
363		self.candidate_hash
364	}
365
366	fn new(
367		candidate_hash: CandidateHash,
368		candidate: CommittedCandidateReceipt,
369		persisted_validation_data: PersistedValidationData,
370		state: CandidateState,
371	) -> Result<Self, CandidateEntryError> {
372		if persisted_validation_data.hash() != candidate.descriptor.persisted_validation_data_hash()
373		{
374			return Err(CandidateEntryError::PersistedValidationDataMismatch)
375		}
376
377		let parent_head_data_hash = persisted_validation_data.parent_head.hash();
378		let output_head_data_hash = candidate.commitments.head_data.hash();
379
380		if parent_head_data_hash == output_head_data_hash {
381			return Err(CandidateEntryError::ZeroLengthCycle)
382		}
383
384		Ok(Self {
385			candidate_hash,
386			parent_head_data_hash,
387			output_head_data_hash,
388			relay_parent: candidate.descriptor.relay_parent(),
389			state,
390			candidate: Arc::new(ProspectiveCandidate {
391				commitments: candidate.commitments,
392				persisted_validation_data,
393				pov_hash: candidate.descriptor.pov_hash(),
394				validation_code_hash: candidate.descriptor.validation_code_hash(),
395			}),
396		})
397	}
398}
399
400impl HypotheticalOrConcreteCandidate for CandidateEntry {
401	fn commitments(&self) -> Option<&CandidateCommitments> {
402		Some(&self.candidate.commitments)
403	}
404
405	fn persisted_validation_data(&self) -> Option<&PersistedValidationData> {
406		Some(&self.candidate.persisted_validation_data)
407	}
408
409	fn validation_code_hash(&self) -> Option<ValidationCodeHash> {
410		Some(self.candidate.validation_code_hash)
411	}
412
413	fn parent_head_data_hash(&self) -> Hash {
414		self.parent_head_data_hash
415	}
416
417	fn output_head_data_hash(&self) -> Option<Hash> {
418		Some(self.output_head_data_hash)
419	}
420
421	fn relay_parent(&self) -> Hash {
422		self.relay_parent
423	}
424
425	fn candidate_hash(&self) -> CandidateHash {
426		self.candidate_hash
427	}
428}
429
430/// A candidate existing on-chain but pending availability, for special treatment
431/// in the [`Scope`].
432#[derive(Debug, Clone)]
433pub(crate) struct PendingAvailability {
434	/// The candidate hash.
435	pub candidate_hash: CandidateHash,
436	/// The block info of the relay parent.
437	pub relay_parent: RelayChainBlockInfo,
438}
439
440/// The scope of a [`FragmentChain`].
441#[derive(Debug, Clone)]
442pub(crate) struct Scope {
443	/// The relay parent we're currently building on top of.
444	relay_parent: RelayChainBlockInfo,
445	/// The other relay parents candidates are allowed to build upon, mapped by the block number.
446	ancestors: BTreeMap<BlockNumber, RelayChainBlockInfo>,
447	/// The other relay parents candidates are allowed to build upon, mapped by the block hash.
448	ancestors_by_hash: HashMap<Hash, RelayChainBlockInfo>,
449	/// The candidates pending availability at this block.
450	pending_availability: Vec<PendingAvailability>,
451	/// The base constraints derived from the latest included candidate.
452	base_constraints: Constraints,
453	/// Maximum length of the best backable chain (including candidates pending availability).
454	max_backable_len: usize,
455}
456
457/// An error variant indicating that ancestors provided to a scope
458/// had unexpected order.
459#[derive(Debug)]
460pub(crate) struct UnexpectedAncestor {
461	/// The block number that this error occurred at.
462	/// Allow as dead code, but it's being read in logs.
463	#[allow(dead_code)]
464	pub number: BlockNumber,
465	/// The previous seen block number, which did not match `number`.
466	/// Allow as dead code, but it's being read in logs.
467	#[allow(dead_code)]
468	pub prev: BlockNumber,
469}
470
471impl Scope {
472	/// Define a new [`Scope`].
473	///
474	/// `max_backable_len` should be the maximum length of the best backable chain (excluding
475	/// pending availability candidates).
476	///
477	/// Ancestors should be in reverse order, starting with the parent
478	/// of the `relay_parent`, and proceeding backwards in block number
479	/// increments of 1. Ancestors not following these conditions will be
480	/// rejected.
481	///
482	/// This function will only consume ancestors up to the `min_relay_parent_number` of
483	/// the `base_constraints`.
484	///
485	/// Only ancestors whose children have the same session as the relay-parent's
486	/// children should be provided.
487	///
488	/// It is allowed to provide zero ancestors.
489	pub fn with_ancestors(
490		relay_parent: RelayChainBlockInfo,
491		base_constraints: Constraints,
492		pending_availability: Vec<PendingAvailability>,
493		max_backable_len: usize,
494		ancestors: impl IntoIterator<Item = RelayChainBlockInfo>,
495	) -> Result<Self, UnexpectedAncestor> {
496		let mut ancestors_map = BTreeMap::new();
497		let mut ancestors_by_hash = HashMap::new();
498		{
499			let mut prev = relay_parent.number;
500			for ancestor in ancestors {
501				if prev == 0 {
502					return Err(UnexpectedAncestor { number: ancestor.number, prev })
503				} else if ancestor.number != prev - 1 {
504					return Err(UnexpectedAncestor { number: ancestor.number, prev })
505				} else if prev == base_constraints.min_relay_parent_number {
506					break
507				} else {
508					prev = ancestor.number;
509					ancestors_by_hash.insert(ancestor.hash, ancestor.clone());
510					ancestors_map.insert(ancestor.number, ancestor);
511				}
512			}
513		}
514
515		Ok(Scope {
516			relay_parent,
517			base_constraints,
518			max_backable_len: max_backable_len + pending_availability.len(),
519			pending_availability,
520			ancestors: ancestors_map,
521			ancestors_by_hash,
522		})
523	}
524
525	/// Get the earliest relay-parent allowed in the scope of the fragment chain.
526	pub fn earliest_relay_parent(&self) -> RelayChainBlockInfo {
527		self.ancestors
528			.iter()
529			.next()
530			.map(|(_, v)| v.clone())
531			.unwrap_or_else(|| self.relay_parent.clone())
532	}
533
534	/// Get the relay ancestor of the fragment chain by hash.
535	pub fn ancestor(&self, hash: &Hash) -> Option<RelayChainBlockInfo> {
536		if hash == &self.relay_parent.hash {
537			return Some(self.relay_parent.clone())
538		}
539
540		self.ancestors_by_hash.get(hash).map(|info| info.clone())
541	}
542
543	/// Get the base constraints of the scope
544	pub fn base_constraints(&self) -> &Constraints {
545		&self.base_constraints
546	}
547
548	/// Whether the candidate in question is one pending availability in this scope.
549	fn get_pending_availability(
550		&self,
551		candidate_hash: &CandidateHash,
552	) -> Option<&PendingAvailability> {
553		self.pending_availability.iter().find(|c| &c.candidate_hash == candidate_hash)
554	}
555}
556
557#[cfg_attr(test, derive(Clone))]
558/// A node that is part of a `BackedChain`. It holds constraints based on the ancestors in the
559/// chain.
560struct FragmentNode {
561	fragment: Fragment,
562	candidate_hash: CandidateHash,
563	cumulative_modifications: ConstraintModifications,
564	parent_head_data_hash: Hash,
565	output_head_data_hash: Hash,
566}
567
568impl FragmentNode {
569	fn relay_parent(&self) -> Hash {
570		self.fragment.relay_parent().hash
571	}
572}
573
574impl From<&FragmentNode> for CandidateEntry {
575	fn from(node: &FragmentNode) -> Self {
576		// We don't need to perform the checks done in `CandidateEntry::new()`, since a
577		// `FragmentNode` always comes from a `CandidateEntry`
578		Self {
579			candidate_hash: node.candidate_hash,
580			parent_head_data_hash: node.parent_head_data_hash,
581			output_head_data_hash: node.output_head_data_hash,
582			candidate: node.fragment.candidate_clone(),
583			relay_parent: node.relay_parent(),
584			// A fragment node is always backed.
585			state: CandidateState::Backed,
586		}
587	}
588}
589
590/// A candidate chain of backed/backable candidates.
591/// Includes the candidates pending availability and candidates which may be backed on-chain.
592#[derive(Default)]
593#[cfg_attr(test, derive(Clone))]
594struct BackedChain {
595	// Holds the candidate chain.
596	chain: Vec<FragmentNode>,
597	// Index from head data hash to the candidate hash with that head data as a parent.
598	// Only contains the candidates present in the `chain`.
599	by_parent_head: HashMap<Hash, CandidateHash>,
600	// Index from head data hash to the candidate hash outputting that head data.
601	// Only contains the candidates present in the `chain`.
602	by_output_head: HashMap<Hash, CandidateHash>,
603	// A set of the candidate hashes in the `chain`.
604	candidates: HashSet<CandidateHash>,
605}
606
607impl BackedChain {
608	fn push(&mut self, candidate: FragmentNode) {
609		self.candidates.insert(candidate.candidate_hash);
610		self.by_parent_head
611			.insert(candidate.parent_head_data_hash, candidate.candidate_hash);
612		self.by_output_head
613			.insert(candidate.output_head_data_hash, candidate.candidate_hash);
614		self.chain.push(candidate);
615	}
616
617	fn clear(&mut self) -> Vec<FragmentNode> {
618		self.by_parent_head.clear();
619		self.by_output_head.clear();
620		self.candidates.clear();
621
622		std::mem::take(&mut self.chain)
623	}
624
625	fn revert_to_parent_hash<'a>(
626		&'a mut self,
627		parent_head_data_hash: &Hash,
628	) -> impl Iterator<Item = FragmentNode> + 'a {
629		let mut found_index = None;
630		for index in 0..self.chain.len() {
631			let node = &self.chain[index];
632
633			if found_index.is_some() {
634				self.by_parent_head.remove(&node.parent_head_data_hash);
635				self.by_output_head.remove(&node.output_head_data_hash);
636				self.candidates.remove(&node.candidate_hash);
637			} else if &node.output_head_data_hash == parent_head_data_hash {
638				found_index = Some(index);
639			}
640		}
641
642		if let Some(index) = found_index {
643			self.chain.drain(min(index + 1, self.chain.len())..)
644		} else {
645			// Don't remove anything, but use drain to satisfy the compiler.
646			self.chain.drain(0..0)
647		}
648	}
649
650	fn contains(&self, hash: &CandidateHash) -> bool {
651		self.candidates.contains(hash)
652	}
653}
654
655/// This is the fragment chain specific to an active leaf.
656///
657/// It holds the current best backable candidate chain, as well as potential candidates
658/// which could become connected to the chain in the future or which could even overwrite the
659/// existing chain.
660#[cfg_attr(test, derive(Clone))]
661pub(crate) struct FragmentChain {
662	// The current scope, which dictates the on-chain operating constraints that all future
663	// candidates must adhere to.
664	scope: Scope,
665
666	// The current best chain of backable candidates. It only contains candidates which build on
667	// top of each other and which have reached the backing quorum. In the presence of potential
668	// forks, this chain will pick a fork according to the `fork_selection_rule`.
669	best_chain: BackedChain,
670
671	// The potential candidate storage. Contains candidates which are not yet part of the `chain`
672	// but may become in the future. These can form any tree shape as well as contain any
673	// unconnected candidates for which we don't know the parent.
674	unconnected: CandidateStorage,
675}
676
677impl FragmentChain {
678	/// Create a new [`FragmentChain`] with the given scope and populate it with the candidates
679	/// pending availability.
680	pub fn init(scope: Scope, mut candidates_pending_availability: CandidateStorage) -> Self {
681		let mut fragment_chain = Self {
682			scope,
683			best_chain: BackedChain::default(),
684			unconnected: CandidateStorage::default(),
685		};
686
687		// We only need to populate the best backable chain. Candidates pending availability must
688		// form a chain with the latest included head.
689		fragment_chain.populate_chain(&mut candidates_pending_availability);
690
691		fragment_chain
692	}
693
694	/// Populate the [`FragmentChain`] given the new candidates pending availability and the
695	/// optional previous fragment chain (of the previous relay parent).
696	pub fn populate_from_previous(&mut self, prev_fragment_chain: &FragmentChain) {
697		let mut prev_storage = prev_fragment_chain.unconnected.clone();
698
699		for candidate in prev_fragment_chain.best_chain.chain.iter() {
700			// If they used to be pending availability, don't add them. This is fine
701			// because:
702			// - if they still are pending availability, they have already been added to the new
703			//   storage.
704			// - if they were included, no point in keeping them.
705			//
706			// This cannot happen for the candidates in the unconnected storage. The pending
707			// availability candidates will always be part of the best chain.
708			if prev_fragment_chain
709				.scope
710				.get_pending_availability(&candidate.candidate_hash)
711				.is_none()
712			{
713				let _ = prev_storage.add_candidate_entry(candidate.into());
714			}
715		}
716
717		// First populate the best backable chain.
718		self.populate_chain(&mut prev_storage);
719
720		// Now that we picked the best backable chain, trim the forks generated by candidates which
721		// are not present in the best chain.
722		self.trim_uneligible_forks(&mut prev_storage, None);
723
724		// Finally, keep any candidates which haven't been trimmed but still have potential.
725		self.populate_unconnected_potential_candidates(prev_storage);
726	}
727
728	/// Get the scope of the [`FragmentChain`].
729	pub fn scope(&self) -> &Scope {
730		&self.scope
731	}
732
733	/// Returns the number of candidates in the best backable chain.
734	pub fn best_chain_len(&self) -> usize {
735		self.best_chain.chain.len()
736	}
737
738	/// Returns the number of candidates in unconnected potential storage.
739	pub fn unconnected_len(&self) -> usize {
740		self.unconnected.len()
741	}
742
743	/// Whether the candidate exists as part of the unconnected potential candidates.
744	pub fn contains_unconnected_candidate(&self, candidate: &CandidateHash) -> bool {
745		self.unconnected.contains(candidate)
746	}
747
748	/// Return a vector of the chain's candidate hashes, in-order.
749	pub fn best_chain_vec(&self) -> Vec<CandidateHash> {
750		self.best_chain.chain.iter().map(|candidate| candidate.candidate_hash).collect()
751	}
752
753	/// Return a vector of the unconnected potential candidate hashes, in arbitrary order.
754	pub fn unconnected(&self) -> impl Iterator<Item = &CandidateEntry> {
755		self.unconnected.candidates()
756	}
757
758	/// Return whether this candidate is backed in this chain or the unconnected storage.
759	pub fn is_candidate_backed(&self, hash: &CandidateHash) -> bool {
760		self.best_chain.candidates.contains(hash) ||
761			matches!(
762				self.unconnected.by_candidate_hash.get(hash),
763				Some(candidate) if candidate.state == CandidateState::Backed
764			)
765	}
766
767	/// Mark a candidate as backed. This can trigger a recreation of the best backable chain.
768	pub fn candidate_backed(&mut self, newly_backed_candidate: &CandidateHash) {
769		// Already backed.
770		if self.best_chain.candidates.contains(newly_backed_candidate) {
771			return
772		}
773		let Some(parent_head_hash) = self
774			.unconnected
775			.by_candidate_hash
776			.get(newly_backed_candidate)
777			.map(|entry| entry.parent_head_data_hash)
778		else {
779			// Candidate is not in unconnected storage.
780			return
781		};
782
783		// Mark the candidate hash.
784		self.unconnected.mark_backed(newly_backed_candidate);
785
786		// Revert to parent_head_hash
787		if !self.revert_to(&parent_head_hash) {
788			// If nothing was reverted, there is nothing we can do for now.
789			return
790		}
791
792		let mut prev_storage = std::mem::take(&mut self.unconnected);
793
794		// Populate the chain.
795		self.populate_chain(&mut prev_storage);
796
797		// Now that we picked the best backable chain, trim the forks generated by candidates
798		// which are not present in the best chain. We can start trimming from this candidate
799		// onwards.
800		self.trim_uneligible_forks(&mut prev_storage, Some(parent_head_hash));
801
802		// Finally, keep any candidates which haven't been trimmed but still have potential.
803		self.populate_unconnected_potential_candidates(prev_storage);
804	}
805
806	/// Checks if this candidate could be added in the future to this chain.
807	/// This will return `Error::CandidateAlreadyKnown` if the candidate is already in the chain or
808	/// the unconnected candidate storage.
809	pub fn can_add_candidate_as_potential(
810		&self,
811		candidate: &impl HypotheticalOrConcreteCandidate,
812	) -> Result<(), Error> {
813		let candidate_hash = candidate.candidate_hash();
814
815		if self.best_chain.contains(&candidate_hash) || self.unconnected.contains(&candidate_hash) {
816			return Err(Error::CandidateAlreadyKnown)
817		}
818
819		self.check_potential(candidate)
820	}
821
822	/// Try adding a seconded candidate, if the candidate has potential. It will never be added to
823	/// the chain directly in the seconded state, it will only be part of the unconnected storage.
824	pub fn try_adding_seconded_candidate(
825		&mut self,
826		candidate: &CandidateEntry,
827	) -> Result<(), Error> {
828		if candidate.state == CandidateState::Backed {
829			return Err(Error::IntroduceBackedCandidate);
830		}
831
832		self.can_add_candidate_as_potential(candidate)?;
833
834		// This clone is cheap, as it uses an Arc for the expensive stuff.
835		// We can't consume the candidate because other fragment chains may use it also.
836		self.unconnected.add_candidate_entry(candidate.clone())?;
837
838		Ok(())
839	}
840
841	/// Try getting the full head data associated with this hash.
842	pub fn get_head_data_by_hash(&self, head_data_hash: &Hash) -> Option<HeadData> {
843		// First, see if this is the head data of the latest included candidate.
844		let required_parent = &self.scope.base_constraints().required_parent;
845		if &required_parent.hash() == head_data_hash {
846			return Some(required_parent.clone())
847		}
848
849		// Cheaply check if the head data is in the best backable chain.
850		let has_head_data_in_chain = self
851			.best_chain
852			.by_parent_head
853			.get(head_data_hash)
854			.or_else(|| self.best_chain.by_output_head.get(head_data_hash))
855			.is_some();
856
857		if has_head_data_in_chain {
858			return self.best_chain.chain.iter().find_map(|candidate| {
859				if &candidate.parent_head_data_hash == head_data_hash {
860					Some(
861						candidate
862							.fragment
863							.candidate()
864							.persisted_validation_data
865							.parent_head
866							.clone(),
867					)
868				} else if &candidate.output_head_data_hash == head_data_hash {
869					Some(candidate.fragment.candidate().commitments.head_data.clone())
870				} else {
871					None
872				}
873			});
874		}
875
876		// Lastly, try getting the head data from the unconnected candidates.
877		self.unconnected.head_data_by_hash(head_data_hash).cloned()
878	}
879
880	/// Select `count` candidates after the given `ancestors` which can be backed on chain next.
881	///
882	/// The intention of the `ancestors` is to allow queries on the basis of
883	/// one or more candidates which were previously pending availability becoming
884	/// available or candidates timing out.
885	pub fn find_backable_chain(
886		&self,
887		ancestors: Ancestors,
888		count: u32,
889	) -> Vec<(CandidateHash, Hash)> {
890		if count == 0 {
891			return vec![]
892		}
893		let base_pos = self.find_ancestor_path(ancestors);
894
895		let actual_end_index =
896			std::cmp::min(base_pos + (count as usize), self.best_chain.chain.len());
897		let mut res = Vec::with_capacity(actual_end_index - base_pos);
898
899		for elem in &self.best_chain.chain[base_pos..actual_end_index] {
900			// Only supply candidates which are not yet pending availability. `ancestors` should
901			// have already contained them, but check just in case.
902			if self.scope.get_pending_availability(&elem.candidate_hash).is_none() {
903				res.push((elem.candidate_hash, elem.relay_parent()));
904			} else {
905				break
906			}
907		}
908
909		res
910	}
911
912	// Tries to orders the ancestors into a viable path from root to the last one.
913	// Stops when the ancestors are all used or when a node in the chain is not present in the
914	// ancestor set. Returns the index in the chain were the search stopped.
915	fn find_ancestor_path(&self, mut ancestors: Ancestors) -> usize {
916		if self.best_chain.chain.is_empty() {
917			return 0;
918		}
919
920		for (index, candidate) in self.best_chain.chain.iter().enumerate() {
921			if !ancestors.remove(&candidate.candidate_hash) {
922				return index
923			}
924		}
925
926		// This means that we found the entire chain in the ancestor set. There won't be anything
927		// left to back.
928		self.best_chain.chain.len()
929	}
930
931	// Return the earliest relay parent a new candidate can have in order to be added to the chain
932	// right now. This is the relay parent of the last candidate in the chain.
933	// The value returned may not be valid if we want to add a candidate pending availability, which
934	// may have a relay parent which is out of scope. Special handling is needed in that case.
935	// `None` is returned if the candidate's relay parent info cannot be found.
936	fn earliest_relay_parent(&self) -> Option<RelayChainBlockInfo> {
937		if let Some(last_candidate) = self.best_chain.chain.last() {
938			self.scope.ancestor(&last_candidate.relay_parent()).or_else(|| {
939				// if the relay-parent is out of scope _and_ it is in the chain,
940				// it must be a candidate pending availability.
941				self.scope
942					.get_pending_availability(&last_candidate.candidate_hash)
943					.map(|c| c.relay_parent.clone())
944			})
945		} else {
946			Some(self.scope.earliest_relay_parent())
947		}
948	}
949
950	// Return the earliest relay parent a potential candidate may have for it to ever be added to
951	// the chain. This is the relay parent of the last candidate pending availability or the
952	// earliest relay parent in scope.
953	fn earliest_relay_parent_pending_availability(&self) -> RelayChainBlockInfo {
954		self.best_chain
955			.chain
956			.iter()
957			.rev()
958			.find_map(|candidate| {
959				self.scope
960					.get_pending_availability(&candidate.candidate_hash)
961					.map(|c| c.relay_parent.clone())
962			})
963			.unwrap_or_else(|| self.scope.earliest_relay_parent())
964	}
965
966	// Populate the unconnected potential candidate storage starting from a previous storage.
967	fn populate_unconnected_potential_candidates(&mut self, old_storage: CandidateStorage) {
968		for candidate in old_storage.by_candidate_hash.into_values() {
969			// Sanity check, all pending availability candidates should be already present in the
970			// chain.
971			if self.scope.get_pending_availability(&candidate.candidate_hash).is_some() {
972				continue
973			}
974
975			match self.can_add_candidate_as_potential(&candidate) {
976				Ok(()) => {
977					let _ = self.unconnected.add_candidate_entry(candidate);
978				},
979				// Swallow these errors as they can legitimately happen when pruning stale
980				// candidates.
981				Err(_) => {},
982			};
983		}
984	}
985
986	// Check whether a candidate outputting this head data would introduce a cycle or multiple paths
987	// to the same state. Trivial 0-length cycles are checked in `CandidateEntry::new`.
988	fn check_cycles_or_invalid_tree(&self, output_head_hash: &Hash) -> Result<(), Error> {
989		// this should catch a cycle where this candidate would point back to the parent of some
990		// candidate in the chain.
991		if self.best_chain.by_parent_head.contains_key(output_head_hash) {
992			return Err(Error::Cycle)
993		}
994
995		// multiple paths to the same state, which can't happen for a chain.
996		if self.best_chain.by_output_head.contains_key(output_head_hash) {
997			return Err(Error::MultiplePaths)
998		}
999
1000		Ok(())
1001	}
1002
1003	// Checks the potential of a candidate to be added to the chain now or in the future.
1004	// It works both with concrete candidates for which we have the full PVD and committed receipt,
1005	// but also does some more basic checks for incomplete candidates (before even fetching them).
1006	fn check_potential(
1007		&self,
1008		candidate: &impl HypotheticalOrConcreteCandidate,
1009	) -> Result<(), Error> {
1010		let relay_parent = candidate.relay_parent();
1011		let parent_head_hash = candidate.parent_head_data_hash();
1012
1013		// trivial 0-length cycle.
1014		if let Some(output_head_hash) = candidate.output_head_data_hash() {
1015			if parent_head_hash == output_head_hash {
1016				return Err(Error::ZeroLengthCycle)
1017			}
1018		}
1019
1020		// Check if the relay parent is in scope.
1021		let Some(relay_parent) = self.scope.ancestor(&relay_parent) else {
1022			return Err(Error::RelayParentNotInScope(
1023				relay_parent,
1024				self.scope.earliest_relay_parent().hash,
1025			))
1026		};
1027
1028		// Check if the relay parent moved backwards from the latest candidate pending availability.
1029		let earliest_rp_of_pending_availability = self.earliest_relay_parent_pending_availability();
1030		if relay_parent.number < earliest_rp_of_pending_availability.number {
1031			return Err(Error::RelayParentPrecedesCandidatePendingAvailability(
1032				relay_parent.hash,
1033				earliest_rp_of_pending_availability.hash,
1034			))
1035		}
1036
1037		// If it's a fork with a backed candidate in the current chain.
1038		if let Some(other_candidate) = self.best_chain.by_parent_head.get(&parent_head_hash) {
1039			if self.scope().get_pending_availability(other_candidate).is_some() {
1040				// Cannot accept a fork with a candidate pending availability.
1041				return Err(Error::ForkWithCandidatePendingAvailability(*other_candidate))
1042			}
1043
1044			// If the candidate is backed and in the current chain, accept only a candidate
1045			// according to the fork selection rule.
1046			if fork_selection_rule(other_candidate, &candidate.candidate_hash()) == Ordering::Less {
1047				return Err(Error::ForkChoiceRule(*other_candidate))
1048			}
1049		}
1050
1051		// Try seeing if the parent candidate is in the current chain or if it is the latest
1052		// included candidate. If so, get the constraints the candidate must satisfy.
1053		let (is_unconnected, constraints, maybe_min_relay_parent_number) =
1054			if let Some(parent_candidate) = self.best_chain.by_output_head.get(&parent_head_hash) {
1055				let Some(parent_candidate) =
1056					self.best_chain.chain.iter().find(|c| &c.candidate_hash == parent_candidate)
1057				else {
1058					// Should never really happen.
1059					return Err(Error::ParentCandidateNotFound)
1060				};
1061
1062				(
1063					false,
1064					self.scope
1065						.base_constraints
1066						.apply_modifications(&parent_candidate.cumulative_modifications)
1067						.map_err(Error::ComputeConstraints)?,
1068					self.scope.ancestor(&parent_candidate.relay_parent()).map(|rp| rp.number),
1069				)
1070			} else if self.scope.base_constraints.required_parent.hash() == parent_head_hash {
1071				// It builds on the latest included candidate.
1072				(false, self.scope.base_constraints.clone(), None)
1073			} else {
1074				// The parent is not yet part of the chain
1075				(true, self.scope.base_constraints.clone(), None)
1076			};
1077
1078		// Check for cycles or invalid tree transitions.
1079		if let Some(ref output_head_hash) = candidate.output_head_data_hash() {
1080			self.check_cycles_or_invalid_tree(output_head_hash)?;
1081		}
1082
1083		// Check against constraints if we have a full concrete candidate.
1084		if let (Some(commitments), Some(pvd), Some(validation_code_hash)) = (
1085			candidate.commitments(),
1086			candidate.persisted_validation_data(),
1087			candidate.validation_code_hash(),
1088		) {
1089			if is_unconnected {
1090				// If the parent is not yet part of the chain, we can check the commitments only
1091				// if we have the full candidate.
1092				return validate_commitments(
1093					&self.scope.base_constraints,
1094					&relay_parent,
1095					commitments,
1096					&validation_code_hash,
1097				)
1098				.map_err(Error::CheckAgainstConstraints)
1099			}
1100			Fragment::check_against_constraints(
1101				&relay_parent,
1102				&constraints,
1103				commitments,
1104				&validation_code_hash,
1105				pvd,
1106			)
1107			.map_err(Error::CheckAgainstConstraints)?;
1108		}
1109
1110		if relay_parent.number < constraints.min_relay_parent_number {
1111			return Err(Error::RelayParentMovedBackwards)
1112		}
1113
1114		if let Some(earliest_rp) = maybe_min_relay_parent_number {
1115			if relay_parent.number < earliest_rp {
1116				return Err(Error::RelayParentMovedBackwards)
1117			}
1118		}
1119
1120		Ok(())
1121	}
1122
1123	// Once the backable chain was populated, trim the forks generated by candidates which
1124	// are not present in the best chain. Fan this out into a full breadth-first search.
1125	// If `starting_point` is `Some()`, start the search from the candidates having this parent head
1126	// hash.
1127	fn trim_uneligible_forks(&self, storage: &mut CandidateStorage, starting_point: Option<Hash>) {
1128		// Start out with the candidates in the chain. They are all valid candidates.
1129		let mut queue: VecDeque<_> = if let Some(starting_point) = starting_point {
1130			[(starting_point, true)].into_iter().collect()
1131		} else {
1132			if self.best_chain.chain.is_empty() {
1133				[(self.scope.base_constraints.required_parent.hash(), true)]
1134					.into_iter()
1135					.collect()
1136			} else {
1137				self.best_chain.chain.iter().map(|c| (c.parent_head_data_hash, true)).collect()
1138			}
1139		};
1140		// To make sure that cycles don't make us loop forever, keep track of the visited parent
1141		// heads.
1142		let mut visited = HashSet::new();
1143
1144		while let Some((parent, parent_has_potential)) = queue.pop_front() {
1145			visited.insert(parent);
1146
1147			let Some(children) = storage.by_parent_head.get(&parent) else { continue };
1148			// Cannot remove while iterating so store them here temporarily.
1149			let mut to_remove = vec![];
1150
1151			for child_hash in children.iter() {
1152				let Some(child) = storage.by_candidate_hash.get(child_hash) else { continue };
1153
1154				// Already visited this parent. Either is a cycle or multiple paths that lead to the
1155				// same candidate. Either way, stop this branch to avoid looping forever.
1156				if visited.contains(&child.output_head_data_hash) {
1157					continue
1158				}
1159
1160				// Only keep a candidate if its full ancestry was already kept as potential and this
1161				// candidate itself has potential.
1162				if parent_has_potential && self.check_potential(child).is_ok() {
1163					queue.push_back((child.output_head_data_hash, true));
1164				} else {
1165					// Otherwise, remove this candidate and continue looping for its children, but
1166					// mark the parent's potential as `false`. We only want to remove its
1167					// children.
1168					to_remove.push(*child_hash);
1169					queue.push_back((child.output_head_data_hash, false));
1170				}
1171			}
1172
1173			for hash in to_remove {
1174				storage.remove_candidate(&hash);
1175			}
1176		}
1177	}
1178
1179	// Populate the fragment chain with candidates from the supplied `CandidateStorage`.
1180	// Can be called by the constructor or when backing a new candidate.
1181	// When this is called, it may cause the previous chain to be completely erased or it may add
1182	// more than one candidate.
1183	fn populate_chain(&mut self, storage: &mut CandidateStorage) {
1184		let mut cumulative_modifications =
1185			if let Some(last_candidate) = self.best_chain.chain.last() {
1186				last_candidate.cumulative_modifications.clone()
1187			} else {
1188				ConstraintModifications::identity()
1189			};
1190		let Some(mut earliest_rp) = self.earliest_relay_parent() else { return };
1191
1192		loop {
1193			if self.best_chain.chain.len() >= self.scope.max_backable_len {
1194				break;
1195			}
1196
1197			let child_constraints =
1198				match self.scope.base_constraints.apply_modifications(&cumulative_modifications) {
1199					Err(e) => {
1200						gum::debug!(
1201							target: LOG_TARGET,
1202							new_parent_head = ?cumulative_modifications.required_parent,
1203							err = ?e,
1204							"Failed to apply modifications",
1205						);
1206
1207						break
1208					},
1209					Ok(c) => c,
1210				};
1211
1212			let required_head_hash = child_constraints.required_parent.hash();
1213
1214			// Select the few possible backed/backable children which can be added to the chain
1215			// right now.
1216			let possible_children = storage
1217				.possible_backed_para_children(&required_head_hash)
1218				.filter_map(|candidate| {
1219					// Only select a candidate if:
1220					// 1. it does not introduce a fork or a cycle.
1221					// 2. parent hash is correct.
1222					// 3. relay-parent does not move backwards.
1223					// 4. all non-pending-availability candidates have relay-parent in scope.
1224					// 5. candidate outputs fulfill constraints
1225
1226					let pending = self.scope.get_pending_availability(&candidate.candidate_hash);
1227					let Some(relay_parent) = pending
1228						.map(|p| p.relay_parent.clone())
1229						.or_else(|| self.scope.ancestor(&candidate.relay_parent))
1230					else {
1231						return None
1232					};
1233
1234					if self.check_cycles_or_invalid_tree(&candidate.output_head_data_hash).is_err()
1235					{
1236						return None
1237					}
1238
1239					// require: candidates don't move backwards
1240					// and only pending availability candidates can be out-of-scope.
1241					//
1242					// earliest_rp can be before the earliest relay parent in the scope
1243					// when the parent is a pending availability candidate as well, but
1244					// only other pending candidates can have a relay parent out of scope.
1245					let min_relay_parent_number = pending
1246						.map(|p| match self.best_chain.chain.len() {
1247							0 => p.relay_parent.number,
1248							_ => earliest_rp.number,
1249						})
1250						.unwrap_or_else(|| earliest_rp.number);
1251
1252					if relay_parent.number < min_relay_parent_number {
1253						return None // relay parent moved backwards.
1254					}
1255
1256					// don't add candidates if they're already present in the chain.
1257					// this can never happen, as candidates can only be duplicated if there's a
1258					// cycle and we shouldn't have allowed for a cycle to be chained.
1259					if self.best_chain.contains(&candidate.candidate_hash) {
1260						return None
1261					}
1262
1263					let fragment = {
1264						let mut constraints = child_constraints.clone();
1265						if let Some(ref p) = pending {
1266							// overwrite for candidates pending availability as a special-case.
1267							constraints.min_relay_parent_number = p.relay_parent.number;
1268						}
1269
1270						let f = Fragment::new(
1271							relay_parent.clone(),
1272							constraints,
1273							// It's cheap to clone because it's wrapped in an Arc
1274							candidate.candidate.clone(),
1275						);
1276
1277						match f {
1278							Ok(f) => f,
1279							Err(e) => {
1280								gum::debug!(
1281									target: LOG_TARGET,
1282									err = ?e,
1283									?relay_parent,
1284									candidate_hash = ?candidate.candidate_hash,
1285									"Failed to instantiate fragment",
1286								);
1287
1288								return None
1289							},
1290						}
1291					};
1292
1293					Some((
1294						fragment,
1295						candidate.candidate_hash,
1296						candidate.output_head_data_hash,
1297						candidate.parent_head_data_hash,
1298					))
1299				});
1300
1301			// Choose the best candidate.
1302			let best_candidate =
1303				possible_children.min_by(|(_, ref child1, _, _), (_, ref child2, _, _)| {
1304					// Always pick a candidate pending availability as best.
1305					if self.scope.get_pending_availability(child1).is_some() {
1306						Ordering::Less
1307					} else if self.scope.get_pending_availability(child2).is_some() {
1308						Ordering::Greater
1309					} else {
1310						// Otherwise, use the fork selection rule.
1311						fork_selection_rule(child1, child2)
1312					}
1313				});
1314
1315			if let Some((fragment, candidate_hash, output_head_data_hash, parent_head_data_hash)) =
1316				best_candidate
1317			{
1318				// Remove the candidate from storage.
1319				storage.remove_candidate(&candidate_hash);
1320
1321				// Update the cumulative constraint modifications.
1322				cumulative_modifications.stack(fragment.constraint_modifications());
1323				// Update the earliest rp
1324				earliest_rp = fragment.relay_parent().clone();
1325
1326				let node = FragmentNode {
1327					fragment,
1328					candidate_hash,
1329					parent_head_data_hash,
1330					output_head_data_hash,
1331					cumulative_modifications: cumulative_modifications.clone(),
1332				};
1333
1334				// Add the candidate to the chain now.
1335				self.best_chain.push(node);
1336			} else {
1337				break
1338			}
1339		}
1340	}
1341
1342	// Revert the best backable chain so that the last candidate will be one outputting the given
1343	// `parent_head_hash`. If the `parent_head_hash` is exactly the required parent of the base
1344	// constraints (builds on the latest included candidate), revert the entire chain.
1345	// Return false if we couldn't find the parent head hash.
1346	fn revert_to(&mut self, parent_head_hash: &Hash) -> bool {
1347		let mut removed_items = None;
1348		if &self.scope.base_constraints.required_parent.hash() == parent_head_hash {
1349			removed_items = Some(self.best_chain.clear());
1350		}
1351
1352		if removed_items.is_none() && self.best_chain.by_output_head.contains_key(parent_head_hash)
1353		{
1354			removed_items = Some(self.best_chain.revert_to_parent_hash(parent_head_hash).collect());
1355		}
1356
1357		let Some(removed_items) = removed_items else { return false };
1358
1359		// Even if it's empty, we need to return true, because we'll be able to add a new candidate
1360		// to the chain.
1361		for node in &removed_items {
1362			let _ = self.unconnected.add_candidate_entry(node.into());
1363		}
1364		true
1365	}
1366}