Trait scale_info::prelude::marker::Send
1.0.0 · pub unsafe auto trait Send { }
Types that can be transferred across thread boundaries.
This trait is automatically implemented when the compiler determines it’s appropriate.
An example of a non-Send type is the reference-counting pointer rc::Rc. If two threads attempt to clone Rcs that point to the same reference-counted value, they might try to update the reference count at the same time, which is undefined behavior because Rc doesn't use atomic operations. Its cousin sync::Arc does use atomic operations (incurring some overhead) and thus is Send.
See the Nomicon and the Sync trait for more details.
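The Rc/Arc contrast above can be seen directly in code: an Arc clone moves into a spawned thread, while the same program written with Rc is rejected at compile time. A minimal sketch (the variable names are illustrative):

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    // Arc<Vec<i32>> is Send, so a clone may be moved into another thread.
    let shared = Arc::new(vec![1, 2, 3]);
    let clone = Arc::clone(&shared);
    let handle = thread::spawn(move || clone.iter().sum::<i32>());
    assert_eq!(handle.join().unwrap(), 6);

    // Rc<Vec<i32>> is !Send: the equivalent code with Rc does not compile.
    // let rc = std::rc::Rc::new(vec![1, 2, 3]);
    // thread::spawn(move || rc.len());
    // error[E0277]: `Rc<Vec<i32>>` cannot be sent between threads safely
}
```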
Implementors
impl !Send for Arguments<'_>
impl !Send for LocalWaker
impl !Send for Args
impl !Send for ArgsOs
impl Send for bytes::bytes::Bytes
impl Send for BytesMut
impl Send for scale_info::prelude::string::Drain<'_>
impl Send for core::ffi::c_str::Bytes<'_>
impl Send for Waker
impl<'a> Send for IoSlice<'a>
impl<'a> Send for IoSliceMut<'a>
impl<'a, T, O> Send for bitvec::slice::iter::Iter<'a, T, O>
impl<'a, T, O> Send for bitvec::slice::iter::IterMut<'a, T, O>
impl<'a, T, const CAP: usize> Send for arrayvec::arrayvec::Drain<'a, T, CAP> where T: Send
impl<Dyn> Send for DynMetadata<Dyn> where Dyn: ?Sized
impl<M, T, O> Send for BitRef<'_, M, T, O>
impl<T> !Send for *const T where T: ?Sized
impl<T> !Send for *mut T where T: ?Sized
impl<T> !Send for MappedMutexGuard<'_, T> where T: ?Sized
impl<T> !Send for MappedRwLockReadGuard<'_, T> where T: ?Sized
impl<T> !Send for MappedRwLockWriteGuard<'_, T> where T: ?Sized
impl<T> !Send for MutexGuard<'_, T> where T: ?Sized
impl<T> !Send for ReentrantLockGuard<'_, T> where T: ?Sized
impl<T> !Send for RwLockReadGuard<'_, T> where T: ?Sized
impl<T> !Send for RwLockWriteGuard<'_, T> where T: ?Sized
impl<T> !Send for NonNull<T> where T: ?Sized
NonNull pointers are not Send because the data they reference may be aliased.
impl<T> Send for BitSpanError<T> where T: BitStore
impl<T> Send for &T
impl<T> Send for MisalignError<T>
impl<T> Send for ThinBox<T> where T: Send + ?Sized
ThinBox<T> is Send if T is Send because the data is owned.
impl<T> Send for scale_info::prelude::collections::linked_list::Iter<'_, T> where T: Sync
impl<T> Send for scale_info::prelude::collections::linked_list::IterMut<'_, T> where T: Send
impl<T> Send for NonZero<T> where T: ZeroablePrimitive + Send
impl<T> Send for AtomicPtr<T>
impl<T> Send for Receiver<T> where T: Send
impl<T> Send for Sender<T> where T: Send
impl<T> Send for SyncSender<T> where T: Send
impl<T> Send for Mutex<T> where T: Send + ?Sized
impl<T> Send for OnceLock<T> where T: Send
impl<T> Send for ReentrantLock<T> where T: Send + ?Sized
impl<T> Send for RwLock<T> where T: Send + ?Sized
impl<T> Send for Cell<T> where T: Send + ?Sized
impl<T> Send for RefCell<T> where T: Send + ?Sized
impl<T> Send for ChunksExactMut<'_, T> where T: Send
impl<T> Send for ChunksMut<'_, T> where T: Send
impl<T> Send for core::slice::iter::Iter<'_, T> where T: Sync
impl<T> Send for core::slice::iter::IterMut<'_, T> where T: Send
impl<T> Send for RChunksExactMut<'_, T> where T: Send
impl<T> Send for RChunksMut<'_, T> where T: Send
impl<T> Send for JoinHandle<T>
impl<T, A> !Send for Rc<T, A>
impl<T, A> !Send for scale_info::prelude::rc::Weak<T, A>
impl<T, A> Send for scale_info::prelude::collections::linked_list::Cursor<'_, T, A>
impl<T, A> Send for scale_info::prelude::collections::linked_list::CursorMut<'_, T, A>
impl<T, A> Send for LinkedList<T, A>
impl<T, A> Send for scale_info::prelude::collections::vec_deque::Drain<'_, T, A>
impl<T, A> Send for Arc<T, A> where T: Sync + Send + ?Sized, A: Allocator + Send
impl<T, A> Send for scale_info::prelude::sync::Weak<T, A> where T: Sync + Send + ?Sized, A: Allocator + Send
impl<T, A> Send for scale_info::prelude::vec::Drain<'_, T, A>
impl<T, A> Send for scale_info::prelude::vec::IntoIter<T, A>
impl<T, O> Send for bitvec::boxed::iter::IntoIter<T, O>
impl<T, O> Send for BitBox<T, O>
impl<T, O> Send for BitSlice<T, O>
Bit-Slice Thread Safety
This allows bit-slice references to be moved across thread boundaries only when the underlying T element can tolerate concurrency.
All BitSlice references, shared or exclusive, are only threadsafe if the T element type is Send, because any given bit-slice reference may only have partial control of a memory element that is also being shared by a bit-slice reference on another thread. As such, this is never implemented for Cell<U>, but always implemented for AtomicU and U for a given unsigned integer type U.
Atomic integers safely handle concurrent writes, and cells do not allow concurrency at all, so the only missing piece is &mut BitSlice<_, U: Unsigned>. This is handled by the aliasing system that the mutable splitters employ: a mutable reference to an unsynchronized bit-slice can only cross threads when no other handle is able to exist to the elements it governs. Splitting a mutable bit-slice causes the split halves to change over to either atomics or cells, so concurrency is either safe or impossible.
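The splitter argument mirrors std's <[T]>::split_at_mut: once a mutable borrow is divided into non-overlapping halves, each half may be sent to its own thread because no other handle to its elements can exist. A minimal sketch using plain slices rather than bitvec itself (bit-slices add the further step of retyping the shared element as atomic or Cell, which plain slices don't need):

```rust
use std::thread;

fn main() {
    let mut data = [0u8; 8];
    // split_at_mut yields two disjoint &mut halves; each is Send because
    // u8 is Send and no aliasing handle to those elements remains.
    let (left, right) = data.split_at_mut(4);
    thread::scope(|s| {
        s.spawn(move || left.fill(1));
        s.spawn(move || right.fill(2));
    });
    assert_eq!(data, [1, 1, 1, 1, 2, 2, 2, 2]);
}
```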