wasmtime/store.rs

//! Wasmtime's "store" type
//!
//! This module, and its submodules, contain the `Store` type and various types
//! used to interact with it. At first glance this is a pretty confusing module
//! where you need to know the difference between:
//!
//! * `Store<T>`
//! * `StoreContext<T>`
//! * `StoreContextMut<T>`
//! * `AsContext`
//! * `AsContextMut`
//! * `StoreInner<T>`
//! * `StoreOpaque`
//! * `StoreData`
//!
//! There's... quite a lot going on here, and it's easy to be confused. This
//! comment is intended to clarify what all these types are for and why they're
//! motivated.
//!
//! First it's important to know what's "internal" and what's "external". Almost
//! everything above is defined as `pub`, but only some of the items are
//! reexported to the outside world and usable outside of this crate. Otherwise
//! all items are `pub` within this `store` module, and the `store` module is
//! private to the `wasmtime` crate. Notably `Store<T>`, `StoreContext<T>`,
//! `StoreContextMut<T>`, `AsContext`, and `AsContextMut` are all public
//! interfaces to the `wasmtime` crate. You can think of these as:
//!
//! * `Store<T>` - an owned reference to a store, the "root of everything"
//! * `StoreContext<T>` - basically `&StoreInner<T>`
//! * `StoreContextMut<T>` - more-or-less `&mut StoreInner<T>` with caveats.
//!   Explained later.
//! * `AsContext` - similar to `AsRef`, but produces `StoreContext<T>`
//! * `AsContextMut` - similar to `AsMut`, but produces `StoreContextMut<T>`
//!
//! Next comes the internal structure of the `Store<T>` itself. This looks like:
//!
//! * `Store<T>` - this type is just a pointer large. It's primarily just
//!   intended to be consumed by the outside world. Note that the "just a
//!   pointer large" property is a load-bearing implementation detail in
//!   Wasmtime. It enables the store to hand out a pointer to its own trait
//!   object which doesn't need to change over time.
//!
//! * `StoreInner<T>` - the first layer of the contents of a `Store<T>`, what's
//!   stored inside the `Box`. This is the general Rust pattern when one struct
//!   is a layer over another. The surprising part, though, is that this is
//!   further subdivided. This structure only contains things which actually
//!   need `T` itself. The downside of this structure is that it's always
//!   generic, which means its code is monomorphized into consumer crates. We
//!   strive to have things be as monomorphic as possible in `wasmtime` so this
//!   type is not heavily used.
//!
//! * `StoreOpaque` - this is the primary contents of the `StoreInner<T>` type.
//!   Stored inline in the outer type, the "opaque" here means that it's a
//!   "store" but it doesn't have access to the `T`. This is the primary
//!   "internal" reference that Wasmtime uses since `T` is rarely needed by the
//!   internals of Wasmtime.
//!
//! * `StoreData` - this is a final helper struct stored within `StoreOpaque`.
//!   All references to Wasm items in a `Store` are actually indices into a
//!   table in this structure, and the `StoreData` being separate makes it a bit
//!   easier to manage/define/work with. There's no real fundamental reason this
//!   is split out, although sometimes it's useful to have borrows into these
//!   tables separate from the rest of the `StoreOpaque`.
//!
//! A major caveat with these representations is that the internal `&mut
//! StoreInner<T>` is never handed out publicly to consumers of this crate, only
//! through a wrapper of `StoreContextMut<'_, T>`. The reason for this is that
//! we want to provide mutable, but not destructive, access to the contents of a
//! `Store`. For example if a `StoreInner<T>` were replaced with some other
//! `StoreInner<T>` then that would drop live instances, possibly those
//! currently executing beneath the current stack frame. This would not be a
//! safe operation.
//!
//! This means, though, that the `wasmtime` crate, which liberally uses `&mut
//! StoreOpaque` internally, has to be careful to never actually destroy the
//! contents of `StoreOpaque`. This is an invariant that we, as the authors of
//! `wasmtime`, must uphold for the public interface to be safe.
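// The `AsContext`/`AsContextMut` analogy above can be reduced to a
// stdlib-only sketch. All names below (`AsCtx`, `ToyStore`, `bump`) are
// hypothetical stand-ins for illustration, not wasmtime's real definitions:

```rust
// Minimal model of the "context" traits: a shared view and a mutable view.
struct Ctx {
    hits: u32,
}

// Analogous to `AsContext`: yields a shared borrow of the context.
trait AsCtx {
    fn as_ctx(&self) -> &Ctx;
}

// Analogous to `AsContextMut`: additionally yields a mutable borrow.
trait AsCtxMut: AsCtx {
    fn as_ctx_mut(&mut self) -> &mut Ctx;
}

// The owning root (playing the role of `Store<T>`) implements both traits.
struct ToyStore {
    ctx: Ctx,
}

impl AsCtx for ToyStore {
    fn as_ctx(&self) -> &Ctx {
        &self.ctx
    }
}

impl AsCtxMut for ToyStore {
    fn as_ctx_mut(&mut self) -> &mut Ctx {
        &mut self.ctx
    }
}

// An API in the spirit of wasmtime methods that take `impl AsContextMut`.
fn bump(cx: &mut impl AsCtxMut) -> u32 {
    let ctx = cx.as_ctx_mut();
    ctx.hits += 1;
    ctx.hits
}

fn main() {
    let mut store = ToyStore { ctx: Ctx { hits: 0 } };
    // Either the owning store or any context wrapper can be passed.
    assert_eq!(bump(&mut store), 1);
    assert_eq!(bump(&mut store), 2);
    assert_eq!(store.as_ctx().hits, 2);
}
```

// The design payoff mirrors the real traits: one method signature accepts
// both an owned store and a borrowed context without overloads.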

use crate::linker::Definition;
use crate::module::BareModuleInfo;
use crate::{module::ModuleRegistry, Engine, Module, Trap, Val, ValRaw};
use anyhow::{anyhow, bail, Result};
use std::cell::UnsafeCell;
use std::collections::HashMap;
use std::convert::TryFrom;
use std::fmt;
use std::future::Future;
use std::marker;
use std::mem::{self, ManuallyDrop};
use std::ops::{Deref, DerefMut};
use std::pin::Pin;
use std::ptr;
use std::sync::atomic::AtomicU64;
use std::sync::Arc;
use std::task::{Context, Poll};
use wasmtime_runtime::{
    InstanceAllocationRequest, InstanceAllocator, InstanceHandle, ModuleInfo,
    OnDemandInstanceAllocator, SignalHandler, StorePtr, VMCallerCheckedFuncRef, VMContext,
    VMExternRef, VMExternRefActivationsTable, VMRuntimeLimits, VMSharedSignatureIndex,
    VMTrampoline, WasmFault,
};

mod context;
pub use self::context::*;
mod data;
pub use self::data::*;
/// A [`Store`] is a collection of WebAssembly instances and host-defined state.
///
/// All WebAssembly instances and items will be attached to and refer to a
/// [`Store`]. For example instances, functions, globals, and tables are all
/// attached to a [`Store`]. Instances are created by instantiating a
/// [`Module`](crate::Module) within a [`Store`].
///
/// A [`Store`] is intended to be a short-lived object in a program. No form
/// of GC is implemented at this time, so once an instance is created within a
/// [`Store`] it will not be deallocated until the [`Store`] itself is dropped.
/// This makes [`Store`] unsuitable for creating an unbounded number of
/// instances because [`Store`] will never release this memory. It's
/// recommended to have a [`Store`] correspond roughly to the lifetime of a
/// "main instance" that an embedding is interested in executing.
///
/// ## Type parameter `T`
///
/// Each [`Store`] has a type parameter `T` associated with it. This `T`
/// represents state defined by the host. This state will be accessible through
/// the [`Caller`](crate::Caller) type that host-defined functions get access
/// to. This `T` is suitable for storing `Store`-specific information which
/// imported functions may want access to.
///
/// The data `T` can be accessed through methods like [`Store::data`] and
/// [`Store::data_mut`].
///
/// ## Stores, contexts, oh my
///
/// Most methods in Wasmtime take something of the form
/// [`AsContext`](crate::AsContext) or [`AsContextMut`](crate::AsContextMut) as
/// the first argument. These two traits allow ergonomically passing in the
/// context you currently have to any method. The primary two sources of
/// contexts are:
///
/// * `Store<T>`
/// * `Caller<'_, T>`
///
/// corresponding to what you create and what you have access to in a host
/// function. You can also explicitly acquire a [`StoreContext`] or
/// [`StoreContextMut`] and pass that around as well.
///
/// Note that all methods on [`Store`] are mirrored onto [`StoreContext`],
/// [`StoreContextMut`], and [`Caller`](crate::Caller). This way no matter what
/// form of context you have you can call various methods, create objects, etc.
///
/// ## Stores and `Default`
///
/// You can create a store with default configuration settings using
/// `Store::default()`. This will create a brand new [`Engine`] with default
/// configuration (see [`Config`](crate::Config) for more information).
pub struct Store<T> {
    // for comments about `ManuallyDrop`, see `Store::into_data`
    inner: ManuallyDrop<Box<StoreInner<T>>>,
}
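// A quick stdlib-only check of the "just a pointer large" property mentioned
// in the module docs: wrapping a `Box` in `ManuallyDrop` adds no size, so a
// struct shaped like `Store<T>` is exactly one pointer. (`Handle` and
// `Payload` below are illustrative names, not wasmtime types.)

```rust
use std::mem::{size_of, ManuallyDrop};

// A large heap allocation, standing in for `StoreInner<T>`.
struct Payload {
    _big: [u64; 16],
}

// Shaped like `Store<T>`: a single `ManuallyDrop<Box<_>>` field.
struct Handle {
    _inner: ManuallyDrop<Box<Payload>>,
}

fn main() {
    // `ManuallyDrop<T>` is a transparent wrapper and `Box<Payload>` is a thin
    // pointer, so `Handle` stays pointer-sized regardless of `Payload`'s size.
    assert_eq!(size_of::<Handle>(), size_of::<usize>());
    assert!(size_of::<Payload>() > size_of::<Handle>());
}
```

// Keeping the public handle one pointer wide is what lets the store stash a
// stable pointer to its own heap contents, as the module docs describe.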

#[derive(Copy, Clone, Debug)]
/// Passed to the argument of [`Store::call_hook`] to indicate a state transition in
/// the WebAssembly VM.
pub enum CallHook {
    /// Indicates the VM is calling a WebAssembly function, from the host.
    CallingWasm,
    /// Indicates the VM is returning from a WebAssembly function, to the host.
    ReturningFromWasm,
    /// Indicates the VM is calling a host function, from WebAssembly.
    CallingHost,
    /// Indicates the VM is returning from a host function, to WebAssembly.
    ReturningFromHost,
}

impl CallHook {
    /// Indicates the VM is entering host code (exiting WebAssembly code)
    pub fn entering_host(&self) -> bool {
        match self {
            CallHook::ReturningFromWasm | CallHook::CallingHost => true,
            _ => false,
        }
    }
    /// Indicates the VM is exiting host code (entering WebAssembly code)
    pub fn exiting_host(&self) -> bool {
        match self {
            CallHook::ReturningFromHost | CallHook::CallingWasm => true,
            _ => false,
        }
    }
}

/// Internal contents of a `Store<T>` that live on the heap.
///
/// The members of this struct are those that need to be generic over `T`, the
/// store's internal type storage. Otherwise all things that don't rely on `T`
/// should go into `StoreOpaque`.
pub struct StoreInner<T> {
    /// Generic metadata about the store that doesn't need access to `T`.
    inner: StoreOpaque,

    limiter: Option<ResourceLimiterInner<T>>,
    call_hook: Option<CallHookInner<T>>,
    epoch_deadline_behavior: EpochDeadline<T>,
    // for comments about `ManuallyDrop`, see `Store::into_data`
    data: ManuallyDrop<T>,
}

enum ResourceLimiterInner<T> {
    Sync(Box<dyn FnMut(&mut T) -> &mut (dyn crate::ResourceLimiter) + Send + Sync>),
    #[cfg(feature = "async")]
    Async(Box<dyn FnMut(&mut T) -> &mut (dyn crate::ResourceLimiterAsync) + Send + Sync>),
}

/// An object that can take callbacks when the runtime enters or exits hostcalls.
#[cfg(feature = "async")]
#[async_trait::async_trait]
pub trait CallHookHandler<T>: Send {
    /// A callback to run when wasmtime is about to enter a host call, or when about to
    /// exit the hostcall.
    async fn handle_call_event(&self, t: &mut T, ch: CallHook) -> Result<()>;
}

enum CallHookInner<T> {
    Sync(Box<dyn FnMut(&mut T, CallHook) -> Result<()> + Send + Sync>),
    #[cfg(feature = "async")]
    Async(Box<dyn CallHookHandler<T> + Send + Sync>),
}

// Forward methods on `StoreOpaque` so they're also available on `StoreInner<T>`
impl<T> Deref for StoreInner<T> {
    type Target = StoreOpaque;
    fn deref(&self) -> &Self::Target {
        &self.inner
    }
}

impl<T> DerefMut for StoreInner<T> {
    fn deref_mut(&mut self) -> &mut Self::Target {
        &mut self.inner
    }
}
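// The `Deref`/`DerefMut` forwarding used above can be illustrated with a
// stdlib-only sketch: implementing `Deref` on a generic outer layer makes
// every method of the monomorphic inner layer callable through it, which is
// how `StoreInner<T>` exposes `StoreOpaque`'s methods. (`Outer` and `Inner`
// are hypothetical names for illustration.)

```rust
use std::ops::{Deref, DerefMut};

// Monomorphic layer, playing the role of `StoreOpaque`.
struct Inner {
    counter: u64,
}

impl Inner {
    fn bump(&mut self) -> u64 {
        self.counter += 1;
        self.counter
    }
}

// Generic layer over it, playing the role of `StoreInner<T>`.
struct Outer<T> {
    inner: Inner,
    data: T,
}

impl<T> Deref for Outer<T> {
    type Target = Inner;
    fn deref(&self) -> &Inner {
        &self.inner
    }
}

impl<T> DerefMut for Outer<T> {
    fn deref_mut(&mut self) -> &mut Inner {
        &mut self.inner
    }
}

fn main() {
    let mut outer = Outer { inner: Inner { counter: 0 }, data: "host state" };
    // `bump` is defined on `Inner`, but auto-deref lets `Outer` call it.
    assert_eq!(outer.bump(), 1);
    assert_eq!(outer.data, "host state");
}
```

// Only the thin generic layer is monomorphized per `T`; the bulk of the
// logic lives in the non-generic inner type, compiled once.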

/// Monomorphic storage for a `Store<T>`.
///
/// This structure contains the bulk of the metadata about a `Store`. This is
/// used internally in Wasmtime when dependence on the `T` of `Store<T>` isn't
/// necessary, allowing code to be monomorphic and compiled into the `wasmtime`
/// crate itself.
pub struct StoreOpaque {
    // This `StoreOpaque` structure has references to itself. These aren't
    // immediately evident, however, so we need to tell the compiler that it
    // contains self-references. This notably suppresses `noalias` annotations
    // when this shows up in compiled code because types of this structure do
    // indeed alias itself. An example of this is `default_callee` holds a
    // `*mut dyn Store` to the address of this `StoreOpaque` itself, indeed
    // aliasing!
    //
    // It's somewhat unclear to me at this time if this is 100% sufficient to
    // get all the right codegen in all the right places. For example does
    // `Store` need to internally contain a `Pin<Box<StoreInner<T>>>`? Do the
    // contexts need to contain `Pin<&mut StoreInner<T>>`? I'm not familiar
    // enough with `Pin` to understand if it's appropriate here (we do, for
    // example want to allow movement in and out of `data: T`, just not movement
    // of most of the other members). It's also not clear if using `Pin` in a
    // few places buys us much other than a bunch of `unsafe` that we already
    // sort of hand-wave away.
    //
    // In any case this seems like a good mid-ground for now where we're at
    // least telling the compiler something about all the aliasing happening
    // within a `Store`.
    _marker: marker::PhantomPinned,

    engine: Engine,
    runtime_limits: VMRuntimeLimits,
    instances: Vec<StoreInstance>,
    signal_handler: Option<Box<SignalHandler<'static>>>,
    externref_activations_table: VMExternRefActivationsTable,
    modules: ModuleRegistry,

    // See documentation on `StoreOpaque::lookup_trampoline` for what these
    // fields are doing.
    host_trampolines: HashMap<VMSharedSignatureIndex, VMTrampoline>,
    host_func_trampolines_registered: usize,

    // Numbers of resources instantiated in this store, and their limits
    instance_count: usize,
    instance_limit: usize,
    memory_count: usize,
    memory_limit: usize,
    table_count: usize,
    table_limit: usize,
    /// An adjustment to add to the fuel consumed value in `runtime_limits` above
    /// to get the true amount of fuel consumed.
    fuel_adj: i64,
    #[cfg(feature = "async")]
    async_state: AsyncState,
    out_of_gas_behavior: OutOfGas,
    /// Indexed data within this `Store`, used to store information about
    /// globals, functions, memories, etc.
    ///
    /// Note that this is `ManuallyDrop` because it needs to be dropped before
    /// `rooted_host_funcs` below. This structure contains pointers which are
    /// otherwise kept alive by the `Arc` references in `rooted_host_funcs`.
    store_data: ManuallyDrop<StoreData>,
    default_caller: InstanceHandle,

    /// Used to optimize wasm->host calls when the host function is defined
    /// with `Func::new`, to avoid allocating a new vector each time a function
    /// is called.
    hostcall_val_storage: Vec<Val>,
    /// Same as `hostcall_val_storage`, but for the direction of the host
    /// calling wasm.
    wasm_val_raw_storage: Vec<ValRaw>,

    /// A list of lists of definitions which have been used to instantiate
    /// within this `Store`.
    ///
    /// Note that not all instantiations end up pushing to this list. At the
    /// time of this writing only the `InstancePre<T>` type will push to this
    /// list. Pushes to this list are typically accompanied with
    /// `HostFunc::to_func_store_rooted` to clone an `Arc` here once which
    /// preserves a strong reference to the `Arc` for each `HostFunc` stored
    /// within the list of `Definition`s.
    ///
    /// Note that this is `ManuallyDrop` as it must be dropped after
    /// `store_data` above, where the function pointers are stored.
    rooted_host_funcs: ManuallyDrop<Vec<Arc<[Definition]>>>,
}

#[cfg(feature = "async")]
struct AsyncState {
    current_suspend: UnsafeCell<*const wasmtime_fiber::Suspend<Result<()>, (), Result<()>>>,
    current_poll_cx: UnsafeCell<*mut Context<'static>>,
}

// Lots of pesky unsafe cells and pointers in this structure. This means we need
// to declare explicitly that we use this in a threadsafe fashion.
#[cfg(feature = "async")]
unsafe impl Send for AsyncState {}
#[cfg(feature = "async")]
unsafe impl Sync for AsyncState {}

/// An RAII type to automatically mark a region of code as unsafe for GC.
pub(crate) struct AutoAssertNoGc<T>
where
    T: std::ops::DerefMut<Target = StoreOpaque>,
{
    #[cfg(debug_assertions)]
    prev_okay: bool,
    store: T,
}
impl<T> AutoAssertNoGc<T>
where
    T: std::ops::DerefMut<Target = StoreOpaque>,
{
    pub fn new(mut store: T) -> Self {
        // `store` is only used mutably when debug assertions are enabled;
        // this no-op mutable borrow silences the `unused_mut` lint otherwise.
        drop(&mut store);
        #[cfg(debug_assertions)]
        {
            let prev_okay = store.externref_activations_table.set_gc_okay(false);
            return AutoAssertNoGc { store, prev_okay };
        }
        #[cfg(not(debug_assertions))]
        {
            return AutoAssertNoGc { store };
        }
    }
}

impl<T> std::ops::Deref for AutoAssertNoGc<T>
where
    T: std::ops::DerefMut<Target = StoreOpaque>,
{
    type Target = T;

    fn deref(&self) -> &Self::Target {
        &self.store
    }
}

impl<T> std::ops::DerefMut for AutoAssertNoGc<T>
where
    T: std::ops::DerefMut<Target = StoreOpaque>,
{
    fn deref_mut(&mut self) -> &mut Self::Target {
        &mut self.store
    }
}

impl<T> Drop for AutoAssertNoGc<T>
where
    T: std::ops::DerefMut<Target = StoreOpaque>,
{
    fn drop(&mut self) {
        #[cfg(debug_assertions)]
        {
            self.store
                .externref_activations_table
                .set_gc_okay(self.prev_okay);
        }
    }
}
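// `AutoAssertNoGc` follows the classic RAII "set a flag, restore the previous
// value on drop" shape. A stdlib-only sketch of the same pattern (the
// thread-local flag below is a hypothetical stand-in for `set_gc_okay`):

```rust
use std::cell::Cell;

thread_local! {
    // Stand-in for the externref table's "GC okay" flag.
    static GC_OKAY: Cell<bool> = Cell::new(true);
}

struct NoGcGuard {
    // Like `prev_okay` above: remember the prior value to restore on drop.
    prev: bool,
}

impl NoGcGuard {
    fn new() -> NoGcGuard {
        let prev = GC_OKAY.with(|f| f.replace(false));
        NoGcGuard { prev }
    }
}

impl Drop for NoGcGuard {
    fn drop(&mut self) {
        // Restore whatever was set before, so nested guards compose.
        GC_OKAY.with(|f| f.set(self.prev));
    }
}

fn main() {
    assert!(GC_OKAY.with(|f| f.get()));
    {
        let _guard = NoGcGuard::new();
        // While the guard is alive, GC is marked as not okay.
        assert!(!GC_OKAY.with(|f| f.get()));
    }
    // Dropping the guard restored the previous value.
    assert!(GC_OKAY.with(|f| f.get()));
}
```

// Saving and restoring the previous value (rather than unconditionally
// setting `true` on drop) is what makes nested guards safe.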

/// Used to associate instances with the store.
///
/// This is needed to track if the instance was allocated explicitly with the on-demand
/// instance allocator.
struct StoreInstance {
    handle: InstanceHandle,
    // Stores whether or not to use the on-demand allocator to deallocate the instance
    ondemand: bool,
}

#[derive(Copy, Clone)]
enum OutOfGas {
    Trap,
    InjectFuel {
        injection_count: u64,
        fuel_to_inject: u64,
    },
}

/// What to do when the engine epoch reaches the deadline for a Store
/// during execution of a function using that store.
#[derive(Default)]
enum EpochDeadline<T> {
    /// Return early with a trap.
    #[default]
    Trap,
    /// Call a custom deadline handler.
    Callback(Box<dyn FnMut(StoreContextMut<T>) -> Result<u64> + Send + Sync>),
    /// Extend the deadline by the specified number of ticks after
    /// yielding to the async executor loop.
    #[cfg(feature = "async")]
    YieldAndExtendDeadline { delta: u64 },
}
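// `EpochDeadline` relies on `#[derive(Default)]` for enums (stable since
// Rust 1.62), which picks the variant marked `#[default]`. A minimal
// self-contained sketch of that mechanism (`Deadline` is an illustrative
// name, not the type above):

```rust
// `Trap` is the default variant, mirroring `EpochDeadline` above.
#[derive(Default, Debug, PartialEq)]
enum Deadline {
    #[default]
    Trap,
    Extend(u64),
}

fn main() {
    // `Default::default()` yields the `#[default]`-marked variant.
    assert_eq!(Deadline::default(), Deadline::Trap);
    assert_ne!(Deadline::Extend(5), Deadline::Trap);
}
```

// Note the real enum can't derive `PartialEq` because of its boxed-closure
// variant; the comparison here is only for demonstration.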

impl<T> Store<T> {
    /// Creates a new [`Store`] to be associated with the given [`Engine`] and
    /// `data` provided.
    ///
    /// The created [`Store`] will place no additional limits on the size of
    /// linear memories or tables at runtime. Linear memories and tables will
    /// be allowed to grow to any upper limit specified in their definitions.
    /// The store will limit the number of instances, linear memories, and
    /// tables created to 10,000. This can be overridden with the
    /// [`Store::limiter`] configuration method.
    pub fn new(engine: &Engine, data: T) -> Self {
        // Wasmtime uses the callee argument to host functions to learn about
        // the original pointer to the `Store` itself, allowing it to
        // reconstruct a `StoreContextMut<T>`. When we initially call a `Func`,
        // however, there's no "callee" to provide. To fix this we allocate a
        // single "default callee" for the entire `Store`. This is then used as
        // part of `Func::call` to guarantee that the `callee: *mut VMContext`
        // is never null.
        let default_callee = {
            let module = Arc::new(wasmtime_environ::Module::default());
            let shim = BareModuleInfo::empty(module).into_traitobj();
            OnDemandInstanceAllocator::default()
                .allocate(InstanceAllocationRequest {
                    host_state: Box::new(()),
                    imports: Default::default(),
                    store: StorePtr::empty(),
                    runtime_info: &shim,
                })
                .expect("failed to allocate default callee")
        };

        let mut inner = Box::new(StoreInner {
            inner: StoreOpaque {
                _marker: marker::PhantomPinned,
                engine: engine.clone(),
                runtime_limits: Default::default(),
                instances: Vec::new(),
                signal_handler: None,
                externref_activations_table: VMExternRefActivationsTable::new(),
                modules: ModuleRegistry::default(),
                host_trampolines: HashMap::default(),
                host_func_trampolines_registered: 0,
                instance_count: 0,
                instance_limit: crate::DEFAULT_INSTANCE_LIMIT,
                memory_count: 0,
                memory_limit: crate::DEFAULT_MEMORY_LIMIT,
                table_count: 0,
                table_limit: crate::DEFAULT_TABLE_LIMIT,
                fuel_adj: 0,
                #[cfg(feature = "async")]
                async_state: AsyncState {
                    current_suspend: UnsafeCell::new(ptr::null()),
                    current_poll_cx: UnsafeCell::new(ptr::null_mut()),
                },
                out_of_gas_behavior: OutOfGas::Trap,
                store_data: ManuallyDrop::new(StoreData::new()),
                default_caller: default_callee,
                hostcall_val_storage: Vec::new(),
                wasm_val_raw_storage: Vec::new(),
                rooted_host_funcs: ManuallyDrop::new(Vec::new()),
            },
            limiter: None,
            call_hook: None,
            epoch_deadline_behavior: EpochDeadline::Trap,
            data: ManuallyDrop::new(data),
        });

        // Once we've actually allocated the store itself we can configure the
        // trait object pointer of the default callee. Note the erasure of the
        // lifetime here into `'static`, so in general usage of this trait
        // object must be strictly bounded to the `Store` itself, and this is
        // an invariant that we have to maintain throughout Wasmtime.
        unsafe {
            let traitobj = std::mem::transmute::<
                *mut (dyn wasmtime_runtime::Store + '_),
                *mut (dyn wasmtime_runtime::Store + 'static),
            >(&mut *inner);
            inner.default_caller.set_store(traitobj);
        }

        Self {
            inner: ManuallyDrop::new(inner),
        }
    }

    /// Access the underlying data owned by this `Store`.
    #[inline]
    pub fn data(&self) -> &T {
        self.inner.data()
    }

    /// Access the underlying data owned by this `Store`.
    #[inline]
    pub fn data_mut(&mut self) -> &mut T {
        self.inner.data_mut()
    }

    /// Consumes this [`Store`], destroying it, and returns the underlying data.
    pub fn into_data(mut self) -> T {
        // This is an unsafe operation because we want to avoid having a runtime
        // check or boolean for whether the data is actually contained within a
        // `Store`. The data itself is stored as `ManuallyDrop` since we're
        // manually managing the memory here, and there's also a `ManuallyDrop`
        // around the `Box<StoreInner<T>>`. The way this works though is a bit
        // tricky, so here's how things get dropped appropriately:
        //
        // * When a `Store<T>` is normally dropped, the custom destructor for
        //   `Store<T>` will drop `T`, then the `self.inner` field. The
        //   rustc-glue destructor runs for `Box<StoreInner<T>>` which drops
        //   `StoreInner<T>`. This cleans up all internal fields and doesn't
        //   touch `T` because it's wrapped in `ManuallyDrop`.
        //
        // * When calling this method we skip the top-level destructor for
        //   `Store<T>` with `mem::forget`. This skips both the destructor for
        //   `T` and the destructor for `StoreInner<T>`. We do, however, run the
        //   destructor for `Box<StoreInner<T>>` which, like above, will skip
        //   the destructor for `T` since it's `ManuallyDrop`.
        //
        // In both cases all the other fields of `StoreInner<T>` should all get
        // dropped, and the manual management of destructors is basically
        // between this method and `Drop for Store<T>`. Note that this also
        // means that `Drop for StoreInner<T>` cannot access `self.data`, so
        // there is a comment indicating this as well.
        unsafe {
            let mut inner = ManuallyDrop::take(&mut self.inner);
            std::mem::forget(self);
            ManuallyDrop::take(&mut inner.data)
        }
    }
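    // The drop choreography described in `into_data` can be sketched with
    // stdlib types only: `ManuallyDrop::take` moves the payload out and
    // `mem::forget` skips the container's destructor, so the payload is
    // dropped exactly once. (`Container` is an illustrative type for this
    // sketch, not one of wasmtime's.)

```rust
use std::mem::{self, ManuallyDrop};

struct Container {
    data: ManuallyDrop<String>,
}

impl Drop for Container {
    fn drop(&mut self) {
        // Normal drop path: the container is responsible for `data`.
        unsafe { ManuallyDrop::drop(&mut self.data) }
    }
}

impl Container {
    // Analogous to `Store::into_data`: move `data` out, then skip `Drop`.
    fn into_data(mut self) -> String {
        unsafe {
            let data = ManuallyDrop::take(&mut self.data);
            mem::forget(self); // don't run `Drop for Container`
            data
        }
    }
}

fn main() {
    let c = Container {
        data: ManuallyDrop::new(String::from("host data")),
    };
    // The string is returned intact and is never double-dropped.
    assert_eq!(c.into_data(), "host data");
}
```

    // As in the real code, the `unsafe` here is justified by the invariant
    // that exactly one of the two paths (normal drop or `into_data`) runs.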

    /// Configures the [`ResourceLimiter`] used to limit resource creation
    /// within this [`Store`].
    ///
    /// Whenever resources such as linear memory, tables, or instances are
    /// allocated the `limiter` specified here is invoked with the store's data
    /// `T` and the returned [`ResourceLimiter`] is used to limit the operation
    /// being allocated. The returned [`ResourceLimiter`] is intended to live
    /// within the `T` itself, for example by storing a
    /// [`StoreLimits`](crate::StoreLimits).
    ///
    /// Note that this limiter is only used to limit the creation/growth of
    /// resources in the future; this does not retroactively attempt to apply
    /// limits to the [`Store`].
    ///
    /// # Examples
    ///
    /// ```
    /// use wasmtime::*;
    ///
    /// struct MyApplicationState {
    ///     my_state: u32,
    ///     limits: StoreLimits,
    /// }
    ///
    /// let engine = Engine::default();
    /// let my_state = MyApplicationState {
    ///     my_state: 42,
    ///     limits: StoreLimitsBuilder::new()
    ///         .memory_size(1 << 20 /* 1 MB */)
    ///         .instances(2)
    ///         .build(),
    /// };
    /// let mut store = Store::new(&engine, my_state);
    /// store.limiter(|state| &mut state.limits);
    ///
    /// // Creation of smaller memories is allowed
    /// Memory::new(&mut store, MemoryType::new(1, None)).unwrap();
    ///
    /// // Creation of a larger memory, however, will exceed the 1MB limit we've
    /// // configured
    /// assert!(Memory::new(&mut store, MemoryType::new(1000, None)).is_err());
    ///
    /// // The number of instances in this store is limited to 2, so the third
    /// // instance here should fail.
    /// let module = Module::new(&engine, "(module)").unwrap();
    /// assert!(Instance::new(&mut store, &module, &[]).is_ok());
    /// assert!(Instance::new(&mut store, &module, &[]).is_ok());
    /// assert!(Instance::new(&mut store, &module, &[]).is_err());
    /// ```
    ///
    /// [`ResourceLimiter`]: crate::ResourceLimiter
    pub fn limiter(
        &mut self,
        mut limiter: impl FnMut(&mut T) -> &mut (dyn crate::ResourceLimiter) + Send + Sync + 'static,
    ) {
        // Apply the limits on instances, tables, and memory given by the limiter:
        let inner = &mut self.inner;
        let (instance_limit, table_limit, memory_limit) = {
            let l = limiter(&mut inner.data);
            (l.instances(), l.tables(), l.memories())
        };
        let innermost = &mut inner.inner;
        innermost.instance_limit = instance_limit;
        innermost.table_limit = table_limit;
        innermost.memory_limit = memory_limit;

        // Save the limiter accessor function:
        inner.limiter = Some(ResourceLimiterInner::Sync(Box::new(limiter)));
    }

    /// Configures the [`ResourceLimiterAsync`](crate::ResourceLimiterAsync)
    /// used to limit resource creation within this [`Store`].
    ///
    /// This method is an asynchronous variant of the [`Store::limiter`] method
    /// where the embedder can block the wasm request for more resources with
    /// host `async` execution of futures.
    ///
    /// By using a [`ResourceLimiterAsync`](`crate::ResourceLimiterAsync`)
    /// with a [`Store`], you can no longer use
    /// [`Memory::new`](`crate::Memory::new`),
    /// [`Memory::grow`](`crate::Memory::grow`),
    /// [`Table::new`](`crate::Table::new`), and
    /// [`Table::grow`](`crate::Table::grow`). Instead, you must use their
    /// `async` variants: [`Memory::new_async`](`crate::Memory::new_async`),
    /// [`Memory::grow_async`](`crate::Memory::grow_async`),
    /// [`Table::new_async`](`crate::Table::new_async`), and
    /// [`Table::grow_async`](`crate::Table::grow_async`).
    ///
    /// Note that this limiter is only used to limit the creation/growth of
    /// resources in the future; this does not retroactively attempt to apply
    /// limits to the [`Store`]. Additionally this must be used with an async
    /// [`Store`] configured via
    /// [`Config::async_support`](crate::Config::async_support).
    #[cfg(feature = "async")]
    #[cfg_attr(nightlydoc, doc(cfg(feature = "async")))]
    pub fn limiter_async(
        &mut self,
        mut limiter: impl FnMut(&mut T) -> &mut (dyn crate::ResourceLimiterAsync)
            + Send
            + Sync
            + 'static,
    ) {
        debug_assert!(self.inner.async_support());
        // Apply the limits on instances, tables, and memory given by the limiter:
        let inner = &mut self.inner;
        let (instance_limit, table_limit, memory_limit) = {
            let l = limiter(&mut inner.data);
            (l.instances(), l.tables(), l.memories())
        };
        let innermost = &mut inner.inner;
        innermost.instance_limit = instance_limit;
        innermost.table_limit = table_limit;
        innermost.memory_limit = memory_limit;

        // Save the limiter accessor function:
        inner.limiter = Some(ResourceLimiterInner::Async(Box::new(limiter)));
    }
688
689    #[cfg_attr(nightlydoc, doc(cfg(feature = "async")))]
690    /// Configures an async function that runs on calls and returns between
691    /// WebAssembly and host code. For the non-async equivalent of this method,
692    /// see [`Store::call_hook`].
693    ///
694    /// The function is passed a [`CallHook`] argument, which indicates which
695    /// state transition the VM is making.
696    ///
697    /// This function's future may return a [`Trap`]. If a trap is returned
698    /// when an import was called, it is immediately raised as-if the host
699    /// import had returned the trap. If a trap is returned after wasm returns
700    /// to the host then the wasm function's result is ignored and this trap is
701    /// returned instead.
702    ///
703    /// After this function returns a trap, it may be called for subsequent
704    /// returns to host or wasm code as the trap propagates to the root call.
705    #[cfg(feature = "async")]
706    pub fn call_hook_async(&mut self, hook: impl CallHookHandler<T> + Send + Sync + 'static) {
707        self.inner.call_hook = Some(CallHookInner::Async(Box::new(hook)));
708    }
709
710    /// Configure a function that runs on calls and returns between WebAssembly
711    /// and host code.
712    ///
713    /// The function is passed a [`CallHook`] argument, which indicates which
714    /// state transition the VM is making.
715    ///
716    /// This function may return a [`Trap`]. If a trap is returned when an
717    /// import was called, it is immediately raised as if the host import had
718    /// returned the trap. If a trap is returned after wasm returns to the host
719    /// then the wasm function's result is ignored and this trap is returned
720    /// instead.
721    ///
722    /// After this function returns a trap, it may be called for subsequent returns
723    /// to host or wasm code as the trap propagates to the root call.
724    pub fn call_hook(
725        &mut self,
726        hook: impl FnMut(&mut T, CallHook) -> Result<()> + Send + Sync + 'static,
727    ) {
728        self.inner.call_hook = Some(CallHookInner::Sync(Box::new(hook)));
729    }
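The hook mechanism configured by `call_hook` can be sketched without the rest of the crate. The `Transition` enum and `HookedVm` type below are hypothetical stand-ins for `CallHook` and the store's hook slot; they only illustrate how a stored `FnMut` closure observes each boundary crossing and can veto it by returning an error (the analogue of a trap):

```rust
// Hypothetical stand-ins for `CallHook` and the store's hook slot.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Transition {
    CallingWasm,
    ReturningFromWasm,
    CallingHost,
    ReturningFromHost,
}

struct HookedVm<T> {
    data: T,
    hook: Option<Box<dyn FnMut(&mut T, Transition) -> Result<(), String>>>,
}

impl<T> HookedVm<T> {
    fn new(data: T) -> Self {
        HookedVm { data, hook: None }
    }

    // Mirrors `Store::call_hook`: stash the closure for later invocation.
    fn call_hook(
        &mut self,
        hook: impl FnMut(&mut T, Transition) -> Result<(), String> + 'static,
    ) {
        self.hook = Some(Box::new(hook));
    }

    // Invoked by the "VM" at each state transition; `Err` acts like a trap.
    fn fire(&mut self, transition: Transition) -> Result<(), String> {
        match &mut self.hook {
            Some(hook) => hook(&mut self.data, transition),
            None => Ok(()),
        }
    }
}

fn main() {
    // Count entries into wasm, and forbid calling back out to the host.
    let mut vm = HookedVm::new(0u32);
    vm.call_hook(|calls, transition| {
        if transition == Transition::CallingWasm {
            *calls += 1;
        }
        if transition == Transition::CallingHost {
            return Err("host calls forbidden".to_string());
        }
        Ok(())
    });
    assert!(vm.fire(Transition::CallingWasm).is_ok());
    assert!(vm.fire(Transition::ReturningFromWasm).is_ok());
    assert!(vm.fire(Transition::CallingHost).is_err());
    assert_eq!(vm.data, 1);
}
```

In the real API the closure receives `&mut T` (the store's data) in exactly the same way, which is what makes per-store bookkeeping like the counter above possible.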
730
731    /// Returns the [`Engine`] that this store is associated with.
732    pub fn engine(&self) -> &Engine {
733        self.inner.engine()
734    }
735
736    /// Perform garbage collection of `ExternRef`s.
737    ///
738    /// Note that it is not required to actively call this function. GC will
739    /// automatically happen when internal buffers fill up. This function is
740    /// provided for cases where fine-grained control over the GC is desired.
741    pub fn gc(&mut self) {
742        self.inner.gc()
743    }
744
745    /// Returns the amount of fuel consumed by this store's execution so far.
746    ///
747    /// If fuel consumption is not enabled via
748    /// [`Config::consume_fuel`](crate::Config::consume_fuel) then this
749    /// function will return `None`. Also note that fuel, if enabled, must
750    /// first be added via [`Store::add_fuel`].
751    pub fn fuel_consumed(&self) -> Option<u64> {
752        self.inner.fuel_consumed()
753    }
754
755    /// Adds fuel to this [`Store`] for wasm to consume while executing.
756    ///
757    /// For this method to work fuel consumption must be enabled via
758    /// [`Config::consume_fuel`](crate::Config::consume_fuel). By default a
759    /// [`Store`] starts with 0 fuel for wasm to execute with (meaning it will
760    /// immediately trap). This function must be called for the store to have
761    /// some fuel to allow WebAssembly to execute.
762    ///
763    /// Most WebAssembly instructions consume 1 unit of fuel. Some
764    /// instructions, such as `nop`, `drop`, `block`, and `loop`, consume 0
765    /// units, as any execution cost associated with them involves other
766    /// instructions which do consume fuel.
767    ///
768    /// Note that at this time when fuel is entirely consumed it will cause
769    /// wasm to trap. More usages of fuel are planned for the future.
770    ///
771    /// # Errors
772    ///
773    /// This function will return an error if fuel consumption is not enabled via
774    /// [`Config::consume_fuel`](crate::Config::consume_fuel).
775    pub fn add_fuel(&mut self, fuel: u64) -> Result<()> {
776        self.inner.add_fuel(fuel)
777    }
778
779    /// Synthetically consumes fuel from this [`Store`].
780    ///
781    /// For this method to work fuel consumption must be enabled via
782    /// [`Config::consume_fuel`](crate::Config::consume_fuel).
783    ///
784    /// WebAssembly execution will automatically consume fuel but if so desired
785    /// the embedder can also consume fuel manually to account for relative
786    /// costs of host functions, for example.
787    ///
788    /// This function will attempt to consume `fuel` units of fuel from within
789    /// this store. If the remaining amount of fuel allows this then `Ok(N)` is
790    /// returned where `N` is the amount of remaining fuel. Otherwise an error
791    /// is returned and no fuel is consumed.
792    ///
793    /// # Errors
794    ///
795    /// This function will return an error either if fuel consumption is not
796    /// enabled via [`Config::consume_fuel`](crate::Config::consume_fuel) or if
797    /// `fuel` exceeds the amount of remaining fuel within this store.
798    pub fn consume_fuel(&mut self, fuel: u64) -> Result<u64> {
799        self.inner.consume_fuel(fuel)
800    }
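A minimal model of the fuel bookkeeping behind `add_fuel`, `consume_fuel`, and `fuel_consumed`, mirroring the scheme used later in this file: remaining fuel is represented as a negative `consumed` counter that execution increments toward zero, and `adj` records how much fuel was ever added. This is a sketch with simplified overflow handling (the real `add_fuel` saturates to effectively infinite fuel rather than erroring):

```rust
// Hypothetical fuel accounting, modeled on the store's scheme:
// `consumed` counts up toward 0; fuel remains while it's negative.
struct Fuel {
    consumed: i64, // becomes more negative as fuel is added
    adj: i64,      // total fuel ever added
}

impl Fuel {
    fn new() -> Self {
        Fuel { consumed: 0, adj: 0 }
    }

    // Like `Store::add_fuel`: grant `fuel` units for execution.
    fn add_fuel(&mut self, fuel: u64) -> Result<(), String> {
        let fuel = i64::try_from(fuel).unwrap_or(i64::MAX);
        match (self.consumed.checked_sub(fuel), self.adj.checked_add(fuel)) {
            (Some(consumed), Some(adj)) => {
                self.consumed = consumed;
                self.adj = adj;
                Ok(())
            }
            // Simplification: the real store saturates instead of erroring.
            _ => Err("fuel overflow".to_string()),
        }
    }

    // Like `Store::consume_fuel`: burn `fuel` units, returning what's left.
    fn consume_fuel(&mut self, fuel: u64) -> Result<u64, String> {
        match i64::try_from(fuel).ok().and_then(|f| self.consumed.checked_add(f)) {
            Some(consumed) if consumed <= 0 => {
                self.consumed = consumed;
                Ok(u64::try_from(-consumed).unwrap())
            }
            _ => Err("not enough fuel remaining".to_string()),
        }
    }

    // Like `Store::fuel_consumed`: how much has been burned so far.
    fn fuel_consumed(&self) -> u64 {
        u64::try_from(self.adj + self.consumed).unwrap()
    }
}

fn main() {
    let mut f = Fuel::new();
    f.add_fuel(100).unwrap();
    assert_eq!(f.consume_fuel(30).unwrap(), 70); // 70 units remain
    assert_eq!(f.fuel_consumed(), 30);
    assert!(f.consume_fuel(71).is_err()); // more than remains: error, no change
    assert_eq!(f.consume_fuel(70).unwrap(), 0);
}
```

Note the invariant the negative representation buys: wasm only ever needs to check `consumed <= 0` on its hot path, while the report of total consumption is reconstructed as `adj + consumed`.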
801
802    /// Configures a [`Store`] to generate a [`Trap`] whenever it runs out of
803    /// fuel.
804    ///
805    /// When a [`Store`] is configured to consume fuel with
806    /// [`Config::consume_fuel`](crate::Config::consume_fuel) this method will
807    /// configure what happens when fuel runs out. Specifically a WebAssembly
808    /// trap will be raised and the current execution of WebAssembly will be
809    /// aborted.
810    ///
811    /// This is the default behavior for running out of fuel.
812    pub fn out_of_fuel_trap(&mut self) {
813        self.inner.out_of_fuel_trap()
814    }
815
816    /// Configures a [`Store`] to yield execution of async WebAssembly code
817    /// periodically.
818    ///
819    /// When a [`Store`] is configured to consume fuel with
820    /// [`Config::consume_fuel`](crate::Config::consume_fuel) this method will
821    /// configure what happens when fuel runs out. Specifically executing
822    /// WebAssembly will be suspended and control will be yielded back to the
823    /// caller. This is only suitable for stores associated with an [async
824    /// config](crate::Config::async_support), because only then are futures
825    /// in use and yields possible.
826    ///
827    /// The purpose of this behavior is to ensure that futures which represent
828    /// execution of WebAssembly do not execute too long inside their
829    /// `Future::poll` method. This allows for some form of cooperative
830    /// multitasking where WebAssembly will voluntarily yield control
831    /// periodically (based on fuel consumption) back to the running thread.
832    ///
833    /// Note that futures returned by this crate will automatically flag
834    /// themselves to get re-polled if a yield happens. This means that
835    /// WebAssembly will continue to execute, just after giving the host an
836    /// opportunity to do something else.
837    ///
838    /// The `fuel_to_inject` parameter indicates how much fuel should be
839    /// automatically re-injected after fuel runs out. This is how much fuel
840    /// will be consumed between yields of an async future.
841    ///
842    /// The `injection_count` parameter indicates how many times this fuel will
843    /// be injected. The product of the two parameters is the total amount of
844    /// fuel this store is allowed before wasm traps.
845    ///
846    /// # Panics
847    ///
848    /// This method will panic if it is not called on a store associated with an [async
849    /// config](crate::Config::async_support).
850    pub fn out_of_fuel_async_yield(&mut self, injection_count: u64, fuel_to_inject: u64) {
851        self.inner
852            .out_of_fuel_async_yield(injection_count, fuel_to_inject)
853    }
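The interaction of the two parameters can be simulated with plain counters. `YieldFuel` and `Outcome` below are hypothetical names; the sketch shows that execution yields `injection_count` times, receiving `fuel_to_inject` units each time, before finally trapping:

```rust
// Outcome of the store running out of fuel once.
#[derive(Debug, PartialEq)]
enum Outcome {
    Yielded, // fuel was re-injected; the async caller is yielded to
    Trapped, // injections are exhausted; wasm traps
}

// Hypothetical model of `out_of_fuel_async_yield`'s configuration.
struct YieldFuel {
    fuel: u64, // fuel currently available to "wasm"
    injection_count: u64,
    fuel_to_inject: u64,
    injected: u64, // how many injections have happened so far
}

impl YieldFuel {
    fn new(injection_count: u64, fuel_to_inject: u64) -> Self {
        YieldFuel { fuel: 0, injection_count, fuel_to_inject, injected: 0 }
    }

    // Called when executing wasm exhausts its fuel.
    fn on_out_of_fuel(&mut self) -> Outcome {
        if self.injected < self.injection_count {
            self.injected += 1;
            self.fuel = self.fuel_to_inject; // refill before resuming wasm
            Outcome::Yielded
        } else {
            Outcome::Trapped
        }
    }

    // Total fuel available before the final trap.
    fn total_budget(&self) -> u64 {
        self.injection_count * self.fuel_to_inject
    }
}

fn main() {
    let mut store = YieldFuel::new(2, 10_000);
    assert_eq!(store.total_budget(), 20_000);
    assert_eq!(store.on_out_of_fuel(), Outcome::Yielded);
    assert_eq!(store.fuel, 10_000);
    assert_eq!(store.on_out_of_fuel(), Outcome::Yielded);
    assert_eq!(store.on_out_of_fuel(), Outcome::Trapped);
}
```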
854
855    /// Sets the epoch deadline to a certain number of ticks in the future.
856    ///
857    /// When the Wasm guest code is compiled with epoch-interruption
858    /// instrumentation
859    /// ([`Config::epoch_interruption()`](crate::Config::epoch_interruption)),
860    /// and when the `Engine`'s epoch is incremented
861    /// ([`Engine::increment_epoch()`](crate::Engine::increment_epoch))
862    /// past a deadline, execution can be configured to either trap or
863    /// yield and then continue.
864    ///
865    /// This deadline is always set relative to the current epoch:
866    /// `ticks_beyond_current` ticks in the future. The deadline can
867    /// be set explicitly via this method, or refilled automatically
868    /// on a yield if configured via
869    /// [`epoch_deadline_async_yield_and_update()`](Store::epoch_deadline_async_yield_and_update). After
870    /// this method is invoked, the deadline is reached when
871    /// [`Engine::increment_epoch()`] has been invoked at least
872    /// `ticks_beyond_current` times.
873    ///
874    /// By default a store will trap immediately with an epoch deadline of 0
875    /// (which has always "elapsed"). Stores with epoch interruption enabled
876    /// must therefore call this method to set a deadline in the future.
877    ///
878    /// See documentation on
879    /// [`Config::epoch_interruption()`](crate::Config::epoch_interruption)
880    /// for an introduction to epoch-based interruption.
881    pub fn set_epoch_deadline(&mut self, ticks_beyond_current: u64) {
882        self.inner.set_epoch_deadline(ticks_beyond_current);
883    }
884
885    /// Configures epoch-deadline expiration to trap.
886    ///
887    /// When epoch-interruption-instrumented code is executed on this
888    /// store and the epoch deadline is reached before completion,
889    /// with the store configured in this way, execution will
890    /// terminate with a trap as soon as an epoch check in the
891    /// instrumented code is reached.
892    ///
893    /// This behavior is the default if the store is not otherwise
894    /// configured via
895    /// [`epoch_deadline_callback()`](Store::epoch_deadline_callback) or
896    /// [`epoch_deadline_async_yield_and_update()`](Store::epoch_deadline_async_yield_and_update).
898    ///
899    /// This setting is intended to allow for coarse-grained
900    /// interruption, but not a deterministic deadline of a fixed,
901    /// finite interval. For deterministic interruption, see the
902    /// "fuel" mechanism instead.
903    ///
904    /// Note that when this is used it's also required to call
905    /// [`Store::set_epoch_deadline`], as otherwise wasm will always
906    /// immediately trap.
907    ///
908    /// See documentation on
909    /// [`Config::epoch_interruption()`](crate::Config::epoch_interruption)
910    /// for an introduction to epoch-based interruption.
911    pub fn epoch_deadline_trap(&mut self) {
912        self.inner.epoch_deadline_trap();
913    }
914
915    /// Configures epoch-deadline expiration to invoke a custom callback
916    /// function.
917    ///
918    /// When epoch-interruption-instrumented code is executed on this
919    /// store and the epoch deadline is reached before completion, the
920    /// provided callback function is invoked.
921    ///
922    /// This function should return a positive `delta`, which is used to
923    /// update the epoch deadline, setting it to the current epoch plus
924    /// `delta` ticks. Alternatively, the callback may return an error,
925    /// which will terminate execution.
926    ///
927    /// This setting is intended to allow for coarse-grained
928    /// interruption, but not a deterministic deadline of a fixed,
929    /// finite interval. For deterministic interruption, see the
930    /// "fuel" mechanism instead.
931    ///
932    /// See documentation on
933    /// [`Config::epoch_interruption()`](crate::Config::epoch_interruption)
934    /// for an introduction to epoch-based interruption.
935    pub fn epoch_deadline_callback(
936        &mut self,
937        callback: impl FnMut(StoreContextMut<T>) -> Result<u64> + Send + Sync + 'static,
938    ) {
939        self.inner.epoch_deadline_callback(Box::new(callback));
940    }
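The epoch-deadline arithmetic described by `set_epoch_deadline`, `epoch_deadline_trap`, and `epoch_deadline_callback` can be modeled with two integers. `EpochClock` is a hypothetical stand-in for the engine's global epoch plus a store's deadline; the callback path mirrors how a returned `delta` refills the deadline relative to the current epoch:

```rust
// Hypothetical model of the engine epoch and a store's deadline.
struct EpochClock {
    epoch: u64,    // incremented by `Engine::increment_epoch` in the real API
    deadline: u64, // absolute epoch at which wasm is interrupted
}

impl EpochClock {
    fn new() -> Self {
        // A fresh store has a deadline of 0, which has always elapsed.
        EpochClock { epoch: 0, deadline: 0 }
    }

    // Mirrors `Store::set_epoch_deadline`: relative to the current epoch.
    fn set_deadline(&mut self, ticks_beyond_current: u64) {
        self.deadline = self.epoch + ticks_beyond_current;
    }

    fn increment_epoch(&mut self) {
        self.epoch += 1;
    }

    fn deadline_reached(&self) -> bool {
        self.epoch >= self.deadline
    }

    // Mirrors a callback configured via `epoch_deadline_callback`: on
    // expiration it returns a `delta` used to refill the deadline, or an
    // error to terminate execution.
    fn on_deadline(
        &mut self,
        callback: impl FnOnce() -> Result<u64, String>,
    ) -> Result<(), String> {
        let delta = callback()?;
        self.set_deadline(delta);
        Ok(())
    }
}

fn main() {
    let mut clock = EpochClock::new();
    assert!(clock.deadline_reached()); // default deadline of 0 has elapsed
    clock.set_deadline(2);
    clock.increment_epoch();
    assert!(!clock.deadline_reached());
    clock.increment_epoch();
    assert!(clock.deadline_reached());
    // Refill via a callback granting 3 more ticks.
    clock.on_deadline(|| Ok(3)).unwrap();
    assert!(!clock.deadline_reached());
    assert_eq!(clock.deadline, 5); // epoch 2 + delta 3
}
```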
941
942    #[cfg_attr(nightlydoc, doc(cfg(feature = "async")))]
943    /// Configures epoch-deadline expiration to yield to the async
944    /// caller and update the deadline.
945    ///
946    /// When epoch-interruption-instrumented code is executed on this
947    /// store and the epoch deadline is reached before completion,
948    /// with the store configured in this way, execution will yield
949    /// (the future will return `Pending` but re-awake itself for
950    /// later execution) and, upon resuming, the store will be
951    /// configured with an epoch deadline equal to the current epoch
952    /// plus `delta` ticks.
953    ///
954    /// This setting is intended to allow for cooperative timeslicing
955    /// of multiple CPU-bound Wasm guests in different stores, all
956    /// executing under the control of an async executor. To drive
957    /// this, stores should be configured to "yield and update"
958    /// automatically with this function, and some external driver (a
959    /// thread that wakes up periodically, or a timer
960    /// signal/interrupt) should call
961    /// [`Engine::increment_epoch()`](crate::Engine::increment_epoch).
962    ///
963    /// See documentation on
964    /// [`Config::epoch_interruption()`](crate::Config::epoch_interruption)
965    /// for an introduction to epoch-based interruption.
966    #[cfg(feature = "async")]
967    pub fn epoch_deadline_async_yield_and_update(&mut self, delta: u64) {
968        self.inner.epoch_deadline_async_yield_and_update(delta);
969    }
970}
971
972impl<'a, T> StoreContext<'a, T> {
973    pub(crate) fn async_support(&self) -> bool {
974        self.0.async_support()
975    }
976
977    /// Returns the underlying [`Engine`] this store is connected to.
978    pub fn engine(&self) -> &Engine {
979        self.0.engine()
980    }
981
982    /// Access the underlying data owned by this `Store`.
983    ///
984    /// Same as [`Store::data`].
985    pub fn data(&self) -> &'a T {
986        self.0.data()
987    }
988
989    /// Returns the fuel consumed by this store.
990    ///
991    /// For more information see [`Store::fuel_consumed`].
992    pub fn fuel_consumed(&self) -> Option<u64> {
993        self.0.fuel_consumed()
994    }
995}
996
997impl<'a, T> StoreContextMut<'a, T> {
998    /// Access the underlying data owned by this `Store`.
999    ///
1000    /// Same as [`Store::data`].
1001    pub fn data(&self) -> &T {
1002        self.0.data()
1003    }
1004
1005    /// Access the underlying data owned by this `Store`.
1006    ///
1007    /// Same as [`Store::data_mut`].
1008    pub fn data_mut(&mut self) -> &mut T {
1009        self.0.data_mut()
1010    }
1011
1012    /// Returns the underlying [`Engine`] this store is connected to.
1013    pub fn engine(&self) -> &Engine {
1014        self.0.engine()
1015    }
1016
1017    /// Perform garbage collection of `ExternRef`s.
1018    ///
1019    /// Same as [`Store::gc`].
1020    pub fn gc(&mut self) {
1021        self.0.gc()
1022    }
1023
1024    /// Returns the fuel consumed by this store.
1025    ///
1026    /// For more information see [`Store::fuel_consumed`].
1027    pub fn fuel_consumed(&self) -> Option<u64> {
1028        self.0.fuel_consumed()
1029    }
1030
1031    /// Inject more fuel into this store to be consumed when executing wasm code.
1032    ///
1033    /// For more information see [`Store::add_fuel`]
1034    pub fn add_fuel(&mut self, fuel: u64) -> Result<()> {
1035        self.0.add_fuel(fuel)
1036    }
1037
1038    /// Synthetically consume fuel from this store.
1039    ///
1040    /// For more information see [`Store::consume_fuel`]
1041    pub fn consume_fuel(&mut self, fuel: u64) -> Result<u64> {
1042        self.0.consume_fuel(fuel)
1043    }
1044
1045    /// Configures this `Store` to trap whenever fuel runs out.
1046    ///
1047    /// For more information see [`Store::out_of_fuel_trap`]
1048    pub fn out_of_fuel_trap(&mut self) {
1049        self.0.out_of_fuel_trap()
1050    }
1051
1052    /// Configures this `Store` to yield while executing futures whenever fuel
1053    /// runs out.
1054    ///
1055    /// For more information see [`Store::out_of_fuel_async_yield`]
1056    pub fn out_of_fuel_async_yield(&mut self, injection_count: u64, fuel_to_inject: u64) {
1057        self.0
1058            .out_of_fuel_async_yield(injection_count, fuel_to_inject)
1059    }
1060
1061    /// Sets the epoch deadline to a certain number of ticks in the future.
1062    ///
1063    /// For more information see [`Store::set_epoch_deadline`].
1064    pub fn set_epoch_deadline(&mut self, ticks_beyond_current: u64) {
1065        self.0.set_epoch_deadline(ticks_beyond_current);
1066    }
1067
1068    /// Configures epoch-deadline expiration to trap.
1069    ///
1070    /// For more information see [`Store::epoch_deadline_trap`].
1071    pub fn epoch_deadline_trap(&mut self) {
1072        self.0.epoch_deadline_trap();
1073    }
1074
1075    #[cfg_attr(nightlydoc, doc(cfg(feature = "async")))]
1076    /// Configures epoch-deadline expiration to yield to the async
1077    /// caller and update the deadline.
1078    ///
1079    /// For more information see
1080    /// [`Store::epoch_deadline_async_yield_and_update`].
1081    #[cfg(feature = "async")]
1082    pub fn epoch_deadline_async_yield_and_update(&mut self, delta: u64) {
1083        self.0.epoch_deadline_async_yield_and_update(delta);
1084    }
1085}
1086
1087impl<T> StoreInner<T> {
1088    #[inline]
1089    fn data(&self) -> &T {
1090        &self.data
1091    }
1092
1093    #[inline]
1094    fn data_mut(&mut self) -> &mut T {
1095        &mut self.data
1096    }
1097
1098    pub fn call_hook(&mut self, s: CallHook) -> Result<()> {
1099        match &mut self.call_hook {
1100            Some(CallHookInner::Sync(hook)) => hook(&mut self.data, s),
1101
1102            #[cfg(feature = "async")]
1103            Some(CallHookInner::Async(handler)) => unsafe {
1104                Ok(self
1105                    .inner
1106                    .async_cx()
1107                    .ok_or_else(|| anyhow!("couldn't grab async_cx for call hook"))?
1108                    .block_on(handler.handle_call_event(&mut self.data, s).as_mut())??)
1109            },
1110
1111            None => Ok(()),
1112        }
1113    }
1114}
1115
1116#[doc(hidden)]
1117impl StoreOpaque {
1118    pub fn id(&self) -> StoreId {
1119        self.store_data.id()
1120    }
1121
1122    pub fn bump_resource_counts(&mut self, module: &Module) -> Result<()> {
1123        fn bump(slot: &mut usize, max: usize, amt: usize, desc: &str) -> Result<()> {
1124            let new = slot.saturating_add(amt);
1125            if new > max {
1126                bail!(
1127                    "resource limit exceeded: {} count too high at {}",
1128                    desc,
1129                    new
1130                );
1131            }
1132            *slot = new;
1133            Ok(())
1134        }
1135
1136        let module = module.env_module();
1137        let memories = module.memory_plans.len() - module.num_imported_memories;
1138        let tables = module.table_plans.len() - module.num_imported_tables;
1139
1140        bump(&mut self.instance_count, self.instance_limit, 1, "instance")?;
1141        bump(
1142            &mut self.memory_count,
1143            self.memory_limit,
1144            memories,
1145            "memory",
1146        )?;
1147        bump(&mut self.table_count, self.table_limit, tables, "table")?;
1148
1149        Ok(())
1150    }
1151
1152    #[inline]
1153    pub fn async_support(&self) -> bool {
1154        cfg!(feature = "async") && self.engine().config().async_support
1155    }
1156
1157    #[inline]
1158    pub fn engine(&self) -> &Engine {
1159        &self.engine
1160    }
1161
1162    #[inline]
1163    pub fn store_data(&self) -> &StoreData {
1164        &self.store_data
1165    }
1166
1167    #[inline]
1168    pub fn store_data_mut(&mut self) -> &mut StoreData {
1169        &mut self.store_data
1170    }
1171
1172    #[inline]
1173    pub(crate) fn modules(&self) -> &ModuleRegistry {
1174        &self.modules
1175    }
1176
1177    #[inline]
1178    pub(crate) fn modules_mut(&mut self) -> &mut ModuleRegistry {
1179        &mut self.modules
1180    }
1181
1182    pub unsafe fn add_instance(&mut self, handle: InstanceHandle, ondemand: bool) -> InstanceId {
1183        self.instances.push(StoreInstance {
1184            handle: handle.clone(),
1185            ondemand,
1186        });
1187        InstanceId(self.instances.len() - 1)
1188    }
1189
1190    pub fn instance(&self, id: InstanceId) -> &InstanceHandle {
1191        &self.instances[id.0].handle
1192    }
1193
1194    pub fn instance_mut(&mut self, id: InstanceId) -> &mut InstanceHandle {
1195        &mut self.instances[id.0].handle
1196    }
1197
1198    #[cfg_attr(not(target_os = "linux"), allow(dead_code))] // not used on all platforms
1199    pub fn set_signal_handler(&mut self, handler: Option<Box<SignalHandler<'static>>>) {
1200        self.signal_handler = handler;
1201    }
1202
1203    #[inline]
1204    pub fn runtime_limits(&self) -> &VMRuntimeLimits {
1205        &self.runtime_limits
1206    }
1207
1208    #[inline]
1209    pub fn externref_activations_table(&mut self) -> &mut VMExternRefActivationsTable {
1210        &mut self.externref_activations_table
1211    }
1212
1213    pub fn gc(&mut self) {
1214        // For this crate's API, we ensure that `set_stack_canary` invariants
1215        // are upheld for all host-->Wasm calls.
1216        unsafe { wasmtime_runtime::gc(&self.modules, &mut self.externref_activations_table) }
1217    }
1218
1219    /// Looks up the corresponding `VMTrampoline` which can be used to enter
1220    /// wasm given an anyfunc function pointer.
1221    ///
1222    /// This is a somewhat complicated implementation at this time, unfortunately.
1223    /// Trampolines are a sort of side-channel of information which is
1224    /// specifically juggled by the `wasmtime` crate in a careful fashion. The
1225    /// sources for trampolines are:
1226    ///
1227    /// * Compiled modules - each compiled module has a trampoline for all
1228    ///   signatures of functions that escape the module (e.g. exports and
1229    ///   `ref.func`-able functions)
1230    /// * `Func::new` - host-defined functions with a dynamic signature get an
1231    ///   on-the-fly-compiled trampoline (e.g. JIT-compiled as part of the
1232    ///   `Func::new` call).
1233    /// * `Func::wrap` - host-defined functions where the trampoline is
1234    ///   monomorphized in Rust and compiled by LLVM.
1235    ///
1236    /// The purpose of this function is that given some wasm function pointer we
1237    /// need to find the trampoline for it. For compiled wasm modules this is
1238    /// pretty easy: the code pointer of the function pointer will point us
1239    /// at a wasm module which has a table of trampolines-by-type that we can
1240    /// look up.
1241    ///
1242    /// If this lookup fails, however, then we're trying to get the trampoline
1243    /// for a wasm function pointer defined by the host. The trampoline isn't
1244    /// actually stored in the wasm function pointer itself so we need
1245    /// side-channels of information. To achieve this a lazy scheme is
1246    /// implemented here based on the assumption that most trampoline lookups
1247    /// happen for wasm-defined functions, not host-defined functions.
1248    ///
1249    /// The `Store` already has a list of all functions in
1250    /// `self.store_data().funcs`, it's just not indexed in a nice fashion by
1251    /// type index or similar. To solve this there's an internal map in each
1252    /// store, `host_trampolines`, which maps from a type index to the
1253    /// store-owned trampoline. The actual population of this map, however, is
1254    /// deferred to this function itself.
1255    ///
1256    /// Most of the time we are looking up a Wasm function's trampoline when
1257    /// calling this function, and we don't want to make insertion of a host
1258    /// function into the store more expensive than it has to be. We could
1259    /// update the `host_trampolines` whenever a host function is inserted into
1260    /// the store, but this is a relatively expensive hash map insertion.
1261    /// Instead the work is deferred until we actually look up that trampoline
1262    /// in this method.
1263    ///
1264    /// This all means that if the lookup of the trampoline fails within
1265    /// `self.host_trampolines` we lazily populate `self.host_trampolines` by
1266    /// iterating over `self.store_data().funcs`, inserting trampolines as we
1267    /// go. If we find the right trampoline then it's returned.
1268    pub fn lookup_trampoline(&mut self, anyfunc: &VMCallerCheckedFuncRef) -> VMTrampoline {
1269        // First try to see if the `anyfunc` belongs to any module. Each module
1270        // has its own map of trampolines-per-type-index and the code pointer in
1271        // the `anyfunc` will enable us to quickly find a module.
1272        if let Some(trampoline) = self.modules.lookup_trampoline(anyfunc) {
1273            return trampoline;
1274        }
1275
1276        // Next consult the list of store-local host trampolines. This is
1277        // primarily populated by functions created by `Func::new` or similar
1278        // creation functions, host-defined functions.
1279        if let Some(trampoline) = self.host_trampolines.get(&anyfunc.type_index) {
1280            return *trampoline;
1281        }
1282
1283        // If no trampoline was found then it means that it hasn't been loaded
1284        // into `host_trampolines` yet. Skip over all the ones we've looked at
1285        // so far and start inserting into `self.host_trampolines`, returning
1286        // the actual trampoline once found.
1287        for f in self
1288            .store_data
1289            .funcs()
1290            .skip(self.host_func_trampolines_registered)
1291        {
1292            self.host_func_trampolines_registered += 1;
1293            self.host_trampolines.insert(f.sig_index(), f.trampoline());
1294            if f.sig_index() == anyfunc.type_index {
1295                return f.trampoline();
1296            }
1297        }
1298
1299        // If this is reached it's a bug in Wasmtime. Lookup of a trampoline should
1300        // only happen for wasm functions or host functions, all of which should
1301        // be indexed by the above.
1302        panic!("trampoline missing")
1303    }
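The lazy-population strategy documented above can be shown in miniature: a side map is filled on demand from a list, resuming from a saved cursor so every entry is scanned at most once across all lookups. All names below are hypothetical stand-ins (`funcs` for `store_data().funcs`, `registered` for `host_func_trampolines_registered`):

```rust
use std::collections::HashMap;

// A miniature of the store's `host_trampolines` scheme.
struct LazyIndex {
    funcs: Vec<(u32, &'static str)>, // (type index, "trampoline")
    map: HashMap<u32, &'static str>, // populated lazily from `funcs`
    registered: usize,               // how many entries are already in `map`
}

impl LazyIndex {
    fn lookup(&mut self, type_index: u32) -> Option<&'static str> {
        // Fast path: already populated into the side map.
        if let Some(t) = self.map.get(&type_index).copied() {
            return Some(t);
        }
        // Slow path: resume scanning where the last lookup stopped,
        // inserting as we go, and stop early once we find a match.
        while self.registered < self.funcs.len() {
            let (idx, tramp) = self.funcs[self.registered];
            self.registered += 1;
            self.map.insert(idx, tramp);
            if idx == type_index {
                return Some(tramp);
            }
        }
        None
    }
}

fn main() {
    let mut index = LazyIndex {
        funcs: vec![(7, "t7"), (3, "t3"), (9, "t9")],
        map: HashMap::new(),
        registered: 0,
    };
    assert_eq!(index.lookup(3), Some("t3"));
    assert_eq!(index.registered, 2); // scanned only up to the match
    assert_eq!(index.lookup(7), Some("t7")); // now served from the map
    assert_eq!(index.registered, 2); // no further scanning needed
    assert_eq!(index.lookup(1), None); // exhausts the rest of the list
    assert_eq!(index.registered, 3);
}
```

The design choice this illustrates: insertion into the store stays cheap (a `Vec` push), and the hash-map insertions are amortized across lookups that would have had to scan anyway.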
1304
1305    /// Yields the async context, assuming that we are executing on a fiber and
1306    /// that fiber is not in the process of dying. This function will return
1307    /// `None` in the latter case (the fiber is dying), and panic if
1308    /// `async_support()` is false.
1309    #[cfg(feature = "async")]
1310    #[inline]
1311    pub fn async_cx(&self) -> Option<AsyncCx> {
1312        debug_assert!(self.async_support());
1313
1314        let poll_cx_box_ptr = self.async_state.current_poll_cx.get();
1315        if poll_cx_box_ptr.is_null() {
1316            return None;
1317        }
1318
1319        let poll_cx_inner_ptr = unsafe { *poll_cx_box_ptr };
1320        if poll_cx_inner_ptr.is_null() {
1321            return None;
1322        }
1323
1324        Some(AsyncCx {
1325            current_suspend: self.async_state.current_suspend.get(),
1326            current_poll_cx: poll_cx_box_ptr,
1327        })
1328    }
1329
1330    pub fn fuel_consumed(&self) -> Option<u64> {
1331        if !self.engine.config().tunables.consume_fuel {
1332            return None;
1333        }
1334        let consumed = unsafe { *self.runtime_limits.fuel_consumed.get() };
1335        Some(u64::try_from(self.fuel_adj + consumed).unwrap())
1336    }
1337
1338    fn out_of_fuel_trap(&mut self) {
1339        self.out_of_gas_behavior = OutOfGas::Trap;
1340    }
1341
1342    fn out_of_fuel_async_yield(&mut self, injection_count: u64, fuel_to_inject: u64) {
1343        assert!(
1344            self.async_support(),
1345            "cannot use `out_of_fuel_async_yield` without enabling async support in the config"
1346        );
1347        self.out_of_gas_behavior = OutOfGas::InjectFuel {
1348            injection_count,
1349            fuel_to_inject,
1350        };
1351    }
1352
1353    /// Yields execution to the caller on out-of-gas or epoch interruption.
1354    ///
1355    /// This only works on async futures and stores, and assumes that we're
1356    /// executing on a fiber. This will yield execution back to the caller once.
1357    #[cfg(feature = "async")]
1358    fn async_yield_impl(&mut self) -> Result<()> {
1359        // Small future that yields once and then returns ()
1360        #[derive(Default)]
1361        struct Yield {
1362            yielded: bool,
1363        }
1364
1365        impl Future for Yield {
1366            type Output = ();
1367
1368            fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
1369                if self.yielded {
1370                    Poll::Ready(())
1371                } else {
1372                    // Flag ourselves as yielded to return next time, and also
1373                    // flag the waker that we're already ready to get
1374                    // re-enqueued for another poll.
1375                    self.yielded = true;
1376                    cx.waker().wake_by_ref();
1377                    Poll::Pending
1378                }
1379            }
1380        }
1381
1382        let mut future = Yield::default();
1383
1384        // When control returns, we have a `Result<()>` passed
1385        // in from the host fiber. If this finished successfully then
1386        // we were resumed normally via a `poll`, so keep going.  If
1387        // the future was dropped while we were yielded, then we need
1388        // to clean up this fiber. Do so by raising a trap which will
1389        // abort all wasm and get caught on the other side to clean
1390        // things up.
1391        unsafe {
1392            self.async_cx()
1393                .expect("attempted to pull async context during shutdown")
1394                .block_on(Pin::new_unchecked(&mut future))
1395        }
1396    }
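The `Yield` future above can be driven by hand with a minimal no-op waker, which makes its two-step behavior visible: one `Pending` accompanied by an immediate wake request, then `Ready`. Everything below is standalone scaffolding, not part of this crate:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// The same yield-once shape as the `Yield` future above.
#[derive(Default)]
struct YieldOnce {
    yielded: bool,
}

impl Future for YieldOnce {
    type Output = ();

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        if self.yielded {
            Poll::Ready(())
        } else {
            // Return `Pending` once, but immediately request a re-poll so an
            // executor keeps the task runnable rather than parking it.
            self.yielded = true;
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}

// A do-nothing waker, enough to drive a single future by hand.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = YieldOnce::default();
    let mut fut = Pin::new(&mut fut);
    // First poll: control is handed back to the caller exactly once...
    assert_eq!(fut.as_mut().poll(&mut cx), Poll::Pending);
    // ...and the second poll completes, so "wasm" would resume here.
    assert_eq!(fut.as_mut().poll(&mut cx), Poll::Ready(()));
}
```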
1397
1398    fn add_fuel(&mut self, fuel: u64) -> Result<()> {
1399        anyhow::ensure!(
1400            self.engine().config().tunables.consume_fuel,
1401            "fuel is not configured in this store"
1402        );
1403
1404        // Fuel is stored as an i64, so we need to cast it. If the provided fuel
1405        // value overflows that just assume that i64::max will suffice. Wasm
1406        // execution isn't fast enough to burn through i64::max fuel in any
1407        // reasonable amount of time anyway.
1408        let fuel = i64::try_from(fuel).unwrap_or(i64::max_value());
1409        let adj = self.fuel_adj;
1410        let consumed_ptr = unsafe { &mut *self.runtime_limits.fuel_consumed.get() };
1411
1412        match (consumed_ptr.checked_sub(fuel), adj.checked_add(fuel)) {
1413            // If we successfully did arithmetic without overflowing then we can
1414            // just update our fields.
1415            (Some(consumed), Some(adj)) => {
1416                self.fuel_adj = adj;
1417                *consumed_ptr = consumed;
1418            }
1419
1420            // Otherwise something overflowed. Make sure that we preserve the
1421            // amount of fuel that's already consumed, but otherwise assume that
1422            // we were given infinite fuel.
1423            _ => {
1424                self.fuel_adj = i64::max_value();
1425                *consumed_ptr = (*consumed_ptr + adj) - i64::max_value();
1426            }
1427        }
1428
1429        Ok(())
1430    }
1431
1432    fn consume_fuel(&mut self, fuel: u64) -> Result<u64> {
1433        let consumed_ptr = unsafe { &mut *self.runtime_limits.fuel_consumed.get() };
1434        match i64::try_from(fuel)
1435            .ok()
1436            .and_then(|fuel| consumed_ptr.checked_add(fuel))
1437        {
1438            Some(consumed) if consumed <= 0 => {
1439                *consumed_ptr = consumed;
1440                Ok(u64::try_from(-consumed).unwrap())
1441            }
1442            _ => bail!("not enough fuel remaining in store"),
1443        }
1444    }

    #[inline]
    pub fn signal_handler(&self) -> Option<*const SignalHandler<'static>> {
        let handler = self.signal_handler.as_ref()?;
        Some(&**handler as *const _)
    }

    #[inline]
    pub fn vmruntime_limits(&self) -> *mut VMRuntimeLimits {
        &self.runtime_limits as *const VMRuntimeLimits as *mut VMRuntimeLimits
    }

    pub unsafe fn insert_vmexternref_without_gc(&mut self, r: VMExternRef) {
        self.externref_activations_table.insert_without_gc(r);
    }

    #[inline]
    pub fn default_caller(&self) -> *mut VMContext {
        self.default_caller.vmctx_ptr()
    }

    pub fn traitobj(&self) -> *mut dyn wasmtime_runtime::Store {
        self.default_caller.store()
    }

    /// Takes the cached `Vec<Val>` stored internally across hostcalls, used
    /// when calling the host as part of a `Func::new` method invocation.
    #[inline]
    pub fn take_hostcall_val_storage(&mut self) -> Vec<Val> {
        mem::take(&mut self.hostcall_val_storage)
    }

    /// Restores the vector previously taken by `take_hostcall_val_storage`
    /// above back into the store, allowing it to be used in the future for the
    /// next wasm->host call.
    #[inline]
    pub fn save_hostcall_val_storage(&mut self, storage: Vec<Val>) {
        if storage.capacity() > self.hostcall_val_storage.capacity() {
            self.hostcall_val_storage = storage;
        }
    }

    /// Same as `take_hostcall_val_storage`, but for the direction of the host
    /// calling wasm.
    #[inline]
    pub fn take_wasm_val_raw_storage(&mut self) -> Vec<ValRaw> {
        mem::take(&mut self.wasm_val_raw_storage)
    }

    /// Same as `save_hostcall_val_storage`, but for the direction of the host
    /// calling wasm.
    #[inline]
    pub fn save_wasm_val_raw_storage(&mut self, storage: Vec<ValRaw>) {
        if storage.capacity() > self.wasm_val_raw_storage.capacity() {
            self.wasm_val_raw_storage = storage;
        }
    }
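    // The take/save pairs above implement a scratch-buffer cache: a vector is
    // moved out of the store (leaving an empty one behind), used for a call,
    // and only put back if its capacity beats what the store currently holds.
    // A minimal sketch of the pattern with a hypothetical `Cache` type (here
    // clearing on restore, whereas the real code leaves truncation to
    // callers):

```rust
struct Cache {
    scratch: Vec<u64>,
}

impl Cache {
    /// Move the cached buffer out, leaving an empty `Vec` behind.
    fn take(&mut self) -> Vec<u64> {
        std::mem::take(&mut self.scratch)
    }

    /// Put a buffer back, keeping whichever allocation is larger.
    fn restore(&mut self, mut storage: Vec<u64>) {
        // Drop leftover elements so the next caller starts fresh.
        storage.clear();
        if storage.capacity() > self.scratch.capacity() {
            self.scratch = storage;
        }
    }
}
```

    // The capacity comparison means repeated calls converge on a single
    // allocation sized for the largest call seen so far.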

    pub(crate) fn push_rooted_funcs(&mut self, funcs: Arc<[Definition]>) {
        self.rooted_host_funcs.push(funcs);
    }

    /// Translates a WebAssembly fault at the native `pc` and native `addr` to a
    /// WebAssembly-relative fault.
    ///
    /// This function may abort the process if `addr` is not found to actually
    /// reside in any linear memory. In such a situation it means that the
    /// segfault was erroneously caught by Wasmtime and is possibly indicative
    /// of a code generator bug.
    ///
    /// This function returns `None` for dynamically-bounds-checked-memories
    /// with spectre mitigations enabled since the hardware fault address is
    /// always zero in these situations which means that the trapping context
    /// doesn't have enough information to report the fault address.
    pub(crate) fn wasm_fault(&self, pc: usize, addr: usize) -> Option<WasmFault> {
        // Explicitly bounds-checked memories with spectre-guards enabled will
        // cause out-of-bounds accesses to get routed to address 0, so allow
        // wasm instructions to fault on the null address.
        if addr == 0 {
            return None;
        }

        // Search all known instances in this store for this address. Note that
        // this is probably not the speediest way to do this. Traps, however,
        // are generally not expected to be super fast and additionally stores
        // probably don't have all that many instances or memories.
        //
        // If this loop becomes hot in the future, however, it should be
        // possible to precompute maps about linear memories in a store and have
        // a quicker lookup.
        let mut fault = None;
        for instance in self.instances.iter() {
            if let Some(f) = instance.handle.wasm_fault(addr) {
                assert!(fault.is_none());
                fault = Some(f);
            }
        }
        if fault.is_some() {
            return fault;
        }

        eprintln!(
            "\
Wasmtime caught a segfault for a wasm program because the faulting instruction
is allowed to segfault due to how linear memories are implemented. The address
that was accessed, however, is not known to any linear memory in use within this
Store. This may be indicative of a critical bug in Wasmtime's code generation
because all addresses which are known to be reachable from wasm won't reach this
message.

    pc:      0x{pc:x}
    address: 0x{addr:x}

This is a possible security issue because WebAssembly has accessed something it
shouldn't have been able to. Other accesses may have succeeded and this one just
happened to be caught. The process will now be aborted to prevent this damage
from going any further and to alert what's going on. If this is a security
issue please reach out to the Wasmtime team via its security policy
at https://bytecodealliance.org/security.
"
        );
        std::process::abort();
    }
}

impl<T> StoreContextMut<'_, T> {
    /// Executes a synchronous computation `func` asynchronously on a new fiber.
    ///
    /// This function will convert the synchronous `func` into an asynchronous
    /// future. This is done by running `func` in a fiber on a separate native
    /// stack which can be suspended and resumed from.
    ///
    /// Most of the nitty-gritty here is how we juggle the various contexts
    /// necessary to suspend the fiber later on and poll sub-futures. It's hoped
    /// that the various comments are illuminating as to what's going on here.
    #[cfg(feature = "async")]
    pub(crate) async fn on_fiber<R>(
        &mut self,
        func: impl FnOnce(&mut StoreContextMut<'_, T>) -> R + Send,
    ) -> Result<R>
    where
        T: Send,
    {
        let config = self.engine().config();
        debug_assert!(self.0.async_support());
        debug_assert!(config.async_stack_size > 0);

        let mut slot = None;
        let future = {
            let current_poll_cx = self.0.async_state.current_poll_cx.get();
            let current_suspend = self.0.async_state.current_suspend.get();
            let stack = self.engine().allocator().allocate_fiber_stack()?;

            let engine = self.engine().clone();
            let slot = &mut slot;
            let fiber = wasmtime_fiber::Fiber::new(stack, move |keep_going, suspend| {
                // First check and see if we were interrupted/dropped, and only
                // continue if we haven't been.
                keep_going?;

                // Configure our store's suspension context for the rest of the
                // execution of this fiber. Note that a raw pointer is stored here
                // which is only valid for the duration of this closure.
                // Consequently we at least replace it with the previous value when
                // we're done. This reset is also required for correctness because
                // otherwise our value will overwrite another active fiber's value.
                // There should be a test that segfaults in `async_functions.rs` if
                // this `Reset` is removed.
                unsafe {
                    let _reset = Reset(current_suspend, *current_suspend);
                    *current_suspend = suspend;

                    *slot = Some(func(self));
                    Ok(())
                }
            })?;

            // Once we have the fiber representing our synchronous computation, we
            // wrap that in a custom future implementation which does the
            // translation from the future protocol to our fiber API.
            FiberFuture {
                fiber,
                current_poll_cx,
                engine,
            }
        };
        future.await?;

        return Ok(slot.unwrap());

        struct FiberFuture<'a> {
            fiber: wasmtime_fiber::Fiber<'a, Result<()>, (), Result<()>>,
            current_poll_cx: *mut *mut Context<'static>,
            engine: Engine,
        }

        // This is surely the most dangerous `unsafe impl Send` in the entire
        // crate. There are two members in `FiberFuture` which cause it to not
        // be `Send`. One is `current_poll_cx` and is entirely uninteresting.
        // This is just used to manage `Context` pointers across `await` points
        // in the future, and requires raw pointers to get it to happen easily.
        // Nothing too weird about the `Send`-ness, values aren't actually
        // crossing threads.
        //
        // The really interesting piece is `fiber`. Now the "fiber" here is
        // actual honest-to-god Rust code which we're moving around. What we're
        // doing is the equivalent of moving our thread's stack to another OS
        // thread. Turns out we, in general, have no idea what's on the stack
        // and would generally have no way to verify that this is actually safe
        // to do!
        //
        // Thankfully, though, Wasmtime has the power. Without being glib it's
        // actually worth examining what's on the stack. It's unfortunately not
        // super-local to this function itself. Our closure to `Fiber::new` runs
        // `func`, which is given to us from the outside. Thankfully, though, we
        // have tight control over this. Usage of `on_fiber` is typically done
        // *just* before entering WebAssembly itself, so we'll have a few stack
        // frames of Rust code (all in Wasmtime itself) before we enter wasm.
        //
        // Once we've entered wasm, well then we have a whole bunch of wasm
        // frames on the stack. We've got this nifty thing called Cranelift,
        // though, which allows us to also have complete control over everything
        // on the stack!
        //
        // Finally, when wasm switches back to the fiber's starting pointer
        // (this future we're returning) then it means wasm has reentered Rust.
        // Suspension can only happen via the `block_on` function of an
        // `AsyncCx`. This, conveniently, also happens entirely in Wasmtime
        // controlled code!
        //
        // There's an extremely important point that should be called out here.
        // User-provided futures **are not on the stack** during suspension
        // points. This is extremely crucial because we in general cannot reason
        // about Send/Sync for stack-local variables since rustc doesn't analyze
        // them at all. With our construction, though, we are guaranteed that
        // Wasmtime owns all stack frames between the stack of a fiber and when
        // the fiber suspends (and it could move across threads). At this time
        // the only user-provided piece of data on the stack is the future
        // itself given to us. Lo-and-behold as you might notice the future is
        // required to be `Send`!
        //
        // What this all boils down to is that we, as the authors of Wasmtime,
        // need to be extremely careful that on the async fiber stack we only
        // store Send things. For example we can't start using `Rc` willy nilly
        // by accident and leave a copy in TLS somewhere. (Similarly we have to
        // be ready for TLS to change while we're executing wasm code between
        // suspension points.)
        //
        // While somewhat onerous it shouldn't be too hard (the TLS bit is
        // the hardest bit so far). This does mean, though, that no user should
        // ever have to worry about the `Send`-ness of Wasmtime. If rustc says
        // it's ok, then it's ok.
        //
        // With all that in mind we unsafely assert here that wasmtime is
        // correct. We declare the fiber as only containing Send data on its
        // stack, despite not knowing for sure at compile time that this is
        // correct. That's what `unsafe` in Rust is all about, though, right?
        unsafe impl Send for FiberFuture<'_> {}

        impl Future for FiberFuture<'_> {
            type Output = Result<()>;

            fn poll(mut self: Pin<&mut Self>, cx: &mut Context) -> Poll<Self::Output> {
                // We need to carry over this `cx` into our fiber's runtime
                // for when it tries to poll sub-futures that are created. Doing
                // this must be done unsafely, however, since `cx` is only alive
                // for this one singular function call. Here we do a `transmute`
                // to extend the lifetime of `Context` so it can be stored in
                // our `Store`, and then we replace the current polling context
                // with this one.
                //
                // Note that the replace is done for weird situations where
                // futures might be switching contexts and there's multiple
                // wasmtime futures in a chain of futures.
                //
                // On exit from this function, though, we reset the polling
                // context back to what it was to signify that `Store` no longer
                // has access to this pointer.
                unsafe {
                    let _reset = Reset(self.current_poll_cx, *self.current_poll_cx);
                    *self.current_poll_cx =
                        std::mem::transmute::<&mut Context<'_>, *mut Context<'static>>(cx);

                    // After that's set up we resume execution of the fiber, which
                    // may also start the fiber for the first time. This either
                    // returns `Ok` saying the fiber finished (yay!) or it returns
                    // `Err` with the payload passed to `suspend`, which in our case
                    // is `()`. If `Err` is returned that means the fiber polled a
                    // future but it said "Pending", so we propagate that here.
                    match self.fiber.resume(Ok(())) {
                        Ok(result) => Poll::Ready(result),
                        Err(()) => Poll::Pending,
                    }
                }
            }
        }

        // Dropping futures is pretty special in that it means the future has
        // been requested to be cancelled. Here we run the risk of dropping an
        // in-progress fiber, and if we were to do nothing then the fiber would
        // leak all its owned stack resources.
        //
        // To handle this we implement `Drop` here and, if the fiber isn't done,
        // resume execution of the fiber saying "hey please stop you're
        // interrupted". Our `Trap` created here (which has the stack trace
        // of whomever dropped us) will then get propagated in whatever called
        // `block_on`, and the idea is that the trap propagates all the way back
        // up to the original fiber start, finishing execution.
        //
        // We don't actually care about the fiber's return value here (no one's
        // around to look at it), we just assert the fiber finished to
        // completion.
        impl Drop for FiberFuture<'_> {
            fn drop(&mut self) {
                if !self.fiber.done() {
                    let result = self.fiber.resume(Err(anyhow!("future dropped")));
                    // This resumption with an error should always complete the
                    // fiber. While it's technically possible for host code to
                    // catch the trap and re-resume, we'd ideally like to signal
                    // to callers that they shouldn't be doing that.
                    debug_assert!(result.is_ok());
                }

                unsafe {
                    self.engine
                        .allocator()
                        .deallocate_fiber_stack(self.fiber.stack());
                }
            }
        }
    }
}

#[cfg(feature = "async")]
pub struct AsyncCx {
    current_suspend: *mut *const wasmtime_fiber::Suspend<Result<()>, (), Result<()>>,
    current_poll_cx: *mut *mut Context<'static>,
}

#[cfg(feature = "async")]
impl AsyncCx {
    /// Blocks on the asynchronous computation represented by `future` and
    /// produces the result here, in-line.
    ///
    /// This function is designed to only work when it's currently executing on
    /// a native fiber. This fiber provides the ability for us to handle the
    /// future's `Pending` state as "jump back to whomever called the fiber in
    /// an asynchronous fashion and propagate `Pending`". This tight coupling
    /// with `on_fiber` is what powers the asynchronicity of calling wasm.
    /// Note that the asynchronous part only applies to host functions, wasm
    /// itself never really does anything asynchronous at this time.
    ///
    /// This function takes a `future` and will (appear to) synchronously wait
    /// on the result. While this function is executing it will fiber switch
    /// to-and-from the original frame calling `on_fiber` which should be a
    /// guarantee due to how async stores are configured.
    ///
    /// The return value here is either the output of the future `T`, or a trap
    /// which represents that the asynchronous computation was cancelled. It is
    /// not recommended to catch the trap and try to keep executing wasm, so
    /// we've tried to liberally document this.
    pub unsafe fn block_on<U>(
        &self,
        mut future: Pin<&mut (dyn Future<Output = U> + Send)>,
    ) -> Result<U> {
        // Take our current `Suspend` context which was configured as soon as
        // our fiber started. Note that we must load it at the front here and
        // save it on our stack frame. While we're polling the future other
        // fibers may be started for recursive computations, and the current
        // suspend context is only preserved at the edges of the fiber, not
        // during the fiber itself.
        //
        // For a little bit of extra safety we also replace the current value
        // with null to try to catch any accidental bugs on our part early.
        // This is all pretty unsafe so we're trying to be careful...
        //
        // Note that there should be a segfaulting test in `async_functions.rs`
        // if this `Reset` is removed.
        let suspend = *self.current_suspend;
        let _reset = Reset(self.current_suspend, suspend);
        *self.current_suspend = ptr::null();
        assert!(!suspend.is_null());

        loop {
            let future_result = {
                let poll_cx = *self.current_poll_cx;
                let _reset = Reset(self.current_poll_cx, poll_cx);
                *self.current_poll_cx = ptr::null_mut();
                assert!(!poll_cx.is_null());
                future.as_mut().poll(&mut *poll_cx)
            };

            match future_result {
                Poll::Ready(t) => break Ok(t),
                Poll::Pending => {}
            }

            let before = wasmtime_runtime::TlsRestore::take();
            let res = (*suspend).suspend(());
            before.replace();
            res?;
        }
    }
}

unsafe impl<T> wasmtime_runtime::Store for StoreInner<T> {
    fn vmruntime_limits(&self) -> *mut VMRuntimeLimits {
        <StoreOpaque>::vmruntime_limits(self)
    }

    fn epoch_ptr(&self) -> *const AtomicU64 {
        self.engine.epoch_counter() as *const _
    }

    fn externref_activations_table(
        &mut self,
    ) -> (
        &mut VMExternRefActivationsTable,
        &dyn wasmtime_runtime::ModuleInfoLookup,
    ) {
        let inner = &mut self.inner;
        (&mut inner.externref_activations_table, &inner.modules)
    }

    fn memory_growing(
        &mut self,
        current: usize,
        desired: usize,
        maximum: Option<usize>,
    ) -> Result<bool, anyhow::Error> {
        match self.limiter {
            Some(ResourceLimiterInner::Sync(ref mut limiter)) => {
                Ok(limiter(&mut self.data).memory_growing(current, desired, maximum))
            }
            #[cfg(feature = "async")]
            Some(ResourceLimiterInner::Async(ref mut limiter)) => unsafe {
                Ok(self
                    .inner
                    .async_cx()
                    .expect("ResourceLimiterAsync requires async Store")
                    .block_on(
                        limiter(&mut self.data)
                            .memory_growing(current, desired, maximum)
                            .as_mut(),
                    )?)
            },
            None => Ok(true),
        }
    }

    fn memory_grow_failed(&mut self, error: &anyhow::Error) {
        match self.limiter {
            Some(ResourceLimiterInner::Sync(ref mut limiter)) => {
                limiter(&mut self.data).memory_grow_failed(error)
            }
            #[cfg(feature = "async")]
            Some(ResourceLimiterInner::Async(ref mut limiter)) => {
                limiter(&mut self.data).memory_grow_failed(error)
            }
            None => {}
        }
    }

    fn table_growing(
        &mut self,
        current: u32,
        desired: u32,
        maximum: Option<u32>,
    ) -> Result<bool, anyhow::Error> {
        // Need to borrow async_cx before the mut borrow of the limiter.
        // self.async_cx() panics when used with a non-async store, so
        // wrap this in an option.
        #[cfg(feature = "async")]
        let async_cx = if self.async_support() {
            Some(self.async_cx().unwrap())
        } else {
            None
        };

        match self.limiter {
            Some(ResourceLimiterInner::Sync(ref mut limiter)) => {
                Ok(limiter(&mut self.data).table_growing(current, desired, maximum))
            }
            #[cfg(feature = "async")]
            Some(ResourceLimiterInner::Async(ref mut limiter)) => unsafe {
                Ok(async_cx
                    .expect("ResourceLimiterAsync requires async Store")
                    .block_on(
                        limiter(&mut self.data)
                            .table_growing(current, desired, maximum)
                            .as_mut(),
                    )?)
            },
            None => Ok(true),
        }
    }

    fn table_grow_failed(&mut self, error: &anyhow::Error) {
        match self.limiter {
            Some(ResourceLimiterInner::Sync(ref mut limiter)) => {
                limiter(&mut self.data).table_grow_failed(error)
            }
            #[cfg(feature = "async")]
            Some(ResourceLimiterInner::Async(ref mut limiter)) => {
                limiter(&mut self.data).table_grow_failed(error)
            }
            None => {}
        }
    }

    fn out_of_gas(&mut self) -> Result<(), anyhow::Error> {
        return match &mut self.out_of_gas_behavior {
            OutOfGas::Trap => Err(Trap::OutOfFuel.into()),
            #[cfg(feature = "async")]
            OutOfGas::InjectFuel {
                injection_count,
                fuel_to_inject,
            } => {
                if *injection_count == 0 {
                    return Err(Trap::OutOfFuel.into());
                }
                *injection_count -= 1;
                let fuel = *fuel_to_inject;
                self.async_yield_impl()?;
                if fuel > 0 {
                    self.add_fuel(fuel).unwrap();
                }
                Ok(())
            }
            #[cfg(not(feature = "async"))]
            OutOfGas::InjectFuel { .. } => unreachable!(),
        };
    }

    fn new_epoch(&mut self) -> Result<u64, anyhow::Error> {
        // Temporarily take the configured behavior to avoid mutably borrowing
        // multiple times.
        let mut behavior = std::mem::take(&mut self.epoch_deadline_behavior);
        let delta_result = match &mut behavior {
            EpochDeadline::Trap => Err(Trap::Interrupt.into()),
            EpochDeadline::Callback(callback) => {
                let delta = callback((&mut *self).as_context_mut())?;
                // Set a new deadline and return the new epoch deadline so
                // the Wasm code doesn't have to reload it.
                self.set_epoch_deadline(delta);
                Ok(self.get_epoch_deadline())
            }
            #[cfg(feature = "async")]
            EpochDeadline::YieldAndExtendDeadline { delta } => {
                let delta = *delta;
                // Do the async yield. May return a trap if the future was
                // canceled while we're yielded.
                self.async_yield_impl()?;
                // Set a new deadline.
                self.set_epoch_deadline(delta);

                // Return the new epoch deadline so the Wasm code
                // doesn't have to reload it.
                Ok(self.get_epoch_deadline())
            }
        };

        // Put back the original behavior which was replaced by `take`.
        self.epoch_deadline_behavior = behavior;
        delta_result
    }
}
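// The `mem::take` dance in `new_epoch` above is a general trick: moving the
// callback out of `self` first lets it be invoked with `&mut self` without
// two overlapping mutable borrows. A minimal sketch with hypothetical
// stand-in types (not Wasmtime's actual `EpochDeadline` machinery):

```rust
#[derive(Default)]
enum Behavior {
    #[default]
    Trap,
    Callback(fn(&mut State) -> u64),
}

#[derive(Default)]
struct State {
    behavior: Behavior,
    deadline: u64,
}

impl State {
    fn new_epoch(&mut self) -> Option<u64> {
        // Move the behavior out so the callback below may borrow `self`.
        let behavior = std::mem::take(&mut self.behavior);
        let result = match &behavior {
            Behavior::Trap => None,
            Behavior::Callback(f) => {
                // Calling `f(self)` directly on `self.behavior` would be
                // rejected by the borrow checker; the take above avoids that.
                let delta = f(self);
                self.deadline += delta;
                Some(self.deadline)
            }
        };
        // Put the original behavior back.
        self.behavior = behavior;
        result
    }
}
```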

impl<T> StoreInner<T> {
    pub(crate) fn set_epoch_deadline(&mut self, delta: u64) {
        // Set a new deadline based on the "epoch deadline delta".
        //
        // Safety: this is safe because the epoch deadline in the
        // `VMRuntimeLimits` is accessed only here and by Wasm guest code
        // running in this store, and we have a `&mut self` here.
        //
        // Also, note that when this update is performed while Wasm is
        // on the stack, the Wasm will reload the new value once we
        // return into it.
        let epoch_deadline = unsafe { (*self.vmruntime_limits()).epoch_deadline.get_mut() };
        *epoch_deadline = self.engine().current_epoch() + delta;
    }

    fn epoch_deadline_trap(&mut self) {
        self.epoch_deadline_behavior = EpochDeadline::Trap;
    }

    fn epoch_deadline_callback(
        &mut self,
        callback: Box<dyn FnMut(StoreContextMut<T>) -> Result<u64> + Send + Sync>,
    ) {
        self.epoch_deadline_behavior = EpochDeadline::Callback(callback);
    }

    fn epoch_deadline_async_yield_and_update(&mut self, delta: u64) {
        assert!(
            self.async_support(),
            "cannot use `epoch_deadline_async_yield_and_update` without enabling async support in the config"
        );
        #[cfg(feature = "async")]
        {
            self.epoch_deadline_behavior = EpochDeadline::YieldAndExtendDeadline { delta };
        }
        drop(delta); // suppress warning in non-async build
    }

    fn get_epoch_deadline(&self) -> u64 {
        // Safety: this is safe because, as above, it is only invoked
        // from within `new_epoch` which is called from guest Wasm
        // code, which will have an exclusive borrow on the Store.
        let epoch_deadline = unsafe { (*self.vmruntime_limits()).epoch_deadline.get_mut() };
        *epoch_deadline
    }
}

impl<T: Default> Default for Store<T> {
    fn default() -> Store<T> {
        Store::new(&Engine::default(), T::default())
    }
}

impl<T: fmt::Debug> fmt::Debug for Store<T> {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        let inner = &**self.inner as *const StoreInner<T>;
        f.debug_struct("Store")
            .field("inner", &inner)
            .field("data", &self.inner.data)
            .finish()
    }
}

impl<T> Drop for Store<T> {
    fn drop(&mut self) {
        // for documentation on this `unsafe`, see `into_data`.
        unsafe {
            ManuallyDrop::drop(&mut self.inner.data);
            ManuallyDrop::drop(&mut self.inner);
        }
    }
}

impl Drop for StoreOpaque {
    fn drop(&mut self) {
        // NB it's important that this destructor does not access `self.data`.
        // That is deallocated by `Drop for Store<T>` above.

        unsafe {
            let allocator = self.engine.allocator();
            let ondemand = OnDemandInstanceAllocator::default();
            for instance in self.instances.iter_mut() {
                if instance.ondemand {
                    ondemand.deallocate(&mut instance.handle);
                } else {
                    allocator.deallocate(&mut instance.handle);
                }
            }
            ondemand.deallocate(&mut self.default_caller);

            // See documentation for these fields on `StoreOpaque` for why they
            // must be dropped in this order.
            ManuallyDrop::drop(&mut self.store_data);
            ManuallyDrop::drop(&mut self.rooted_host_funcs);
        }
    }
}

impl wasmtime_runtime::ModuleInfoLookup for ModuleRegistry {
    fn lookup(&self, pc: usize) -> Option<&dyn ModuleInfo> {
        self.lookup_module(pc)
    }
}

struct Reset<T: Copy>(*mut T, T);

impl<T: Copy> Drop for Reset<T> {
    fn drop(&mut self) {
        unsafe {
            *self.0 = self.1;
        }
    }
}
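// `Reset` above is a drop guard: it captures a location and a value, and
// writes the value back when it goes out of scope, even on early return or
// unwind. A safe-Rust sketch of the same pattern using `Cell` instead of the
// raw pointers the real `Reset` needs (the `ResetOnDrop` name is
// hypothetical):

```rust
use std::cell::Cell;

struct ResetOnDrop<'a, T: Copy> {
    slot: &'a Cell<T>,
    saved: T,
}

impl<'a, T: Copy> ResetOnDrop<'a, T> {
    /// Remember the slot's current value so it can be restored later.
    fn new(slot: &'a Cell<T>) -> Self {
        ResetOnDrop {
            slot,
            saved: slot.get(),
        }
    }
}

impl<T: Copy> Drop for ResetOnDrop<'_, T> {
    fn drop(&mut self) {
        // Restore the saved value no matter how the scope was exited.
        self.slot.set(self.saved);
    }
}
```

// This is why the code above can overwrite `current_suspend` and
// `current_poll_cx` freely: the guard guarantees the previous value is
// reinstated when the scope ends.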