// SPDX-License-Identifier: Apache-2.0 OR MIT

/*!
<!-- tidy:crate-doc:start -->
Portable atomic types including support for 128-bit atomics, atomic float, etc.

- Provide all atomic integer types (`Atomic{I,U}{8,16,32,64}`) for all targets that can use atomic CAS. (i.e., all targets that can use `std`, and most no-std targets)
- Provide `AtomicI128` and `AtomicU128`.
- Provide `AtomicF32` and `AtomicF64`. ([optional, requires the `float` feature](#optional-features-float))
- Provide atomic load/store for targets where atomic is not available at all in the standard library. (RISC-V without A-extension, MSP430, AVR)
- Provide atomic CAS for targets where atomic CAS is not available in the standard library. (thumbv6m, pre-v6 Arm, RISC-V without A-extension, MSP430, AVR, Xtensa, etc.) (always enabled for MSP430 and AVR, [optional](#optional-features-critical-section) otherwise)
- Provide stable equivalents of the standard library's atomic types' unstable APIs, such as [`AtomicPtr::fetch_*`](https://github.com/rust-lang/rust/issues/99108).
- Make features that require newer compilers, such as [`fetch_{max,min}`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.fetch_max), [`fetch_update`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.fetch_update), [`as_ptr`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.as_ptr), [`from_ptr`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicUsize.html#method.from_ptr), [`AtomicBool::fetch_not`](https://doc.rust-lang.org/std/sync/atomic/struct.AtomicBool.html#method.fetch_not) and [stronger CAS failure ordering](https://github.com/rust-lang/rust/pull/98383) available on Rust 1.34+.
- Provide workarounds for bugs in the standard library's atomic-related APIs, such as [rust-lang/rust#100650], `fence`/`compiler_fence` on MSP430 causing LLVM errors, etc.

<!-- TODO:
- mention Atomic{I,U}*::fetch_neg, Atomic{I*,U*,Ptr}::bit_*, etc.
- mention optimizations not available in the standard library's equivalents
-->

The portable-atomic version of `std::sync::Arc` is provided by the [portable-atomic-util](https://github.com/taiki-e/portable-atomic/tree/HEAD/portable-atomic-util) crate.
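
A minimal sketch of using that crate's `Arc` (assuming `portable-atomic-util` is added as a dependency with its `alloc` feature enabled):

```ignore
use portable_atomic_util::Arc;

let a = Arc::new(42);
let b = Arc::clone(&a);
assert_eq!(*a, *b);
```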

## Usage

Add this to your `Cargo.toml`:

```toml
[dependencies]
portable-atomic = "1"
```
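
The API mirrors `core::sync::atomic`, so existing code usually only needs the import changed. A minimal sketch:

```rust
use portable_atomic::{AtomicUsize, Ordering};

static COUNTER: AtomicUsize = AtomicUsize::new(0);

// Same API as core::sync::atomic::AtomicUsize.
COUNTER.fetch_add(1, Ordering::Relaxed);
assert_eq!(COUNTER.load(Ordering::Relaxed), 1);
```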

The default features are mainly for users who use atomics larger than the pointer width.
If you don't need them, disabling the default features may reduce code size and compile time slightly.

```toml
[dependencies]
portable-atomic = { version = "1", default-features = false }
```

If your crate supports no-std environments and requires atomic CAS, enabling the `require-cas` feature will allow `portable-atomic` to display a [helpful error message](https://github.com/taiki-e/portable-atomic/pull/100) to users on targets that require additional action on the user's side to provide atomic CAS.

```toml
[dependencies]
portable-atomic = { version = "1.3", default-features = false, features = ["require-cas"] }
```

## 128-bit atomics support

Native 128-bit atomic operations are available on x86_64 (Rust 1.59+), AArch64 (Rust 1.59+), riscv64 (Rust 1.82+), powerpc64 (nightly only), s390x (nightly only), and Arm64EC (nightly only); otherwise, the fallback implementation is used.

On x86_64, even if `cmpxchg16b` is not available at compile-time (note: the `cmpxchg16b` target feature is enabled by default only on Apple and Windows (except Windows 7) targets), run-time detection checks whether `cmpxchg16b` is available. If `cmpxchg16b` is not available at compile-time and is not detected at run-time, the fallback implementation is used. See also the [`portable_atomic_no_outline_atomics`](#optional-cfg-no-outline-atomics) cfg.
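
Whichever implementation is selected, the API is the same. A minimal sketch:

```rust
use portable_atomic::{AtomicU128, Ordering};

static ID: AtomicU128 = AtomicU128::new(0);

// Uses native 128-bit instructions where available,
// and the fallback implementation otherwise.
let prev = ID.fetch_add(1, Ordering::Relaxed);
assert_eq!(prev, 0);
```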

They are usually implemented using inline assembly. Under Miri and ThreadSanitizer, which do not support inline assembly, core intrinsics are used instead of inline assembly where possible.

See the [`atomic128` module's readme](https://github.com/taiki-e/portable-atomic/blob/HEAD/src/imp/atomic128/README.md) for details.

## Optional features

- **`fallback`** *(enabled by default)*<br>
  Enable fallback implementations.

  Disabling this makes only the atomic types for which the platform natively supports atomic operations available.

- <a name="optional-features-float"></a>**`float`**<br>
  Provide `AtomicF{32,64}`.

  Note that most `fetch_*` operations on atomic floats are implemented using CAS loops, which can be slower than the equivalent operations on atomic integers. ([GPU targets have atomic instructions for floats, so we plan to use these instructions for GPU targets in the future.](https://github.com/taiki-e/portable-atomic/issues/34))
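
  A minimal sketch (compiled only when the `float` feature is enabled):

  ```rust
  # #[cfg(feature = "float")] {
  use portable_atomic::{AtomicF32, Ordering};

  let a = AtomicF32::new(1.5);
  // Implemented as a CAS loop on most targets.
  a.fetch_add(0.5, Ordering::Relaxed);
  assert_eq!(a.load(Ordering::Relaxed), 2.0);
  # }
  ```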

- **`std`**<br>
  Use `std`.

- <a name="optional-features-require-cas"></a>**`require-cas`**<br>
  Emit a compile error if atomic CAS is not available. See the [Usage](#usage) section and [#100](https://github.com/taiki-e/portable-atomic/pull/100) for more.

- <a name="optional-features-serde"></a>**`serde`**<br>
  Implement `serde::{Serialize,Deserialize}` for atomic types.

  Note:
  - The MSRV when this feature is enabled depends on the MSRV of [serde].
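
  A minimal sketch (hypothetical struct and field names; assumes the `serde` feature of portable-atomic and a `serde` dependency with the `derive` feature):

  ```ignore
  use portable_atomic::AtomicUsize;
  use serde::{Deserialize, Serialize};

  #[derive(Serialize, Deserialize)]
  struct Stats {
      // Serialized as the underlying integer value.
      hits: AtomicUsize,
  }
  ```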

- <a name="optional-features-critical-section"></a>**`critical-section`**<br>
  When this feature is enabled, this crate uses [critical-section] to provide atomic CAS for targets where
  it is not natively available. When enabling it, you should provide a suitable critical section implementation
  for the current target; see the [critical-section] documentation for details on how to do so.

  `critical-section` support is useful for getting atomic CAS when the [`unsafe-assume-single-core` feature](#optional-features-unsafe-assume-single-core) can't be used,
  such as on multi-core targets, unprivileged code running under some RTOS, or environments where disabling interrupts
  needs extra care due to e.g. real-time requirements.

  Note that with the `critical-section` feature, critical sections are taken for all atomic operations, while with the
  [`unsafe-assume-single-core` feature](#optional-features-unsafe-assume-single-core) some operations don't require disabling interrupts (loads and stores; additionally,
  on MSP430, `add`, `sub`, `and`, `or`, `xor`, `not`). Therefore, for better performance, if
  all the `critical-section` implementation for your target does is disable interrupts, prefer using the
  `unsafe-assume-single-core` feature instead.
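
  A minimal sketch of providing an implementation, using the [critical-section] crate's `Impl` trait and `set_impl!` macro (the bodies are placeholders; a real implementation must actually disable and restore interrupts for the target):

  ```ignore
  use critical_section::RawRestoreState;

  struct SingleCoreCriticalSection;
  critical_section::set_impl!(SingleCoreCriticalSection);

  unsafe impl critical_section::Impl for SingleCoreCriticalSection {
      unsafe fn acquire() -> RawRestoreState {
          // Disable interrupts and return the previous interrupt state.
          todo!()
      }

      unsafe fn release(restore_state: RawRestoreState) {
          // Restore the interrupt state returned by `acquire`.
          let _ = restore_state;
          todo!()
      }
  }
  ```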

  Note:
  - The MSRV when this feature is enabled depends on the MSRV of [critical-section].
  - It is usually *not* recommended to always enable this feature in dependencies of a library.

    Enabling this feature will prevent the end user from having the chance to take advantage of other (potentially) more efficient implementations ([implementations provided by the `unsafe-assume-single-core` feature, default implementations on MSP430 and AVR](#optional-features-unsafe-assume-single-core), the implementation proposed in [#60], etc. Other systems may also be supported in the future).

    The recommended approach for libraries is to leave it up to the end user whether or not to enable this feature. (However, it may make sense to enable this feature by default for libraries specific to a platform where other implementations are known not to work.)

    As an example, the `Cargo.toml` of an end user who uses a crate that provides a critical-section implementation and a crate that optionally depends on portable-atomic would be expected to look like this:

    ```toml
    [dependencies]
    portable-atomic = { version = "1", default-features = false, features = ["critical-section"] }
    crate-provides-critical-section-impl = "..."
    crate-uses-portable-atomic-as-feature = { version = "...", features = ["portable-atomic"] }
    ```

- <a name="optional-features-unsafe-assume-single-core"></a>**`unsafe-assume-single-core`**<br>
  Assume that the target is single-core.
  When this feature is enabled, this crate provides atomic CAS for targets where atomic CAS is not available in the standard library by disabling interrupts.

  This feature is `unsafe`; note the following safety requirements:
  - Enabling this feature for multi-core systems is always **unsound**.
  - This uses privileged instructions to disable interrupts, so it usually doesn't work in unprivileged mode.
    Enabling this feature in an environment where privileged instructions are not available, or where the instructions used are not sufficient to disable interrupts in the system, is also usually considered **unsound**, although the details are system-dependent.

    The following are known cases:
    - On pre-v6 Arm, this disables only IRQs by default. For many systems (e.g., GBA) this is enough. If the system needs to disable both IRQs and FIQs, you need to enable the `disable-fiq` feature as well.
    - On RISC-V without A-extension, this generates code for machine-mode (M-mode) by default. If you enable the `s-mode` feature as well, this generates code for supervisor-mode (S-mode). In particular, `qemu-system-riscv*` uses [OpenSBI](https://github.com/riscv-software-src/opensbi) as the default firmware, and programs running on top of it run in S-mode.

    See also the [`interrupt` module's readme](https://github.com/taiki-e/portable-atomic/blob/HEAD/src/imp/interrupt/README.md).

  Consider using the [`critical-section` feature](#optional-features-critical-section) for systems that cannot use this feature.

  It is **very strongly discouraged** to enable this feature in libraries that depend on `portable-atomic`. The recommended approach for libraries is to leave it up to the end user whether or not to enable this feature. (However, it may make sense to enable this feature by default for libraries specific to a platform where it is guaranteed to always be sound, for example in a hardware abstraction layer targeting a single-core chip.)

  Armv6-M (thumbv6m), pre-v6 Arm (e.g., thumbv4t, thumbv5te), RISC-V without A-extension, and Xtensa are currently supported.

  Since all MSP430 and AVR targets are single-core, we always provide atomic CAS for them without this feature.

  Enabling this feature for targets that have atomic CAS will result in a compile error.

  Feel free to submit an issue if your target is not supported yet.

## Optional cfg

One of the ways to enable a cfg is to set [rustflags in the cargo config](https://doc.rust-lang.org/cargo/reference/config.html#targettriplerustflags):

```toml
# .cargo/config.toml
[target.<target>]
rustflags = ["--cfg", "portable_atomic_no_outline_atomics"]
```

Or set the environment variable:

```sh
RUSTFLAGS="--cfg portable_atomic_no_outline_atomics" cargo ...
```

- <a name="optional-cfg-unsafe-assume-single-core"></a>**`--cfg portable_atomic_unsafe_assume_single_core`**<br>
  Since 1.4.0, this cfg is an alias of the [`unsafe-assume-single-core` feature](#optional-features-unsafe-assume-single-core).

  Originally, we were providing these as cfgs instead of features, but based on a strong request from the embedded ecosystem, we have agreed to provide them as features as well. See [#94](https://github.com/taiki-e/portable-atomic/pull/94) for more.

- <a name="optional-cfg-no-outline-atomics"></a>**`--cfg portable_atomic_no_outline_atomics`**<br>
  Disable dynamic dispatching by run-time CPU feature detection.

  If dynamic dispatching by run-time CPU feature detection is enabled, it allows maintaining support for older CPUs while using features that are not supported on older CPUs, such as CMPXCHG16B (x86_64) and FEAT_LSE/FEAT_LSE2 (AArch64).

  Note:
  - Dynamic detection is currently enabled only on Rust 1.59+ for x86_64 and AArch64, on Rust 1.82+ for RISC-V (disabled by default), and on nightly only for powerpc64 (disabled by default) and Arm64EC; otherwise, it works the same as when this cfg is set.
  - If the required target features are enabled at compile-time, the atomic operations are inlined.
  - This is compatible with no-std (as with all features except `std`).
  - On some targets, run-time detection is disabled by default, mainly for compatibility with older versions of operating systems or incomplete build environments, and can be enabled by `--cfg portable_atomic_outline_atomics`. (When both cfgs are enabled, the `*_no_*` cfg takes precedence.)
  - Some AArch64 targets enable LLVM's `outline-atomics` target feature by default, so if you set this cfg, you may want to disable that as well. (portable-atomic's outline-atomics does not depend on the compiler-rt symbols, so even if you need to disable LLVM's outline-atomics, you may not need to disable portable-atomic's outline-atomics.)

  See also the [`atomic128` module's readme](https://github.com/taiki-e/portable-atomic/blob/HEAD/src/imp/atomic128/README.md).

## Related Projects

- [atomic-maybe-uninit]: Atomic operations on potentially uninitialized integers.
- [atomic-memcpy]: Byte-wise atomic memcpy.

[#60]: https://github.com/taiki-e/portable-atomic/issues/60
[atomic-maybe-uninit]: https://github.com/taiki-e/atomic-maybe-uninit
[atomic-memcpy]: https://github.com/taiki-e/atomic-memcpy
[critical-section]: https://github.com/rust-embedded/critical-section
[rust-lang/rust#100650]: https://github.com/rust-lang/rust/issues/100650
[serde]: https://github.com/serde-rs/serde

<!-- tidy:crate-doc:end -->
*/

#![no_std]
#![doc(test(
    no_crate_inject,
    attr(
        deny(warnings, rust_2018_idioms, single_use_lifetimes),
        allow(dead_code, unused_variables)
    )
))]
#![cfg_attr(not(portable_atomic_no_unsafe_op_in_unsafe_fn), warn(unsafe_op_in_unsafe_fn))] // unsafe_op_in_unsafe_fn requires Rust 1.52
#![cfg_attr(portable_atomic_no_unsafe_op_in_unsafe_fn, allow(unused_unsafe))]
#![warn(
    // Lints that may help when writing a public library.
    missing_debug_implementations,
    // missing_docs,
    clippy::alloc_instead_of_core,
    clippy::exhaustive_enums,
    clippy::exhaustive_structs,
    clippy::impl_trait_in_params,
    clippy::missing_inline_in_public_items,
    clippy::std_instead_of_alloc,
    clippy::std_instead_of_core,
    // Code outside of cfg(feature = "float") shouldn't use float.
    clippy::float_arithmetic,
)]
#![cfg_attr(not(portable_atomic_no_asm), warn(missing_docs))] // module-level #![allow(missing_docs)] doesn't work for macros on old rustc
#![allow(clippy::inline_always, clippy::used_underscore_items)]
// asm_experimental_arch
// AVR, MSP430, and Xtensa are tier 3 platforms and require nightly anyway.
// On tier 2 platforms (arm64ec, powerpc64, and s390x), we use a cfg set by the build script to
// determine whether this feature is available or not.
#![cfg_attr(
    all(
        not(portable_atomic_no_asm),
        any(
            target_arch = "avr",
            target_arch = "msp430",
            all(target_arch = "xtensa", portable_atomic_unsafe_assume_single_core),
            all(target_arch = "arm64ec", portable_atomic_unstable_asm_experimental_arch),
            all(target_arch = "powerpc64", portable_atomic_unstable_asm_experimental_arch),
            all(target_arch = "s390x", portable_atomic_unstable_asm_experimental_arch),
        ),
    ),
    feature(asm_experimental_arch)
)]
// Old nightly only
// These features are already stabilized or have already been removed from compilers,
// and can safely be enabled for old nightly as long as version detection works.
// - cfg(target_has_atomic)
// - asm! on Arm, AArch64, RISC-V, x86, x86_64
// - llvm_asm! on AVR (tier 3) and MSP430 (tier 3)
// - #[instruction_set] on non-Linux/Android pre-v6 Arm (tier 3)
// This also helps us test that our assembly code works with the minimum external
// LLVM version of the first rustc version in which inline assembly was stabilized.
#![cfg_attr(portable_atomic_unstable_cfg_target_has_atomic, feature(cfg_target_has_atomic))]
#![cfg_attr(
    all(
        portable_atomic_unstable_asm,
        any(
            target_arch = "aarch64",
            target_arch = "arm",
            target_arch = "riscv32",
            target_arch = "riscv64",
            target_arch = "x86",
            target_arch = "x86_64",
        ),
    ),
    feature(asm)
)]
#![cfg_attr(
    all(any(target_arch = "avr", target_arch = "msp430"), portable_atomic_no_asm),
    feature(llvm_asm)
)]
#![cfg_attr(
    all(
        target_arch = "arm",
        portable_atomic_unstable_isa_attribute,
        any(test, portable_atomic_unsafe_assume_single_core),
        not(any(target_feature = "v6", portable_atomic_target_feature = "v6")),
        not(target_has_atomic = "ptr"),
    ),
    feature(isa_attribute)
)]
// Miri and/or ThreadSanitizer only
// They do not support inline assembly, so we need to use unstable features instead.
// Since they require nightly compilers anyway, we can use the unstable features.
// This is not an ideal situation, but it is still better than always using the lock-based
// fallback and causing memory ordering problems to be missed by these checkers.
#![cfg_attr(
    all(
        any(
            target_arch = "aarch64",
            target_arch = "arm64ec",
            target_arch = "powerpc64",
            target_arch = "riscv64",
            target_arch = "s390x",
        ),
        any(miri, portable_atomic_sanitize_thread),
    ),
    allow(internal_features)
)]
#![cfg_attr(
    all(
        any(
            target_arch = "aarch64",
            target_arch = "arm64ec",
            target_arch = "powerpc64",
            target_arch = "riscv64",
            target_arch = "s390x",
        ),
        any(miri, portable_atomic_sanitize_thread),
    ),
    feature(core_intrinsics)
)]
// docs.rs only (cfg is enabled by docs.rs, not the build script)
#![cfg_attr(docsrs, feature(doc_cfg))]
#![cfg_attr(
    all(
        portable_atomic_no_atomic_load_store,
        not(any(
            target_arch = "avr",
            target_arch = "bpf",
            target_arch = "msp430",
            target_arch = "riscv32",
            target_arch = "riscv64",
            feature = "critical-section",
        )),
    ),
    allow(unused_imports, unused_macros)
)]

// There are currently no 128-bit or higher builtin targets.
// (Although some of our generic code is written with the future
// addition of 128-bit targets in mind.)
// Note that Rust (and C99) pointers must be at least 16-bit (i.e., 8-bit targets are impossible): https://github.com/rust-lang/rust/pull/49305
#[cfg(not(any(
    target_pointer_width = "16",
    target_pointer_width = "32",
    target_pointer_width = "64",
)))]
compile_error!(
    "portable-atomic currently only supports targets with {16,32,64}-bit pointer width; \
     if you need support for others, \
     please submit an issue at <https://github.com/taiki-e/portable-atomic>"
);

#[cfg(portable_atomic_unsafe_assume_single_core)]
#[cfg_attr(portable_atomic_no_cfg_target_has_atomic, cfg(not(portable_atomic_no_atomic_cas)))]
#[cfg_attr(not(portable_atomic_no_cfg_target_has_atomic), cfg(target_has_atomic = "ptr"))]
compile_error!(
    "`portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature) \
     is not compatible with targets that support atomic CAS;\n\
     see also <https://github.com/taiki-e/portable-atomic/issues/148> for troubleshooting"
);
#[cfg(portable_atomic_unsafe_assume_single_core)]
#[cfg_attr(portable_atomic_no_cfg_target_has_atomic, cfg(portable_atomic_no_atomic_cas))]
#[cfg_attr(not(portable_atomic_no_cfg_target_has_atomic), cfg(not(target_has_atomic = "ptr")))]
#[cfg(not(any(
    target_arch = "arm",
    target_arch = "avr",
    target_arch = "msp430",
    target_arch = "riscv32",
    target_arch = "riscv64",
    target_arch = "xtensa",
)))]
compile_error!(
    "`portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature) \
     is not supported yet on this target;\n\
     if you need unsafe-assume-single-core support for this target,\n\
     please submit an issue at <https://github.com/taiki-e/portable-atomic>"
);

#[cfg(portable_atomic_no_outline_atomics)]
#[cfg(not(any(
    target_arch = "aarch64",
    target_arch = "arm64ec",
    target_arch = "arm",
    target_arch = "powerpc64",
    target_arch = "riscv32",
    target_arch = "riscv64",
    target_arch = "x86_64",
)))]
compile_error!("`portable_atomic_no_outline_atomics` cfg is not compatible with this target");
#[cfg(portable_atomic_outline_atomics)]
#[cfg(not(any(
    target_arch = "aarch64",
    target_arch = "powerpc64",
    target_arch = "riscv32",
    target_arch = "riscv64",
)))]
compile_error!("`portable_atomic_outline_atomics` cfg is not compatible with this target");

#[cfg(portable_atomic_disable_fiq)]
#[cfg(not(all(
    target_arch = "arm",
    not(any(target_feature = "mclass", portable_atomic_target_feature = "mclass")),
)))]
compile_error!(
    "`portable_atomic_disable_fiq` cfg (`disable-fiq` feature) is only available on pre-v6 Arm"
);
#[cfg(portable_atomic_s_mode)]
#[cfg(not(any(target_arch = "riscv32", target_arch = "riscv64")))]
compile_error!("`portable_atomic_s_mode` cfg (`s-mode` feature) is only available on RISC-V");
#[cfg(portable_atomic_force_amo)]
#[cfg(not(any(target_arch = "riscv32", target_arch = "riscv64")))]
compile_error!("`portable_atomic_force_amo` cfg (`force-amo` feature) is only available on RISC-V");

#[cfg(portable_atomic_disable_fiq)]
#[cfg(not(portable_atomic_unsafe_assume_single_core))]
compile_error!(
    "`portable_atomic_disable_fiq` cfg (`disable-fiq` feature) may only be used together with `portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature)"
);
#[cfg(portable_atomic_s_mode)]
#[cfg(not(portable_atomic_unsafe_assume_single_core))]
compile_error!(
    "`portable_atomic_s_mode` cfg (`s-mode` feature) may only be used together with `portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature)"
);
#[cfg(portable_atomic_force_amo)]
#[cfg(not(portable_atomic_unsafe_assume_single_core))]
compile_error!(
    "`portable_atomic_force_amo` cfg (`force-amo` feature) may only be used together with `portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature)"
);

#[cfg(all(portable_atomic_unsafe_assume_single_core, feature = "critical-section"))]
compile_error!(
    "you may not enable the `critical-section` feature and the `portable_atomic_unsafe_assume_single_core` cfg (`unsafe-assume-single-core` feature) at the same time"
);

#[cfg(feature = "require-cas")]
#[cfg_attr(
    portable_atomic_no_cfg_target_has_atomic,
    cfg(not(any(
        not(portable_atomic_no_atomic_cas),
        portable_atomic_unsafe_assume_single_core,
        feature = "critical-section",
        target_arch = "avr",
        target_arch = "msp430",
    )))
)]
#[cfg_attr(
    not(portable_atomic_no_cfg_target_has_atomic),
    cfg(not(any(
        target_has_atomic = "ptr",
        portable_atomic_unsafe_assume_single_core,
        feature = "critical-section",
        target_arch = "avr",
        target_arch = "msp430",
    )))
)]
compile_error!(
    "dependents require atomic CAS but it is not available on this target by default;\n\
    consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features;\n\
    see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
);

#[cfg(any(test, feature = "std"))]
extern crate std;

#[macro_use]
mod cfgs;
#[cfg(target_pointer_width = "128")]
pub use {cfg_has_atomic_128 as cfg_has_atomic_ptr, cfg_no_atomic_128 as cfg_no_atomic_ptr};
#[cfg(target_pointer_width = "16")]
pub use {cfg_has_atomic_16 as cfg_has_atomic_ptr, cfg_no_atomic_16 as cfg_no_atomic_ptr};
#[cfg(target_pointer_width = "32")]
pub use {cfg_has_atomic_32 as cfg_has_atomic_ptr, cfg_no_atomic_32 as cfg_no_atomic_ptr};
#[cfg(target_pointer_width = "64")]
pub use {cfg_has_atomic_64 as cfg_has_atomic_ptr, cfg_no_atomic_64 as cfg_no_atomic_ptr};

#[macro_use]
mod utils;

#[cfg(test)]
#[macro_use]
mod tests;

#[doc(no_inline)]
pub use core::sync::atomic::Ordering;

#[doc(no_inline)]
// LLVM doesn't support fence/compiler_fence for MSP430.
#[cfg(not(target_arch = "msp430"))]
pub use core::sync::atomic::{compiler_fence, fence};
#[cfg(target_arch = "msp430")]
pub use imp::msp430::{compiler_fence, fence};

mod imp;

pub mod hint {
    //! Re-export of the [`core::hint`] module.
    //!
    //! The only difference from the [`core::hint`] module is that [`spin_loop`]
    //! is available in all Rust versions that this crate supports.
    //!
    //! ```
    //! use portable_atomic::hint;
    //!
    //! hint::spin_loop();
    //! ```

    #[doc(no_inline)]
    pub use core::hint::*;

    /// Emits a machine instruction to signal the processor that it is running in
    /// a busy-wait spin-loop ("spin lock").
    ///
    /// Upon receiving the spin-loop signal the processor can optimize its behavior by,
    /// for example, saving power or switching hyper-threads.
    ///
    /// This function is different from [`thread::yield_now`] which directly
    /// yields to the system's scheduler, whereas `spin_loop` does not interact
    /// with the operating system.
    ///
    /// A common use case for `spin_loop` is implementing bounded optimistic
    /// spinning in a CAS loop in synchronization primitives. To avoid problems
    /// like priority inversion, it is strongly recommended that the spin loop is
    /// terminated after a finite amount of iterations and an appropriate blocking
    /// syscall is made.
    ///
    /// **Note:** On platforms that do not support receiving spin-loop hints this
    /// function does not do anything at all.
    ///
    /// [`thread::yield_now`]: https://doc.rust-lang.org/std/thread/fn.yield_now.html
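    ///
    /// # Examples
    ///
    /// A minimal sketch of bounded spinning before falling back to a blocking
    /// wait (the bound of 100 iterations is an arbitrary example value):
    ///
    /// ```
    /// use portable_atomic::{hint, AtomicBool, Ordering};
    ///
    /// fn try_lock_spinning(lock: &AtomicBool) -> bool {
    ///     for _ in 0..100 {
    ///         if lock
    ///             .compare_exchange_weak(false, true, Ordering::Acquire, Ordering::Relaxed)
    ///             .is_ok()
    ///         {
    ///             return true;
    ///         }
    ///         hint::spin_loop();
    ///     }
    ///     false // the caller should now block via an appropriate syscall
    /// }
    ///
    /// let lock = AtomicBool::new(false);
    /// assert!(try_lock_spinning(&lock));
    /// ```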
    #[inline]
    pub fn spin_loop() {
        #[allow(deprecated)]
        core::sync::atomic::spin_loop_hint();
    }
}

#[cfg(doc)]
use core::sync::atomic::Ordering::{AcqRel, Acquire, Relaxed, Release, SeqCst};
use core::{fmt, ptr};

#[cfg(miri)]
use crate::utils::strict;

cfg_has_atomic_8! {
/// A boolean type which can be safely shared between threads.
///
/// This type has the same in-memory representation as a [`bool`].
///
/// If the compiler and the platform support atomic loads and stores of `u8`,
/// this type is a wrapper for the standard library's
/// [`AtomicBool`](core::sync::atomic::AtomicBool). If the platform supports it
/// but the compiler does not, atomic operations are implemented using inline
/// assembly.
#[repr(C, align(1))]
pub struct AtomicBool {
    v: core::cell::UnsafeCell<u8>,
}

impl Default for AtomicBool {
    /// Creates an `AtomicBool` initialized to `false`.
    #[inline]
    fn default() -> Self {
        Self::new(false)
    }
}

impl From<bool> for AtomicBool {
    /// Converts a `bool` into an `AtomicBool`.
    #[inline]
    fn from(b: bool) -> Self {
        Self::new(b)
    }
}

// Send is implicitly implemented.
// SAFETY: any data races are prevented by disabling interrupts or
// atomic intrinsics (see module-level comments).
unsafe impl Sync for AtomicBool {}

// UnwindSafe is implicitly implemented.
#[cfg(not(portable_atomic_no_core_unwind_safe))]
impl core::panic::RefUnwindSafe for AtomicBool {}
#[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
impl std::panic::RefUnwindSafe for AtomicBool {}

impl_debug_and_serde!(AtomicBool);

impl AtomicBool {
    /// Creates a new `AtomicBool`.
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::AtomicBool;
    ///
    /// let atomic_true = AtomicBool::new(true);
    /// let atomic_false = AtomicBool::new(false);
    /// ```
    #[inline]
    #[must_use]
    pub const fn new(v: bool) -> Self {
        static_assert_layout!(AtomicBool, bool);
        Self { v: core::cell::UnsafeCell::new(v as u8) }
    }

    // TODO: update docs based on https://github.com/rust-lang/rust/pull/116762
    /// Creates a new `AtomicBool` from a pointer.
    ///
    /// # Safety
    ///
    /// * `ptr` must be aligned to `align_of::<AtomicBool>()` (note that on some platforms this can
    ///   be bigger than `align_of::<bool>()`).
    /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
    /// * If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
    ///   behind `ptr` must have a happens-before relationship with atomic accesses via the returned
    ///   value (or vice-versa).
    ///   * In other words, time periods where the value is accessed atomically may not overlap
    ///     with periods where the value is accessed non-atomically.
    ///   * This requirement is trivially satisfied if `ptr` is never used non-atomically for the
    ///     duration of lifetime `'a`. Most use cases should be able to follow this guideline.
    ///   * This requirement is also trivially satisfied if all accesses (atomic or not) are done
    ///     from the same thread.
    /// * If this atomic type is *not* lock-free:
    ///   * Any accesses to the value behind `ptr` must have a happens-before relationship
    ///     with accesses via the returned value (or vice-versa).
    ///   * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
    ///     be compatible with operations performed by this atomic type.
    /// * This method must not be used to create overlapping or mixed-size atomic accesses, as
    ///   these are not supported by the memory model.
    ///
    /// [valid]: core::ptr#safety
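    ///
    /// # Examples
    ///
    /// A minimal sketch of viewing a `bool` atomically through a raw pointer,
    /// under the safety requirements above (here everything happens on one thread):
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let mut flag = false;
    /// let ptr: *mut bool = &mut flag;
    /// // SAFETY: `ptr` is valid, properly aligned, and not accessed
    /// // non-atomically while the returned reference is in use.
    /// let atomic = unsafe { AtomicBool::from_ptr(ptr) };
    /// atomic.store(true, Ordering::Relaxed);
    /// ```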
    #[inline]
    #[must_use]
    pub unsafe fn from_ptr<'a>(ptr: *mut bool) -> &'a Self {
        #[allow(clippy::cast_ptr_alignment)]
        // SAFETY: guaranteed by the caller
        unsafe { &*(ptr as *mut Self) }
    }

    /// Returns `true` if operations on values of this type are lock-free.
    ///
    /// If the compiler or the platform doesn't support the necessary
    /// atomic instructions, global locks for every potentially
    /// concurrent atomic operation will be used.
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::AtomicBool;
    ///
    /// let is_lock_free = AtomicBool::is_lock_free();
    /// ```
    #[inline]
    #[must_use]
    pub fn is_lock_free() -> bool {
        imp::AtomicU8::is_lock_free()
    }

    /// Returns `true` if operations on values of this type are always lock-free.
    ///
    /// If the compiler or the platform doesn't support the necessary
    /// atomic instructions, global locks for every potentially
    /// concurrent atomic operation will be used.
    ///
    /// **Note:** If the atomic operation relies on dynamic CPU feature detection,
    /// this type may be lock-free even if this function returns false.
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::AtomicBool;
    ///
    /// const IS_ALWAYS_LOCK_FREE: bool = AtomicBool::is_always_lock_free();
    /// ```
    #[inline]
    #[must_use]
    pub const fn is_always_lock_free() -> bool {
        imp::AtomicU8::IS_ALWAYS_LOCK_FREE
    }
    #[cfg(test)]
    const IS_ALWAYS_LOCK_FREE: bool = Self::is_always_lock_free();

    /// Returns a mutable reference to the underlying [`bool`].
    ///
    /// This is safe because the mutable reference guarantees that no other threads are
    /// concurrently accessing the atomic data.
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let mut some_bool = AtomicBool::new(true);
    /// assert_eq!(*some_bool.get_mut(), true);
    /// *some_bool.get_mut() = false;
    /// assert_eq!(some_bool.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    pub fn get_mut(&mut self) -> &mut bool {
        // SAFETY: the mutable reference guarantees unique ownership.
        unsafe { &mut *(self.v.get() as *mut bool) }
    }

    // TODO: Add from_mut/get_mut_slice/from_mut_slice once they are stable on std atomic types.
    // https://github.com/rust-lang/rust/issues/76314

    const_fn! {
        const_if: #[cfg(not(portable_atomic_no_const_transmute))];
        /// Consumes the atomic and returns the contained value.
        ///
        /// This is safe because passing `self` by value guarantees that no other threads are
        /// concurrently accessing the atomic data.
        ///
        /// This is `const fn` on Rust 1.56+.
        ///
        /// # Examples
        ///
        /// ```
        /// use portable_atomic::AtomicBool;
        ///
        /// let some_bool = AtomicBool::new(true);
        /// assert_eq!(some_bool.into_inner(), true);
        /// ```
        #[inline]
        pub const fn into_inner(self) -> bool {
            // SAFETY: AtomicBool and u8 have the same size and in-memory representations,
            // so they can be safely transmuted.
            // (const UnsafeCell::into_inner is unstable)
            unsafe { core::mem::transmute::<AtomicBool, u8>(self) != 0 }
        }
    }

    /// Loads a value from the bool.
    ///
    /// `load` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `order` is [`Release`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.load(Ordering::Relaxed), true);
    /// ```
    #[inline]
    #[cfg_attr(
        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
        track_caller
    )]
    pub fn load(&self, order: Ordering) -> bool {
        self.as_atomic_u8().load(order) != 0
    }

    /// Stores a value into the bool.
    ///
    /// `store` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `order` is [`Acquire`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// some_bool.store(false, Ordering::Relaxed);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[inline]
    #[cfg_attr(
        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
        track_caller
    )]
    pub fn store(&self, val: bool, order: Ordering) {
        self.as_atomic_u8().store(val as u8, order);
    }

    cfg_has_atomic_cas_or_amo32! {
    /// Stores a value into the bool, returning the previous value.
    ///
    /// `swap` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(some_bool.swap(false, Ordering::Relaxed), true);
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[inline]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn swap(&self, val: bool, order: Ordering) -> bool {
        #[cfg(any(target_arch = "riscv32", target_arch = "riscv64", target_arch = "loongarch64"))]
        {
            // See https://github.com/rust-lang/rust/pull/114034 for details.
            // https://github.com/rust-lang/rust/blob/1.80.0/library/core/src/sync/atomic.rs#L233
            // https://godbolt.org/z/Enh87Ph9b
            if val { self.fetch_or(true, order) } else { self.fetch_and(false, order) }
        }
        #[cfg(not(any(target_arch = "riscv32", target_arch = "riscv64", target_arch = "loongarch64")))]
        {
            self.as_atomic_u8().swap(val as u8, order) != 0
        }
    }

    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
    ///
    /// The return value is a result indicating whether the new value was written and containing
    /// the previous value. On success this value is guaranteed to be equal to `current`.
    ///
    /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
    /// ordering of this operation. `success` describes the required ordering for the
    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
    /// `failure` describes the required ordering for the load operation that takes place when
    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `failure` is [`Release`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let some_bool = AtomicBool::new(true);
    ///
    /// assert_eq!(
    ///     some_bool.compare_exchange(true, false, Ordering::Acquire, Ordering::Relaxed),
    ///     Ok(true)
    /// );
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    ///
    /// assert_eq!(
    ///     some_bool.compare_exchange(true, true, Ordering::SeqCst, Ordering::Acquire),
    ///     Err(false)
    /// );
    /// assert_eq!(some_bool.load(Ordering::Relaxed), false);
    /// ```
    #[inline]
    #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
    #[cfg_attr(
        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
        track_caller
    )]
    pub fn compare_exchange(
        &self,
        current: bool,
        new: bool,
        success: Ordering,
        failure: Ordering,
    ) -> Result<bool, bool> {
        #[cfg(any(target_arch = "riscv32", target_arch = "riscv64", target_arch = "loongarch64"))]
        {
            // See https://github.com/rust-lang/rust/pull/114034 for details.
            // https://github.com/rust-lang/rust/blob/1.80.0/library/core/src/sync/atomic.rs#L233
            // https://godbolt.org/z/Enh87Ph9b
            crate::utils::assert_compare_exchange_ordering(success, failure);
            let order = crate::utils::upgrade_success_ordering(success, failure);
            let old = if current == new {
                // This is a no-op, but we still need to perform the operation
                // for memory ordering reasons.
                self.fetch_or(false, order)
            } else {
                // This sets the value to the new one and returns the old one.
                self.swap(new, order)
            };
            if old == current { Ok(old) } else { Err(old) }
        }
        #[cfg(not(any(target_arch = "riscv32", target_arch = "riscv64", target_arch = "loongarch64")))]
        {
            match self.as_atomic_u8().compare_exchange(current as u8, new as u8, success, failure) {
                Ok(x) => Ok(x != 0),
                Err(x) => Err(x != 0),
            }
        }
    }

    /// Stores a value into the [`bool`] if the current value is the same as the `current` value.
    ///
    /// Unlike [`AtomicBool::compare_exchange`], this function is allowed to spuriously fail even when the
    /// comparison succeeds, which can result in more efficient code on some platforms. The
    /// return value is a result indicating whether the new value was written and containing the
    /// previous value.
    ///
    /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
    /// ordering of this operation. `success` describes the required ordering for the
    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
    /// `failure` describes the required ordering for the load operation that takes place when
    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
    ///
    /// # Panics
    ///
    /// Panics if `failure` is [`Release`] or [`AcqRel`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let val = AtomicBool::new(false);
    ///
    /// let new = true;
    /// let mut old = val.load(Ordering::Relaxed);
    /// loop {
    ///     match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
    ///         Ok(_) => break,
    ///         Err(x) => old = x,
    ///     }
    /// }
    /// ```
    #[inline]
    #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
    #[cfg_attr(
        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
        track_caller
    )]
    pub fn compare_exchange_weak(
        &self,
        current: bool,
        new: bool,
        success: Ordering,
        failure: Ordering,
    ) -> Result<bool, bool> {
        #[cfg(any(target_arch = "riscv32", target_arch = "riscv64", target_arch = "loongarch64"))]
        {
            // See https://github.com/rust-lang/rust/pull/114034 for details.
            // https://github.com/rust-lang/rust/blob/1.80.0/library/core/src/sync/atomic.rs#L233
            // https://godbolt.org/z/Enh87Ph9b
            self.compare_exchange(current, new, success, failure)
        }
        #[cfg(not(any(target_arch = "riscv32", target_arch = "riscv64", target_arch = "loongarch64")))]
        {
            match self
                .as_atomic_u8()
                .compare_exchange_weak(current as u8, new as u8, success, failure)
            {
                Ok(x) => Ok(x != 0),
                Err(x) => Err(x != 0),
            }
        }
    }

    /// Logical "and" with a boolean value.
    ///
    /// Performs a logical "and" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_and` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_and(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_and(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_and(&self, val: bool, order: Ordering) -> bool {
        self.as_atomic_u8().fetch_and(val as u8, order) != 0
    }

    /// Logical "and" with a boolean value.
    ///
    /// Performs a logical "and" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Unlike `fetch_and`, this does not return the previous value.
    ///
    /// `and` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// This function may generate more efficient code than `fetch_and` on some platforms.
    ///
    /// - x86/x86_64: `lock and` instead of `cmpxchg` loop
    /// - MSP430: `and` instead of disabling interrupts
    ///
    /// Note: On x86/x86_64, the use of either function should not usually
    /// affect the generated code, because LLVM can properly optimize the case
    /// where the result is unused.
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// foo.and(false, Ordering::SeqCst);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(true);
    /// foo.and(true, Ordering::SeqCst);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(false);
    /// foo.and(false, Ordering::SeqCst);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn and(&self, val: bool, order: Ordering) {
        self.as_atomic_u8().and(val as u8, order);
    }

    /// Logical "nand" with a boolean value.
    ///
    /// Performs a logical "nand" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_nand(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst) as usize, 0);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_nand(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    /// ```
    #[inline]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_nand(&self, val: bool, order: Ordering) -> bool {
        // https://github.com/rust-lang/rust/blob/1.80.0/library/core/src/sync/atomic.rs#L956-L970
        if val {
            // !(x & true) == !x
            // We must invert the bool.
            self.fetch_xor(true, order)
        } else {
            // !(x & false) == true
            // We must set the bool to true.
            self.swap(true, order)
        }
    }

    /// Logical "or" with a boolean value.
    ///
    /// Performs a logical "or" operation on the current value and the argument `val`, and sets the
    /// new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_or` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_or(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_or(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_or(&self, val: bool, order: Ordering) -> bool {
        self.as_atomic_u8().fetch_or(val as u8, order) != 0
    }

    /// Logical "or" with a boolean value.
    ///
    /// Performs a logical "or" operation on the current value and the argument `val`, and sets the
    /// new value to the result.
    ///
    /// Unlike `fetch_or`, this does not return the previous value.
    ///
    /// `or` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// This function may generate more efficient code than `fetch_or` on some platforms.
    ///
    /// - x86/x86_64: `lock or` instead of `cmpxchg` loop
    /// - MSP430: `bis` instead of disabling interrupts
    ///
    /// Note: On x86/x86_64, the use of either function should not usually
    /// affect the generated code, because LLVM can properly optimize the case
    /// where the result is unused.
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// foo.or(false, Ordering::SeqCst);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(true);
    /// foo.or(true, Ordering::SeqCst);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(false);
    /// foo.or(false, Ordering::SeqCst);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn or(&self, val: bool, order: Ordering) {
        self.as_atomic_u8().or(val as u8, order);
    }

    /// Logical "xor" with a boolean value.
    ///
    /// Performs a logical "xor" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Returns the previous value.
    ///
    /// `fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(true);
    /// assert_eq!(foo.fetch_xor(true, Ordering::SeqCst), true);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(false);
    /// assert_eq!(foo.fetch_xor(false, Ordering::SeqCst), false);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool {
        self.as_atomic_u8().fetch_xor(val as u8, order) != 0
    }

    /// Logical "xor" with a boolean value.
    ///
    /// Performs a logical "xor" operation on the current value and the argument `val`, and sets
    /// the new value to the result.
    ///
    /// Unlike `fetch_xor`, this does not return the previous value.
    ///
    /// `xor` takes an [`Ordering`] argument which describes the memory ordering
    /// of this operation. All ordering modes are possible. Note that using
    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
    /// using [`Release`] makes the load part [`Relaxed`].
    ///
    /// This function may generate more efficient code than `fetch_xor` on some platforms.
    ///
    /// - x86/x86_64: `lock xor` instead of `cmpxchg` loop
    /// - MSP430: `xor` instead of disabling interrupts
    ///
    /// Note: On x86/x86_64, the use of either function should not usually
    /// affect the generated code, because LLVM can properly optimize the case
    /// where the result is unused.
    ///
    /// # Examples
    ///
    /// ```
    /// use portable_atomic::{AtomicBool, Ordering};
    ///
    /// let foo = AtomicBool::new(true);
    /// foo.xor(false, Ordering::SeqCst);
    /// assert_eq!(foo.load(Ordering::SeqCst), true);
    ///
    /// let foo = AtomicBool::new(true);
    /// foo.xor(true, Ordering::SeqCst);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    ///
    /// let foo = AtomicBool::new(false);
    /// foo.xor(false, Ordering::SeqCst);
    /// assert_eq!(foo.load(Ordering::SeqCst), false);
    /// ```
    #[inline]
    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
    pub fn xor(&self, val: bool, order: Ordering) {
        self.as_atomic_u8().xor(val as u8, order);
    }
1227
1228    /// Logical "not" with a boolean value.
1229    ///
1230    /// Performs a logical "not" operation on the current value, and sets
1231    /// the new value to the result.
1232    ///
1233    /// Returns the previous value.
1234    ///
1235    /// `fetch_not` takes an [`Ordering`] argument which describes the memory ordering
1236    /// of this operation. All ordering modes are possible. Note that using
1237    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1238    /// using [`Release`] makes the load part [`Relaxed`].
1239    ///
1240    /// # Examples
1241    ///
1242    /// ```
1243    /// use portable_atomic::{AtomicBool, Ordering};
1244    ///
1245    /// let foo = AtomicBool::new(true);
1246    /// assert_eq!(foo.fetch_not(Ordering::SeqCst), true);
1247    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1248    ///
1249    /// let foo = AtomicBool::new(false);
1250    /// assert_eq!(foo.fetch_not(Ordering::SeqCst), false);
1251    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1252    /// ```
1253    #[inline]
1254    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1255    pub fn fetch_not(&self, order: Ordering) -> bool {
1256        self.fetch_xor(true, order)
1257    }
1258
1259    /// Logical "not" on the current value.
1260    ///
1261    /// Performs a logical "not" operation on the current value, and sets
1262    /// the new value to the result.
1263    ///
1264    /// Unlike `fetch_not`, this does not return the previous value.
1265    ///
1266    /// `not` takes an [`Ordering`] argument which describes the memory ordering
1267    /// of this operation. All ordering modes are possible. Note that using
1268    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1269    /// using [`Release`] makes the load part [`Relaxed`].
1270    ///
1271    /// This function may generate more efficient code than `fetch_not` on some platforms.
1272    ///
1273    /// - x86/x86_64: `lock xor` instead of `cmpxchg` loop
1274    /// - MSP430: `xor` instead of disabling interrupts
1275    ///
1276    /// Note: On x86/x86_64, the use of either function should not usually
1277    /// affect the generated code, because LLVM can properly optimize the case
1278    /// where the result is unused.
1279    ///
1280    /// # Examples
1281    ///
1282    /// ```
1283    /// use portable_atomic::{AtomicBool, Ordering};
1284    ///
1285    /// let foo = AtomicBool::new(true);
1286    /// foo.not(Ordering::SeqCst);
1287    /// assert_eq!(foo.load(Ordering::SeqCst), false);
1288    ///
1289    /// let foo = AtomicBool::new(false);
1290    /// foo.not(Ordering::SeqCst);
1291    /// assert_eq!(foo.load(Ordering::SeqCst), true);
1292    /// ```
1293    #[inline]
1294    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1295    pub fn not(&self, order: Ordering) {
1296        self.xor(true, order);
1297    }
1298
1299    /// Fetches the value, and applies a function to it that returns an optional
1300    /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1301    /// returned `Some(_)`, else `Err(previous_value)`.
1302    ///
1303    /// Note: This may call the function multiple times if the value has been
1304    /// changed from other threads in the meantime, as long as the function
1305    /// returns `Some(_)`, but the function will have been applied only once to
1306    /// the stored value.
1307    ///
1308    /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
1309    /// ordering of this operation. The first describes the required ordering for
1310    /// when the operation finally succeeds while the second describes the
1311    /// required ordering for loads. These correspond to the success and failure
1312    /// orderings of [`compare_exchange`](Self::compare_exchange) respectively.
1313    ///
1314    /// Using [`Acquire`] as success ordering makes the store part of this
1315    /// operation [`Relaxed`], and using [`Release`] makes the final successful
1316    /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1317    /// [`Acquire`] or [`Relaxed`].
1318    ///
1319    /// # Considerations
1320    ///
1321    /// This method is not magic; it is not provided by the hardware.
1322    /// It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
1323    /// and suffers from the same drawbacks.
1324    /// In particular, this method will not circumvent the [ABA Problem].
1325    ///
1326    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1327    ///
1328    /// # Panics
1329    ///
1330    /// Panics if `fetch_order` is [`Release`] or [`AcqRel`].
1331    ///
1332    /// # Examples
1333    ///
1334    /// ```
1335    /// use portable_atomic::{AtomicBool, Ordering};
1336    ///
1337    /// let x = AtomicBool::new(false);
1338    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(false));
1339    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(false));
1340    /// assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(!x)), Ok(true));
1341    /// assert_eq!(x.load(Ordering::SeqCst), false);
1342    /// ```
1343    #[inline]
1344    #[cfg_attr(
1345        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1346        track_caller
1347    )]
1348    pub fn fetch_update<F>(
1349        &self,
1350        set_order: Ordering,
1351        fetch_order: Ordering,
1352        mut f: F,
1353    ) -> Result<bool, bool>
1354    where
1355        F: FnMut(bool) -> Option<bool>,
1356    {
1357        let mut prev = self.load(fetch_order);
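        // CAS loop: keep retrying while `f` returns `Some`; a failed
        // (possibly spurious) weak CAS yields the freshly observed value.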
1358        while let Some(next) = f(prev) {
1359            match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
1360                x @ Ok(_) => return x,
1361                Err(next_prev) => prev = next_prev,
1362            }
1363        }
1364        Err(prev)
1365    }
1366    } // cfg_has_atomic_cas_or_amo32!
1367
1368    const_fn! {
1369        // This function is actually `const fn`-compatible on Rust 1.32+,
1370        // but is made `const fn` only on Rust 1.58+ to match the other atomic types.
1371        const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
1372        /// Returns a mutable pointer to the underlying [`bool`].
1373        ///
1374        /// Returning an `*mut` pointer from a shared reference to this atomic is
1375        /// safe because the atomic types work with interior mutability. Any use of
1376        /// the returned raw pointer requires an `unsafe` block and has to uphold
1377        /// the safety requirements. If there is concurrent access, note the following
1378        /// additional safety requirements:
1379        ///
1380        /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
1381        ///   operations on it must be atomic.
1382        /// - Otherwise, any concurrent operations on it must be compatible with
1383        ///   operations performed by this atomic type.
1384        ///
1385        /// This is `const fn` on Rust 1.58+.
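        ///
        /// # Examples
        ///
        /// A minimal single-threaded sketch: with no concurrent access, the
        /// requirements above are trivially satisfied.
        ///
        /// ```
        /// use portable_atomic::{AtomicBool, Ordering};
        ///
        /// let a = AtomicBool::new(false);
        /// // SAFETY: no other thread is accessing `a` while we write through
        /// // the raw pointer.
        /// unsafe { a.as_ptr().write(true) };
        /// assert!(a.load(Ordering::Relaxed));
        /// ```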
1386        #[inline]
1387        pub const fn as_ptr(&self) -> *mut bool {
1388            self.v.get() as *mut bool
1389        }
1390    }
1391
1392    #[inline(always)]
1393    fn as_atomic_u8(&self) -> &imp::AtomicU8 {
1394        // SAFETY: AtomicBool and imp::AtomicU8 have the same layout,
1395        // and both access data in the same way.
1396        unsafe { &*(self as *const Self as *const imp::AtomicU8) }
1397    }
1398}
1399// See https://github.com/taiki-e/portable-atomic/issues/180
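// These stubs carry `Has*` bounds that are never satisfied on targets without
// atomic CAS, so calling a CAS method there fails with a trait-bound error
// pointing here rather than an opaque "method not found" error.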
1400#[cfg(not(feature = "require-cas"))]
1401cfg_no_atomic_cas! {
1402#[doc(hidden)]
1403#[allow(unused_variables, clippy::unused_self)]
1404impl<'a> AtomicBool {
1405    cfg_no_atomic_cas_or_amo32! {
1406    #[inline]
1407    pub fn swap(&self, val: bool, order: Ordering) -> bool
1408    where
1409        &'a Self: HasSwap,
1410    {
1411        unimplemented!()
1412    }
1413    #[inline]
1414    pub fn compare_exchange(
1415        &self,
1416        current: bool,
1417        new: bool,
1418        success: Ordering,
1419        failure: Ordering,
1420    ) -> Result<bool, bool>
1421    where
1422        &'a Self: HasCompareExchange,
1423    {
1424        unimplemented!()
1425    }
1426    #[inline]
1427    pub fn compare_exchange_weak(
1428        &self,
1429        current: bool,
1430        new: bool,
1431        success: Ordering,
1432        failure: Ordering,
1433    ) -> Result<bool, bool>
1434    where
1435        &'a Self: HasCompareExchangeWeak,
1436    {
1437        unimplemented!()
1438    }
1439    #[inline]
1440    pub fn fetch_and(&self, val: bool, order: Ordering) -> bool
1441    where
1442        &'a Self: HasFetchAnd,
1443    {
1444        unimplemented!()
1445    }
1446    #[inline]
1447    pub fn and(&self, val: bool, order: Ordering)
1448    where
1449        &'a Self: HasAnd,
1450    {
1451        unimplemented!()
1452    }
1453    #[inline]
1454    pub fn fetch_nand(&self, val: bool, order: Ordering) -> bool
1455    where
1456        &'a Self: HasFetchNand,
1457    {
1458        unimplemented!()
1459    }
1460    #[inline]
1461    pub fn fetch_or(&self, val: bool, order: Ordering) -> bool
1462    where
1463        &'a Self: HasFetchOr,
1464    {
1465        unimplemented!()
1466    }
1467    #[inline]
1468    pub fn or(&self, val: bool, order: Ordering)
1469    where
1470        &'a Self: HasOr,
1471    {
1472        unimplemented!()
1473    }
1474    #[inline]
1475    pub fn fetch_xor(&self, val: bool, order: Ordering) -> bool
1476    where
1477        &'a Self: HasFetchXor,
1478    {
1479        unimplemented!()
1480    }
1481    #[inline]
1482    pub fn xor(&self, val: bool, order: Ordering)
1483    where
1484        &'a Self: HasXor,
1485    {
1486        unimplemented!()
1487    }
1488    #[inline]
1489    pub fn fetch_not(&self, order: Ordering) -> bool
1490    where
1491        &'a Self: HasFetchNot,
1492    {
1493        unimplemented!()
1494    }
1495    #[inline]
1496    pub fn not(&self, order: Ordering)
1497    where
1498        &'a Self: HasNot,
1499    {
1500        unimplemented!()
1501    }
1502    #[inline]
1503    pub fn fetch_update<F>(
1504        &self,
1505        set_order: Ordering,
1506        fetch_order: Ordering,
1507        f: F,
1508    ) -> Result<bool, bool>
1509    where
1510        F: FnMut(bool) -> Option<bool>,
1511        &'a Self: HasFetchUpdate,
1512    {
1513        unimplemented!()
1514    }
1515    } // cfg_no_atomic_cas_or_amo32!
1516}
1517} // cfg_no_atomic_cas!
1518} // cfg_has_atomic_8!
1519
1520cfg_has_atomic_ptr! {
1521/// A raw pointer type which can be safely shared between threads.
1522///
1523/// This type has the same in-memory representation as a `*mut T`.
1524///
1525/// If the compiler and the platform support atomic loads and stores of pointers,
1526/// this type is a wrapper for the standard library's
1527/// [`AtomicPtr`](core::sync::atomic::AtomicPtr). If the platform supports it
1528/// but the compiler does not, atomic operations are implemented using inline
1529/// assembly.
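///
/// # Examples
///
/// A minimal usage sketch:
///
/// ```
/// use portable_atomic::{AtomicPtr, Ordering};
///
/// let ptr = &mut 5;
/// let atomic = AtomicPtr::new(ptr);
/// assert_eq!(unsafe { *atomic.load(Ordering::Relaxed) }, 5);
/// ```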
1530// We can use #[repr(transparent)] here, but #[repr(C, align(N))]
1531// will show clearer docs.
1532#[cfg_attr(target_pointer_width = "16", repr(C, align(2)))]
1533#[cfg_attr(target_pointer_width = "32", repr(C, align(4)))]
1534#[cfg_attr(target_pointer_width = "64", repr(C, align(8)))]
1535#[cfg_attr(target_pointer_width = "128", repr(C, align(16)))]
1536pub struct AtomicPtr<T> {
1537    inner: imp::AtomicPtr<T>,
1538}
1539
1540impl<T> Default for AtomicPtr<T> {
1541    /// Creates a null `AtomicPtr<T>`.
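    ///
    /// # Examples
    ///
    /// A quick check that the default value is the null pointer:
    ///
    /// ```
    /// use portable_atomic::{AtomicPtr, Ordering};
    ///
    /// let p: AtomicPtr<u8> = AtomicPtr::default();
    /// assert!(p.load(Ordering::Relaxed).is_null());
    /// ```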
1542    #[inline]
1543    fn default() -> Self {
1544        Self::new(ptr::null_mut())
1545    }
1546}
1547
1548impl<T> From<*mut T> for AtomicPtr<T> {
1549    #[inline]
1550    fn from(p: *mut T) -> Self {
1551        Self::new(p)
1552    }
1553}
1554
1555impl<T> fmt::Debug for AtomicPtr<T> {
1556    #[inline] // fmt is not hot path, but #[inline] on fmt seems to still be useful: https://github.com/rust-lang/rust/pull/117727
1557    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
1558        // std atomic types use Relaxed in Debug::fmt: https://github.com/rust-lang/rust/blob/1.80.0/library/core/src/sync/atomic.rs#L2166
1559        fmt::Debug::fmt(&self.load(Ordering::Relaxed), f)
1560    }
1561}
1562
1563impl<T> fmt::Pointer for AtomicPtr<T> {
1564    #[inline] // fmt is not hot path, but #[inline] on fmt seems to still be useful: https://github.com/rust-lang/rust/pull/117727
1565    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
1566        // std atomic types use Relaxed in Debug::fmt: https://github.com/rust-lang/rust/blob/1.80.0/library/core/src/sync/atomic.rs#L2166
1567        fmt::Pointer::fmt(&self.load(Ordering::Relaxed), f)
1568    }
1569}
1570
1571// UnwindSafe is implicitly implemented.
1572#[cfg(not(portable_atomic_no_core_unwind_safe))]
1573impl<T> core::panic::RefUnwindSafe for AtomicPtr<T> {}
1574#[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
1575impl<T> std::panic::RefUnwindSafe for AtomicPtr<T> {}
1576
1577impl<T> AtomicPtr<T> {
1578    /// Creates a new `AtomicPtr`.
1579    ///
1580    /// # Examples
1581    ///
1582    /// ```
1583    /// use portable_atomic::AtomicPtr;
1584    ///
1585    /// let ptr = &mut 5;
1586    /// let atomic_ptr = AtomicPtr::new(ptr);
1587    /// ```
1588    #[inline]
1589    #[must_use]
1590    pub const fn new(p: *mut T) -> Self {
1591        static_assert_layout!(AtomicPtr<()>, *mut ());
1592        Self { inner: imp::AtomicPtr::new(p) }
1593    }
1594
1595    // TODO: update docs based on https://github.com/rust-lang/rust/pull/116762
1596    /// Creates a new `AtomicPtr` from a pointer.
1597    ///
1598    /// # Safety
1599    ///
1600    /// * `ptr` must be aligned to `align_of::<AtomicPtr<T>>()` (note that on some platforms this
1601    ///   can be bigger than `align_of::<*mut T>()`).
1602    /// * `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
1603    /// * If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
1604    ///   behind `ptr` must have a happens-before relationship with atomic accesses via the returned
1605    ///   value (or vice-versa).
1606    ///   * In other words, time periods where the value is accessed atomically may not overlap
1607    ///     with periods where the value is accessed non-atomically.
1608    ///   * This requirement is trivially satisfied if `ptr` is never used non-atomically for the
1609    ///     duration of lifetime `'a`. Most use cases should be able to follow this guideline.
1610    ///   * This requirement is also trivially satisfied if all accesses (atomic or not) are done
1611    ///     from the same thread.
1612    /// * If this atomic type is *not* lock-free:
1613    ///   * Any accesses to the value behind `ptr` must have a happens-before relationship
1614    ///     with accesses via the returned value (or vice-versa).
1615    ///   * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
1616    ///     be compatible with operations performed by this atomic type.
1617    /// * This method must not be used to create overlapping or mixed-size atomic accesses, as
1618    ///   these are not supported by the memory model.
1619    ///
1620    /// [valid]: core::ptr#safety
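    ///
    /// # Examples
    ///
    /// A minimal single-threaded sketch: all accesses happen on one thread,
    /// so the happens-before requirements above are trivially satisfied.
    ///
    /// ```
    /// use portable_atomic::{AtomicPtr, Ordering};
    ///
    /// let mut data = 123;
    /// let mut ptr: *mut i32 = &mut data;
    /// // SAFETY: `ptr` is valid and aligned for the whole lifetime of the
    /// // returned reference, and is not accessed non-atomically meanwhile.
    /// let atomic = unsafe { AtomicPtr::from_ptr(&mut ptr) };
    /// atomic.store(core::ptr::null_mut(), Ordering::Relaxed);
    /// assert!(ptr.is_null());
    /// ```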
1621    #[inline]
1622    #[must_use]
1623    pub unsafe fn from_ptr<'a>(ptr: *mut *mut T) -> &'a Self {
1624        #[allow(clippy::cast_ptr_alignment)]
1625        // SAFETY: guaranteed by the caller
1626        unsafe { &*(ptr as *mut Self) }
1627    }
1628
1629    /// Returns `true` if operations on values of this type are lock-free.
1630    ///
1631    /// If the compiler or the platform doesn't support the necessary
1632    /// atomic instructions, global locks for every potentially
1633    /// concurrent atomic operation will be used.
1634    ///
1635    /// # Examples
1636    ///
1637    /// ```
1638    /// use portable_atomic::AtomicPtr;
1639    ///
1640    /// let is_lock_free = AtomicPtr::<()>::is_lock_free();
1641    /// ```
1642    #[inline]
1643    #[must_use]
1644    pub fn is_lock_free() -> bool {
1645        <imp::AtomicPtr<T>>::is_lock_free()
1646    }
1647
1648    /// Returns `true` if operations on values of this type are always lock-free.
1649    ///
1650    /// If the compiler or the platform doesn't support the necessary
1651    /// atomic instructions, global locks for every potentially
1652    /// concurrent atomic operation will be used.
1653    ///
1654    /// **Note:** If the atomic operation relies on dynamic CPU feature detection,
1655    /// this type may be lock-free even if the function returns false.
1656    ///
1657    /// # Examples
1658    ///
1659    /// ```
1660    /// use portable_atomic::AtomicPtr;
1661    ///
1662    /// const IS_ALWAYS_LOCK_FREE: bool = AtomicPtr::<()>::is_always_lock_free();
1663    /// ```
1664    #[inline]
1665    #[must_use]
1666    pub const fn is_always_lock_free() -> bool {
1667        <imp::AtomicPtr<T>>::IS_ALWAYS_LOCK_FREE
1668    }
1669    #[cfg(test)]
1670    const IS_ALWAYS_LOCK_FREE: bool = Self::is_always_lock_free();
1671
1672    /// Returns a mutable reference to the underlying pointer.
1673    ///
1674    /// This is safe because the mutable reference guarantees that no other threads are
1675    /// concurrently accessing the atomic data.
1676    ///
1677    /// # Examples
1678    ///
1679    /// ```
1680    /// use portable_atomic::{AtomicPtr, Ordering};
1681    ///
1682    /// let mut data = 10;
1683    /// let mut atomic_ptr = AtomicPtr::new(&mut data);
1684    /// let mut other_data = 5;
1685    /// *atomic_ptr.get_mut() = &mut other_data;
1686    /// assert_eq!(unsafe { *atomic_ptr.load(Ordering::SeqCst) }, 5);
1687    /// ```
1688    #[inline]
1689    pub fn get_mut(&mut self) -> &mut *mut T {
1690        self.inner.get_mut()
1691    }
1692
1693    // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types.
1694    // https://github.com/rust-lang/rust/issues/76314
1695
1696    const_fn! {
1697        const_if: #[cfg(not(portable_atomic_no_const_transmute))];
1698        /// Consumes the atomic and returns the contained value.
1699        ///
1700        /// This is safe because passing `self` by value guarantees that no other threads are
1701        /// concurrently accessing the atomic data.
1702        ///
1703        /// This is `const fn` on Rust 1.56+.
1704        ///
1705        /// # Examples
1706        ///
1707        /// ```
1708        /// use portable_atomic::AtomicPtr;
1709        ///
1710        /// let mut data = 5;
1711        /// let atomic_ptr = AtomicPtr::new(&mut data);
1712        /// assert_eq!(unsafe { *atomic_ptr.into_inner() }, 5);
1713        /// ```
1714        #[inline]
1715        pub const fn into_inner(self) -> *mut T {
1716            // SAFETY: AtomicPtr<T> and *mut T have the same size and in-memory representations,
1717            // so they can be safely transmuted.
1718            // (const UnsafeCell::into_inner is unstable)
1719            unsafe { core::mem::transmute(self) }
1720        }
1721    }
1722
1723    /// Loads a value from the pointer.
1724    ///
1725    /// `load` takes an [`Ordering`] argument which describes the memory ordering
1726    /// of this operation. Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
1727    ///
1728    /// # Panics
1729    ///
1730    /// Panics if `order` is [`Release`] or [`AcqRel`].
1731    ///
1732    /// # Examples
1733    ///
1734    /// ```
1735    /// use portable_atomic::{AtomicPtr, Ordering};
1736    ///
1737    /// let ptr = &mut 5;
1738    /// let some_ptr = AtomicPtr::new(ptr);
1739    ///
1740    /// let value = some_ptr.load(Ordering::Relaxed);
1741    /// ```
1742    #[inline]
1743    #[cfg_attr(
1744        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1745        track_caller
1746    )]
1747    pub fn load(&self, order: Ordering) -> *mut T {
1748        self.inner.load(order)
1749    }
1750
1751    /// Stores a value into the pointer.
1752    ///
1753    /// `store` takes an [`Ordering`] argument which describes the memory ordering
1754    /// of this operation. Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
1755    ///
1756    /// # Panics
1757    ///
1758    /// Panics if `order` is [`Acquire`] or [`AcqRel`].
1759    ///
1760    /// # Examples
1761    ///
1762    /// ```
1763    /// use portable_atomic::{AtomicPtr, Ordering};
1764    ///
1765    /// let ptr = &mut 5;
1766    /// let some_ptr = AtomicPtr::new(ptr);
1767    ///
1768    /// let other_ptr = &mut 10;
1769    ///
1770    /// some_ptr.store(other_ptr, Ordering::Relaxed);
1771    /// ```
1772    #[inline]
1773    #[cfg_attr(
1774        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1775        track_caller
1776    )]
1777    pub fn store(&self, ptr: *mut T, order: Ordering) {
1778        self.inner.store(ptr, order);
1779    }
1780
1781    cfg_has_atomic_cas_or_amo32! {
1782    /// Stores a value into the pointer, returning the previous value.
1783    ///
1784    /// `swap` takes an [`Ordering`] argument which describes the memory ordering
1785    /// of this operation. All ordering modes are possible. Note that using
1786    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
1787    /// using [`Release`] makes the load part [`Relaxed`].
1788    ///
1789    /// # Examples
1790    ///
1791    /// ```
1792    /// use portable_atomic::{AtomicPtr, Ordering};
1793    ///
1794    /// let ptr = &mut 5;
1795    /// let some_ptr = AtomicPtr::new(ptr);
1796    ///
1797    /// let other_ptr = &mut 10;
1798    ///
1799    /// let value = some_ptr.swap(other_ptr, Ordering::Relaxed);
1800    /// ```
1801    #[inline]
1802    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1803    pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T {
1804        self.inner.swap(ptr, order)
1805    }
1806
1807    cfg_has_atomic_cas! {
1808    /// Stores a value into the pointer if the current value is the same as the `current` value.
1809    ///
1810    /// The return value is a result indicating whether the new value was written and containing
1811    /// the previous value. On success this value is guaranteed to be equal to `current`.
1812    ///
1813    /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
1814    /// ordering of this operation. `success` describes the required ordering for the
1815    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1816    /// `failure` describes the required ordering for the load operation that takes place when
1817    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1818    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1819    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1820    ///
1821    /// # Panics
1822    ///
1823    /// Panics if `failure` is [`Release`] or [`AcqRel`].
1824    ///
1825    /// # Examples
1826    ///
1827    /// ```
1828    /// use portable_atomic::{AtomicPtr, Ordering};
1829    ///
1830    /// let ptr = &mut 5;
1831    /// let some_ptr = AtomicPtr::new(ptr);
1832    ///
1833    /// let other_ptr = &mut 10;
1834    ///
1835    /// let value = some_ptr.compare_exchange(ptr, other_ptr, Ordering::SeqCst, Ordering::Relaxed);
1836    /// ```
1837    #[inline]
1838    #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
1839    #[cfg_attr(
1840        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1841        track_caller
1842    )]
1843    pub fn compare_exchange(
1844        &self,
1845        current: *mut T,
1846        new: *mut T,
1847        success: Ordering,
1848        failure: Ordering,
1849    ) -> Result<*mut T, *mut T> {
1850        self.inner.compare_exchange(current, new, success, failure)
1851    }
1852
1853    /// Stores a value into the pointer if the current value is the same as the `current` value.
1854    ///
1855    /// Unlike [`AtomicPtr::compare_exchange`], this function is allowed to spuriously fail even when the
1856    /// comparison succeeds, which can result in more efficient code on some platforms. The
1857    /// return value is a result indicating whether the new value was written and containing the
1858    /// previous value.
1859    ///
1860    /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
1861    /// ordering of this operation. `success` describes the required ordering for the
1862    /// read-modify-write operation that takes place if the comparison with `current` succeeds.
1863    /// `failure` describes the required ordering for the load operation that takes place when
1864    /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
1865    /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
1866    /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
1867    ///
1868    /// # Panics
1869    ///
1870    /// Panics if `failure` is [`Release`] or [`AcqRel`].
1871    ///
1872    /// # Examples
1873    ///
1874    /// ```
1875    /// use portable_atomic::{AtomicPtr, Ordering};
1876    ///
1877    /// let some_ptr = AtomicPtr::new(&mut 5);
1878    ///
1879    /// let new = &mut 10;
1880    /// let mut old = some_ptr.load(Ordering::Relaxed);
1881    /// loop {
1882    ///     match some_ptr.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
1883    ///         Ok(_) => break,
1884    ///         Err(x) => old = x,
1885    ///     }
1886    /// }
1887    /// ```
1888    #[inline]
1889    #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
1890    #[cfg_attr(
1891        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1892        track_caller
1893    )]
1894    pub fn compare_exchange_weak(
1895        &self,
1896        current: *mut T,
1897        new: *mut T,
1898        success: Ordering,
1899        failure: Ordering,
1900    ) -> Result<*mut T, *mut T> {
1901        self.inner.compare_exchange_weak(current, new, success, failure)
1902    }
1903
1904    /// Fetches the value, and applies a function to it that returns an optional
1905    /// new value. Returns a `Result` of `Ok(previous_value)` if the function
1906    /// returned `Some(_)`, else `Err(previous_value)`.
1907    ///
1908    /// Note: This may call the function multiple times if the value has been
1909    /// changed from other threads in the meantime, as long as the function
1910    /// returns `Some(_)`, but the function will have been applied only once to
1911    /// the stored value.
1912    ///
1913    /// `fetch_update` takes two [`Ordering`] arguments to describe the memory
1914    /// ordering of this operation. The first describes the required ordering for
1915    /// when the operation finally succeeds while the second describes the
1916    /// required ordering for loads. These correspond to the success and failure
1917    /// orderings of [`compare_exchange`](Self::compare_exchange) respectively.
1918    ///
1919    /// Using [`Acquire`] as success ordering makes the store part of this
1920    /// operation [`Relaxed`], and using [`Release`] makes the final successful
1921    /// load [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`],
1922    /// [`Acquire`] or [`Relaxed`].
1923    ///
1924    /// # Panics
1925    ///
1926    /// Panics if `fetch_order` is [`Release`] or [`AcqRel`].
1927    ///
1928    /// # Considerations
1929    ///
1930    /// This method is not magic; it is not provided by the hardware.
1931    /// It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
1932    /// and suffers from the same drawbacks.
1933    /// In particular, this method will not circumvent the [ABA Problem].
1934    ///
1935    /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
1936    ///
1937    /// # Examples
1938    ///
1939    /// ```
1940    /// use portable_atomic::{AtomicPtr, Ordering};
1941    ///
1942    /// let ptr: *mut _ = &mut 5;
1943    /// let some_ptr = AtomicPtr::new(ptr);
1944    ///
1945    /// let new: *mut _ = &mut 10;
1946    /// assert_eq!(some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(ptr));
1947    /// let result = some_ptr.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| {
1948    ///     if x == ptr {
1949    ///         Some(new)
1950    ///     } else {
1951    ///         None
1952    ///     }
1953    /// });
1954    /// assert_eq!(result, Ok(ptr));
1955    /// assert_eq!(some_ptr.load(Ordering::SeqCst), new);
1956    /// ```
1957    #[inline]
1958    #[cfg_attr(
1959        any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
1960        track_caller
1961    )]
1962    pub fn fetch_update<F>(
1963        &self,
1964        set_order: Ordering,
1965        fetch_order: Ordering,
1966        mut f: F,
1967    ) -> Result<*mut T, *mut T>
1968    where
1969        F: FnMut(*mut T) -> Option<*mut T>,
1970    {
1971        let mut prev = self.load(fetch_order);
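        // Same CAS loop as `AtomicBool::fetch_update`: retry while `f`
        // returns `Some`, reloading the value after each failed weak CAS.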
1972        while let Some(next) = f(prev) {
1973            match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
1974                x @ Ok(_) => return x,
1975                Err(next_prev) => prev = next_prev,
1976            }
1977        }
1978        Err(prev)
1979    }
1980
1981    #[cfg(miri)]
1982    #[inline]
1983    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
1984    fn fetch_update_<F>(&self, order: Ordering, mut f: F) -> *mut T
1985    where
1986        F: FnMut(*mut T) -> *mut T,
1987    {
1988        // This is a private function and all instances of `f` only operate on the value
1989        // loaded, so there is no need to synchronize the first load/failed CAS.
1990        let mut prev = self.load(Ordering::Relaxed);
1991        loop {
1992            let next = f(prev);
1993            match self.compare_exchange_weak(prev, next, order, Ordering::Relaxed) {
1994                Ok(x) => return x,
1995                Err(next_prev) => prev = next_prev,
1996            }
1997        }
1998    }
1999    } // cfg_has_atomic_cas!
2000
2001    /// Offsets the pointer's address by adding `val` (in units of `T`),
2002    /// returning the previous pointer.
2003    ///
2004    /// This is equivalent to using [`wrapping_add`] to atomically perform the
2005    /// equivalent of `ptr = ptr.wrapping_add(val);`.
2006    ///
2007    /// This method operates in units of `T`, which means that it cannot be used
2008    /// to offset the pointer by an amount which is not a multiple of
2009    /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
2010    /// work with a deliberately misaligned pointer. In such cases, you may use
2011    /// the [`fetch_byte_add`](Self::fetch_byte_add) method instead.
2012    ///
2013    /// `fetch_ptr_add` takes an [`Ordering`] argument which describes the
2014    /// memory ordering of this operation. All ordering modes are possible. Note
2015    /// that using [`Acquire`] makes the store part of this operation
2016    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2017    ///
2018    /// [`wrapping_add`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_add
2019    ///
2020    /// # Examples
2021    ///
2022    /// ```
2023    /// # #![allow(unstable_name_collisions)]
2024    /// use portable_atomic::{AtomicPtr, Ordering};
2025    /// use sptr::Strict; // stable polyfill for strict provenance
2026    ///
2027    /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
2028    /// assert_eq!(atom.fetch_ptr_add(1, Ordering::Relaxed).addr(), 0);
2029    /// // Note: units of `size_of::<i64>()`.
2030    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 8);
2031    /// ```
2032    #[inline]
2033    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2034    pub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T {
2035        self.fetch_byte_add(val.wrapping_mul(core::mem::size_of::<T>()), order)
2036    }
2037
2038    /// Offsets the pointer's address by subtracting `val` (in units of `T`),
2039    /// returning the previous pointer.
2040    ///
2041    /// This is equivalent to using [`wrapping_sub`] to atomically perform the
2042    /// equivalent of `ptr = ptr.wrapping_sub(val);`.
2043    ///
2044    /// This method operates in units of `T`, which means that it cannot be used
2045    /// to offset the pointer by an amount which is not a multiple of
2046    /// `size_of::<T>()`. This can sometimes be inconvenient, as you may want to
2047    /// work with a deliberately misaligned pointer. In such cases, you may use
2048    /// the [`fetch_byte_sub`](Self::fetch_byte_sub) method instead.
2049    ///
2050    /// `fetch_ptr_sub` takes an [`Ordering`] argument which describes the memory
2051    /// ordering of this operation. All ordering modes are possible. Note that
2052    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2053    /// and using [`Release`] makes the load part [`Relaxed`].
2054    ///
2055    /// [`wrapping_sub`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_sub
2056    ///
2057    /// # Examples
2058    ///
2059    /// ```
2060    /// use portable_atomic::{AtomicPtr, Ordering};
2061    ///
2062    /// let array = [1i32, 2i32];
2063    /// let atom = AtomicPtr::new(array.as_ptr().wrapping_add(1) as *mut _);
2064    ///
2065    /// assert!(core::ptr::eq(atom.fetch_ptr_sub(1, Ordering::Relaxed), &array[1]));
2066    /// assert!(core::ptr::eq(atom.load(Ordering::Relaxed), &array[0]));
2067    /// ```
2068    #[inline]
2069    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2070    pub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T {
2071        self.fetch_byte_sub(val.wrapping_mul(core::mem::size_of::<T>()), order)
2072    }
2073
2074    /// Offsets the pointer's address by adding `val` *bytes*, returning the
2075    /// previous pointer.
2076    ///
2077    /// This is equivalent to using [`wrapping_add`] and [`cast`] to atomically
2078    /// perform `ptr = ptr.cast::<u8>().wrapping_add(val).cast::<T>()`.
2079    ///
2080    /// `fetch_byte_add` takes an [`Ordering`] argument which describes the
2081    /// memory ordering of this operation. All ordering modes are possible. Note
2082    /// that using [`Acquire`] makes the store part of this operation
2083    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2084    ///
2085    /// [`wrapping_add`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_add
2086    /// [`cast`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.cast
2087    ///
2088    /// # Examples
2089    ///
2090    /// ```
2091    /// # #![allow(unstable_name_collisions)]
2092    /// use portable_atomic::{AtomicPtr, Ordering};
2093    /// use sptr::Strict; // stable polyfill for strict provenance
2094    ///
2095    /// let atom = AtomicPtr::<i64>::new(core::ptr::null_mut());
2096    /// assert_eq!(atom.fetch_byte_add(1, Ordering::Relaxed).addr(), 0);
2097    /// // Note: in units of bytes, not `size_of::<i64>()`.
2098    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 1);
2099    /// ```
2100    #[inline]
2101    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2102    pub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T {
2103        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2104        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2105        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2106        // compatible and is sound.
2107        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2108        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2109        #[cfg(miri)]
2110        {
2111            self.fetch_update_(order, |x| strict::map_addr(x, |x| x.wrapping_add(val)))
2112        }
2113        #[cfg(not(miri))]
2114        {
2115            self.as_atomic_usize().fetch_add(val, order) as *mut T
2116        }
2117    }
2118
2119    /// Offsets the pointer's address by subtracting `val` *bytes*, returning the
2120    /// previous pointer.
2121    ///
2122    /// This is equivalent to using [`wrapping_sub`] and [`cast`] to atomically
2123    /// perform `ptr = ptr.cast::<u8>().wrapping_sub(val).cast::<T>()`.
2124    ///
2125    /// `fetch_byte_sub` takes an [`Ordering`] argument which describes the
2126    /// memory ordering of this operation. All ordering modes are possible. Note
2127    /// that using [`Acquire`] makes the store part of this operation
2128    /// [`Relaxed`], and using [`Release`] makes the load part [`Relaxed`].
2129    ///
2130    /// [`wrapping_sub`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.wrapping_sub
2131    /// [`cast`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.cast
2132    ///
2133    /// # Examples
2134    ///
2135    /// ```
2136    /// # #![allow(unstable_name_collisions)]
2137    /// use portable_atomic::{AtomicPtr, Ordering};
2138    /// use sptr::Strict; // stable polyfill for strict provenance
2139    ///
2140    /// let atom = AtomicPtr::<i64>::new(sptr::invalid_mut(1));
2141    /// assert_eq!(atom.fetch_byte_sub(1, Ordering::Relaxed).addr(), 1);
2142    /// assert_eq!(atom.load(Ordering::Relaxed).addr(), 0);
2143    /// ```
2144    #[inline]
2145    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2146    pub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T {
2147        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2148        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2149        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2150        // compatible and is sound.
2151        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2152        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2153        #[cfg(miri)]
2154        {
2155            self.fetch_update_(order, |x| strict::map_addr(x, |x| x.wrapping_sub(val)))
2156        }
2157        #[cfg(not(miri))]
2158        {
2159            self.as_atomic_usize().fetch_sub(val, order) as *mut T
2160        }
2161    }
2162
2163    /// Performs a bitwise "or" operation on the address of the current pointer,
2164    /// and the argument `val`, and stores a pointer with provenance of the
2165    /// current pointer and the resulting address.
2166    ///
2167    /// This is equivalent to using [`map_addr`] to atomically perform
2168    /// `ptr = ptr.map_addr(|a| a | val)`. This can be used in tagged
2169    /// pointer schemes to atomically set tag bits.
2170    ///
2171    /// **Caveat**: This operation returns the previous value. To compute the
2172    /// stored value without losing provenance, you may use [`map_addr`]. For
2173    /// example: `a.fetch_or(val, order).map_addr(|a| a | val)`.
2174    ///
2175    /// `fetch_or` takes an [`Ordering`] argument which describes the memory
2176    /// ordering of this operation. All ordering modes are possible. Note that
2177    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2178    /// and using [`Release`] makes the load part [`Relaxed`].
2179    ///
2180    /// This API and its claimed semantics are part of the Strict Provenance
2181    /// experiment, see the [module documentation for `ptr`][core::ptr] for
2182    /// details.
2183    ///
2184    /// [`map_addr`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.map_addr
2185    ///
2186    /// # Examples
2187    ///
2188    /// ```
2189    /// # #![allow(unstable_name_collisions)]
2190    /// use portable_atomic::{AtomicPtr, Ordering};
2191    /// use sptr::Strict; // stable polyfill for strict provenance
2192    ///
2193    /// let pointer = &mut 3i64 as *mut i64;
2194    ///
2195    /// let atom = AtomicPtr::<i64>::new(pointer);
2196    /// // Tag the bottom bit of the pointer.
2197    /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 0);
2198    /// // Extract and untag.
2199    /// let tagged = atom.load(Ordering::Relaxed);
2200    /// assert_eq!(tagged.addr() & 1, 1);
2201    /// assert_eq!(tagged.map_addr(|p| p & !1), pointer);
2202    /// ```
2203    #[inline]
2204    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2205    pub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T {
2206        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2207        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2208        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2209        // compatible and is sound.
2210        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2211        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2212        #[cfg(miri)]
2213        {
2214            self.fetch_update_(order, |x| strict::map_addr(x, |x| x | val))
2215        }
2216        #[cfg(not(miri))]
2217        {
2218            self.as_atomic_usize().fetch_or(val, order) as *mut T
2219        }
2220    }
2221
2222    /// Performs a bitwise "and" operation on the address of the current
2223    /// pointer, and the argument `val`, and stores a pointer with provenance of
2224    /// the current pointer and the resulting address.
2225    ///
2226    /// This is equivalent to using [`map_addr`] to atomically perform
2227    /// `ptr = ptr.map_addr(|a| a & val)`. This can be used in tagged
2228    /// pointer schemes to atomically unset tag bits.
2229    ///
2230    /// **Caveat**: This operation returns the previous value. To compute the
2231    /// stored value without losing provenance, you may use [`map_addr`]. For
2232    /// example: `a.fetch_and(val, order).map_addr(|a| a & val)`.
2233    ///
2234    /// `fetch_and` takes an [`Ordering`] argument which describes the memory
2235    /// ordering of this operation. All ordering modes are possible. Note that
2236    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2237    /// and using [`Release`] makes the load part [`Relaxed`].
2238    ///
2239    /// This API and its claimed semantics are part of the Strict Provenance
2240    /// experiment, see the [module documentation for `ptr`][core::ptr] for
2241    /// details.
2242    ///
2243    /// [`map_addr`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.map_addr
2244    ///
2245    /// # Examples
2246    ///
2247    /// ```
2248    /// # #![allow(unstable_name_collisions)]
2249    /// use portable_atomic::{AtomicPtr, Ordering};
2250    /// use sptr::Strict; // stable polyfill for strict provenance
2251    ///
2252    /// let pointer = &mut 3i64 as *mut i64;
2253    /// // A tagged pointer
2254    /// let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1));
2255    /// assert_eq!(atom.fetch_or(1, Ordering::Relaxed).addr() & 1, 1);
2256    /// // Untag, and extract the previously tagged pointer.
2257    /// let untagged = atom.fetch_and(!1, Ordering::Relaxed).map_addr(|a| a & !1);
2258    /// assert_eq!(untagged, pointer);
2259    /// ```
2260    #[inline]
2261    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2262    pub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T {
2263        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2264        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2265        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2266        // compatible and is sound.
2267        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2268        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2269        #[cfg(miri)]
2270        {
2271            self.fetch_update_(order, |x| strict::map_addr(x, |x| x & val))
2272        }
2273        #[cfg(not(miri))]
2274        {
2275            self.as_atomic_usize().fetch_and(val, order) as *mut T
2276        }
2277    }
2278
2279    /// Performs a bitwise "xor" operation on the address of the current
2280    /// pointer, and the argument `val`, and stores a pointer with provenance of
2281    /// the current pointer and the resulting address.
2282    ///
2283    /// This is equivalent to using [`map_addr`] to atomically perform
2284    /// `ptr = ptr.map_addr(|a| a ^ val)`. This can be used in tagged
2285    /// pointer schemes to atomically toggle tag bits.
2286    ///
2287    /// **Caveat**: This operation returns the previous value. To compute the
2288    /// stored value without losing provenance, you may use [`map_addr`]. For
2289    /// example: `a.fetch_xor(val, order).map_addr(|a| a ^ val)`.
2290    ///
2291    /// `fetch_xor` takes an [`Ordering`] argument which describes the memory
2292    /// ordering of this operation. All ordering modes are possible. Note that
2293    /// using [`Acquire`] makes the store part of this operation [`Relaxed`],
2294    /// and using [`Release`] makes the load part [`Relaxed`].
2295    ///
2296    /// This API and its claimed semantics are part of the Strict Provenance
2297    /// experiment, see the [module documentation for `ptr`][core::ptr] for
2298    /// details.
2299    ///
2300    /// [`map_addr`]: https://doc.rust-lang.org/std/primitive.pointer.html#method.map_addr
2301    ///
2302    /// # Examples
2303    ///
2304    /// ```
2305    /// # #![allow(unstable_name_collisions)]
2306    /// use portable_atomic::{AtomicPtr, Ordering};
2307    /// use sptr::Strict; // stable polyfill for strict provenance
2308    ///
2309    /// let pointer = &mut 3i64 as *mut i64;
2310    /// let atom = AtomicPtr::<i64>::new(pointer);
2311    ///
2312    /// // Toggle a tag bit on the pointer.
2313    /// atom.fetch_xor(1, Ordering::Relaxed);
2314    /// assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1);
2315    /// ```
2316    #[inline]
2317    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2318    pub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T {
2319        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2320        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2321        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2322        // compatible and is sound.
2323        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2324        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2325        #[cfg(miri)]
2326        {
2327            self.fetch_update_(order, |x| strict::map_addr(x, |x| x ^ val))
2328        }
2329        #[cfg(not(miri))]
2330        {
2331            self.as_atomic_usize().fetch_xor(val, order) as *mut T
2332        }
2333    }
2334
2335    /// Sets the bit at the specified bit-position to 1.
2336    ///
2337    /// Returns `true` if the specified bit was previously set to 1.
2338    ///
2339    /// `bit_set` takes an [`Ordering`] argument which describes the memory ordering
2340    /// of this operation. All ordering modes are possible. Note that using
2341    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2342    /// using [`Release`] makes the load part [`Relaxed`].
2343    ///
2344    /// This corresponds to x86's `lock bts` instruction, which the implementation uses on x86/x86_64.
2345    ///
2346    /// # Examples
2347    ///
2348    /// ```
2349    /// # #![allow(unstable_name_collisions)]
2350    /// use portable_atomic::{AtomicPtr, Ordering};
2351    /// use sptr::Strict; // stable polyfill for strict provenance
2352    ///
2353    /// let pointer = &mut 3i64 as *mut i64;
2354    ///
2355    /// let atom = AtomicPtr::<i64>::new(pointer);
2356    /// // Tag the bottom bit of the pointer.
2357    /// assert!(!atom.bit_set(0, Ordering::Relaxed));
2358    /// // Extract and untag.
2359    /// let tagged = atom.load(Ordering::Relaxed);
2360    /// assert_eq!(tagged.addr() & 1, 1);
2361    /// assert_eq!(tagged.map_addr(|p| p & !1), pointer);
2362    /// ```
2363    #[inline]
2364    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2365    pub fn bit_set(&self, bit: u32, order: Ordering) -> bool {
2366        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2367        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2368        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2369        // compatible and is sound.
2370        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2371        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2372        #[cfg(miri)]
2373        {
2374            let mask = 1_usize.wrapping_shl(bit);
2375            self.fetch_or(mask, order) as usize & mask != 0
2376        }
2377        #[cfg(not(miri))]
2378        {
2379            self.as_atomic_usize().bit_set(bit, order)
2380        }
2381    }
2382
2383    /// Clears the bit at the specified bit-position, setting it to 0.
2384    ///
2385    /// Returns `true` if the specified bit was previously set to 1.
2386    ///
2387    /// `bit_clear` takes an [`Ordering`] argument which describes the memory ordering
2388    /// of this operation. All ordering modes are possible. Note that using
2389    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2390    /// using [`Release`] makes the load part [`Relaxed`].
2391    ///
2392    /// This corresponds to x86's `lock btr` instruction, which the implementation uses on x86/x86_64.
2393    ///
2394    /// # Examples
2395    ///
2396    /// ```
2397    /// # #![allow(unstable_name_collisions)]
2398    /// use portable_atomic::{AtomicPtr, Ordering};
2399    /// use sptr::Strict; // stable polyfill for strict provenance
2400    ///
2401    /// let pointer = &mut 3i64 as *mut i64;
2402    /// // A tagged pointer
2403    /// let atom = AtomicPtr::<i64>::new(pointer.map_addr(|a| a | 1));
2404    /// assert!(atom.bit_set(0, Ordering::Relaxed));
2405    /// // Untag
2406    /// assert!(atom.bit_clear(0, Ordering::Relaxed));
2407    /// ```
2408    #[inline]
2409    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2410    pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool {
2411        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2412        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2413        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2414        // compatible and is sound.
2415        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2416        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2417        #[cfg(miri)]
2418        {
2419            let mask = 1_usize.wrapping_shl(bit);
2420            self.fetch_and(!mask, order) as usize & mask != 0
2421        }
2422        #[cfg(not(miri))]
2423        {
2424            self.as_atomic_usize().bit_clear(bit, order)
2425        }
2426    }
2427
2428    /// Toggles the bit at the specified bit-position.
2429    ///
2430    /// Returns `true` if the specified bit was previously set to 1.
2431    ///
2432    /// `bit_toggle` takes an [`Ordering`] argument which describes the memory ordering
2433    /// of this operation. All ordering modes are possible. Note that using
2434    /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
2435    /// using [`Release`] makes the load part [`Relaxed`].
2436    ///
2437    /// This corresponds to x86's `lock btc` instruction, which the implementation uses on x86/x86_64.
2438    ///
2439    /// # Examples
2440    ///
2441    /// ```
2442    /// # #![allow(unstable_name_collisions)]
2443    /// use portable_atomic::{AtomicPtr, Ordering};
2444    /// use sptr::Strict; // stable polyfill for strict provenance
2445    ///
2446    /// let pointer = &mut 3i64 as *mut i64;
2447    /// let atom = AtomicPtr::<i64>::new(pointer);
2448    ///
2449    /// // Toggle a tag bit on the pointer.
2450    /// atom.bit_toggle(0, Ordering::Relaxed);
2451    /// assert_eq!(atom.load(Ordering::Relaxed).addr() & 1, 1);
2452    /// ```
2453    #[inline]
2454    #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2455    pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool {
2456        // Ideally, we would always use AtomicPtr::fetch_* since it is strict-provenance
2457        // compatible, but it is unstable. So, for now emulate it only on cfg(miri).
2458        // Code using AtomicUsize::fetch_* via casts is still permissive-provenance
2459        // compatible and is sound.
2460        // TODO: Once `#![feature(strict_provenance_atomic_ptr)]` is stabilized,
2461        // use AtomicPtr::fetch_* in all cases from the version in which it is stabilized.
2462        #[cfg(miri)]
2463        {
2464            let mask = 1_usize.wrapping_shl(bit);
2465            self.fetch_xor(mask, order) as usize & mask != 0
2466        }
2467        #[cfg(not(miri))]
2468        {
2469            self.as_atomic_usize().bit_toggle(bit, order)
2470        }
2471    }
2472
2473    #[cfg(not(miri))]
2474    #[inline(always)]
2475    fn as_atomic_usize(&self) -> &AtomicUsize {
2476        static_assert!(
2477            core::mem::size_of::<AtomicPtr<()>>() == core::mem::size_of::<AtomicUsize>()
2478        );
2479        static_assert!(
2480            core::mem::align_of::<AtomicPtr<()>>() == core::mem::align_of::<AtomicUsize>()
2481        );
2482        // SAFETY: AtomicPtr and AtomicUsize have the same layout,
2483        // and both access data in the same way.
2484        unsafe { &*(self as *const Self as *const AtomicUsize) }
2485    }
2486    } // cfg_has_atomic_cas_or_amo32!
2487
2488    const_fn! {
2489        const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
2490        /// Returns a mutable pointer to the underlying pointer.
2491        ///
2492        /// Returning an `*mut` pointer from a shared reference to this atomic is
2493        /// safe because the atomic types work with interior mutability. Any use of
2494        /// the returned raw pointer requires an `unsafe` block and has to uphold
2495        /// the safety requirements. If there is concurrent access, note the following
2496        /// additional safety requirements:
2497        ///
2498        /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
2499        ///   operations on it must be atomic.
2500        /// - Otherwise, any concurrent operations on it must be compatible with
2501        ///   operations performed by this atomic type.
2502        ///
2503        /// This is `const fn` on Rust 1.58+.
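        ///
        /// # Examples
        ///
        /// A minimal single-threaded sketch: with no concurrent access, the
        /// requirements above are trivially satisfied.
        ///
        /// ```
        /// use portable_atomic::{AtomicPtr, Ordering};
        ///
        /// let mut x = 0i32;
        /// let a = AtomicPtr::<i32>::new(core::ptr::null_mut());
        /// // SAFETY: no other thread is accessing `a` while we write through
        /// // the raw pointer.
        /// unsafe { a.as_ptr().write(&mut x) };
        /// assert_eq!(a.load(Ordering::Relaxed), &mut x as *mut i32);
        /// ```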
2504        #[inline]
2505        pub const fn as_ptr(&self) -> *mut *mut T {
2506            self.inner.as_ptr()
2507        }
2508    }
2509}
2510// See https://github.com/taiki-e/portable-atomic/issues/180
2511#[cfg(not(feature = "require-cas"))]
2512cfg_no_atomic_cas! {
2513#[doc(hidden)]
2514#[allow(unused_variables, clippy::unused_self)]
2515impl<'a, T: 'a> AtomicPtr<T> {
2516    cfg_no_atomic_cas_or_amo32! {
2517    #[inline]
2518    pub fn swap(&self, ptr: *mut T, order: Ordering) -> *mut T
2519    where
2520        &'a Self: HasSwap,
2521    {
2522        unimplemented!()
2523    }
2524    } // cfg_no_atomic_cas_or_amo32!
2525    #[inline]
2526    pub fn compare_exchange(
2527        &self,
2528        current: *mut T,
2529        new: *mut T,
2530        success: Ordering,
2531        failure: Ordering,
2532    ) -> Result<*mut T, *mut T>
2533    where
2534        &'a Self: HasCompareExchange,
2535    {
2536        unimplemented!()
2537    }
2538    #[inline]
2539    pub fn compare_exchange_weak(
2540        &self,
2541        current: *mut T,
2542        new: *mut T,
2543        success: Ordering,
2544        failure: Ordering,
2545    ) -> Result<*mut T, *mut T>
2546    where
2547        &'a Self: HasCompareExchangeWeak,
2548    {
2549        unimplemented!()
2550    }
2551    #[inline]
2552    pub fn fetch_update<F>(
2553        &self,
2554        set_order: Ordering,
2555        fetch_order: Ordering,
2556        f: F,
2557    ) -> Result<*mut T, *mut T>
2558    where
2559        F: FnMut(*mut T) -> Option<*mut T>,
2560        &'a Self: HasFetchUpdate,
2561    {
2562        unimplemented!()
2563    }
2564    cfg_no_atomic_cas_or_amo32! {
2565    #[inline]
2566    pub fn fetch_ptr_add(&self, val: usize, order: Ordering) -> *mut T
2567    where
2568        &'a Self: HasFetchPtrAdd,
2569    {
2570        unimplemented!()
2571    }
2572    #[inline]
2573    pub fn fetch_ptr_sub(&self, val: usize, order: Ordering) -> *mut T
2574    where
2575        &'a Self: HasFetchPtrSub,
2576    {
2577        unimplemented!()
2578    }
2579    #[inline]
2580    pub fn fetch_byte_add(&self, val: usize, order: Ordering) -> *mut T
2581    where
2582        &'a Self: HasFetchByteAdd,
2583    {
2584        unimplemented!()
2585    }
2586    #[inline]
2587    pub fn fetch_byte_sub(&self, val: usize, order: Ordering) -> *mut T
2588    where
2589        &'a Self: HasFetchByteSub,
2590    {
2591        unimplemented!()
2592    }
2593    #[inline]
2594    pub fn fetch_or(&self, val: usize, order: Ordering) -> *mut T
2595    where
2596        &'a Self: HasFetchOr,
2597    {
2598        unimplemented!()
2599    }
2600    #[inline]
2601    pub fn fetch_and(&self, val: usize, order: Ordering) -> *mut T
2602    where
2603        &'a Self: HasFetchAnd,
2604    {
2605        unimplemented!()
2606    }
2607    #[inline]
2608    pub fn fetch_xor(&self, val: usize, order: Ordering) -> *mut T
2609    where
2610        &'a Self: HasFetchXor,
2611    {
2612        unimplemented!()
2613    }
2614    #[inline]
2615    pub fn bit_set(&self, bit: u32, order: Ordering) -> bool
2616    where
2617        &'a Self: HasBitSet,
2618    {
2619        unimplemented!()
2620    }
2621    #[inline]
2622    pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool
2623    where
2624        &'a Self: HasBitClear,
2625    {
2626        unimplemented!()
2627    }
2628    #[inline]
2629    pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool
2630    where
2631        &'a Self: HasBitToggle,
2632    {
2633        unimplemented!()
2634    }
2635    } // cfg_no_atomic_cas_or_amo32!
2636}
2637} // cfg_no_atomic_cas!
2638} // cfg_has_atomic_ptr!
2639
2640macro_rules! atomic_int {
2641    // Atomic{I,U}* impls
2642    ($atomic_type:ident, $int_type:ident, $align:literal,
2643        $cfg_has_atomic_cas_or_amo32_or_8:ident, $cfg_no_atomic_cas_or_amo32_or_8:ident
2644        $(, #[$cfg_float:meta] $atomic_float_type:ident, $float_type:ident)?
2645    ) => {
2646        doc_comment! {
2647            concat!("An integer type which can be safely shared between threads.
2648
2649This type has the same in-memory representation as the underlying integer type,
2650[`", stringify!($int_type), "`].
2651
2652If the compiler and the platform support atomic loads and stores of [`", stringify!($int_type),
2653"`], this type is a wrapper for the standard library's `", stringify!($atomic_type),
2654"`. If the platform supports it but the compiler does not, atomic operations are implemented using
2655inline assembly. Otherwise, it synchronizes using global locks.
2656You can call [`", stringify!($atomic_type), "::is_lock_free()`] to check whether
2657atomic instructions or locks will be used.
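
# Examples

A minimal load/store round-trip:

```
use portable_atomic::{", stringify!($atomic_type), ", Ordering};

let v = ", stringify!($atomic_type), "::new(5);
v.store(10, Ordering::Relaxed);
assert_eq!(v.load(Ordering::Relaxed), 10);
```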
2658"
2659            ),
2660            // We could use #[repr(transparent)] here, but #[repr(C, align(N))]
2661            // shows clearer docs.
2662            #[repr(C, align($align))]
2663            pub struct $atomic_type {
2664                inner: imp::$atomic_type,
2665            }
2666        }
2667
2668        impl Default for $atomic_type {
2669            #[inline]
2670            fn default() -> Self {
2671                Self::new($int_type::default())
2672            }
2673        }
2674
2675        impl From<$int_type> for $atomic_type {
2676            #[inline]
2677            fn from(v: $int_type) -> Self {
2678                Self::new(v)
2679            }
2680        }
2681
2682        // UnwindSafe is implicitly implemented.
2683        #[cfg(not(portable_atomic_no_core_unwind_safe))]
2684        impl core::panic::RefUnwindSafe for $atomic_type {}
2685        #[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
2686        impl std::panic::RefUnwindSafe for $atomic_type {}
2687
2688        impl_debug_and_serde!($atomic_type);
2689
2690        impl $atomic_type {
2691            doc_comment! {
2692                concat!(
2693                    "Creates a new atomic integer.
2694
2695# Examples
2696
2697```
2698use portable_atomic::", stringify!($atomic_type), ";
2699
2700let atomic_forty_two = ", stringify!($atomic_type), "::new(42);
2701```"
2702                ),
2703                #[inline]
2704                #[must_use]
2705                pub const fn new(v: $int_type) -> Self {
2706                    static_assert_layout!($atomic_type, $int_type);
2707                    Self { inner: imp::$atomic_type::new(v) }
2708                }
2709            }
2710
2711            // TODO: update docs based on https://github.com/rust-lang/rust/pull/116762
2712            doc_comment! {
2713                concat!("Creates a new reference to an atomic integer from a pointer.
2714
2715# Safety
2716
2717* `ptr` must be aligned to `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this
2718  can be bigger than `align_of::<", stringify!($int_type), ">()`).
2719* `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
2720* If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
2721  behind `ptr` must have a happens-before relationship with atomic accesses via
2722  the returned value (or vice-versa).
2723  * In other words, time periods where the value is accessed atomically may not
2724    overlap with periods where the value is accessed non-atomically.
2725  * This requirement is trivially satisfied if `ptr` is never used non-atomically
2726    for the duration of lifetime `'a`. Most use cases should be able to follow
2727    this guideline.
2728  * This requirement is also trivially satisfied if all accesses (atomic or not) are
2729    done from the same thread.
2730* If this atomic type is *not* lock-free:
2731  * Any accesses to the value behind `ptr` must have a happens-before relationship
2732    with accesses via the returned value (or vice-versa).
2733  * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
2734    be compatible with operations performed by this atomic type.
2735* This method must not be used to create overlapping or mixed-size atomic
2736  accesses, as these are not supported by the memory model.
2737
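# Examples

A minimal sketch that borrows an existing atomic's storage, which is
guaranteed to satisfy the alignment requirement above:

```
use portable_atomic::{", stringify!($atomic_type), ", Ordering};

let v = ", stringify!($atomic_type), "::new(1);
// SAFETY: `v.as_ptr()` is properly aligned and valid, and the value
// behind it is only ever accessed atomically.
let a = unsafe { ", stringify!($atomic_type), "::from_ptr(v.as_ptr()) };
a.store(2, Ordering::Relaxed);
assert_eq!(v.load(Ordering::Relaxed), 2);
```
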
2738[valid]: core::ptr#safety"),
2739                #[inline]
2740                #[must_use]
2741                pub unsafe fn from_ptr<'a>(ptr: *mut $int_type) -> &'a Self {
2742                    #[allow(clippy::cast_ptr_alignment)]
2743                    // SAFETY: guaranteed by the caller
2744                    unsafe { &*(ptr as *mut Self) }
2745                }
2746            }
2747
2748            doc_comment! {
2749                concat!("Returns `true` if operations on values of this type are lock-free.
2750
2751If the compiler or the platform doesn't support the necessary
2752atomic instructions, global locks for every potentially
2753concurrent atomic operation will be used.
2754
2755# Examples
2756
2757```
2758use portable_atomic::", stringify!($atomic_type), ";
2759
2760let is_lock_free = ", stringify!($atomic_type), "::is_lock_free();
2761```"),
2762                #[inline]
2763                #[must_use]
2764                pub fn is_lock_free() -> bool {
2765                    <imp::$atomic_type>::is_lock_free()
2766                }
2767            }
2768
2769            doc_comment! {
2770                concat!("Returns `true` if operations on values of this type are always lock-free.
2771
2772If the compiler or the platform doesn't support the necessary
2773atomic instructions, global locks for every potentially
2774concurrent atomic operation will be used.
2775
2776**Note:** If the atomic operation relies on dynamic CPU feature detection,
2777this type may be lock-free even if the function returns false.
2778
2779# Examples
2780
2781```
2782use portable_atomic::", stringify!($atomic_type), ";
2783
2784const IS_ALWAYS_LOCK_FREE: bool = ", stringify!($atomic_type), "::is_always_lock_free();
2785```"),
2786                #[inline]
2787                #[must_use]
2788                pub const fn is_always_lock_free() -> bool {
2789                    <imp::$atomic_type>::IS_ALWAYS_LOCK_FREE
2790                }
2791            }
2792            #[cfg(test)]
2793            const IS_ALWAYS_LOCK_FREE: bool = Self::is_always_lock_free();
2794
2795            doc_comment! {
2796                concat!("Returns a mutable reference to the underlying integer.

2797This is safe because the mutable reference guarantees that no other threads are
2798concurrently accessing the atomic data.
2799
2800# Examples
2801
2802```
2803use portable_atomic::{", stringify!($atomic_type), ", Ordering};
2804
2805let mut some_var = ", stringify!($atomic_type), "::new(10);
2806assert_eq!(*some_var.get_mut(), 10);
2807*some_var.get_mut() = 5;
2808assert_eq!(some_var.load(Ordering::SeqCst), 5);
2809```"),
2810                #[inline]
2811                pub fn get_mut(&mut self) -> &mut $int_type {
2812                    self.inner.get_mut()
2813                }
2814            }
2815
2816            // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types.
2817            // https://github.com/rust-lang/rust/issues/76314
2818
2819            #[cfg(not(portable_atomic_no_const_transmute))]
2820            doc_comment! {
2821                concat!("Consumes the atomic and returns the contained value.
2822
2823This is safe because passing `self` by value guarantees that no other threads are
2824concurrently accessing the atomic data.
2825
2826This is `const fn` on Rust 1.56+.
2827
2828# Examples
2829
2830```
2831use portable_atomic::", stringify!($atomic_type), ";
2832
2833let some_var = ", stringify!($atomic_type), "::new(5);
2834assert_eq!(some_var.into_inner(), 5);
2835```"),
2836                #[inline]
2837                pub const fn into_inner(self) -> $int_type {
2838                    // SAFETY: $atomic_type and $int_type have the same size and in-memory representations,
2839                    // so they can be safely transmuted.
2840                    // (const UnsafeCell::into_inner is unstable)
2841                    unsafe { core::mem::transmute(self) }
2842                }
2843            }
2844            #[cfg(portable_atomic_no_const_transmute)]
2845            doc_comment! {
2846                concat!("Consumes the atomic and returns the contained value.
2847
2848This is safe because passing `self` by value guarantees that no other threads are
2849concurrently accessing the atomic data.
2850
2851This is `const fn` on Rust 1.56+.
2852
2853# Examples
2854
2855```
2856use portable_atomic::", stringify!($atomic_type), ";
2857
2858let some_var = ", stringify!($atomic_type), "::new(5);
2859assert_eq!(some_var.into_inner(), 5);
2860```"),
2861                #[inline]
2862                pub fn into_inner(self) -> $int_type {
2863                    // SAFETY: $atomic_type and $int_type have the same size and in-memory representations,
2864                    // so they can be safely transmuted.
2865                    // (const UnsafeCell::into_inner is unstable)
2866                    unsafe { core::mem::transmute(self) }
2867                }
2868            }
2869
2870            doc_comment! {
2871                concat!("Loads a value from the atomic integer.
2872
2873`load` takes an [`Ordering`] argument which describes the memory ordering of this operation.
2874Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
2875
2876# Panics
2877
2878Panics if `order` is [`Release`] or [`AcqRel`].
2879
2880# Examples
2881
2882```
2883use portable_atomic::{", stringify!($atomic_type), ", Ordering};
2884
2885let some_var = ", stringify!($atomic_type), "::new(5);
2886
2887assert_eq!(some_var.load(Ordering::Relaxed), 5);
2888```"),
2889                #[inline]
2890                #[cfg_attr(
2891                    any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
2892                    track_caller
2893                )]
2894                pub fn load(&self, order: Ordering) -> $int_type {
2895                    self.inner.load(order)
2896                }
2897            }
2898
2899            doc_comment! {
2900                concat!("Stores a value into the atomic integer.
2901
2902`store` takes an [`Ordering`] argument which describes the memory ordering of this operation.
2903Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
2904
2905# Panics
2906
2907Panics if `order` is [`Acquire`] or [`AcqRel`].
2908
2909# Examples
2910
2911```
2912use portable_atomic::{", stringify!($atomic_type), ", Ordering};
2913
2914let some_var = ", stringify!($atomic_type), "::new(5);
2915
2916some_var.store(10, Ordering::Relaxed);
2917assert_eq!(some_var.load(Ordering::Relaxed), 10);
2918```"),
2919                #[inline]
2920                #[cfg_attr(
2921                    any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
2922                    track_caller
2923                )]
2924                pub fn store(&self, val: $int_type, order: Ordering) {
2925                    self.inner.store(val, order)
2926                }
2927            }
2928
2929            cfg_has_atomic_cas_or_amo32! {
2930            $cfg_has_atomic_cas_or_amo32_or_8! {
2931            doc_comment! {
2932                concat!("Stores a value into the atomic integer, returning the previous value.
2933
2934`swap` takes an [`Ordering`] argument which describes the memory ordering
2935of this operation. All ordering modes are possible. Note that using
2936[`Acquire`] makes the store part of this operation [`Relaxed`], and
2937using [`Release`] makes the load part [`Relaxed`].
2938
2939# Examples
2940
2941```
2942use portable_atomic::{", stringify!($atomic_type), ", Ordering};
2943
2944let some_var = ", stringify!($atomic_type), "::new(5);
2945
2946assert_eq!(some_var.swap(10, Ordering::Relaxed), 5);
2947```"),
2948                #[inline]
2949                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
2950                pub fn swap(&self, val: $int_type, order: Ordering) -> $int_type {
2951                    self.inner.swap(val, order)
2952                }
2953            }
2954            } // $cfg_has_atomic_cas_or_amo32_or_8!
2955
2956            cfg_has_atomic_cas! {
2957            doc_comment! {
2958                concat!("Stores a value into the atomic integer if the current value is the same as
2959the `current` value.
2960
2961The return value is a result indicating whether the new value was written and
2962containing the previous value. On success this value is guaranteed to be equal to
2963`current`.
2964
2965`compare_exchange` takes two [`Ordering`] arguments to describe the memory
2966ordering of this operation. `success` describes the required ordering for the
2967read-modify-write operation that takes place if the comparison with `current` succeeds.
2968`failure` describes the required ordering for the load operation that takes place when
2969the comparison fails. Using [`Acquire`] as success ordering makes the store part
2970of this operation [`Relaxed`], and using [`Release`] makes the successful load
2971[`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
2972
2973# Panics
2974
2975Panics if `failure` is [`Release`] or [`AcqRel`].
2976
2977# Examples
2978
2979```
2980use portable_atomic::{", stringify!($atomic_type), ", Ordering};
2981
2982let some_var = ", stringify!($atomic_type), "::new(5);
2983
2984assert_eq!(
2985    some_var.compare_exchange(5, 10, Ordering::Acquire, Ordering::Relaxed),
2986    Ok(5),
2987);
2988assert_eq!(some_var.load(Ordering::Relaxed), 10);
2989
2990assert_eq!(
2991    some_var.compare_exchange(6, 12, Ordering::SeqCst, Ordering::Acquire),
2992    Err(10),
2993);
2994assert_eq!(some_var.load(Ordering::Relaxed), 10);
2995```"),
2996                #[inline]
2997                #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
2998                #[cfg_attr(
2999                    any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3000                    track_caller
3001                )]
3002                pub fn compare_exchange(
3003                    &self,
3004                    current: $int_type,
3005                    new: $int_type,
3006                    success: Ordering,
3007                    failure: Ordering,
3008                ) -> Result<$int_type, $int_type> {
3009                    self.inner.compare_exchange(current, new, success, failure)
3010                }
3011            }
3012
3013            doc_comment! {
3014                concat!("Stores a value into the atomic integer if the current value is the same as
3015the `current` value.
3016Unlike [`compare_exchange`](Self::compare_exchange),
3017this function is allowed to spuriously fail even
3018when the comparison succeeds, which can result in more efficient code on some
3019platforms. The return value is a result indicating whether the new value was
3020written and containing the previous value.
3021
3022`compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
3023ordering of this operation. `success` describes the required ordering for the
3024read-modify-write operation that takes place if the comparison with `current` succeeds.
3025`failure` describes the required ordering for the load operation that takes place when
3026the comparison fails. Using [`Acquire`] as success ordering makes the store part
3027of this operation [`Relaxed`], and using [`Release`] makes the successful load
3028[`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3029
3030# Panics
3031
3032Panics if `failure` is [`Release`] or [`AcqRel`].
3033
3034# Examples
3035
3036```
3037use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3038
3039let val = ", stringify!($atomic_type), "::new(4);
3040
3041let mut old = val.load(Ordering::Relaxed);
3042loop {
3043    let new = old * 2;
3044    match val.compare_exchange_weak(old, new, Ordering::SeqCst, Ordering::Relaxed) {
3045        Ok(_) => break,
3046        Err(x) => old = x,
3047    }
3048}
3049```"),
3050                #[inline]
3051                #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
3052                #[cfg_attr(
3053                    any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3054                    track_caller
3055                )]
3056                pub fn compare_exchange_weak(
3057                    &self,
3058                    current: $int_type,
3059                    new: $int_type,
3060                    success: Ordering,
3061                    failure: Ordering,
3062                ) -> Result<$int_type, $int_type> {
3063                    self.inner.compare_exchange_weak(current, new, success, failure)
3064                }
3065            }
3066            } // cfg_has_atomic_cas!
3067
3068            $cfg_has_atomic_cas_or_amo32_or_8! {
3069            doc_comment! {
3070                concat!("Adds to the current value, returning the previous value.
3071
3072This operation wraps around on overflow.
3073
3074`fetch_add` takes an [`Ordering`] argument which describes the memory ordering
3075of this operation. All ordering modes are possible. Note that using
3076[`Acquire`] makes the store part of this operation [`Relaxed`], and
3077using [`Release`] makes the load part [`Relaxed`].
3078
3079# Examples
3080
3081```
3082use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3083
3084let foo = ", stringify!($atomic_type), "::new(0);
3085assert_eq!(foo.fetch_add(10, Ordering::SeqCst), 0);
3086assert_eq!(foo.load(Ordering::SeqCst), 10);
3087```"),
3088                #[inline]
3089                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3090                pub fn fetch_add(&self, val: $int_type, order: Ordering) -> $int_type {
3091                    self.inner.fetch_add(val, order)
3092                }
3093            }
3094
3095            doc_comment! {
3096                concat!("Adds to the current value.
3097
3098This operation wraps around on overflow.
3099
3100Unlike `fetch_add`, this does not return the previous value.
3101
3102`add` takes an [`Ordering`] argument which describes the memory ordering
3103of this operation. All ordering modes are possible. Note that using
3104[`Acquire`] makes the store part of this operation [`Relaxed`], and
3105using [`Release`] makes the load part [`Relaxed`].
3106
3107This function may generate more efficient code than `fetch_add` on some platforms.
3108
3109- MSP430: `add` instead of disabling interrupts ({8,16}-bit atomics)
3110
3111# Examples
3112
3113```
3114use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3115
3116let foo = ", stringify!($atomic_type), "::new(0);
3117foo.add(10, Ordering::SeqCst);
3118assert_eq!(foo.load(Ordering::SeqCst), 10);
3119```"),
3120                #[inline]
3121                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3122                pub fn add(&self, val: $int_type, order: Ordering) {
3123                    self.inner.add(val, order);
3124                }
3125            }
3126
3127            doc_comment! {
3128                concat!("Subtracts from the current value, returning the previous value.
3129
3130This operation wraps around on overflow.
3131
3132`fetch_sub` takes an [`Ordering`] argument which describes the memory ordering
3133of this operation. All ordering modes are possible. Note that using
3134[`Acquire`] makes the store part of this operation [`Relaxed`], and
3135using [`Release`] makes the load part [`Relaxed`].
3136
3137# Examples
3138
3139```
3140use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3141
3142let foo = ", stringify!($atomic_type), "::new(20);
3143assert_eq!(foo.fetch_sub(10, Ordering::SeqCst), 20);
3144assert_eq!(foo.load(Ordering::SeqCst), 10);
3145```"),
3146                #[inline]
3147                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3148                pub fn fetch_sub(&self, val: $int_type, order: Ordering) -> $int_type {
3149                    self.inner.fetch_sub(val, order)
3150                }
3151            }
3152
3153            doc_comment! {
3154                concat!("Subtracts from the current value.
3155
3156This operation wraps around on overflow.
3157
3158Unlike `fetch_sub`, this does not return the previous value.
3159
3160`sub` takes an [`Ordering`] argument which describes the memory ordering
3161of this operation. All ordering modes are possible. Note that using
3162[`Acquire`] makes the store part of this operation [`Relaxed`], and
3163using [`Release`] makes the load part [`Relaxed`].
3164
3165This function may generate more efficient code than `fetch_sub` on some platforms.
3166
3167- MSP430: `sub` instead of disabling interrupts ({8,16}-bit atomics)
3168
3169# Examples
3170
3171```
3172use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3173
3174let foo = ", stringify!($atomic_type), "::new(20);
3175foo.sub(10, Ordering::SeqCst);
3176assert_eq!(foo.load(Ordering::SeqCst), 10);
3177```"),
3178                #[inline]
3179                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3180                pub fn sub(&self, val: $int_type, order: Ordering) {
3181                    self.inner.sub(val, order);
3182                }
3183            }
3184            } // $cfg_has_atomic_cas_or_amo32_or_8!
3185
3186            doc_comment! {
3187                concat!("Bitwise \"and\" with the current value.
3188
3189Performs a bitwise \"and\" operation on the current value and the argument `val`, and
3190sets the new value to the result.
3191
3192Returns the previous value.
3193
3194`fetch_and` takes an [`Ordering`] argument which describes the memory ordering
3195of this operation. All ordering modes are possible. Note that using
3196[`Acquire`] makes the store part of this operation [`Relaxed`], and
3197using [`Release`] makes the load part [`Relaxed`].
3198
3199# Examples
3200
3201```
3202use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3203
3204let foo = ", stringify!($atomic_type), "::new(0b101101);
3205assert_eq!(foo.fetch_and(0b110011, Ordering::SeqCst), 0b101101);
3206assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
3207```"),
3208                #[inline]
3209                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3210                pub fn fetch_and(&self, val: $int_type, order: Ordering) -> $int_type {
3211                    self.inner.fetch_and(val, order)
3212                }
3213            }
3214
3215            doc_comment! {
3216                concat!("Bitwise \"and\" with the current value.
3217
3218Performs a bitwise \"and\" operation on the current value and the argument `val`, and
3219sets the new value to the result.
3220
3221Unlike `fetch_and`, this does not return the previous value.
3222
3223`and` takes an [`Ordering`] argument which describes the memory ordering
3224of this operation. All ordering modes are possible. Note that using
3225[`Acquire`] makes the store part of this operation [`Relaxed`], and
3226using [`Release`] makes the load part [`Relaxed`].
3227
3228This function may generate more efficient code than `fetch_and` on some platforms.
3229
3230- x86/x86_64: `lock and` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
3231- MSP430: `and` instead of disabling interrupts ({8,16}-bit atomics)
3232
3233Note: On x86/x86_64, the use of either function should not usually
3234affect the generated code, because LLVM can properly optimize the case
3235where the result is unused.
3236
3237# Examples
3238
3239```
3240use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3241
3242let foo = ", stringify!($atomic_type), "::new(0b101101);
3243foo.and(0b110011, Ordering::SeqCst);
3244assert_eq!(foo.load(Ordering::SeqCst), 0b100001);
3245```"),
3246                #[inline]
3247                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3248                pub fn and(&self, val: $int_type, order: Ordering) {
3249                    self.inner.and(val, order);
3250                }
3251            }
3252
3253            cfg_has_atomic_cas! {
3254            doc_comment! {
3255                concat!("Bitwise \"nand\" with the current value.
3256
3257Performs a bitwise \"nand\" operation on the current value and the argument `val`, and
3258sets the new value to the result.
3259
3260Returns the previous value.
3261
3262`fetch_nand` takes an [`Ordering`] argument which describes the memory ordering
3263of this operation. All ordering modes are possible. Note that using
3264[`Acquire`] makes the store part of this operation [`Relaxed`], and
3265using [`Release`] makes the load part [`Relaxed`].
3266
3267# Examples
3268
3269```
3270use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3271
3272let foo = ", stringify!($atomic_type), "::new(0x13);
3273assert_eq!(foo.fetch_nand(0x31, Ordering::SeqCst), 0x13);
3274assert_eq!(foo.load(Ordering::SeqCst), !(0x13 & 0x31));
3275```"),
3276                #[inline]
3277                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3278                pub fn fetch_nand(&self, val: $int_type, order: Ordering) -> $int_type {
3279                    self.inner.fetch_nand(val, order)
3280                }
3281            }
3282            } // cfg_has_atomic_cas!
3283
3284            doc_comment! {
3285                concat!("Bitwise \"or\" with the current value.
3286
3287Performs a bitwise \"or\" operation on the current value and the argument `val`, and
3288sets the new value to the result.
3289
3290Returns the previous value.
3291
3292`fetch_or` takes an [`Ordering`] argument which describes the memory ordering
3293of this operation. All ordering modes are possible. Note that using
3294[`Acquire`] makes the store part of this operation [`Relaxed`], and
3295using [`Release`] makes the load part [`Relaxed`].
3296
3297# Examples
3298
3299```
3300use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3301
3302let foo = ", stringify!($atomic_type), "::new(0b101101);
3303assert_eq!(foo.fetch_or(0b110011, Ordering::SeqCst), 0b101101);
3304assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
3305```"),
3306                #[inline]
3307                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3308                pub fn fetch_or(&self, val: $int_type, order: Ordering) -> $int_type {
3309                    self.inner.fetch_or(val, order)
3310                }
3311            }
3312
3313            doc_comment! {
3314                concat!("Bitwise \"or\" with the current value.
3315
3316Performs a bitwise \"or\" operation on the current value and the argument `val`, and
3317sets the new value to the result.
3318
3319Unlike `fetch_or`, this does not return the previous value.
3320
3321`or` takes an [`Ordering`] argument which describes the memory ordering
3322of this operation. All ordering modes are possible. Note that using
3323[`Acquire`] makes the store part of this operation [`Relaxed`], and
3324using [`Release`] makes the load part [`Relaxed`].
3325
3326This function may generate more efficient code than `fetch_or` on some platforms.
3327
3328- x86/x86_64: `lock or` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
3329- MSP430: `or` instead of disabling interrupts ({8,16}-bit atomics)
3330
3331Note: On x86/x86_64, the use of either function should not usually
3332affect the generated code, because LLVM can properly optimize the case
3333where the result is unused.
3334
3335# Examples
3336
3337```
3338use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3339
3340let foo = ", stringify!($atomic_type), "::new(0b101101);
3341foo.or(0b110011, Ordering::SeqCst);
3342assert_eq!(foo.load(Ordering::SeqCst), 0b111111);
3343```"),
3344                #[inline]
3345                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3346                pub fn or(&self, val: $int_type, order: Ordering) {
3347                    self.inner.or(val, order);
3348                }
3349            }
3350
3351            doc_comment! {
3352                concat!("Bitwise \"xor\" with the current value.
3353
3354Performs a bitwise \"xor\" operation on the current value and the argument `val`, and
3355sets the new value to the result.
3356
3357Returns the previous value.
3358
3359`fetch_xor` takes an [`Ordering`] argument which describes the memory ordering
3360of this operation. All ordering modes are possible. Note that using
3361[`Acquire`] makes the store part of this operation [`Relaxed`], and
3362using [`Release`] makes the load part [`Relaxed`].
3363
3364# Examples
3365
3366```
3367use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3368
3369let foo = ", stringify!($atomic_type), "::new(0b101101);
3370assert_eq!(foo.fetch_xor(0b110011, Ordering::SeqCst), 0b101101);
3371assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
3372```"),
3373                #[inline]
3374                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3375                pub fn fetch_xor(&self, val: $int_type, order: Ordering) -> $int_type {
3376                    self.inner.fetch_xor(val, order)
3377                }
3378            }
3379
3380            doc_comment! {
3381                concat!("Bitwise \"xor\" with the current value.
3382
3383Performs a bitwise \"xor\" operation on the current value and the argument `val`, and
3384sets the new value to the result.
3385
3386Unlike `fetch_xor`, this does not return the previous value.
3387
3388`xor` takes an [`Ordering`] argument which describes the memory ordering
3389of this operation. All ordering modes are possible. Note that using
3390[`Acquire`] makes the store part of this operation [`Relaxed`], and
3391using [`Release`] makes the load part [`Relaxed`].
3392
3393This function may generate more efficient code than `fetch_xor` on some platforms.
3394
3395- x86/x86_64: `lock xor` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
3396- MSP430: `xor` instead of disabling interrupts ({8,16}-bit atomics)
3397
3398Note: On x86/x86_64, the use of either function should not usually
3399affect the generated code, because LLVM can properly optimize the case
3400where the result is unused.
3401
3402# Examples
3403
3404```
3405use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3406
3407let foo = ", stringify!($atomic_type), "::new(0b101101);
3408foo.xor(0b110011, Ordering::SeqCst);
3409assert_eq!(foo.load(Ordering::SeqCst), 0b011110);
3410```"),
3411                #[inline]
3412                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3413                pub fn xor(&self, val: $int_type, order: Ordering) {
3414                    self.inner.xor(val, order);
3415                }
3416            }
3417
3418            cfg_has_atomic_cas! {
3419            doc_comment! {
3420                concat!("Fetches the value, and applies a function to it that returns an optional
3421new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
3422`Err(previous_value)`.
3423
3424Note: This may call the function multiple times if the value has been changed from other threads in
3425the meantime, as long as the function returns `Some(_)`, but the function will have been applied
3426only once to the stored value.
3427
3428`fetch_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
3429The first describes the required ordering for when the operation finally succeeds while the second
3430describes the required ordering for loads. These correspond to the success and failure orderings of
3431[`compare_exchange`](Self::compare_exchange) respectively.
3432
3433Using [`Acquire`] as success ordering makes the store part
3434of this operation [`Relaxed`], and using [`Release`] makes the final successful load
3435[`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
3436
3437# Panics
3438
3439Panics if `fetch_order` is [`Release`] or [`AcqRel`].
3440
3441# Considerations
3442
3443This method is not magic; it is not provided by the hardware.
3444It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
3445and suffers from the same drawbacks.
3446In particular, this method will not circumvent the [ABA Problem].
3447
3448[ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
3449
3450# Examples
3451
3452```
3453use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3454
3455let x = ", stringify!($atomic_type), "::new(7);
3456assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |_| None), Err(7));
3457assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(7));
3458assert_eq!(x.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x + 1)), Ok(8));
3459assert_eq!(x.load(Ordering::SeqCst), 9);
3460```"),
3461                #[inline]
3462                #[cfg_attr(
3463                    any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
3464                    track_caller
3465                )]
3466                pub fn fetch_update<F>(
3467                    &self,
3468                    set_order: Ordering,
3469                    fetch_order: Ordering,
3470                    mut f: F,
3471                ) -> Result<$int_type, $int_type>
3472                where
3473                    F: FnMut($int_type) -> Option<$int_type>,
3474                {
3475                    let mut prev = self.load(fetch_order);
3476                    while let Some(next) = f(prev) {
3477                        match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
3478                            x @ Ok(_) => return x,
3479                            Err(next_prev) => prev = next_prev,
3480                        }
3481                    }
3482                    Err(prev)
3483                }
3484            }
3485            } // cfg_has_atomic_cas!
3486
3487            $cfg_has_atomic_cas_or_amo32_or_8! {
3488            doc_comment! {
3489                concat!("Maximum with the current value.
3490
3491Finds the maximum of the current value and the argument `val`, and
3492sets the new value to the result.
3493
3494Returns the previous value.
3495
3496`fetch_max` takes an [`Ordering`] argument which describes the memory ordering
3497of this operation. All ordering modes are possible. Note that using
3498[`Acquire`] makes the store part of this operation [`Relaxed`], and
3499using [`Release`] makes the load part [`Relaxed`].
3500
3501# Examples
3502
3503```
3504use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3505
3506let foo = ", stringify!($atomic_type), "::new(23);
3507assert_eq!(foo.fetch_max(42, Ordering::SeqCst), 23);
3508assert_eq!(foo.load(Ordering::SeqCst), 42);
3509```
3510
3511If you want to obtain the maximum value in one step, you can use the following:
3512
3513```
3514use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3515
3516let foo = ", stringify!($atomic_type), "::new(23);
3517let bar = 42;
3518let max_foo = foo.fetch_max(bar, Ordering::SeqCst).max(bar);
3519assert_eq!(max_foo, 42);
3520```"),
3521                #[inline]
3522                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3523                pub fn fetch_max(&self, val: $int_type, order: Ordering) -> $int_type {
3524                    self.inner.fetch_max(val, order)
3525                }
3526            }
3527
3528            doc_comment! {
3529                concat!("Minimum with the current value.
3530
3531Finds the minimum of the current value and the argument `val`, and
3532sets the new value to the result.
3533
3534Returns the previous value.
3535
3536`fetch_min` takes an [`Ordering`] argument which describes the memory ordering
3537of this operation. All ordering modes are possible. Note that using
3538[`Acquire`] makes the store part of this operation [`Relaxed`], and
3539using [`Release`] makes the load part [`Relaxed`].
3540
3541# Examples
3542
3543```
3544use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3545
3546let foo = ", stringify!($atomic_type), "::new(23);
3547assert_eq!(foo.fetch_min(42, Ordering::Relaxed), 23);
3548assert_eq!(foo.load(Ordering::Relaxed), 23);
3549assert_eq!(foo.fetch_min(22, Ordering::Relaxed), 23);
3550assert_eq!(foo.load(Ordering::Relaxed), 22);
3551```
3552
3553If you want to obtain the minimum value in one step, you can use the following:
3554
3555```
3556use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3557
3558let foo = ", stringify!($atomic_type), "::new(23);
3559let bar = 12;
3560let min_foo = foo.fetch_min(bar, Ordering::SeqCst).min(bar);
3561assert_eq!(min_foo, 12);
3562```"),
3563                #[inline]
3564                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3565                pub fn fetch_min(&self, val: $int_type, order: Ordering) -> $int_type {
3566                    self.inner.fetch_min(val, order)
3567                }
3568            }
3569            } // $cfg_has_atomic_cas_or_amo32_or_8!
3570
3571            doc_comment! {
3572                concat!("Sets the bit at the specified bit-position to 1.
3573
3574Returns `true` if the specified bit was previously set to 1.
3575
3576`bit_set` takes an [`Ordering`] argument which describes the memory ordering
3577of this operation. All ordering modes are possible. Note that using
3578[`Acquire`] makes the store part of this operation [`Relaxed`], and
3579using [`Release`] makes the load part [`Relaxed`].
3580
3581This corresponds to x86's `lock bts`, and the implementation uses it on x86/x86_64.
3582
3583# Examples
3584
3585```
3586use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3587
3588let foo = ", stringify!($atomic_type), "::new(0b0000);
3589assert!(!foo.bit_set(0, Ordering::Relaxed));
3590assert_eq!(foo.load(Ordering::Relaxed), 0b0001);
3591assert!(foo.bit_set(0, Ordering::Relaxed));
3592assert_eq!(foo.load(Ordering::Relaxed), 0b0001);
3593```"),
3594                #[inline]
3595                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3596                pub fn bit_set(&self, bit: u32, order: Ordering) -> bool {
3597                    self.inner.bit_set(bit, order)
3598                }
3599            }
3600
3601            doc_comment! {
3602                concat!("Clears the bit at the specified bit-position to 0.
3603
3604Returns `true` if the specified bit was previously set to 1.
3605
3606`bit_clear` takes an [`Ordering`] argument which describes the memory ordering
3607of this operation. All ordering modes are possible. Note that using
3608[`Acquire`] makes the store part of this operation [`Relaxed`], and
3609using [`Release`] makes the load part [`Relaxed`].
3610
3611This corresponds to x86's `lock btr`, and the implementation uses it on x86/x86_64.
3612
3613# Examples
3614
3615```
3616use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3617
3618let foo = ", stringify!($atomic_type), "::new(0b0001);
3619assert!(foo.bit_clear(0, Ordering::Relaxed));
3620assert_eq!(foo.load(Ordering::Relaxed), 0b0000);
3621```"),
3622                #[inline]
3623                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3624                pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool {
3625                    self.inner.bit_clear(bit, order)
3626                }
3627            }
3628
3629            doc_comment! {
3630                concat!("Toggles the bit at the specified bit-position.
3631
3632Returns `true` if the specified bit was previously set to 1.
3633
3634`bit_toggle` takes an [`Ordering`] argument which describes the memory ordering
3635of this operation. All ordering modes are possible. Note that using
3636[`Acquire`] makes the store part of this operation [`Relaxed`], and
3637using [`Release`] makes the load part [`Relaxed`].
3638
3639This corresponds to x86's `lock btc`, and the implementation uses it on x86/x86_64.
3640
3641# Examples
3642
3643```
3644use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3645
3646let foo = ", stringify!($atomic_type), "::new(0b0000);
3647assert!(!foo.bit_toggle(0, Ordering::Relaxed));
3648assert_eq!(foo.load(Ordering::Relaxed), 0b0001);
3649assert!(foo.bit_toggle(0, Ordering::Relaxed));
3650assert_eq!(foo.load(Ordering::Relaxed), 0b0000);
3651```"),
3652                #[inline]
3653                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3654                pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool {
3655                    self.inner.bit_toggle(bit, order)
3656                }
3657            }
3658
3659            doc_comment! {
3660                concat!("Logically negates the current value, and sets the new value to the result.
3661
3662Returns the previous value.
3663
3664`fetch_not` takes an [`Ordering`] argument which describes the memory ordering
3665of this operation. All ordering modes are possible. Note that using
3666[`Acquire`] makes the store part of this operation [`Relaxed`], and
3667using [`Release`] makes the load part [`Relaxed`].
3668
3669# Examples
3670
3671```
3672use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3673
3674let foo = ", stringify!($atomic_type), "::new(0);
3675assert_eq!(foo.fetch_not(Ordering::Relaxed), 0);
3676assert_eq!(foo.load(Ordering::Relaxed), !0);
3677```"),
3678                #[inline]
3679                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3680                pub fn fetch_not(&self, order: Ordering) -> $int_type {
3681                    self.inner.fetch_not(order)
3682                }
3683            }
3684
3685            doc_comment! {
3686                concat!("Logically negates the current value, and sets the new value to the result.
3687
3688Unlike `fetch_not`, this does not return the previous value.
3689
3690`not` takes an [`Ordering`] argument which describes the memory ordering
3691of this operation. All ordering modes are possible. Note that using
3692[`Acquire`] makes the store part of this operation [`Relaxed`], and
3693using [`Release`] makes the load part [`Relaxed`].
3694
3695This function may generate more efficient code than `fetch_not` on some platforms.
3696
3697- x86/x86_64: `lock not` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
3698- MSP430: `inv` instead of disabling interrupts ({8,16}-bit atomics)
3699
3700# Examples
3701
3702```
3703use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3704
3705let foo = ", stringify!($atomic_type), "::new(0);
3706foo.not(Ordering::Relaxed);
3707assert_eq!(foo.load(Ordering::Relaxed), !0);
3708```"),
3709                #[inline]
3710                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3711                pub fn not(&self, order: Ordering) {
3712                    self.inner.not(order);
3713                }
3714            }
3715
3716            cfg_has_atomic_cas! {
3717            doc_comment! {
3718                concat!("Negates the current value, and sets the new value to the result.
3719
3720Returns the previous value.
3721
3722`fetch_neg` takes an [`Ordering`] argument which describes the memory ordering
3723of this operation. All ordering modes are possible. Note that using
3724[`Acquire`] makes the store part of this operation [`Relaxed`], and
3725using [`Release`] makes the load part [`Relaxed`].
3726
3727# Examples
3728
3729```
3730use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3731
3732let foo = ", stringify!($atomic_type), "::new(5);
3733assert_eq!(foo.fetch_neg(Ordering::Relaxed), 5);
3734assert_eq!(foo.load(Ordering::Relaxed), 5_", stringify!($int_type), ".wrapping_neg());
3735assert_eq!(foo.fetch_neg(Ordering::Relaxed), 5_", stringify!($int_type), ".wrapping_neg());
3736assert_eq!(foo.load(Ordering::Relaxed), 5);
3737```"),
3738                #[inline]
3739                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3740                pub fn fetch_neg(&self, order: Ordering) -> $int_type {
3741                    self.inner.fetch_neg(order)
3742                }
3743            }
3744
3745            doc_comment! {
3746                concat!("Negates the current value, and sets the new value to the result.
3747
3748Unlike `fetch_neg`, this does not return the previous value.
3749
3750`neg` takes an [`Ordering`] argument which describes the memory ordering
3751of this operation. All ordering modes are possible. Note that using
3752[`Acquire`] makes the store part of this operation [`Relaxed`], and
3753using [`Release`] makes the load part [`Relaxed`].
3754
3755This function may generate more efficient code than `fetch_neg` on some platforms.
3756
3757- x86/x86_64: `lock neg` instead of `cmpxchg` loop ({8,16,32}-bit atomics on x86, but additionally 64-bit atomics on x86_64)
3758
3759# Examples
3760
3761```
3762use portable_atomic::{", stringify!($atomic_type), ", Ordering};
3763
3764let foo = ", stringify!($atomic_type), "::new(5);
3765foo.neg(Ordering::Relaxed);
3766assert_eq!(foo.load(Ordering::Relaxed), 5_", stringify!($int_type), ".wrapping_neg());
3767foo.neg(Ordering::Relaxed);
3768assert_eq!(foo.load(Ordering::Relaxed), 5);
3769```"),
3770                #[inline]
3771                #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
3772                pub fn neg(&self, order: Ordering) {
3773                    self.inner.neg(order);
3774                }
3775            }
3776            } // cfg_has_atomic_cas!
3777            } // cfg_has_atomic_cas_or_amo32!
3778
3779            const_fn! {
3780                const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
3781                /// Returns a mutable pointer to the underlying integer.
3782                ///
3783                /// Returning an `*mut` pointer from a shared reference to this atomic is
3784                /// safe because the atomic types work with interior mutability. Any use of
3785                /// the returned raw pointer requires an `unsafe` block and has to uphold
3786                /// the safety requirements. If there is concurrent access, note the following
3787                /// additional safety requirements:
3788                ///
3789                /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
3790                ///   operations on it must be atomic.
3791                /// - Otherwise, any concurrent operations on it must be compatible with
3792                ///   operations performed by this atomic type.
3793                ///
3794                /// This is `const fn` on Rust 1.58+.
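                ///
                /// # Examples
                ///
                /// A minimal single-threaded sketch (shown with `AtomicUsize`; the
                /// same pattern applies to each atomic integer type):
                ///
                /// ```
                /// use portable_atomic::{AtomicUsize, Ordering};
                ///
                /// let v = AtomicUsize::new(1);
                /// // SAFETY: no other thread is accessing `v`, so a plain
                /// // (non-atomic) write through the raw pointer is fine.
                /// unsafe { *v.as_ptr() = 2 };
                /// assert_eq!(v.load(Ordering::Relaxed), 2);
                /// ```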
3795                #[inline]
3796                pub const fn as_ptr(&self) -> *mut $int_type {
3797                    self.inner.as_ptr()
3798                }
3799            }
3800        }
3801        // See https://github.com/taiki-e/portable-atomic/issues/180
3802        #[cfg(not(feature = "require-cas"))]
3803        cfg_no_atomic_cas! {
3804        #[doc(hidden)]
3805        #[allow(unused_variables, clippy::unused_self)]
3806        impl<'a> $atomic_type {
3807            $cfg_no_atomic_cas_or_amo32_or_8! {
3808            #[inline]
3809            pub fn swap(&self, val: $int_type, order: Ordering) -> $int_type
3810            where
3811                &'a Self: HasSwap,
3812            {
3813                unimplemented!()
3814            }
3815            } // $cfg_no_atomic_cas_or_amo32_or_8!
3816            #[inline]
3817            pub fn compare_exchange(
3818                &self,
3819                current: $int_type,
3820                new: $int_type,
3821                success: Ordering,
3822                failure: Ordering,
3823            ) -> Result<$int_type, $int_type>
3824            where
3825                &'a Self: HasCompareExchange,
3826            {
3827                unimplemented!()
3828            }
3829            #[inline]
3830            pub fn compare_exchange_weak(
3831                &self,
3832                current: $int_type,
3833                new: $int_type,
3834                success: Ordering,
3835                failure: Ordering,
3836            ) -> Result<$int_type, $int_type>
3837            where
3838                &'a Self: HasCompareExchangeWeak,
3839            {
3840                unimplemented!()
3841            }
3842            $cfg_no_atomic_cas_or_amo32_or_8! {
3843            #[inline]
3844            pub fn fetch_add(&self, val: $int_type, order: Ordering) -> $int_type
3845            where
3846                &'a Self: HasFetchAdd,
3847            {
3848                unimplemented!()
3849            }
3850            #[inline]
3851            pub fn add(&self, val: $int_type, order: Ordering)
3852            where
3853                &'a Self: HasAdd,
3854            {
3855                unimplemented!()
3856            }
3857            #[inline]
3858            pub fn fetch_sub(&self, val: $int_type, order: Ordering) -> $int_type
3859            where
3860                &'a Self: HasFetchSub,
3861            {
3862                unimplemented!()
3863            }
3864            #[inline]
3865            pub fn sub(&self, val: $int_type, order: Ordering)
3866            where
3867                &'a Self: HasSub,
3868            {
3869                unimplemented!()
3870            }
3871            } // $cfg_no_atomic_cas_or_amo32_or_8!
            cfg_no_atomic_cas_or_amo32! {
            #[inline]
            pub fn fetch_and(&self, val: $int_type, order: Ordering) -> $int_type
            where
                &'a Self: HasFetchAnd,
            {
                unimplemented!()
            }
            #[inline]
            pub fn and(&self, val: $int_type, order: Ordering)
            where
                &'a Self: HasAnd,
            {
                unimplemented!()
            }
            } // cfg_no_atomic_cas_or_amo32!
            #[inline]
            pub fn fetch_nand(&self, val: $int_type, order: Ordering) -> $int_type
            where
                &'a Self: HasFetchNand,
            {
                unimplemented!()
            }
            cfg_no_atomic_cas_or_amo32! {
            #[inline]
            pub fn fetch_or(&self, val: $int_type, order: Ordering) -> $int_type
            where
                &'a Self: HasFetchOr,
            {
                unimplemented!()
            }
            #[inline]
            pub fn or(&self, val: $int_type, order: Ordering)
            where
                &'a Self: HasOr,
            {
                unimplemented!()
            }
            #[inline]
            pub fn fetch_xor(&self, val: $int_type, order: Ordering) -> $int_type
            where
                &'a Self: HasFetchXor,
            {
                unimplemented!()
            }
            #[inline]
            pub fn xor(&self, val: $int_type, order: Ordering)
            where
                &'a Self: HasXor,
            {
                unimplemented!()
            }
            } // cfg_no_atomic_cas_or_amo32!
            #[inline]
            pub fn fetch_update<F>(
                &self,
                set_order: Ordering,
                fetch_order: Ordering,
                f: F,
            ) -> Result<$int_type, $int_type>
            where
                F: FnMut($int_type) -> Option<$int_type>,
                &'a Self: HasFetchUpdate,
            {
                unimplemented!()
            }
            $cfg_no_atomic_cas_or_amo32_or_8! {
            #[inline]
            pub fn fetch_max(&self, val: $int_type, order: Ordering) -> $int_type
            where
                &'a Self: HasFetchMax,
            {
                unimplemented!()
            }
            #[inline]
            pub fn fetch_min(&self, val: $int_type, order: Ordering) -> $int_type
            where
                &'a Self: HasFetchMin,
            {
                unimplemented!()
            }
            } // $cfg_no_atomic_cas_or_amo32_or_8!
            cfg_no_atomic_cas_or_amo32! {
            #[inline]
            pub fn bit_set(&self, bit: u32, order: Ordering) -> bool
            where
                &'a Self: HasBitSet,
            {
                unimplemented!()
            }
            #[inline]
            pub fn bit_clear(&self, bit: u32, order: Ordering) -> bool
            where
                &'a Self: HasBitClear,
            {
                unimplemented!()
            }
            #[inline]
            pub fn bit_toggle(&self, bit: u32, order: Ordering) -> bool
            where
                &'a Self: HasBitToggle,
            {
                unimplemented!()
            }
            #[inline]
            pub fn fetch_not(&self, order: Ordering) -> $int_type
            where
                &'a Self: HasFetchNot,
            {
                unimplemented!()
            }
            #[inline]
            pub fn not(&self, order: Ordering)
            where
                &'a Self: HasNot,
            {
                unimplemented!()
            }
            } // cfg_no_atomic_cas_or_amo32!
            #[inline]
            pub fn fetch_neg(&self, order: Ordering) -> $int_type
            where
                &'a Self: HasFetchNeg,
            {
                unimplemented!()
            }
            #[inline]
            pub fn neg(&self, order: Ordering)
            where
                &'a Self: HasNeg,
            {
                unimplemented!()
            }
        }
        } // cfg_no_atomic_cas!
        $(
            #[$cfg_float]
            atomic_int!(float, $atomic_float_type, $float_type, $atomic_type, $int_type, $align);
        )?
    };

    // AtomicF* impls
    (float,
        $atomic_type:ident,
        $float_type:ident,
        $atomic_int_type:ident,
        $int_type:ident,
        $align:literal
    ) => {
        doc_comment! {
            concat!("A floating point type which can be safely shared between threads.

This type has the same in-memory representation as the underlying floating point type,
[`", stringify!($float_type), "`].
"
            ),
            #[cfg_attr(docsrs, doc(cfg(feature = "float")))]
            // We can use #[repr(transparent)] here, but #[repr(C, align(N))]
            // will show clearer docs.
            #[repr(C, align($align))]
            pub struct $atomic_type {
                inner: imp::float::$atomic_type,
            }
        }

        impl Default for $atomic_type {
            #[inline]
            fn default() -> Self {
                Self::new($float_type::default())
            }
        }

        impl From<$float_type> for $atomic_type {
            #[inline]
            fn from(v: $float_type) -> Self {
                Self::new(v)
            }
        }

        // UnwindSafe is implicitly implemented.
        #[cfg(not(portable_atomic_no_core_unwind_safe))]
        impl core::panic::RefUnwindSafe for $atomic_type {}
        #[cfg(all(portable_atomic_no_core_unwind_safe, feature = "std"))]
        impl std::panic::RefUnwindSafe for $atomic_type {}

        impl_debug_and_serde!($atomic_type);

        impl $atomic_type {
            /// Creates a new atomic float.
            #[inline]
            #[must_use]
            pub const fn new(v: $float_type) -> Self {
                static_assert_layout!($atomic_type, $float_type);
                Self { inner: imp::float::$atomic_type::new(v) }
            }

            // TODO: update docs based on https://github.com/rust-lang/rust/pull/116762
            doc_comment! {
                concat!("Creates a new reference to an atomic float from a pointer.

# Safety

* `ptr` must be aligned to `align_of::<", stringify!($atomic_type), ">()` (note that on some platforms this
  can be bigger than `align_of::<", stringify!($float_type), ">()`).
* `ptr` must be [valid] for both reads and writes for the whole lifetime `'a`.
* If this atomic type is [lock-free](Self::is_lock_free), non-atomic accesses to the value
  behind `ptr` must have a happens-before relationship with atomic accesses via
  the returned value (or vice-versa).
  * In other words, time periods where the value is accessed atomically may not
    overlap with periods where the value is accessed non-atomically.
  * This requirement is trivially satisfied if `ptr` is never used non-atomically
    for the duration of lifetime `'a`. Most use cases should be able to follow
    this guideline.
  * This requirement is also trivially satisfied if all accesses (atomic or not) are
    done from the same thread.
* If this atomic type is *not* lock-free:
  * Any accesses to the value behind `ptr` must have a happens-before relationship
    with accesses via the returned value (or vice-versa).
  * Any concurrent accesses to the value behind `ptr` for the duration of lifetime `'a` must
    be compatible with operations performed by this atomic type.
* This method must not be used to create overlapping or mixed-size atomic
  accesses, as these are not supported by the memory model.

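# Examples

A minimal usage sketch, shown for `AtomicF32` (the `f64` case is analogous; this
example assumes the `float` feature is enabled):

```rust
use portable_atomic::{AtomicF32, Ordering};

let mut value = 1.0_f32;
// SAFETY: the pointer is valid and sufficiently aligned for `AtomicF32`, and
// the value is only accessed atomically for the lifetime of the reference.
let a = unsafe { AtomicF32::from_ptr(&mut value) };
a.store(2.0, Ordering::Relaxed);
assert_eq!(a.load(Ordering::Relaxed), 2.0);
```
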
[valid]: core::ptr#safety"),
                #[inline]
                #[must_use]
                pub unsafe fn from_ptr<'a>(ptr: *mut $float_type) -> &'a Self {
                    #[allow(clippy::cast_ptr_alignment)]
                    // SAFETY: guaranteed by the caller
                    unsafe { &*(ptr as *mut Self) }
                }
            }

            /// Returns `true` if operations on values of this type are lock-free.
            ///
            /// If the compiler or the platform doesn't support the necessary
            /// atomic instructions, global locks for every potentially
            /// concurrent atomic operation will be used.
            #[inline]
            #[must_use]
            pub fn is_lock_free() -> bool {
                <imp::float::$atomic_type>::is_lock_free()
            }

            /// Returns `true` if operations on values of this type are lock-free.
            ///
            /// If the compiler or the platform doesn't support the necessary
            /// atomic instructions, global locks for every potentially
            /// concurrent atomic operation will be used.
            ///
            /// **Note:** If the atomic operation relies on dynamic CPU feature detection,
            /// this type may be lock-free even if the function returns false.
            #[inline]
            #[must_use]
            pub const fn is_always_lock_free() -> bool {
                <imp::float::$atomic_type>::IS_ALWAYS_LOCK_FREE
            }
            #[cfg(test)]
            const IS_ALWAYS_LOCK_FREE: bool = Self::is_always_lock_free();

            /// Returns a mutable reference to the underlying float.
            ///
            /// This is safe because the mutable reference guarantees that no other threads are
            /// concurrently accessing the atomic data.
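            ///
            /// # Examples
            ///
            /// A minimal sketch, shown for `AtomicF32` (assumes the `float` feature is enabled):
            ///
            /// ```
            /// use portable_atomic::{AtomicF32, Ordering};
            ///
            /// let mut v = AtomicF32::new(1.0);
            /// *v.get_mut() = 2.0;
            /// assert_eq!(v.load(Ordering::Relaxed), 2.0);
            /// ```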
            #[inline]
            pub fn get_mut(&mut self) -> &mut $float_type {
                self.inner.get_mut()
            }

            // TODO: Add from_mut/get_mut_slice/from_mut_slice once it is stable on std atomic types.
            // https://github.com/rust-lang/rust/issues/76314

            const_fn! {
                const_if: #[cfg(not(portable_atomic_no_const_transmute))];
                /// Consumes the atomic and returns the contained value.
                ///
                /// This is safe because passing `self` by value guarantees that no other threads are
                /// concurrently accessing the atomic data.
                ///
                /// This is `const fn` on Rust 1.56+.
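                ///
                /// # Examples
                ///
                /// A minimal sketch, shown for `AtomicF32` (assumes the `float` feature is enabled):
                ///
                /// ```
                /// use portable_atomic::AtomicF32;
                ///
                /// let v = AtomicF32::new(1.5);
                /// assert_eq!(v.into_inner(), 1.5);
                /// ```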
                #[inline]
                pub const fn into_inner(self) -> $float_type {
                    // SAFETY: $atomic_type and $float_type have the same size and in-memory representations,
                    // so they can be safely transmuted.
                    // (const UnsafeCell::into_inner is unstable)
                    unsafe { core::mem::transmute(self) }
                }
            }

            /// Loads a value from the atomic float.
            ///
            /// `load` takes an [`Ordering`] argument which describes the memory ordering of this operation.
            /// Possible values are [`SeqCst`], [`Acquire`] and [`Relaxed`].
            ///
            /// # Panics
            ///
            /// Panics if `order` is [`Release`] or [`AcqRel`].
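            ///
            /// # Examples
            ///
            /// A minimal sketch, shown for `AtomicF32` (assumes the `float` feature is enabled):
            ///
            /// ```
            /// use portable_atomic::{AtomicF32, Ordering};
            ///
            /// let v = AtomicF32::new(1.5);
            /// assert_eq!(v.load(Ordering::Relaxed), 1.5);
            /// ```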
            #[inline]
            #[cfg_attr(
                any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
                track_caller
            )]
            pub fn load(&self, order: Ordering) -> $float_type {
                self.inner.load(order)
            }

            /// Stores a value into the atomic float.
            ///
            /// `store` takes an [`Ordering`] argument which describes the memory ordering of this operation.
            /// Possible values are [`SeqCst`], [`Release`] and [`Relaxed`].
            ///
            /// # Panics
            ///
            /// Panics if `order` is [`Acquire`] or [`AcqRel`].
            #[inline]
            #[cfg_attr(
                any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
                track_caller
            )]
            pub fn store(&self, val: $float_type, order: Ordering) {
                self.inner.store(val, order)
            }

            cfg_has_atomic_cas_or_amo32! {
            /// Stores a value into the atomic float, returning the previous value.
            ///
            /// `swap` takes an [`Ordering`] argument which describes the memory ordering
            /// of this operation. All ordering modes are possible. Note that using
            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
            /// using [`Release`] makes the load part [`Relaxed`].
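            ///
            /// # Examples
            ///
            /// A minimal sketch, shown for `AtomicF32` (assumes the `float` feature is enabled):
            ///
            /// ```
            /// use portable_atomic::{AtomicF32, Ordering};
            ///
            /// let v = AtomicF32::new(1.0);
            /// assert_eq!(v.swap(2.0, Ordering::AcqRel), 1.0);
            /// assert_eq!(v.load(Ordering::Relaxed), 2.0);
            /// ```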
            #[inline]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn swap(&self, val: $float_type, order: Ordering) -> $float_type {
                self.inner.swap(val, order)
            }

            cfg_has_atomic_cas! {
            /// Stores a value into the atomic float if the current value is the same as
            /// the `current` value.
            ///
            /// The return value is a result indicating whether the new value was written and
            /// containing the previous value. On success this value is guaranteed to be equal to
            /// `current`.
            ///
            /// `compare_exchange` takes two [`Ordering`] arguments to describe the memory
            /// ordering of this operation. `success` describes the required ordering for the
            /// read-modify-write operation that takes place if the comparison with `current` succeeds.
            /// `failure` describes the required ordering for the load operation that takes place when
            /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
            /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
            /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
            ///
            /// # Panics
            ///
            /// Panics if `failure` is [`Release`] or [`AcqRel`].
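            ///
            /// # Examples
            ///
            /// A minimal sketch, shown for `AtomicF32` (assumes the `float` feature is enabled):
            ///
            /// ```
            /// use portable_atomic::{AtomicF32, Ordering};
            ///
            /// let v = AtomicF32::new(1.0);
            /// assert_eq!(
            ///     v.compare_exchange(1.0, 2.0, Ordering::AcqRel, Ordering::Acquire),
            ///     Ok(1.0)
            /// );
            /// assert_eq!(v.load(Ordering::Relaxed), 2.0);
            /// ```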
            #[inline]
            #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
            #[cfg_attr(
                any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
                track_caller
            )]
            pub fn compare_exchange(
                &self,
                current: $float_type,
                new: $float_type,
                success: Ordering,
                failure: Ordering,
            ) -> Result<$float_type, $float_type> {
                self.inner.compare_exchange(current, new, success, failure)
            }

            /// Stores a value into the atomic float if the current value is the same as
            /// the `current` value.
            /// Unlike [`compare_exchange`](Self::compare_exchange),
            /// this function is allowed to spuriously fail even
            /// when the comparison succeeds, which can result in more efficient code on some
            /// platforms. The return value is a result indicating whether the new value was
            /// written and containing the previous value.
            ///
            /// `compare_exchange_weak` takes two [`Ordering`] arguments to describe the memory
            /// ordering of this operation. `success` describes the required ordering for the
            /// read-modify-write operation that takes place if the comparison with `current` succeeds.
            /// `failure` describes the required ordering for the load operation that takes place when
            /// the comparison fails. Using [`Acquire`] as success ordering makes the store part
            /// of this operation [`Relaxed`], and using [`Release`] makes the successful load
            /// [`Relaxed`]. The failure ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
            ///
            /// # Panics
            ///
            /// Panics if `failure` is [`Release`] or [`AcqRel`].
            #[inline]
            #[cfg_attr(docsrs, doc(alias = "compare_and_swap"))]
            #[cfg_attr(
                any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
                track_caller
            )]
            pub fn compare_exchange_weak(
                &self,
                current: $float_type,
                new: $float_type,
                success: Ordering,
                failure: Ordering,
            ) -> Result<$float_type, $float_type> {
                self.inner.compare_exchange_weak(current, new, success, failure)
            }

            /// Adds to the current value, returning the previous value.
            ///
            /// This operation follows IEEE 754 arithmetic; on overflow the result is an
            /// infinity, not a wrapped-around value.
            ///
            /// `fetch_add` takes an [`Ordering`] argument which describes the memory ordering
            /// of this operation. All ordering modes are possible. Note that using
            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
            /// using [`Release`] makes the load part [`Relaxed`].
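            ///
            /// # Examples
            ///
            /// A minimal sketch, shown for `AtomicF32` (assumes the `float` feature is enabled):
            ///
            /// ```
            /// use portable_atomic::{AtomicF32, Ordering};
            ///
            /// let v = AtomicF32::new(1.0);
            /// assert_eq!(v.fetch_add(0.5, Ordering::SeqCst), 1.0);
            /// assert_eq!(v.load(Ordering::Relaxed), 1.5);
            /// ```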
            #[inline]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn fetch_add(&self, val: $float_type, order: Ordering) -> $float_type {
                self.inner.fetch_add(val, order)
            }

            /// Subtracts from the current value, returning the previous value.
            ///
            /// This operation follows IEEE 754 arithmetic; on overflow the result is an
            /// infinity, not a wrapped-around value.
            ///
            /// `fetch_sub` takes an [`Ordering`] argument which describes the memory ordering
            /// of this operation. All ordering modes are possible. Note that using
            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
            /// using [`Release`] makes the load part [`Relaxed`].
            #[inline]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn fetch_sub(&self, val: $float_type, order: Ordering) -> $float_type {
                self.inner.fetch_sub(val, order)
            }

            /// Fetches the value, and applies a function to it that returns an optional
            /// new value. Returns a `Result` of `Ok(previous_value)` if the function returned `Some(_)`, else
            /// `Err(previous_value)`.
            ///
            /// Note: This may call the function multiple times if the value has been changed from other threads in
            /// the meantime, as long as the function returns `Some(_)`, but the function will have been applied
            /// only once to the stored value.
            ///
            /// `fetch_update` takes two [`Ordering`] arguments to describe the memory ordering of this operation.
            /// The first describes the required ordering for when the operation finally succeeds while the second
            /// describes the required ordering for loads. These correspond to the success and failure orderings of
            /// [`compare_exchange`](Self::compare_exchange) respectively.
            ///
            /// Using [`Acquire`] as success ordering makes the store part
            /// of this operation [`Relaxed`], and using [`Release`] makes the final successful load
            /// [`Relaxed`]. The (failed) load ordering can only be [`SeqCst`], [`Acquire`] or [`Relaxed`].
            ///
            /// # Panics
            ///
            /// Panics if `fetch_order` is [`Release`] or [`AcqRel`].
            ///
            /// # Considerations
            ///
            /// This method is not magic; it is not provided by the hardware.
            /// It is implemented in terms of [`compare_exchange_weak`](Self::compare_exchange_weak),
            /// and suffers from the same drawbacks.
            /// In particular, this method will not circumvent the [ABA Problem].
            ///
            /// [ABA Problem]: https://en.wikipedia.org/wiki/ABA_problem
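            ///
            /// # Examples
            ///
            /// A minimal sketch, shown for `AtomicF32` (assumes the `float` feature is enabled):
            ///
            /// ```
            /// use portable_atomic::{AtomicF32, Ordering};
            ///
            /// let v = AtomicF32::new(4.0);
            /// let prev = v.fetch_update(Ordering::SeqCst, Ordering::SeqCst, |x| Some(x * 0.5));
            /// assert_eq!(prev, Ok(4.0));
            /// assert_eq!(v.load(Ordering::Relaxed), 2.0);
            /// ```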
            #[inline]
            #[cfg_attr(
                any(all(debug_assertions, not(portable_atomic_no_track_caller)), miri),
                track_caller
            )]
            pub fn fetch_update<F>(
                &self,
                set_order: Ordering,
                fetch_order: Ordering,
                mut f: F,
            ) -> Result<$float_type, $float_type>
            where
                F: FnMut($float_type) -> Option<$float_type>,
            {
                let mut prev = self.load(fetch_order);
                while let Some(next) = f(prev) {
                    match self.compare_exchange_weak(prev, next, set_order, fetch_order) {
                        x @ Ok(_) => return x,
                        Err(next_prev) => prev = next_prev,
                    }
                }
                Err(prev)
            }

            /// Maximum with the current value.
            ///
            /// Finds the maximum of the current value and the argument `val`, and
            /// sets the new value to the result.
            ///
            /// Returns the previous value.
            ///
            /// `fetch_max` takes an [`Ordering`] argument which describes the memory ordering
            /// of this operation. All ordering modes are possible. Note that using
            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
            /// using [`Release`] makes the load part [`Relaxed`].
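            ///
            /// # Examples
            ///
            /// A minimal sketch, shown for `AtomicF32` (assumes the `float` feature is enabled):
            ///
            /// ```
            /// use portable_atomic::{AtomicF32, Ordering};
            ///
            /// let v = AtomicF32::new(1.0);
            /// assert_eq!(v.fetch_max(2.0, Ordering::SeqCst), 1.0);
            /// assert_eq!(v.load(Ordering::Relaxed), 2.0);
            /// ```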
            #[inline]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn fetch_max(&self, val: $float_type, order: Ordering) -> $float_type {
                self.inner.fetch_max(val, order)
            }

            /// Minimum with the current value.
            ///
            /// Finds the minimum of the current value and the argument `val`, and
            /// sets the new value to the result.
            ///
            /// Returns the previous value.
            ///
            /// `fetch_min` takes an [`Ordering`] argument which describes the memory ordering
            /// of this operation. All ordering modes are possible. Note that using
            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
            /// using [`Release`] makes the load part [`Relaxed`].
            #[inline]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn fetch_min(&self, val: $float_type, order: Ordering) -> $float_type {
                self.inner.fetch_min(val, order)
            }
            } // cfg_has_atomic_cas!

            /// Negates the current value, and sets the new value to the result.
            ///
            /// Returns the previous value.
            ///
            /// `fetch_neg` takes an [`Ordering`] argument which describes the memory ordering
            /// of this operation. All ordering modes are possible. Note that using
            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
            /// using [`Release`] makes the load part [`Relaxed`].
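            ///
            /// # Examples
            ///
            /// A minimal sketch, shown for `AtomicF32` (assumes the `float` feature is enabled):
            ///
            /// ```
            /// use portable_atomic::{AtomicF32, Ordering};
            ///
            /// let v = AtomicF32::new(1.5);
            /// assert_eq!(v.fetch_neg(Ordering::SeqCst), 1.5);
            /// assert_eq!(v.load(Ordering::Relaxed), -1.5);
            /// ```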
            #[inline]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn fetch_neg(&self, order: Ordering) -> $float_type {
                self.inner.fetch_neg(order)
            }

            /// Computes the absolute value of the current value, and sets the
            /// new value to the result.
            ///
            /// Returns the previous value.
            ///
            /// `fetch_abs` takes an [`Ordering`] argument which describes the memory ordering
            /// of this operation. All ordering modes are possible. Note that using
            /// [`Acquire`] makes the store part of this operation [`Relaxed`], and
            /// using [`Release`] makes the load part [`Relaxed`].
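            ///
            /// # Examples
            ///
            /// A minimal sketch, shown for `AtomicF32` (assumes the `float` feature is enabled):
            ///
            /// ```
            /// use portable_atomic::{AtomicF32, Ordering};
            ///
            /// let v = AtomicF32::new(-1.5);
            /// assert_eq!(v.fetch_abs(Ordering::SeqCst), -1.5);
            /// assert_eq!(v.load(Ordering::Relaxed), 1.5);
            /// ```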
            #[inline]
            #[cfg_attr(miri, track_caller)] // even without panics, this helps for Miri backtraces
            pub fn fetch_abs(&self, order: Ordering) -> $float_type {
                self.inner.fetch_abs(order)
            }
            } // cfg_has_atomic_cas_or_amo32!

            #[cfg(not(portable_atomic_no_const_raw_ptr_deref))]
            doc_comment! {
                concat!("Raw transmutation to `&", stringify!($atomic_int_type), "`.

See [`", stringify!($float_type), "::from_bits`] for some discussion of the
portability of this operation (there are almost no issues).

This is `const fn` on Rust 1.58+."),
                #[inline]
                pub const fn as_bits(&self) -> &$atomic_int_type {
                    self.inner.as_bits()
                }
            }
            #[cfg(portable_atomic_no_const_raw_ptr_deref)]
            doc_comment! {
                concat!("Raw transmutation to `&", stringify!($atomic_int_type), "`.

See [`", stringify!($float_type), "::from_bits`] for some discussion of the
portability of this operation (there are almost no issues).

This is `const fn` on Rust 1.58+."),
                #[inline]
                pub fn as_bits(&self) -> &$atomic_int_type {
                    self.inner.as_bits()
                }
            }

            const_fn! {
                const_if: #[cfg(not(portable_atomic_no_const_raw_ptr_deref))];
                /// Returns a mutable pointer to the underlying float.
                ///
                /// Returning an `*mut` pointer from a shared reference to this atomic is
                /// safe because the atomic types work with interior mutability. Any use of
                /// the returned raw pointer requires an `unsafe` block and has to uphold
                /// the safety requirements. If there is concurrent access, note the following
                /// additional safety requirements:
                ///
                /// - If this atomic type is [lock-free](Self::is_lock_free), any concurrent
                ///   operations on it must be atomic.
                /// - Otherwise, any concurrent operations on it must be compatible with
                ///   operations performed by this atomic type.
                ///
                /// This is `const fn` on Rust 1.58+.
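                ///
                /// # Examples
                ///
                /// A minimal sketch, shown for `AtomicF32` (assumes the `float` feature is enabled):
                ///
                /// ```
                /// use portable_atomic::AtomicF32;
                ///
                /// let v = AtomicF32::new(1.0);
                /// let p: *mut f32 = v.as_ptr();
                /// // SAFETY: there is no concurrent access in this example.
                /// unsafe { assert_eq!(*p, 1.0) };
                /// ```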
                #[inline]
                pub const fn as_ptr(&self) -> *mut $float_type {
                    self.inner.as_ptr()
                }
            }
        }
        // See https://github.com/taiki-e/portable-atomic/issues/180
        #[cfg(not(feature = "require-cas"))]
        cfg_no_atomic_cas! {
        #[doc(hidden)]
        #[allow(unused_variables, clippy::unused_self)]
        impl<'a> $atomic_type {
            cfg_no_atomic_cas_or_amo32! {
            #[inline]
            pub fn swap(&self, val: $float_type, order: Ordering) -> $float_type
            where
                &'a Self: HasSwap,
            {
                unimplemented!()
            }
            } // cfg_no_atomic_cas_or_amo32!
            #[inline]
            pub fn compare_exchange(
                &self,
                current: $float_type,
                new: $float_type,
                success: Ordering,
                failure: Ordering,
            ) -> Result<$float_type, $float_type>
            where
                &'a Self: HasCompareExchange,
            {
                unimplemented!()
            }
            #[inline]
            pub fn compare_exchange_weak(
                &self,
                current: $float_type,
                new: $float_type,
                success: Ordering,
                failure: Ordering,
            ) -> Result<$float_type, $float_type>
            where
                &'a Self: HasCompareExchangeWeak,
            {
                unimplemented!()
            }
            #[inline]
            pub fn fetch_add(&self, val: $float_type, order: Ordering) -> $float_type
            where
                &'a Self: HasFetchAdd,
            {
                unimplemented!()
            }
            #[inline]
            pub fn fetch_sub(&self, val: $float_type, order: Ordering) -> $float_type
            where
                &'a Self: HasFetchSub,
            {
                unimplemented!()
            }
            #[inline]
            pub fn fetch_update<F>(
                &self,
                set_order: Ordering,
                fetch_order: Ordering,
                f: F,
            ) -> Result<$float_type, $float_type>
            where
                F: FnMut($float_type) -> Option<$float_type>,
                &'a Self: HasFetchUpdate,
            {
                unimplemented!()
            }
            #[inline]
            pub fn fetch_max(&self, val: $float_type, order: Ordering) -> $float_type
            where
                &'a Self: HasFetchMax,
            {
                unimplemented!()
            }
            #[inline]
            pub fn fetch_min(&self, val: $float_type, order: Ordering) -> $float_type
            where
                &'a Self: HasFetchMin,
            {
                unimplemented!()
            }
            cfg_no_atomic_cas_or_amo32! {
            #[inline]
            pub fn fetch_neg(&self, order: Ordering) -> $float_type
            where
                &'a Self: HasFetchNeg,
            {
                unimplemented!()
            }
            #[inline]
            pub fn fetch_abs(&self, order: Ordering) -> $float_type
            where
                &'a Self: HasFetchAbs,
            {
                unimplemented!()
            }
            } // cfg_no_atomic_cas_or_amo32!
        }
        } // cfg_no_atomic_cas!
    };
}

cfg_has_atomic_ptr! {
    #[cfg(target_pointer_width = "16")]
    atomic_int!(AtomicIsize, isize, 2, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
    #[cfg(target_pointer_width = "16")]
    atomic_int!(AtomicUsize, usize, 2, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
    #[cfg(target_pointer_width = "32")]
    atomic_int!(AtomicIsize, isize, 4, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
    #[cfg(target_pointer_width = "32")]
    atomic_int!(AtomicUsize, usize, 4, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
    #[cfg(target_pointer_width = "64")]
    atomic_int!(AtomicIsize, isize, 8, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
    #[cfg(target_pointer_width = "64")]
    atomic_int!(AtomicUsize, usize, 8, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
    #[cfg(target_pointer_width = "128")]
    atomic_int!(AtomicIsize, isize, 16, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
    #[cfg(target_pointer_width = "128")]
    atomic_int!(AtomicUsize, usize, 16, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
}

cfg_has_atomic_8! {
    atomic_int!(AtomicI8, i8, 1, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
    atomic_int!(AtomicU8, u8, 1, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
}
cfg_has_atomic_16! {
    atomic_int!(AtomicI16, i16, 2, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
    atomic_int!(AtomicU16, u16, 2, cfg_has_atomic_cas_or_amo8, cfg_no_atomic_cas_or_amo8);
        // TODO: support once https://github.com/rust-lang/rust/issues/116909 stabilized.
        // #[cfg(all(feature = "float", not(portable_atomic_no_f16)))] AtomicF16, f16);
}
cfg_has_atomic_32! {
    atomic_int!(AtomicI32, i32, 4, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
    atomic_int!(AtomicU32, u32, 4, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32,
        #[cfg(feature = "float")] AtomicF32, f32);
}
cfg_has_atomic_64! {
    atomic_int!(AtomicI64, i64, 8, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
    atomic_int!(AtomicU64, u64, 8, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32,
        #[cfg(feature = "float")] AtomicF64, f64);
}
cfg_has_atomic_128! {
    atomic_int!(AtomicI128, i128, 16, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
    atomic_int!(AtomicU128, u128, 16, cfg_has_atomic_cas_or_amo32, cfg_no_atomic_cas_or_amo32);
        // TODO: support once https://github.com/rust-lang/rust/issues/116909 stabilized.
        // #[cfg(all(feature = "float", not(portable_atomic_no_f128)))] AtomicF128, f128);
}

// See https://github.com/taiki-e/portable-atomic/issues/180
#[cfg(not(feature = "require-cas"))]
cfg_no_atomic_cas! {
cfg_no_atomic_cas_or_amo32! {
#[cfg(feature = "float")]
use diagnostic_helper::HasFetchAbs;
use diagnostic_helper::{
    HasAnd, HasBitClear, HasBitSet, HasBitToggle, HasFetchAnd, HasFetchByteAdd, HasFetchByteSub,
    HasFetchNot, HasFetchOr, HasFetchPtrAdd, HasFetchPtrSub, HasFetchXor, HasNot, HasOr, HasXor,
};
} // cfg_no_atomic_cas_or_amo32!
cfg_no_atomic_cas_or_amo8! {
use diagnostic_helper::{HasAdd, HasSub, HasSwap};
} // cfg_no_atomic_cas_or_amo8!
#[cfg_attr(not(feature = "float"), allow(unused_imports))]
use diagnostic_helper::{
    HasCompareExchange, HasCompareExchangeWeak, HasFetchAdd, HasFetchMax, HasFetchMin,
    HasFetchNand, HasFetchNeg, HasFetchSub, HasFetchUpdate, HasNeg,
};
#[cfg_attr(
    any(
        all(
            portable_atomic_no_atomic_load_store,
            not(any(
                target_arch = "avr",
                target_arch = "bpf",
                target_arch = "msp430",
                target_arch = "riscv32",
                target_arch = "riscv64",
                feature = "critical-section",
            )),
        ),
        not(feature = "float"),
    ),
    allow(dead_code, unreachable_pub)
)]
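// These marker traits exist only to drive `#[diagnostic::on_unimplemented]`:
// the stub methods above add a `&'a Self: HasFoo` bound that is never
// satisfied, so calling such a method on a target without atomic CAS produces
// the tailored error message below instead of a bare "method not found" error.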
mod diagnostic_helper {
    cfg_no_atomic_cas_or_amo8! {
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`swap` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasSwap {}
    } // cfg_no_atomic_cas_or_amo8!
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`compare_exchange` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasCompareExchange {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`compare_exchange_weak` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasCompareExchangeWeak {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_add` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchAdd {}
    cfg_no_atomic_cas_or_amo8! {
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`add` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasAdd {}
    } // cfg_no_atomic_cas_or_amo8!
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_sub` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchSub {}
    cfg_no_atomic_cas_or_amo8! {
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`sub` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasSub {}
    } // cfg_no_atomic_cas_or_amo8!
    cfg_no_atomic_cas_or_amo32! {
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_ptr_add` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchPtrAdd {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_ptr_sub` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchPtrSub {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_byte_add` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchByteAdd {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_byte_sub` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchByteSub {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_and` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchAnd {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`and` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasAnd {}
    } // cfg_no_atomic_cas_or_amo32!
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_nand` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchNand {}
    cfg_no_atomic_cas_or_amo32! {
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_or` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchOr {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`or` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasOr {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_xor` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchXor {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`xor` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasXor {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_not` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchNot {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`not` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasNot {}
    } // cfg_no_atomic_cas_or_amo32!
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_neg` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchNeg {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`neg` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasNeg {}
    cfg_no_atomic_cas_or_amo32! {
    #[cfg(feature = "float")]
    #[cfg_attr(target_pointer_width = "16", allow(dead_code, unreachable_pub))]
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_abs` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchAbs {}
    } // cfg_no_atomic_cas_or_amo32!
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_min` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchMin {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_max` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchMax {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`fetch_update` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasFetchUpdate {}
    cfg_no_atomic_cas_or_amo32! {
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`bit_set` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasBitSet {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`bit_clear` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasBitClear {}
    #[doc(hidden)]
    #[cfg_attr(
        not(portable_atomic_no_diagnostic_namespace),
        diagnostic::on_unimplemented(
            message = "`bit_toggle` requires atomic CAS, which is not available on this target by default",
            label = "this associated function is not available on this target by default",
            note = "consider enabling one of the `unsafe-assume-single-core` or `critical-section` Cargo features",
            note = "see <https://docs.rs/portable-atomic/latest/portable_atomic/#optional-features> for more."
        )
    )]
    pub trait HasBitToggle {}
    } // cfg_no_atomic_cas_or_amo32!
}
} // cfg_no_atomic_cas!