dary_heap-0.3.6/.cargo_vcs_info.json0000644000000001360000000000100127760ustar { "git": { "sha1": "914c2e23baf7041415870e8c806b65dbb6abb196" }, "path_in_vcs": "" }dary_heap-0.3.6/CHANGELOG.md000064400000000000000000000143051046102023000134020ustar 00000000000000# Changelog All notable changes to this project will be documented in this file. The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). The changelog of 0.2.x releases for x > 3 can be found [on the non-const-generics branch](https://github.com/hanmertens/dary_heap/tree/non-const-generics). The 0.3.0 release was based on 0.2.3, later 0.2.x releases are backports of 0.3.y releases that can be used with older Rust compilers without const generics support. ## [Unreleased] ## [0.3.6] – 2023-06-12 ### Added - Implement `Default` for `IntoIter`. ### Changed - Synchronize source code with standard library of Rust version 1.70.0. - The `retain` method no longer requires the `unstable` feature. - Improve `extend` performance. ## [0.3.5] – 2023-05-21 ### Changed - Synchronize source code with standard library of Rust version 1.69.0. ### Fixed - Leaking a `PeekMut` value can no longer lead to an inconsistent state, but it can leak other heap elements instead. - A panic in the closure provided to `retain` can no longer lead to an inconsistent state. ## [0.3.4] – 2022-08-19 ### Changed - Synchronize source code with standard library of Rust version 1.63.0. - Move `try_reserve` and `try_reserve_exact` methods from `unstable` to `extra` feature. This raises the MSRV of the `extra` feature to 1.57.0. ## [0.3.3] – 2022-02-25 ### Added - Add `try_reserve` and `try_reserve_exact` methods when `unstable` feature is enabled. ### Changed - Synchronize source code with standard library of Rust version 1.61.0. 
- Several functions are now marked `must_use` (`new`, `with_capacity`, `into_sorted_vec`, `as_slice`, `into_vec`, `peek`, `capacity`, `len`, `is_empty`), as well as some iterators (`Iter` and `IntoIterSorted`). ## [0.3.2] – 2021-10-30 ### Added - Implement array conversion `From<[T; N]>` for `DaryHeap`. - The feature `extra` is added for non-essential functions that require a higher MSRV than the crate otherwise would. This higher MSRV is currently 1.56.0. ### Changed - Synchronize source code with standard library of Rust version 1.56.0. - `DaryHeap::shrink_to` no longer needs the `unstable_nightly` flag. Because it requires a higher MSRV it is now available under the `extra` feature flag. ### Fixed - For `unstable_nightly`, fix necessary Rust feature flags since `SourceIter` has been marked as `rustc_specialization_trait`. ## [0.3.1] – 2021-06-18 ### Added - New function `DaryHeap::as_slice` when `unstable` feature is enabled. ### Changed - Synchronize source code with standard library of Rust version 1.53.0. - Performance improvement for `DaryHeap::retain`. ### Fixed - No integer overflow when rebuilding heaps with arities greater than 13 in `DaryHeap::append`. ## [0.3.0] – 2021-03-28 ### Changed - Use const generics to specify arity instead of `Arity` trait. - Raise MSRV to 1.51.0 for const generics support. ## [0.2.3] – 2021-03-27 ### Changed - Synchronize source code with standard library of Rust version 1.51.0. - Performance improvement for `DaryHeap::append`. ## [0.2.2] – 2021-01-13 ### Changed - Synchronize source code with standard library of Rust version 1.49.0. - Performance improvements, especially for arities up to four due to specialized code for those arities. ## [0.2.1] – 2020-11-20 ### Added - Implement `SourceIter` and `InPlaceIterable` for `IntoIter` when `unstable_nightly` is enabled. ### Changed - Synchronize source code with standard library of Rust version 1.48.0. 
## [0.2.0] – 2020-10-26 ### Changed - Change `serde` serialization format to be the same as sequence types in the standard library like `std::collections::BinaryHeap`. - MSRV lowered to 1.31.0, with caveats (`Vec::from(DaryHeap)` requires 1.41.0+; `no_std` support and `serde` feature require 1.36.0+). ### Fixed - Ensure heaps are valid after deserializing via `serde`. ## [0.1.1] – 2020-10-08 ### Added - Add support for Serde behind `serde` feature. - Establish stability guidelines and set MSRV at 1.41.0. ### Changed - Extra safeguards against constructing and using a nullary heap. - Simpler unstable Cargo features: `unstable` for everything available on stable compilers (previously `drain_sorted`, `into_iter_sorted`, and `retain`) and `unstable_nightly` for everything only available on nightly (previously `exact_size_is_empty`, `extend_one`, `shrink_to`, and `trusted_len`). - Synchronize source code with standard library of Rust version 1.47.0. ### Fixed - Fix division by zero for unary heap in `append`. ## [0.1.0] – 2020-09-26 ### Added - `DaryHeap` based on `std::collections::BinaryHeap` (Rust version 1.46.0). - `Arity` trait and `arity` macro to specify heap arity. - Arity markers for two to eight, `D2`–`D8`, and type aliases for heaps with those arities, `BinaryHeap`–`OctonaryHeap`. - Cargo features corresponding to unstable Rust features, specifically the features `drain_sorted`, `into_iter_sorted`, and `retain` that are available on stable compilers, and the features `exact_size_is_empty`, `extend_one`, `shrink_to`, and `trusted_len` that are only available on nightly compilers. 
[Unreleased]: https://github.com/hanmertens/dary_heap/compare/v0.3.6...HEAD [0.3.6]: https://github.com/hanmertens/dary_heap/compare/v0.3.5...v0.3.6 [0.3.5]: https://github.com/hanmertens/dary_heap/compare/v0.3.4...v0.3.5 [0.3.4]: https://github.com/hanmertens/dary_heap/compare/v0.3.3...v0.3.4 [0.3.3]: https://github.com/hanmertens/dary_heap/compare/v0.3.2...v0.3.3 [0.3.2]: https://github.com/hanmertens/dary_heap/compare/v0.3.1...v0.3.2 [0.3.1]: https://github.com/hanmertens/dary_heap/compare/v0.3.0...v0.3.1 [0.3.0]: https://github.com/hanmertens/dary_heap/compare/v0.2.3...v0.3.0 [0.2.3]: https://github.com/hanmertens/dary_heap/compare/v0.2.2...v0.2.3 [0.2.2]: https://github.com/hanmertens/dary_heap/compare/v0.2.1...v0.2.2 [0.2.1]: https://github.com/hanmertens/dary_heap/compare/v0.2.0...v0.2.1 [0.2.0]: https://github.com/hanmertens/dary_heap/compare/v0.1.1...v0.2.0 [0.1.1]: https://github.com/hanmertens/dary_heap/compare/v0.1.0...v0.1.1 [0.1.0]: https://github.com/hanmertens/dary_heap/releases/tag/v0.1.0 dary_heap-0.3.6/Cargo.toml0000644000000026440000000000100110020ustar # THIS FILE IS AUTOMATICALLY GENERATED BY CARGO # # When uploading crates to the registry Cargo will automatically # "normalize" Cargo.toml files for maximal compatibility # with all versions of Cargo and also rewrite `path` dependencies # to registry (e.g., crates.io) dependencies. # # If you are reading this file be aware that the original Cargo.toml # will likely look very different (and much more reasonable). # See Cargo.toml.orig for the original contents. 
[package] edition = "2018" name = "dary_heap" version = "0.3.6" authors = ["Han Mertens "] include = [ "build.rs", "CHANGELOG.md", "LICENSE-*", "README.md", "/src/", ] description = "A d-ary heap" readme = "README.md" keywords = [ "heap", "priority-queue", "no_std", ] categories = ["data-structures"] license = "MIT OR Apache-2.0" repository = "https://github.com/hanmertens/dary_heap" [package.metadata.docs.rs] all-features = true rustdoc-args = [ "--cfg", "docsrs", ] [[bench]] name = "dary_heap" path = "benches/dary_heap.rs" harness = false [dependencies.serde] version = "1" features = ["alloc"] optional = true default-features = false [dev-dependencies.criterion] version = "0.4" default-features = false [dev-dependencies.rand] version = "0.8" [dev-dependencies.rand_xorshift] version = "0.3" [dev-dependencies.serde_test] version = "1" [features] extra = [] unstable = [] unstable_nightly = [] dary_heap-0.3.6/Cargo.toml.orig000064400000000000000000000015401046102023000144550ustar 00000000000000[package] name = "dary_heap" version = "0.3.6" authors = ["Han Mertens "] edition = "2018" license = "MIT OR Apache-2.0" description = "A d-ary heap" repository = "https://github.com/hanmertens/dary_heap" readme = "README.md" keywords = ["heap", "priority-queue", "no_std"] categories = ["data-structures"] include = ["build.rs", "CHANGELOG.md", "LICENSE-*", "README.md", "/src/"] [package.metadata.docs.rs] all-features = true rustdoc-args = ["--cfg", "docsrs"] [features] extra = [] unstable = [] unstable_nightly = [] [dependencies.serde] version = "1" default-features = false features = ["alloc"] optional = true [dev-dependencies] rand = "0.8" rand_xorshift = "0.3" serde_test = "1" [dev-dependencies.criterion] version = "0.4" default-features = false [[bench]] name = "dary_heap" path = "benches/dary_heap.rs" harness = false dary_heap-0.3.6/LICENSE-APACHE000064400000000000000000000227731046102023000135250ustar 00000000000000 Apache License Version 2.0, January 2004 
http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. 
For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the 
Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. 
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. 
END OF TERMS AND CONDITIONS dary_heap-0.3.6/LICENSE-MIT Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. dary_heap-0.3.6/README.md # dary_heap [![CI](https://github.com/hanmertens/dary_heap/workflows/CI/badge.svg)](https://github.com/hanmertens/dary_heap/actions?query=workflow%3ACI+branch%3Amaster) [![Crates.io](https://img.shields.io/crates/v/dary_heap.svg)](https://crates.io/crates/dary_heap) [![Docs.rs](https://docs.rs/dary_heap/badge.svg)](https://docs.rs/dary_heap) Rust implementation of a [*d*-ary heap][wiki]. The *d* = 2 version is present in the standard library as [`BinaryHeap`][std-binaryheap], but using a higher value for *d* can bring performance improvements in many use cases. This is because a higher arity *d* (maximum number of children each node can have) means the heap contains fewer layers, making adding elements to the heap faster. 
However, removing elements is slower, because then the amount of work per layer is higher as there are more children. The latter effect is often diminished due to higher cache locality. Therefore, overall performance is often increased if *d* > 2 but not too high. Benchmarking is necessary to determine the best value of *d* for a specific use case. ## Compatibility and stability The API of this crate aims to be analogous to that of [`BinaryHeap` in the standard library][std-binaryheap]. Feature-gated API in the standard library is also feature-gated in `dary_heap`; see [the section on features](#features) for more information. In fact, the code in `dary_heap` is directly based on that of the standard library. The `BinaryHeap` provided by this crate should therefore provide similar performance to that of the standard library, and the other heap types provided by this crate may provide performance improvements. The version of the standard library this crate is based on is currently 1.70.0. The aim is to keep the crate in sync with the latest stable Rust release. The minimum supported Rust version (MSRV) is currently 1.51.0. The last version without const generics has an MSRV of 1.31.0 and is being maintained on [the non-const-generics branch][non-const-generics] of this repository. The MSRV can be increased in a minor level release, but not in a patch level release. There are no stability guarantees for the `unstable` and `unstable_nightly` features. Changes to the behavior of nullary heaps (that should not be used anyway) are also not considered to be breaking and can happen in a patch level release. ## Features - `extra`: add features that require a higher MSRV (currently 1.57.0). - add `shrink_to` method to shrink heap capacity to a lower bound. - add `try_reserve` method to try to reserve additional capacity in the heap. - add `try_reserve_exact` method to try to reserve minimal additional capacity. - `serde`: add support for (de)serialization using [Serde][serde]. 
- `unstable`: enable support for experimental (unstable) features: - add `as_slice` function to obtain a slice with the underlying data in arbitrary order. - add `drain_sorted` method which is like `drain` but yields elements in heap order. - add `into_iter_sorted` method which is like `into_iter` but yields elements in heap order. - `unstable_nightly`: enable support for experimental (unstable) features that require a nightly Rust compiler: - implement methods defined by unstable feature `exact_size_is_empty` on `ExactSizeIterator`s in this crate. - implement methods defined by unstable feature `extend_one`. - implement `SourceIter` and `InPlaceIterable` for `IntoIter`. - implement `TrustedLen` for iterators if possible (only when `unstable` is also enabled). ## License `dary_heap` is licensed under either of * Apache License, Version 2.0 ([LICENSE-APACHE](LICENSE-APACHE) or https://www.apache.org/licenses/LICENSE-2.0) * MIT license ([LICENSE-MIT](LICENSE-MIT) or https://opensource.org/licenses/MIT) at your option. [wiki]: https://en.wikipedia.org/wiki/D-ary_heap [std-binaryheap]: https://doc.rust-lang.org/std/collections/struct.BinaryHeap.html [non-const-generics]: https://github.com/hanmertens/dary_heap/tree/non-const-generics [serde]: https://serde.rs dary_heap-0.3.6/src/lib.rs000064400000000000000000002011611046102023000134720ustar 00000000000000//! A priority queue implemented with a *d*-ary heap. //! //! Insertion and popping the largest element have *O*(log(*n*)) time complexity. //! Checking the largest element is *O*(1). Converting a vector to a *d*-ary heap //! can be done in-place, and has *O*(*n*) complexity. A *d*-ary heap can also be //! converted to a sorted vector in-place, allowing it to be used for an *O*(*n* * log(*n*)) //! in-place heapsort. //! //! # Comparison to standard library //! //! The standard library contains a 2-ary heap //! ([`std::collections::BinaryHeap`][std]). The [`BinaryHeap`] of this crate //! 
aims to be a drop-in replacement, both in API and in performance. Cargo //! features are used in place of unstable Rust features. The advantage of this //! crate over the standard library lies in the possibility of easily changing //! the arity of the heap, which can increase performance. //! //! The standard library binary heap can contain up to [`isize::MAX`] elements; //! this is the same for the binary heap of this crate, but other heaps in this //! crate can hold fewer elements. In the general case, the maximum number of //! elements is ([`usize::MAX`] - 1) / *d* for an arity of *d*. On 64-bit systems //! this should generally not be a concern when using reasonable arities. On //! 32-bit systems this may be a concern when using very large heaps with a //! relatively high arity. //! //! [std]: https://doc.rust-lang.org/std/collections/struct.BinaryHeap.html //! //! # Comparison of different arities *d* //! //! The arity *d* is defined as the maximum number of children each node can //! have. A higher number means the heap has fewer layers, but may require more //! work per layer because there are more children present. This generally makes //! methods adding elements to the heap such as [`push`] faster, and methods //! removing them such as [`pop`] slower. However, due to higher cache locality //! for higher *d*, the drop in [`pop`] performance is often diminished. If you're //! unsure what value of *d* to choose, the [`QuaternaryHeap`] with *d* = 4 is //! usually a good start, but benchmarking is necessary to determine the best //! value of *d*. //! //! [`push`]: struct.DaryHeap.html#method.push //! [`pop`]: struct.DaryHeap.html#method.pop //! //! # Usage //! //! Rust type inference cannot infer the desired heap arity (value of *d*) //! automatically when using [`DaryHeap`] directly. It is therefore more //! ergonomic to use one of the type aliases to select the desired arity: //! //! | Name | Arity | //! |--------------------|---------| //! 
| [`BinaryHeap`] | *d* = 2 | //! | [`TernaryHeap`] | *d* = 3 | //! | [`QuaternaryHeap`] | *d* = 4 | //! | [`QuinaryHeap`] | *d* = 5 | //! | [`SenaryHeap`] | *d* = 6 | //! | [`SeptenaryHeap`] | *d* = 7 | //! | [`OctonaryHeap`] | *d* = 8 | //! //! The difference in ergonomics is illustrated in the following: //! //! ``` //! use dary_heap::{DaryHeap, TernaryHeap}; //! //! // Type parameter T can be inferred, but arity cannot //! let mut heap1 = DaryHeap::<_, 3>::new(); //! heap1.push(42); //! //! // Type alias removes need for explicit type //! let mut heap2 = TernaryHeap::new(); //! heap2.push(42); //! ``` //! //! If a different arity is desired, you can use the former or define a type //! alias yourself. It should be noted that *d* > 8 is rarely beneficial. //! //! ## Validity of arities in *d*-ary heaps //! //! Only arities of two or greater are useful in a *d*-ary heap, and are therefore //! the only ones implemented by default. Lower arities are only possible if you //! put in the effort to implement them yourself. An arity of one is possible, //! but yields a heap where every element has one child. This essentially makes //! it a sorted vector with poor performance. Regarding an arity of zero: this //! is not statically prevented, but constructing a [`DaryHeap`] with it and //! using it may (and probably will) result in a runtime panic. //! //! [`DaryHeap`]: struct.DaryHeap.html //! [`BinaryHeap`]: type.BinaryHeap.html //! [`TernaryHeap`]: type.TernaryHeap.html //! [`QuaternaryHeap`]: type.QuaternaryHeap.html //! [`QuinaryHeap`]: type.QuinaryHeap.html //! [`SenaryHeap`]: type.SenaryHeap.html //! [`SeptenaryHeap`]: type.SeptenaryHeap.html //! [`OctonaryHeap`]: type.OctonaryHeap.html //! //! # Examples //! //! This is a larger example that implements [Dijkstra's algorithm][dijkstra] //! to solve the [shortest path problem][sssp] on a [directed graph][dir_graph]. //! It shows how to use [`DaryHeap`] with custom types. //! 
[dijkstra]: https://en.wikipedia.org/wiki/Dijkstra%27s_algorithm //! [sssp]: https://en.wikipedia.org/wiki/Shortest_path_problem //! [dir_graph]: https://en.wikipedia.org/wiki/Directed_graph //! //! ``` //! use std::cmp::Ordering; //! use dary_heap::TernaryHeap; //! //! #[derive(Copy, Clone, Eq, PartialEq)] //! struct State { //! cost: usize, //! position: usize, //! } //! //! // The priority queue depends on `Ord`. //! // Explicitly implement the trait so the queue becomes a min-heap //! // instead of a max-heap. //! impl Ord for State { //! fn cmp(&self, other: &Self) -> Ordering { //! // Notice that we flip the ordering on costs. //! // In case of a tie we compare positions - this step is necessary //! // to make implementations of `PartialEq` and `Ord` consistent. //! other.cost.cmp(&self.cost) //! .then_with(|| self.position.cmp(&other.position)) //! } //! } //! //! // `PartialOrd` needs to be implemented as well. //! impl PartialOrd for State { //! fn partial_cmp(&self, other: &Self) -> Option<Ordering> { //! Some(self.cmp(other)) //! } //! } //! //! // Each node is represented as a `usize`, for a shorter implementation. //! struct Edge { //! node: usize, //! cost: usize, //! } //! //! // Dijkstra's shortest path algorithm. //! //! // Start at `start` and use `dist` to track the current shortest distance //! // to each node. This implementation isn't memory-efficient as it may leave duplicate //! // nodes in the queue. It also uses `usize::MAX` as a sentinel value, //! // for a simpler implementation. //! fn shortest_path(adj_list: &Vec<Vec<Edge>>, start: usize, goal: usize) -> Option<usize> { //! // dist[node] = current shortest distance from `start` to `node` //! let mut dist: Vec<_> = (0..adj_list.len()).map(|_| usize::MAX).collect(); //! //! let mut heap = TernaryHeap::new(); //! //! // We're at `start`, with a zero cost //! dist[start] = 0; //! heap.push(State { cost: 0, position: start }); //! //! // Examine the frontier with lower cost nodes first (min-heap) //! 
while let Some(State { cost, position }) = heap.pop() { //! // Alternatively we could have continued to find all shortest paths //! if position == goal { return Some(cost); } //! //! // Important as we may have already found a better way //! if cost > dist[position] { continue; } //! //! // For each node we can reach, see if we can find a way with //! // a lower cost going through this node //! for edge in &adj_list[position] { //! let next = State { cost: cost + edge.cost, position: edge.node }; //! //! // If so, add it to the frontier and continue //! if next.cost < dist[next.position] { //! heap.push(next); //! // Relaxation, we have now found a better way //! dist[next.position] = next.cost; //! } //! } //! } //! //! // Goal not reachable //! None //! } //! //! fn main() { //! // This is the directed graph we're going to use. //! // The node numbers correspond to the different states, //! // and the edge weights symbolize the cost of moving //! // from one node to another. //! // Note that the edges are one-way. //! // //! // 7 //! // +-----------------+ //! // | | //! // v 1 2 | 2 //! // 0 -----> 1 -----> 3 ---> 4 //! // | ^ ^ ^ //! // | | 1 | | //! // | | | 3 | 1 //! // +------> 2 -------+ | //! // 10 | | //! // +---------------+ //! // //! // The graph is represented as an adjacency list where each index, //! // corresponding to a node value, has a list of outgoing edges. //! // Chosen for its efficiency. //! let graph = vec![ //! // Node 0 //! vec![Edge { node: 2, cost: 10 }, //! Edge { node: 1, cost: 1 }], //! // Node 1 //! vec![Edge { node: 3, cost: 2 }], //! // Node 2 //! vec![Edge { node: 1, cost: 1 }, //! Edge { node: 3, cost: 3 }, //! Edge { node: 4, cost: 1 }], //! // Node 3 //! vec![Edge { node: 0, cost: 7 }, //! Edge { node: 4, cost: 2 }], //! // Node 4 //! vec![]]; //! //! assert_eq!(shortest_path(&graph, 0, 1), Some(1)); //! assert_eq!(shortest_path(&graph, 0, 3), Some(3)); //! assert_eq!(shortest_path(&graph, 3, 0), Some(7)); //! 
assert_eq!(shortest_path(&graph, 0, 4), Some(5)); //! assert_eq!(shortest_path(&graph, 4, 0), None); //! } //! ``` #![no_std] #![cfg_attr( feature = "unstable_nightly", feature( exact_size_is_empty, extend_one, inplace_iteration, min_specialization, trusted_len ) )] #![cfg_attr(docsrs, feature(doc_cfg))] #![allow(clippy::needless_doctest_main)] extern crate alloc; use core::fmt; use core::iter::{FromIterator, FusedIterator}; use core::mem::{size_of, swap, ManuallyDrop}; use core::num::NonZeroUsize; use core::ops::{Deref, DerefMut}; use core::ptr; use core::slice; #[cfg(feature = "extra")] use alloc::collections::TryReserveError; use alloc::{vec, vec::Vec}; /// A binary heap (*d* = 2). pub type BinaryHeap<T> = DaryHeap<T, 2>; /// A ternary heap (*d* = 3). pub type TernaryHeap<T> = DaryHeap<T, 3>; /// A quaternary heap (*d* = 4). pub type QuaternaryHeap<T> = DaryHeap<T, 4>; /// A quinary heap (*d* = 5). pub type QuinaryHeap<T> = DaryHeap<T, 5>; /// A senary heap (*d* = 6). pub type SenaryHeap<T> = DaryHeap<T, 6>; /// A septenary heap (*d* = 7). pub type SeptenaryHeap<T> = DaryHeap<T, 7>; /// An octonary heap (*d* = 8). pub type OctonaryHeap<T> = DaryHeap<T, 8>; /// A priority queue implemented with a *d*-ary heap. /// /// This will be a max-heap. /// /// It is a logic error for an item to be modified in such a way that the /// item's ordering relative to any other item, as determined by the [`Ord`] /// trait, changes while it is in the heap. This is normally only possible /// through interior mutability, global state, I/O, or unsafe code. The /// behavior resulting from such a logic error is not specified, but will /// be encapsulated to the `DaryHeap` that observed the logic error and not /// result in undefined behavior. This could include panics, incorrect results, /// aborts, memory leaks, and non-termination. /// /// As long as no elements change their relative order while being in the heap /// as described above, the API of `DaryHeap` guarantees that the heap /// invariant remains intact, i.e. 
/// its methods all behave as documented. For
/// example if a method is documented as iterating in sorted order, that's
/// guaranteed to work as long as elements in the heap have not changed order,
/// even in the presence of closures getting unwinded out of, iterators getting
/// leaked, and similar foolishness.
///
/// # Usage
///
/// Rust type inference cannot infer the desired heap arity (value of *d*)
/// automatically. Therefore, it is generally more ergonomic to use one of the
/// [type aliases] instead of `DaryHeap` directly. See the [crate-level
/// documentation][usage] for more information.
///
/// [type aliases]: index.html#types
/// [usage]: index.html#usage
///
/// # Comparison to standard library
///
/// For a comparison with [`std::collections::BinaryHeap`][std], see the [crate-level
/// documentation][comparison].
///
/// [std]: https://doc.rust-lang.org/std/collections/struct.BinaryHeap.html
/// [comparison]: index.html#comparison-to-standard-library
///
/// # Examples
///
/// ```
/// use dary_heap::BinaryHeap;
///
/// // Type inference lets us omit an explicit type signature (which
/// // would be `BinaryHeap<i32>` in this example).
/// let mut heap = BinaryHeap::new();
///
/// // We can use peek to look at the next item in the heap. In this case,
/// // there's no items in there yet so we get None.
/// assert_eq!(heap.peek(), None);
///
/// // Let's add some scores...
/// heap.push(1);
/// heap.push(5);
/// heap.push(2);
///
/// // Now peek shows the most important item in the heap.
/// assert_eq!(heap.peek(), Some(&5));
///
/// // We can check the length of a heap.
/// assert_eq!(heap.len(), 3);
///
/// // We can iterate over the items in the heap, although they are returned in
/// // a random order.
/// for x in &heap {
///     println!("{x}");
/// }
///
/// // If we instead pop these scores, they should come back in order.
/// assert_eq!(heap.pop(), Some(5)); /// assert_eq!(heap.pop(), Some(2)); /// assert_eq!(heap.pop(), Some(1)); /// assert_eq!(heap.pop(), None); /// /// // We can clear the heap of any remaining items. /// heap.clear(); /// /// // The heap should now be empty. /// assert!(heap.is_empty()) /// ``` /// /// A `DaryHeap` with a known list of items can be initialized from an array: /// /// ``` /// use dary_heap::QuaternaryHeap; /// /// let heap = QuaternaryHeap::from([1, 5, 2]); /// ``` /// /// ## Min-heap /// /// Either [`core::cmp::Reverse`] or a custom [`Ord`] implementation can be used to /// make `DaryHeap` a min-heap. This makes `heap.pop()` return the smallest /// value instead of the greatest one. /// /// ``` /// use dary_heap::TernaryHeap; /// use std::cmp::Reverse; /// /// let mut heap = TernaryHeap::new(); /// /// // Wrap values in `Reverse` /// heap.push(Reverse(1)); /// heap.push(Reverse(5)); /// heap.push(Reverse(2)); /// /// // If we pop these scores now, they should come back in the reverse order. /// assert_eq!(heap.pop(), Some(Reverse(1))); /// assert_eq!(heap.pop(), Some(Reverse(2))); /// assert_eq!(heap.pop(), Some(Reverse(5))); /// assert_eq!(heap.pop(), None); /// ``` /// /// # Time complexity /// /// | [push] | [pop] | [peek]/[peek\_mut] | /// |---------|---------------|--------------------| /// | *O*(1)~ | *O*(log(*n*)) | *O*(1) | /// /// The value for `push` is an expected cost; the method documentation gives a /// more detailed analysis. 
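///
/// As a small usage sketch of these costs (illustrative only): pushing an
/// element smaller than the current maximum never displaces the root, while
/// `pop` always pays the logarithmic sift-down.
///
/// ```
/// use dary_heap::TernaryHeap;
///
/// let mut heap = TernaryHeap::from([10, 4, 7]);
/// heap.push(1); // cheap: 1 stops sifting up at its parent
/// assert_eq!(heap.peek(), Some(&10));
/// assert_eq!(heap.pop(), Some(10)); // pop does the *O*(log(*n*)) work
/// ```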
///
/// [`core::cmp::Reverse`]: core::cmp::Reverse
/// [`Cell`]: core::cell::Cell
/// [`RefCell`]: core::cell::RefCell
/// [push]: DaryHeap::push
/// [pop]: DaryHeap::pop
/// [peek]: DaryHeap::peek
/// [peek\_mut]: DaryHeap::peek_mut
pub struct DaryHeap<T, const D: usize> {
    data: Vec<T>,
}

#[cfg(feature = "serde")]
mod serde_impl {
    use super::{DaryHeap, Vec};
    use serde::{Deserialize, Deserializer, Serialize, Serializer};

    impl<T: Serialize, const A: usize> Serialize for DaryHeap<T, A> {
        fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
        where
            S: Serializer,
        {
            self.data.serialize(serializer)
        }
    }

    impl<'de, T: Ord + Deserialize<'de>, const A: usize> Deserialize<'de> for DaryHeap<T, A> {
        fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
        where
            D: Deserializer<'de>,
        {
            Vec::deserialize(deserializer).map(Into::into)
        }

        fn deserialize_in_place<D>(deserializer: D, place: &mut Self) -> Result<(), D::Error>
        where
            D: Deserializer<'de>,
        {
            place.data.clear();
            let result = Vec::deserialize_in_place(deserializer, &mut place.data);
            place.rebuild();
            result
        }
    }
}

/// Structure wrapping a mutable reference to the greatest item on a
/// `DaryHeap`.
///
/// This `struct` is created by the [`peek_mut`] method on [`DaryHeap`]. See
/// its documentation for more.
///
/// [`peek_mut`]: DaryHeap::peek_mut
pub struct PeekMut<'a, T: 'a + Ord, const D: usize> {
    heap: &'a mut DaryHeap<T, D>,
    // If a set_len + sift_down are required, this is Some. If a &mut T has not
    // yet been exposed to peek_mut()'s caller, it's None.
    original_len: Option<NonZeroUsize>,
}

impl<T: Ord + fmt::Debug, const D: usize> fmt::Debug for PeekMut<'_, T, D> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_tuple("PeekMut").field(&self.heap.data[0]).finish()
    }
}

impl<T: Ord, const D: usize> Drop for PeekMut<'_, T, D> {
    fn drop(&mut self) {
        if let Some(original_len) = self.original_len {
            // SAFETY: That's how many elements were in the Vec at the time of
            // the PeekMut::deref_mut call, and therefore also at the time of
            // the BinaryHeap::peek_mut call.
            // Since the PeekMut did not end up
            // getting leaked, we are now undoing the leak amplification that
            // the DerefMut prepared for.
            unsafe { self.heap.data.set_len(original_len.get()) };

            // SAFETY: PeekMut is only instantiated for non-empty heaps.
            unsafe { self.heap.sift_down(0) };
        }
    }
}

impl<T: Ord, const D: usize> Deref for PeekMut<'_, T, D> {
    type Target = T;
    fn deref(&self) -> &T {
        debug_assert!(!self.heap.is_empty());
        // SAFE: PeekMut is only instantiated for non-empty heaps
        unsafe { self.heap.data.get_unchecked(0) }
    }
}

impl<T: Ord, const D: usize> DerefMut for PeekMut<'_, T, D> {
    fn deref_mut(&mut self) -> &mut T {
        debug_assert!(!self.heap.is_empty());

        let len = self.heap.len();
        if len > 1 {
            // Here we preemptively leak all the rest of the underlying vector
            // after the currently max element. If the caller mutates the &mut T
            // we're about to give them, and then leaks the PeekMut, all these
            // elements will remain leaked. If they don't leak the PeekMut, then
            // either Drop or PeekMut::pop will un-leak the vector elements.
            //
            // This technique is described throughout several other places in
            // the standard library as "leak amplification".
            unsafe {
                // SAFETY: len > 1 so len != 0.
                self.original_len = Some(NonZeroUsize::new_unchecked(len));
                // SAFETY: len > 1 so all this does for now is leak elements,
                // which is safe.
                self.heap.data.set_len(1);
            }
        }

        // SAFE: PeekMut is only instantiated for non-empty heaps
        unsafe { self.heap.data.get_unchecked_mut(0) }
    }
}

impl<'a, T: Ord, const D: usize> PeekMut<'a, T, D> {
    /// Removes the peeked value from the heap and returns it.
    pub fn pop(mut this: PeekMut<'a, T, D>) -> T {
        if let Some(original_len) = this.original_len.take() {
            // SAFETY: This is how many elements were in the Vec at the time of
            // the BinaryHeap::peek_mut call.
            unsafe { this.heap.data.set_len(original_len.get()) };

            // Unlike in Drop, here we don't also need to do a sift_down even if
            // the caller could've mutated the element.
            // It is removed from the
            // heap on the next line and pop() is not sensitive to its value.
        }
        this.heap.pop().unwrap()
    }
}

impl<T: Clone, const D: usize> Clone for DaryHeap<T, D> {
    fn clone(&self) -> Self {
        DaryHeap {
            data: self.data.clone(),
        }
    }

    fn clone_from(&mut self, source: &Self) {
        self.data.clone_from(&source.data);
    }
}

impl<T: Ord, const D: usize> Default for DaryHeap<T, D> {
    /// Creates an empty `DaryHeap`.
    #[inline]
    fn default() -> DaryHeap<T, D> {
        DaryHeap::new()
    }
}

impl<T: fmt::Debug, const D: usize> fmt::Debug for DaryHeap<T, D> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_list().entries(self.iter()).finish()
    }
}

struct RebuildOnDrop<'a, T: Ord, const D: usize> {
    heap: &'a mut DaryHeap<T, D>,
    rebuild_from: usize,
}

impl<'a, T: Ord, const D: usize> Drop for RebuildOnDrop<'a, T, D> {
    fn drop(&mut self) {
        self.heap.rebuild_tail(self.rebuild_from);
    }
}

impl<T: Ord, const D: usize> DaryHeap<T, D> {
    /// Creates an empty `DaryHeap` as a max-heap.
    ///
    /// # Examples
    ///
    /// Basic usage:
    ///
    /// ```
    /// use dary_heap::QuaternaryHeap;
    /// let mut heap = QuaternaryHeap::new();
    /// heap.push(4);
    /// ```
    #[must_use]
    pub fn new() -> DaryHeap<T, D> {
        DaryHeap { data: vec![] }
    }

    /// Creates an empty `DaryHeap` with at least the specified capacity.
    ///
    /// The *d*-ary heap will be able to hold at least `capacity` elements without
    /// reallocating. This method is allowed to allocate for more elements than
    /// `capacity`. If `capacity` is 0, the *d*-ary heap will not allocate.
    ///
    /// # Examples
    ///
    /// Basic usage:
    ///
    /// ```
    /// use dary_heap::QuaternaryHeap;
    /// let mut heap = QuaternaryHeap::with_capacity(10);
    /// heap.push(4);
    /// ```
    #[must_use]
    pub fn with_capacity(capacity: usize) -> DaryHeap<T, D> {
        DaryHeap {
            data: Vec::with_capacity(capacity),
        }
    }

    /// Returns a mutable reference to the greatest item in the *d*-ary heap, or
    /// `None` if it is empty.
    ///
    /// Note: If the `PeekMut` value is leaked, some heap elements might get
    /// leaked along with it, but the remaining elements will remain a valid
    /// heap.
    ///
    /// # Examples
    ///
    /// Basic usage:
    ///
    /// ```
    /// use dary_heap::TernaryHeap;
    /// let mut heap = TernaryHeap::new();
    /// assert!(heap.peek_mut().is_none());
    ///
    /// heap.push(1);
    /// heap.push(5);
    /// heap.push(2);
    /// {
    ///     let mut val = heap.peek_mut().unwrap();
    ///     *val = 0;
    /// }
    /// assert_eq!(heap.peek(), Some(&2));
    /// ```
    ///
    /// # Time complexity
    ///
    /// If the item is modified then the worst case time complexity is *O*(log(*n*)),
    /// otherwise it's *O*(1).
    pub fn peek_mut(&mut self) -> Option<PeekMut<'_, T, D>> {
        if self.is_empty() {
            None
        } else {
            Some(PeekMut {
                heap: self,
                original_len: None,
            })
        }
    }

    /// Removes the greatest item from the *d*-ary heap and returns it, or `None` if it
    /// is empty.
    ///
    /// # Examples
    ///
    /// Basic usage:
    ///
    /// ```
    /// use dary_heap::BinaryHeap;
    /// let mut heap = BinaryHeap::from([1, 3]);
    ///
    /// assert_eq!(heap.pop(), Some(3));
    /// assert_eq!(heap.pop(), Some(1));
    /// assert_eq!(heap.pop(), None);
    /// ```
    ///
    /// # Time complexity
    ///
    /// The worst case cost of `pop` on a heap containing *n* elements is *O*(log(*n*)).
    pub fn pop(&mut self) -> Option<T> {
        self.data.pop().map(|mut item| {
            if !self.is_empty() {
                swap(&mut item, &mut self.data[0]);
                // SAFETY: !self.is_empty() means that self.len() > 0
                unsafe { self.sift_down_to_bottom(0) };
            }
            item
        })
    }

    /// Pushes an item onto the *d*-ary heap.
    ///
    /// # Examples
    ///
    /// Basic usage:
    ///
    /// ```
    /// use dary_heap::QuaternaryHeap;
    /// let mut heap = QuaternaryHeap::new();
    /// heap.push(3);
    /// heap.push(5);
    /// heap.push(1);
    ///
    /// assert_eq!(heap.len(), 3);
    /// assert_eq!(heap.peek(), Some(&5));
    /// ```
    ///
    /// # Time complexity
    ///
    /// The expected cost of `push`, averaged over every possible ordering of
    /// the elements being pushed, and over a sufficiently large number of
    /// pushes, is *O*(1). This is the most meaningful cost metric when pushing
    /// elements that are *not* already in any sorted pattern.
    ///
    /// The time complexity degrades if elements are pushed in predominantly
    /// ascending order. In the worst case, elements are pushed in ascending
    /// sorted order and the amortized cost per push is *O*(log(*n*)) against a heap
    /// containing *n* elements.
    ///
    /// The worst case cost of a *single* call to `push` is *O*(*n*). The worst case
    /// occurs when capacity is exhausted and needs a resize. The resize cost
    /// has been amortized in the previous figures.
    pub fn push(&mut self, item: T) {
        let old_len = self.len();
        self.data.push(item);
        // SAFETY: Since we pushed a new item it means that
        //  old_len = self.len() - 1 < self.len()
        unsafe { self.sift_up(0, old_len) };
    }

    /// Consumes the `DaryHeap` and returns a vector in sorted
    /// (ascending) order.
    ///
    /// # Examples
    ///
    /// Basic usage:
    ///
    /// ```
    /// use dary_heap::OctonaryHeap;
    ///
    /// let mut heap = OctonaryHeap::from([1, 2, 4, 5, 7]);
    /// heap.push(6);
    /// heap.push(3);
    ///
    /// let vec = heap.into_sorted_vec();
    /// assert_eq!(vec, [1, 2, 3, 4, 5, 6, 7]);
    /// ```
    #[must_use = "`self` will be dropped if the result is not used"]
    pub fn into_sorted_vec(mut self) -> Vec<T> {
        let mut end = self.len();
        while end > 1 {
            end -= 1;
            // SAFETY: `end` goes from `self.len() - 1` to 1 (both included),
            //  so it's always a valid index to access.
            //  It is safe to access index 0 (i.e. `ptr`), because
            //  1 <= end < self.len(), which means self.len() >= 2.
            unsafe {
                let ptr = self.data.as_mut_ptr();
                ptr::swap(ptr, ptr.add(end));
            }
            // SAFETY: `end` goes from `self.len() - 1` to 1 (both included) so:
            //  0 < 1 <= end <= self.len() - 1 < self.len()
            //  Which means 0 < end and end < self.len().
            unsafe { self.sift_down_range(0, end) };
        }
        self.into_vec()
    }

    // The implementations of sift_up and sift_down use unsafe blocks in
    // order to move an element out of the vector (leaving behind a
    // hole), shift along the others and move the removed element back into the
    // vector at the final location of the hole.
    // The `Hole` type is used to represent this, and make sure
    // the hole is filled back at the end of its scope, even on panic.
    // Using a hole reduces the constant factor compared to using swaps,
    // which involves twice as many moves.

    /// # Safety
    ///
    /// The caller must guarantee that `pos < self.len()`.
    unsafe fn sift_up(&mut self, start: usize, pos: usize) -> usize {
        assert_ne!(D, 0, "Arity should be greater than zero");
        // Take out the value at `pos` and create a hole.
        // SAFETY: The caller guarantees that pos < self.len()
        let mut hole = Hole::new(&mut self.data, pos);

        while hole.pos() > start {
            let parent = (hole.pos() - 1) / D;

            // SAFETY: hole.pos() > start >= 0, which means hole.pos() > 0
            //  and so hole.pos() - 1 can't underflow.
            //  This guarantees that parent < hole.pos() so
            //  it's a valid index and also != hole.pos().
            if hole.element() <= hole.get(parent) {
                break;
            }

            // SAFETY: Same as above
            hole.move_to(parent);
        }

        hole.pos()
    }

    /// Take an element at `pos` and move it down the heap,
    /// while its children are larger.
    ///
    /// # Safety
    ///
    /// The caller must guarantee that `pos < end <= self.len()`.
    unsafe fn sift_down_range(&mut self, pos: usize, end: usize) {
        assert_ne!(D, 0, "Arity should be greater than zero");
        // SAFETY: The caller guarantees that pos < end <= self.len().
        let mut hole = Hole::new(&mut self.data, pos);
        let mut child = D * hole.pos() + 1;

        // Loop invariant: child == d * hole.pos() + 1.
        while child <= end.saturating_sub(D) {
            // compare with the greatest of the d children
            // SAFETY: child < end - d + 1 < self.len() and
            //  child + d - 1 < end <= self.len(), so they're valid indexes.
            //  child + i == d * hole.pos() + 1 + i != hole.pos() for i >= 0
            child = hole.max_sibling::<D>(child);

            // if we are already in order, stop.
            // SAFETY: child is now either the old child or a valid sibling.
            //  We have already proven that all are < self.len() and != hole.pos()
            if hole.element() >= hole.get(child) {
                return;
            }

            // SAFETY: same as above.
            hole.move_to(child);
            child = D * hole.pos() + 1;
        }

        child = hole.max_sibling_to::<D>(child, end);
        // SAFETY: && short circuit, which means that in the
        //  second condition it's already true that child < end <= self.len().
        if child < end && hole.element() < hole.get(child) {
            // SAFETY: child is already proven to be a valid index and
            //  child == d * hole.pos() + 1 != hole.pos().
            hole.move_to(child);
        }
    }

    /// # Safety
    ///
    /// The caller must guarantee that `pos < self.len()`.
    unsafe fn sift_down(&mut self, pos: usize) {
        let len = self.len();
        // SAFETY: pos < len is guaranteed by the caller and
        //  obviously len = self.len() <= self.len().
        self.sift_down_range(pos, len);
    }

    /// Take an element at `pos` and move it all the way down the heap,
    /// then sift it up to its position.
    ///
    /// Note: This is faster when the element is known to be large / should
    /// be closer to the bottom.
    ///
    /// # Safety
    ///
    /// The caller must guarantee that `pos < self.len()`.
    unsafe fn sift_down_to_bottom(&mut self, mut pos: usize) {
        assert_ne!(D, 0, "Arity should be greater than zero");
        let end = self.len();
        let start = pos;

        // SAFETY: The caller guarantees that pos < self.len().
        let mut hole = Hole::new(&mut self.data, pos);
        let mut child = D * hole.pos() + 1;

        // Loop invariant: child == d * hole.pos() + 1.
        while child <= end.saturating_sub(D) {
            // SAFETY: child < end - d + 1 < self.len() and
            //  child + d - 1 < end <= self.len(), so they're valid indexes.
            //  child + i == d * hole.pos() + 1 + i != hole.pos() for i >= 0
            child = hole.max_sibling::<D>(child);

            // SAFETY: Same as above
            hole.move_to(child);
            child = D * hole.pos() + 1;
        }

        child = hole.max_sibling_to::<D>(child, end);
        if child < end {
            // SAFETY: child < end <= self.len(), so it's a valid index
            //  and child == d * hole.pos() + i != hole.pos() for i >= 1
            hole.move_to(child);
        }

        pos = hole.pos();
        drop(hole);

        // SAFETY: pos is the position in the hole and was already proven
        //  to be a valid index.
        self.sift_up(start, pos);
    }

    /// Rebuild assuming data[0..start] is still a proper heap.
    fn rebuild_tail(&mut self, start: usize) {
        assert_ne!(D, 0, "Arity should be greater than zero");
        if start == self.len() {
            return;
        }

        let tail_len = self.len() - start;

        // The fix for this lint (usize::BITS) requires Rust 1.53.0, but the
        // MSRV is currently 1.51.0.
        #[allow(clippy::manual_bits)]
        #[inline(always)]
        fn log2_fast(x: usize) -> usize {
            8 * size_of::<usize>() - (x.leading_zeros() as usize) - 1
        }

        // `rebuild` takes O(self.len()) operations
        // and about n * self.len() comparisons in the worst case
        // with n = d / (d - 1)
        // while repeating `sift_up` takes O(tail_len * log(start)) operations
        // and about 1 * tail_len * log(start) comparisons in the worst case,
        // assuming start >= tail_len. For larger heaps, the crossover point
        // no longer follows this reasoning and was determined empirically.
        let better_to_rebuild = if start < tail_len {
            true
        } else if self.len() <= 4096 / D {
            D * self.len() < (D - 1) * tail_len * log2_fast(start)
        } else {
            D * self.len() < (D - 1) * tail_len * 13usize.saturating_sub(D)
        };

        if better_to_rebuild {
            self.rebuild();
        } else {
            for i in start..self.len() {
                // SAFETY: The index `i` is always less than self.len().
                unsafe { self.sift_up(0, i) };
            }
        }
    }

    fn rebuild(&mut self) {
        assert_ne!(D, 0, "Arity should be greater than zero");
        if self.len() < 2 {
            return;
        }
        let mut n = (self.len() - 1) / D + 1;
        while n > 0 {
            n -= 1;
            // SAFETY: n starts from (self.len() - 1) / d + 1 and goes down to 0.
            //  The only case when !(n < self.len()) is if
            //  self.len() == 0, but it's ruled out by the loop condition.
            unsafe { self.sift_down(n) };
        }
    }

    /// Moves all the elements of `other` into `self`, leaving `other` empty.
/// /// # Examples /// /// Basic usage: /// /// ``` /// use dary_heap::OctonaryHeap; /// /// let mut a = OctonaryHeap::from([-10, 1, 2, 3, 3]); /// let mut b = OctonaryHeap::from([-20, 5, 43]); /// /// a.append(&mut b); /// /// assert_eq!(a.into_sorted_vec(), [-20, -10, 1, 2, 3, 3, 5, 43]); /// assert!(b.is_empty()); /// ``` pub fn append(&mut self, other: &mut Self) { if self.len() < other.len() { swap(self, other); } let start = self.data.len(); self.data.append(&mut other.data); self.rebuild_tail(start); } /// Clears the *d*-ary heap, returning an iterator over the removed elements /// in heap order. If the iterator is dropped before being fully consumed, /// it drops the remaining elements in heap order. /// /// The returned iterator keeps a mutable borrow on the heap to optimize /// its implementation. /// /// Note: /// * `.drain_sorted()` is *O*(*n* \* log(*n*)); much slower than `.drain()`. /// You should use the latter for most cases. /// /// # Examples /// /// Basic usage: /// /// ``` /// use dary_heap::TernaryHeap; /// /// let mut heap = TernaryHeap::from([1, 2, 3, 4, 5]); /// assert_eq!(heap.len(), 5); /// /// drop(heap.drain_sorted()); // removes all elements in heap order /// assert_eq!(heap.len(), 0); /// ``` #[inline] #[cfg(feature = "unstable")] #[cfg_attr(docsrs, doc(cfg(feature = "unstable")))] pub fn drain_sorted(&mut self) -> DrainSorted<'_, T, D> { DrainSorted { inner: self } } /// Retains only the elements specified by the predicate. /// /// In other words, remove all elements `e` for which `f(&e)` returns /// `false`. The elements are visited in unsorted (and unspecified) order. 
    ///
    /// # Examples
    ///
    /// Basic usage:
    ///
    /// ```
    /// use dary_heap::OctonaryHeap;
    ///
    /// let mut heap = OctonaryHeap::from([-10, -5, 1, 2, 4, 13]);
    ///
    /// heap.retain(|x| x % 2 == 0); // only keep even numbers
    ///
    /// assert_eq!(heap.into_sorted_vec(), [-10, 2, 4])
    /// ```
    pub fn retain<F>(&mut self, mut f: F)
    where
        F: FnMut(&T) -> bool,
    {
        // rebuild_start will be updated to the first touched element below, and the rebuild will
        // only be done for the tail.
        let mut guard = RebuildOnDrop {
            rebuild_from: self.len(),
            heap: self,
        };

        // Split the borrow outside of the closure to appease the borrow checker
        let rebuild_from = &mut guard.rebuild_from;
        let mut i = 0;
        guard.heap.data.retain(|e| {
            let keep = f(e);
            if !keep && i < *rebuild_from {
                *rebuild_from = i;
            }
            i += 1;
            keep
        });
    }
}

impl<T, const D: usize> DaryHeap<T, D> {
    /// Returns an iterator visiting all values in the underlying vector, in
    /// arbitrary order.
    ///
    /// # Examples
    ///
    /// Basic usage:
    ///
    /// ```
    /// use dary_heap::TernaryHeap;
    /// let heap = TernaryHeap::from([1, 2, 3, 4]);
    ///
    /// // Print 1, 2, 3, 4 in arbitrary order
    /// for x in heap.iter() {
    ///     println!("{x}");
    /// }
    /// ```
    pub fn iter(&self) -> Iter<'_, T> {
        Iter {
            iter: self.data.iter(),
        }
    }

    /// Returns an iterator which retrieves elements in heap order.
    /// This method consumes the original heap.
    ///
    /// # Examples
    ///
    /// Basic usage:
    ///
    /// ```
    /// use dary_heap::QuaternaryHeap;
    /// let heap = QuaternaryHeap::from([1, 2, 3, 4, 5]);
    ///
    /// assert_eq!(heap.into_iter_sorted().take(2).collect::<Vec<_>>(), [5, 4]);
    /// ```
    #[cfg(feature = "unstable")]
    #[cfg_attr(docsrs, doc(cfg(feature = "unstable")))]
    pub fn into_iter_sorted(self) -> IntoIterSorted<T, D> {
        IntoIterSorted { inner: self }
    }

    /// Returns the greatest item in the *d*-ary heap, or `None` if it is empty.
/// /// # Examples /// /// Basic usage: /// /// ``` /// use dary_heap::BinaryHeap; /// let mut heap = BinaryHeap::new(); /// assert_eq!(heap.peek(), None); /// /// heap.push(1); /// heap.push(5); /// heap.push(2); /// assert_eq!(heap.peek(), Some(&5)); /// /// ``` /// /// # Time complexity /// /// Cost is *O*(1) in the worst case. #[must_use] pub fn peek(&self) -> Option<&T> { self.data.get(0) } /// Returns the number of elements the *d*-ary heap can hold without reallocating. /// /// # Examples /// /// Basic usage: /// /// ``` /// use dary_heap::OctonaryHeap; /// let mut heap = OctonaryHeap::with_capacity(100); /// assert!(heap.capacity() >= 100); /// heap.push(4); /// ``` #[must_use] pub fn capacity(&self) -> usize { self.data.capacity() } /// Reserves the minimum capacity for at least `additional` elements more than /// the current length. Unlike [`reserve`], this will not /// deliberately over-allocate to speculatively avoid frequent allocations. /// After calling `reserve_exact`, capacity will be greater than or equal to /// `self.len() + additional`. Does nothing if the capacity is already /// sufficient. /// /// [`reserve`]: DaryHeap::reserve /// /// # Panics /// /// Panics if the new capacity overflows [`usize`]. /// /// # Examples /// /// Basic usage: /// /// ``` /// use dary_heap::OctonaryHeap; /// let mut heap = OctonaryHeap::new(); /// heap.reserve_exact(100); /// assert!(heap.capacity() >= 100); /// heap.push(4); /// ``` /// /// [`reserve`]: DaryHeap::reserve pub fn reserve_exact(&mut self, additional: usize) { self.data.reserve_exact(additional); } /// Reserves capacity for at least `additional` elements more than the /// current length. The allocator may reserve more space to speculatively /// avoid frequent allocations. After calling `reserve`, /// capacity will be greater than or equal to `self.len() + additional`. /// Does nothing if capacity is already sufficient. /// /// # Panics /// /// Panics if the new capacity overflows [`usize`]. 
    ///
    /// # Examples
    ///
    /// Basic usage:
    ///
    /// ```
    /// use dary_heap::BinaryHeap;
    /// let mut heap = BinaryHeap::new();
    /// heap.reserve(100);
    /// assert!(heap.capacity() >= 100);
    /// heap.push(4);
    /// ```
    pub fn reserve(&mut self, additional: usize) {
        self.data.reserve(additional);
    }

    /// Tries to reserve the minimum capacity for at least `additional` elements
    /// more than the current length. Unlike [`try_reserve`], this will not
    /// deliberately over-allocate to speculatively avoid frequent allocations.
    /// After calling `try_reserve_exact`, capacity will be greater than or
    /// equal to `self.len() + additional` if it returns `Ok(())`.
    /// Does nothing if the capacity is already sufficient.
    ///
    /// Note that the allocator may give the collection more space than it
    /// requests. Therefore, capacity can not be relied upon to be precisely
    /// minimal. Prefer [`try_reserve`] if future insertions are expected.
    ///
    /// [`try_reserve`]: DaryHeap::try_reserve
    ///
    /// # Errors
    ///
    /// If the capacity overflows, or the allocator reports a failure, then an error
    /// is returned.
    ///
    /// # Examples
    ///
    /// ```
    /// use dary_heap::BinaryHeap;
    /// use std::collections::TryReserveError;
    ///
    /// fn find_max_slow(data: &[u32]) -> Result<Option<u32>, TryReserveError> {
    ///     let mut heap = BinaryHeap::new();
    ///
    ///     // Pre-reserve the memory, exiting if we can't
    ///     heap.try_reserve_exact(data.len())?;
    ///
    ///     // Now we know this can't OOM in the middle of our complex work
    ///     heap.extend(data.iter());
    ///
    ///     Ok(heap.pop())
    /// }
    /// # find_max_slow(&[1, 2, 3]).expect("why is the test harness OOMing on 12 bytes?");
    /// ```
    #[cfg(feature = "extra")]
    #[cfg_attr(docsrs, doc(cfg(feature = "extra")))]
    pub fn try_reserve_exact(&mut self, additional: usize) -> Result<(), TryReserveError> {
        self.data.try_reserve_exact(additional)
    }

    /// Tries to reserve capacity for at least `additional` elements more than the
    /// current length.
    /// The allocator may reserve more space to speculatively
    /// avoid frequent allocations. After calling `try_reserve`, capacity will be
    /// greater than or equal to `self.len() + additional` if it returns
    /// `Ok(())`. Does nothing if capacity is already sufficient. This method
    /// preserves the contents even if an error occurs.
    ///
    /// # Errors
    ///
    /// If the capacity overflows, or the allocator reports a failure, then an error
    /// is returned.
    ///
    /// # Examples
    ///
    /// ```
    /// use dary_heap::QuaternaryHeap;
    /// use std::collections::TryReserveError;
    ///
    /// fn find_max_slow(data: &[u32]) -> Result<Option<u32>, TryReserveError> {
    ///     let mut heap = QuaternaryHeap::new();
    ///
    ///     // Pre-reserve the memory, exiting if we can't
    ///     heap.try_reserve(data.len())?;
    ///
    ///     // Now we know this can't OOM in the middle of our complex work
    ///     heap.extend(data.iter());
    ///
    ///     Ok(heap.pop())
    /// }
    /// # find_max_slow(&[1, 2, 3]).expect("why is the test harness OOMing on 12 bytes?");
    /// ```
    #[cfg(feature = "extra")]
    #[cfg_attr(docsrs, doc(cfg(feature = "extra")))]
    pub fn try_reserve(&mut self, additional: usize) -> Result<(), TryReserveError> {
        self.data.try_reserve(additional)
    }

    /// Discards as much additional capacity as possible.
    ///
    /// # Examples
    ///
    /// Basic usage:
    ///
    /// ```
    /// use dary_heap::TernaryHeap;
    /// let mut heap: TernaryHeap<i32> = TernaryHeap::with_capacity(100);
    ///
    /// assert!(heap.capacity() >= 100);
    /// heap.shrink_to_fit();
    /// assert!(heap.capacity() == 0);
    /// ```
    pub fn shrink_to_fit(&mut self) {
        self.data.shrink_to_fit();
    }

    /// Discards capacity with a lower bound.
    ///
    /// The capacity will remain at least as large as both the length
    /// and the supplied value.
    ///
    /// If the current capacity is less than the lower limit, this is a no-op.
    ///
    /// # Examples
    ///
    /// ```
    /// use dary_heap::TernaryHeap;
    /// let mut heap: TernaryHeap<i32> = TernaryHeap::with_capacity(100);
    ///
    /// assert!(heap.capacity() >= 100);
    /// heap.shrink_to(10);
    /// assert!(heap.capacity() >= 10);
    /// ```
    #[inline]
    #[cfg(feature = "extra")]
    #[cfg_attr(docsrs, doc(cfg(feature = "extra")))]
    pub fn shrink_to(&mut self, min_capacity: usize) {
        self.data.shrink_to(min_capacity)
    }

    /// Returns a slice of all values in the underlying vector, in arbitrary
    /// order.
    ///
    /// # Examples
    ///
    /// Basic usage:
    ///
    /// ```
    /// use dary_heap::OctonaryHeap;
    /// use std::io::{self, Write};
    ///
    /// let heap = OctonaryHeap::from([1, 2, 3, 4, 5, 6, 7]);
    ///
    /// io::sink().write(heap.as_slice()).unwrap();
    /// ```
    #[cfg(feature = "unstable")]
    #[cfg_attr(docsrs, doc(cfg(feature = "unstable")))]
    #[must_use]
    pub fn as_slice(&self) -> &[T] {
        self.data.as_slice()
    }

    /// Consumes the `DaryHeap` and returns the underlying vector
    /// in arbitrary order.
    ///
    /// # Examples
    ///
    /// Basic usage:
    ///
    /// ```
    /// use dary_heap::QuaternaryHeap;
    /// let heap = QuaternaryHeap::from([1, 2, 3, 4, 5, 6, 7]);
    /// let vec = heap.into_vec();
    ///
    /// // Will print in some order
    /// for x in vec {
    ///     println!("{x}");
    /// }
    /// ```
    #[must_use = "`self` will be dropped if the result is not used"]
    pub fn into_vec(self) -> Vec<T> {
        self.into()
    }

    /// Returns the length of the *d*-ary heap.
    ///
    /// # Examples
    ///
    /// Basic usage:
    ///
    /// ```
    /// use dary_heap::BinaryHeap;
    /// let heap = BinaryHeap::from([1, 3]);
    ///
    /// assert_eq!(heap.len(), 2);
    /// ```
    #[must_use]
    pub fn len(&self) -> usize {
        self.data.len()
    }

    /// Checks if the *d*-ary heap is empty.
    ///
    /// # Examples
    ///
    /// Basic usage:
    ///
    /// ```
    /// use dary_heap::BinaryHeap;
    /// let mut heap = BinaryHeap::new();
    ///
    /// assert!(heap.is_empty());
    ///
    /// heap.push(3);
    /// heap.push(5);
    /// heap.push(1);
    ///
    /// assert!(!heap.is_empty());
    /// ```
    #[must_use]
    pub fn is_empty(&self) -> bool {
        self.len() == 0
    }

    /// Clears the *d*-ary heap, returning an iterator over the removed elements
    /// in arbitrary order. If the iterator is dropped before being fully
    /// consumed, it drops the remaining elements in arbitrary order.
    ///
    /// The returned iterator keeps a mutable borrow on the heap to optimize
    /// its implementation.
    ///
    /// # Examples
    ///
    /// Basic usage:
    ///
    /// ```
    /// use dary_heap::QuaternaryHeap;
    /// let mut heap = QuaternaryHeap::from([1, 3]);
    ///
    /// assert!(!heap.is_empty());
    ///
    /// for x in heap.drain() {
    ///     println!("{x}");
    /// }
    ///
    /// assert!(heap.is_empty());
    /// ```
    #[inline]
    pub fn drain(&mut self) -> Drain<'_, T> {
        Drain {
            iter: self.data.drain(..),
        }
    }

    /// Drops all items from the *d*-ary heap.
    ///
    /// # Examples
    ///
    /// Basic usage:
    ///
    /// ```
    /// use dary_heap::TernaryHeap;
    /// let mut heap = TernaryHeap::from([1, 3]);
    ///
    /// assert!(!heap.is_empty());
    ///
    /// heap.clear();
    ///
    /// assert!(heap.is_empty());
    /// ```
    pub fn clear(&mut self) {
        self.drain();
    }
}

/// Hole represents a hole in a slice i.e., an index without valid value
/// (because it was moved from or duplicated).
/// In drop, `Hole` will restore the slice by filling the hole
/// position with the value that was originally removed.
struct Hole<'a, T: 'a> {
    data: &'a mut [T],
    elt: ManuallyDrop<T>,
    pos: usize,
}

impl<'a, T> Hole<'a, T> {
    /// Create a new `Hole` at index `pos`.
    ///
    /// Unsafe because pos must be within the data slice.
#[inline] unsafe fn new(data: &'a mut [T], pos: usize) -> Self { debug_assert!(pos < data.len()); // SAFE: pos should be inside the slice let elt = ptr::read(data.get_unchecked(pos)); Hole { data, elt: ManuallyDrop::new(elt), pos, } } #[inline] fn pos(&self) -> usize { self.pos } /// Returns a reference to the element removed. #[inline] fn element(&self) -> &T { &self.elt } /// Returns a reference to the element at `index`. /// /// Unsafe because index must be within the data slice and not equal to pos. #[inline] unsafe fn get(&self, index: usize) -> &T { debug_assert!(index != self.pos); debug_assert!(index < self.data.len()); self.data.get_unchecked(index) } /// Move hole to new location /// /// Unsafe because index must be within the data slice and not equal to pos. #[inline] unsafe fn move_to(&mut self, index: usize) { debug_assert!(index != self.pos); debug_assert!(index < self.data.len()); let ptr = self.data.as_mut_ptr(); let index_ptr: *const _ = ptr.add(index); let hole_ptr = ptr.add(self.pos); ptr::copy_nonoverlapping(index_ptr, hole_ptr, 1); self.pos = index; } } impl<'a, T: Ord> Hole<'a, T> { /// Get largest element /// /// Unsafe because both elements must be within the data slice and not equal /// to pos. #[inline] unsafe fn max(&self, elem1: usize, elem2: usize) -> usize { if self.get(elem1) <= self.get(elem2) { elem2 } else { elem1 } } /// Get index of greatest sibling /// /// Unsafe because all siblings must be within the data slice and not equal /// to pos. 
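    ///
    /// In the implicit array layout used by this heap, the children of the
    /// node at index `i` occupy indices `D * i + 1 ..= D * i + D`, and the
    /// parent of index `i > 0` is `(i - 1) / D`. A small self-contained sketch
    /// of that arithmetic (illustrative only, not part of the public API):
    ///
    /// ```
    /// // For a ternary heap (d = 3):
    /// let d = 3;
    /// let children = |i: usize| (d * i + 1)..=(d * i + d);
    /// let parent = |i: usize| (i - 1) / d;
    /// assert_eq!(children(0).collect::<Vec<_>>(), [1, 2, 3]);
    /// assert!(children(0).all(|c| parent(c) == 0));
    /// ```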
    #[inline]
    unsafe fn max_sibling<const D: usize>(&self, first_sibling: usize) -> usize {
        let mut sibling = first_sibling;
        match D {
            2 => {
                sibling += (self.get(sibling) <= self.get(sibling + 1)) as usize;
            }
            3 => {
                let sibling_a = self.max_sibling::<2>(sibling);
                let sibling_b = sibling + 2;
                sibling = self.max(sibling_a, sibling_b);
            }
            4 => {
                let sibling_a = self.max_sibling::<2>(sibling);
                let sibling_b = self.max_sibling::<2>(sibling + 2);
                sibling = self.max(sibling_a, sibling_b);
            }
            _ => {
                for other_sibling in sibling + 1..sibling + D {
                    if self.get(sibling) <= self.get(other_sibling) {
                        sibling = other_sibling;
                    }
                }
            }
        }
        sibling
    }

    /// Get index of greatest sibling within range
    ///
    /// Unsafe because end must be the length of the data slice, last sibling
    /// must be outside of the data slice and no sibling may be equal to pos.
    /// It is allowed for first_sibling to be outside of the data slice.
    #[inline]
    unsafe fn max_sibling_to<const D: usize>(&self, first_sibling: usize, end: usize) -> usize {
        let mut sibling = first_sibling;
        match D {
            2 => {}
            3 => {
                if sibling + 1 < end {
                    sibling = self.max_sibling::<2>(sibling);
                }
            }
            _ => {
                for other_sibling in sibling + 1..end {
                    if self.get(sibling) <= self.get(other_sibling) {
                        sibling = other_sibling;
                    }
                }
            }
        }
        sibling
    }
}

impl<T> Drop for Hole<'_, T> {
    #[inline]
    fn drop(&mut self) {
        // fill the hole again
        unsafe {
            let pos = self.pos;
            ptr::copy_nonoverlapping(&*self.elt, self.data.get_unchecked_mut(pos), 1);
        }
    }
}

/// An iterator over the elements of a `DaryHeap`.
///
/// This `struct` is created by [`DaryHeap::iter()`]. See its
/// documentation for more.
///
/// [`iter`]: DaryHeap::iter
#[must_use = "iterators are lazy and do nothing unless consumed"]
pub struct Iter<'a, T: 'a> {
    iter: slice::Iter<'a, T>,
}

impl<T: fmt::Debug> fmt::Debug for Iter<'_, T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_tuple("Iter").field(&self.iter.as_slice()).finish()
    }
}

// FIXME(#26925) Remove in favor of `#[derive(Clone)]`
impl<T> Clone for Iter<'_, T> {
    fn clone(&self) -> Self {
        Iter {
            iter: self.iter.clone(),
        }
    }
}

impl<'a, T> Iterator for Iter<'a, T> {
    type Item = &'a T;

    #[inline]
    fn next(&mut self) -> Option<&'a T> {
        self.iter.next()
    }

    #[inline]
    fn size_hint(&self) -> (usize, Option<usize>) {
        self.iter.size_hint()
    }

    #[inline]
    fn last(self) -> Option<&'a T> {
        self.iter.last()
    }
}

impl<'a, T> DoubleEndedIterator for Iter<'a, T> {
    #[inline]
    fn next_back(&mut self) -> Option<&'a T> {
        self.iter.next_back()
    }
}

impl<T> ExactSizeIterator for Iter<'_, T> {
    #[cfg(feature = "unstable_nightly")]
    fn is_empty(&self) -> bool {
        self.iter.is_empty()
    }
}

impl<T> FusedIterator for Iter<'_, T> {}

/// An owning iterator over the elements of a `DaryHeap`.
///
/// This `struct` is created by [`DaryHeap::into_iter()`]
/// (provided by the [`IntoIterator`] trait). See its documentation for more.
///
/// [`into_iter`]: DaryHeap::into_iter
#[derive(Clone)]
pub struct IntoIter<T> {
    iter: vec::IntoIter<T>,
}

impl<T: fmt::Debug> fmt::Debug for IntoIter<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_tuple("IntoIter")
            .field(&self.iter.as_slice())
            .finish()
    }
}

impl<T> Iterator for IntoIter<T> {
    type Item = T;

    #[inline]
    fn next(&mut self) -> Option<T> {
        self.iter.next()
    }

    #[inline]
    fn size_hint(&self) -> (usize, Option<usize>) {
        self.iter.size_hint()
    }
}

impl<T> DoubleEndedIterator for IntoIter<T> {
    #[inline]
    fn next_back(&mut self) -> Option<T> {
        self.iter.next_back()
    }
}

impl<T> ExactSizeIterator for IntoIter<T> {
    #[cfg(feature = "unstable_nightly")]
    fn is_empty(&self) -> bool {
        self.iter.is_empty()
    }
}

impl<T> FusedIterator for IntoIter<T> {}

impl<T> Default for IntoIter<T> {
    /// Creates an empty `dary_heap::IntoIter`.
    ///
    /// ```
    /// let iter: dary_heap::IntoIter<u8> = Default::default();
    /// assert_eq!(iter.len(), 0);
    /// ```
    fn default() -> Self {
        IntoIter {
            iter: Vec::new().into_iter(),
        }
    }
}

// In addition to the SAFETY invariants of the following two unsafe traits
// also refer to the vec::in_place_collect module documentation to get an overview
#[cfg(feature = "unstable_nightly")]
#[doc(hidden)]
unsafe impl<T> core::iter::SourceIter for IntoIter<T> {
    type Source = IntoIter<T>;

    #[inline]
    unsafe fn as_inner(&mut self) -> &mut Self::Source {
        self
    }
}

#[cfg(feature = "unstable_nightly")]
#[doc(hidden)]
unsafe impl<T> core::iter::InPlaceIterable for IntoIter<T> {}

#[must_use = "iterators are lazy and do nothing unless consumed"]
#[cfg(feature = "unstable")]
#[derive(Clone, Debug)]
pub struct IntoIterSorted<T, const D: usize> {
    inner: DaryHeap<T, D>,
}

#[cfg(feature = "unstable")]
impl<T: Ord, const D: usize> Iterator for IntoIterSorted<T, D> {
    type Item = T;

    #[inline]
    fn next(&mut self) -> Option<T> {
        self.inner.pop()
    }

    #[inline]
    fn size_hint(&self) -> (usize, Option<usize>) {
        let exact = self.inner.len();
        (exact, Some(exact))
    }
}

#[cfg(feature = "unstable")]
impl<T: Ord, const D: usize> ExactSizeIterator for IntoIterSorted<T, D> {}

#[cfg(feature = "unstable")]
impl<T: Ord, const D: usize> FusedIterator for IntoIterSorted<T, D> {}
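`IntoIterSorted` produces elements simply by popping the underlying heap, so it yields values in descending order and knows its exact length for `size_hint`. A minimal sketch of that pop-driven iteration, using `std::collections::BinaryHeap` as a stand-in (an assumption here, chosen because `DaryHeap` mirrors its `pop` semantics for any arity; `drain_sorted_desc` is an illustrative name, not crate API):

```rust
use std::collections::BinaryHeap;

// Sketch: sorted iteration is just repeated `pop`; each pop removes the
// current maximum, so the collected output is in descending order.
fn drain_sorted_desc(mut heap: BinaryHeap<i32>) -> Vec<i32> {
    let mut out = Vec::with_capacity(heap.len());
    while let Some(max) = heap.pop() {
        out.push(max);
    }
    out
}

fn main() {
    let heap = BinaryHeap::from(vec![3, 1, 4, 1, 5]);
    assert_eq!(drain_sorted_desc(heap), vec![5, 4, 3, 1, 1]);
}
```

This is also why the adapter can implement `ExactSizeIterator`: popping removes exactly one element per `next`, so the remaining length is always `inner.len()`.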
#[cfg(all(feature = "unstable", feature = "unstable_nightly"))]
unsafe impl<T: Ord, const D: usize> core::iter::TrustedLen for IntoIterSorted<T, D> {}

/// A draining iterator over the elements of a `DaryHeap`.
///
/// This `struct` is created by [`DaryHeap::drain()`]. See its
/// documentation for more.
///
/// [`drain`]: DaryHeap::drain
#[derive(Debug)]
pub struct Drain<'a, T: 'a> {
    iter: vec::Drain<'a, T>,
}

impl<T> Iterator for Drain<'_, T> {
    type Item = T;

    #[inline]
    fn next(&mut self) -> Option<T> {
        self.iter.next()
    }

    #[inline]
    fn size_hint(&self) -> (usize, Option<usize>) {
        self.iter.size_hint()
    }
}

impl<T> DoubleEndedIterator for Drain<'_, T> {
    #[inline]
    fn next_back(&mut self) -> Option<T> {
        self.iter.next_back()
    }
}

impl<T> ExactSizeIterator for Drain<'_, T> {
    #[cfg(feature = "unstable_nightly")]
    fn is_empty(&self) -> bool {
        self.iter.is_empty()
    }
}

impl<T> FusedIterator for Drain<'_, T> {}

/// A draining iterator over the elements of a `DaryHeap`.
///
/// This `struct` is created by [`DaryHeap::drain_sorted()`]. See its
/// documentation for more.
///
/// [`drain_sorted`]: DaryHeap::drain_sorted
#[cfg(feature = "unstable")]
#[derive(Debug)]
pub struct DrainSorted<'a, T: Ord, const D: usize> {
    inner: &'a mut DaryHeap<T, D>,
}

#[cfg(feature = "unstable")]
impl<'a, T: Ord, const D: usize> Drop for DrainSorted<'a, T, D> {
    /// Removes heap elements in heap order.
    fn drop(&mut self) {
        use core::mem::forget;

        struct DropGuard<'r, 'a, T: Ord, const D: usize>(&'r mut DrainSorted<'a, T, D>);

        impl<'r, 'a, T: Ord, const D: usize> Drop for DropGuard<'r, 'a, T, D> {
            fn drop(&mut self) {
                while self.0.inner.pop().is_some() {}
            }
        }

        while let Some(item) = self.inner.pop() {
            let guard = DropGuard(self);
            drop(item);
            forget(guard);
        }
    }
}

#[cfg(feature = "unstable")]
impl<T: Ord, const D: usize> Iterator for DrainSorted<'_, T, D> {
    type Item = T;

    #[inline]
    fn next(&mut self) -> Option<T> {
        self.inner.pop()
    }

    #[inline]
    fn size_hint(&self) -> (usize, Option<usize>) {
        let exact = self.inner.len();
        (exact, Some(exact))
    }
}

#[cfg(feature = "unstable")]
impl<T: Ord, const D: usize> ExactSizeIterator for DrainSorted<'_, T, D> {}

#[cfg(feature = "unstable")]
impl<T: Ord, const D: usize> FusedIterator for DrainSorted<'_, T, D> {}

#[cfg(all(feature = "unstable", feature = "unstable_nightly"))]
unsafe impl<T: Ord, const D: usize> core::iter::TrustedLen for DrainSorted<'_, T, D> {}

impl<T: Ord, const D: usize> From<Vec<T>> for DaryHeap<T, D> {
    /// Converts a `Vec<T>` into a `DaryHeap<T, D>`.
    ///
    /// This conversion happens in-place, and has *O*(*n*) time complexity.
    fn from(vec: Vec<T>) -> DaryHeap<T, D> {
        let mut heap = DaryHeap { data: vec };
        heap.rebuild();
        heap
    }
}

impl<T: Ord, const D: usize, const N: usize> From<[T; N]> for DaryHeap<T, D> {
    /// ```
    /// use dary_heap::TernaryHeap;
    ///
    /// let mut h1 = TernaryHeap::from([1, 4, 2, 3]);
    /// let mut h2: TernaryHeap<_> = [1, 4, 2, 3].into();
    /// while let Some((a, b)) = h1.pop().zip(h2.pop()) {
    ///     assert_eq!(a, b);
    /// }
    /// ```
    fn from(arr: [T; N]) -> Self {
        // With newer Rust versions `Self::from_iter(arr)` should be used, as
        // using `IntoIter::new` is deprecated from 1.59.0. However, this would
        // require a MSRV of 1.53.0, and both are equivalent behind the scenes.
        #[allow(deprecated)]
        core::array::IntoIter::new(arr).collect()
    }
}

impl<T, const D: usize> From<DaryHeap<T, D>> for Vec<T> {
    /// Converts a `DaryHeap<T, D>` into a `Vec<T>`.
    ///
    /// This conversion requires no data movement or allocation, and has
    /// constant time complexity.
    fn from(heap: DaryHeap<T, D>) -> Vec<T> {
        heap.data
    }
}

impl<T: Ord, const D: usize> FromIterator<T> for DaryHeap<T, D> {
    fn from_iter<I: IntoIterator<Item = T>>(iter: I) -> DaryHeap<T, D> {
        DaryHeap::from(iter.into_iter().collect::<Vec<_>>())
    }
}

impl<T, const D: usize> IntoIterator for DaryHeap<T, D> {
    type Item = T;
    type IntoIter = IntoIter<T>;

    /// Creates a consuming iterator, that is, one that moves each value out of
    /// the *d*-ary heap in arbitrary order. The *d*-ary heap cannot be used
    /// after calling this.
    ///
    /// # Examples
    ///
    /// Basic usage:
    ///
    /// ```
    /// use dary_heap::BinaryHeap;
    /// let heap = BinaryHeap::from([1, 2, 3, 4]);
    ///
    /// // Print 1, 2, 3, 4 in arbitrary order
    /// for x in heap.into_iter() {
    ///     // x has type i32, not &i32
    ///     println!("{x}");
    /// }
    /// ```
    fn into_iter(self) -> IntoIter<T> {
        IntoIter {
            iter: self.data.into_iter(),
        }
    }
}

impl<'a, T, const D: usize> IntoIterator for &'a DaryHeap<T, D> {
    type Item = &'a T;
    type IntoIter = Iter<'a, T>;

    fn into_iter(self) -> Iter<'a, T> {
        self.iter()
    }
}

impl<T: Ord, const D: usize> Extend<T> for DaryHeap<T, D> {
    #[inline]
    fn extend<I: IntoIterator<Item = T>>(&mut self, iter: I) {
        let guard = RebuildOnDrop {
            rebuild_from: self.len(),
            heap: self,
        };
        guard.heap.data.extend(iter);
    }

    #[inline]
    #[cfg(feature = "unstable_nightly")]
    fn extend_one(&mut self, item: T) {
        self.push(item);
    }

    #[inline]
    #[cfg(feature = "unstable_nightly")]
    fn extend_reserve(&mut self, additional: usize) {
        self.reserve(additional);
    }
}

impl<'a, T: 'a + Ord + Copy, const D: usize> Extend<&'a T> for DaryHeap<T, D> {
    fn extend<I: IntoIterator<Item = &'a T>>(&mut self, iter: I) {
        self.extend(iter.into_iter().cloned());
    }

    #[inline]
    #[cfg(feature = "unstable_nightly")]
    fn extend_one(&mut self, &item: &'a T) {
        self.push(item);
    }

    #[inline]
    #[cfg(feature = "unstable_nightly")]
    fn extend_reserve(&mut self, additional: usize) {
        self.reserve(additional);
    }
}

#[cfg(any(test, fuzzing))]
impl<T: Ord, const D: usize> DaryHeap<T, D> {
    /// Panics if the heap is in an inconsistent state
    #[track_caller]
    pub fn assert_valid_state(&self) {
        assert_ne!(D, 0, "Arity should be greater than zero");
        for (i, v) in self.iter().enumerate() {
            // Children of node `i` occupy indices `D * i + 1` through `D * i + D`.
            let children = D * i + 1..D * i + D + 1;
            if children.start > self.len() {
                break;
            }
            for j in children {
                if let Some(x) = self.data.get(j) {
                    assert!(v >= x);
                }
            }
        }
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use rand::{seq::SliceRandom, thread_rng};

    fn pop<const D: usize>() {
        let mut rng = thread_rng();
        let ntest = if cfg!(miri) { 1 } else { 10 };
        let nelem = if cfg!(miri) { 100 } else { 1000 };
        for _ in 0..ntest {
            let mut data: Vec<_> = (0..nelem).collect();
            data.shuffle(&mut rng);
            let mut heap = DaryHeap::<_, D>::from(data);
            heap.assert_valid_state();
            for i in (0..nelem).rev() {
                assert_eq!(heap.pop(), Some(i));
                heap.assert_valid_state();
            }
            assert_eq!(heap.pop(), None);
        }
    }

    #[test]
    #[should_panic]
    fn push_d0() {
        let mut heap = DaryHeap::<_, 0>::new();
        heap.push(42);
    }

    #[test]
    #[should_panic]
    fn from_vec_d0() {
        let _heap = DaryHeap::<_, 0>::from(vec![42]);
    }

    #[test]
    fn pop_d1() {
        pop::<1>();
    }

    #[test]
    fn pop_d2() {
        pop::<2>();
    }

    #[test]
    fn pop_d3() {
        pop::<3>();
    }

    #[test]
    fn pop_d4() {
        pop::<4>();
    }

    #[test]
    fn pop_d5() {
        pop::<5>();
    }

    #[test]
    fn pop_d6() {
        pop::<6>();
    }

    #[test]
    fn pop_d7() {
        pop::<7>();
    }

    #[test]
    fn pop_d8() {
        pop::<8>();
    }

    #[test]
    #[cfg(feature = "serde")]
    fn serde() {
        use serde_test::Token::{Seq, SeqEnd, I32};

        impl<T: PartialEq, const D: usize> PartialEq for DaryHeap<T, D> {
            fn eq(&self, other: &Self) -> bool {
                self.iter().zip(other).all(|(a, b)| a == b)
            }
        }

        let empty = [Seq { len: Some(0) }, SeqEnd];
        let part = [Seq { len: Some(3) }, I32(3), I32(1), I32(2), SeqEnd];
        let full = [Seq { len: Some(4) }, I32(4), I32(3), I32(2), I32(1), SeqEnd];

        let mut dary = BinaryHeap::<i32>::new();
        serde_test::assert_tokens(&dary, &empty);
        for i in [1, 2, 3] {
            dary.push(i);
        }
        serde_test::assert_tokens(&dary, &part);
        dary.push(4);
        serde_test::assert_tokens(&dary, &full);

        let mut std = alloc::collections::BinaryHeap::<i32>::new();
        serde_test::assert_ser_tokens(&std, &empty);
        for i in [1, 2, 3] {
            std.push(i);
        }
        serde_test::assert_ser_tokens(&std, &part);
        std.push(4);
        serde_test::assert_ser_tokens(&std, &full);
    }
}
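The `assert_valid_state` checker relies on the implicit array layout of a *d*-ary heap: the children of the node at index `i` occupy indices `D * i + 1` through `D * i + D`, and the max-heap property requires each parent to compare `>=` every child. A standalone sketch of that invariant over a plain slice (the function `is_valid_dary_heap` is illustrative, not part of the crate's API):

```rust
// Illustrative helper (not part of dary_heap's API): checks the max-heap
// property for arbitrary arity D over a plain slice, using the same index
// arithmetic as the heap itself: children of `i` sit at D*i + 1 ..= D*i + D.
fn is_valid_dary_heap<T: Ord, const D: usize>(data: &[T]) -> bool {
    assert_ne!(D, 0, "arity must be greater than zero");
    (0..data.len()).all(|i| {
        (D * i + 1..=D * i + D)
            .filter_map(|j| data.get(j)) // indices past the end have no child
            .all(|child| data[i] >= *child)
    })
}

fn main() {
    // A valid binary (D = 2) max-heap: 5 >= {4, 3} and 4 >= {1, 2}.
    assert!(is_valid_dary_heap::<_, 2>(&[5, 4, 3, 1, 2]));
    // Not a valid unary (D = 1) heap: that arity demands descending order,
    // and 1 < 2 near the tail violates it.
    assert!(!is_valid_dary_heap::<_, 1>(&[5, 4, 3, 1, 2]));
    // Violates the heap property at the root: 1 < 5.
    assert!(!is_valid_dary_heap::<_, 2>(&[1, 5, 3]));
}
```

The `filter_map(|j| data.get(j))` mirrors the `self.data.get(j)` lookup in `assert_valid_state`: nodes in the last level may have fewer than `D` children, so out-of-range indices are simply skipped.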