sized-chunks-0.3.1/.circleci/config.yml

version: 2.1
executors:
  default:
    description: Executor environment for building Rust crates.
    docker:
      - image: circleci/rust:1
commands:
  update_toolchain:
    description: Update the Rust toolchain to use for building.
    parameters:
      toolchain:
        description: Rust toolchain to use. Overrides the default toolchain (stable) or any toolchain specified in the project via `rust-toolchain`.
        type: string
        default: ""
    steps:
      - run:
          name: Update toolchain
          command: |
            test -z "<<parameters.toolchain>>" || echo "<<parameters.toolchain>>" >rust-toolchain
            rustup show active-toolchain
      - run:
          name: Version information
          command: |
            rustup --version
            rustc --version
            cargo --version
  build:
    description: Build all targets of a Rust crate.
    parameters:
      release:
        description: By default, the crate is built in debug mode without optimizations. Set this to true to compile in release mode.
        type: boolean
        default: false
    steps:
      - run:
          name: Calculate dependencies
          command: |
            rustc --version >rust-version
            test -e Cargo.lock || cargo generate-lockfile
      - restore_cache:
          keys:
            - v6-cargo-cache-{{arch}}-{{checksum "rust-version"}}-<<parameters.release>>-{{checksum "Cargo.lock"}}
      - run:
          name: Build all targets
          command: cargo build --all --all-targets<<#parameters.release>> --release<</parameters.release>>
      - save_cache:
          paths:
            - /usr/local/cargo/registry
            - target
          key: v6-cargo-cache-{{arch}}-{{checksum "rust-version"}}-<<parameters.release>>-{{checksum "Cargo.lock"}}
  check:
    description: Check all targets of a Rust crate.
    steps:
      - run:
          name: Calculate dependencies
          command: test -e Cargo.lock || cargo generate-lockfile
      - run:
          name: Check all targets
          command: |
            if rustup component add clippy; then
              cargo clippy --all --all-targets -- -Dwarnings
            else
              echo Skipping clippy
            fi
  test:
    description: Run all tests of a Rust crate. Make sure to build first.
    parameters:
      release:
        description: By default, the crate is built in debug mode without optimizations. Set this to true to compile in release mode.
        type: boolean
        default: false
    steps:
      - run:
          name: Run all tests
          command: cargo test --all<<#parameters.release>> --release<</parameters.release>>
jobs:
  check:
    description: Check a Rust crate.
    parameters:
      toolchain:
        description: Rust toolchain to use. Overrides the default toolchain (stable) or any toolchain specified in the project via `rust-toolchain`.
        type: string
        default: ""
    executor: default
    steps:
      - checkout
      - update_toolchain:
          toolchain: <<parameters.toolchain>>
      - check
  test:
    description: Builds a Rust crate and runs all tests.
    parameters:
      toolchain:
        description: Rust toolchain to use. Overrides the default toolchain (stable) or any toolchain specified in the project via `rust-toolchain`.
        type: string
        default: ""
      release:
        description: By default, the crate is built in debug mode without optimizations. Set this to true to compile in release mode.
        type: boolean
        default: false
    executor: default
    steps:
      - checkout
      - update_toolchain:
          toolchain: <<parameters.toolchain>>
      - build:
          release: <<parameters.release>>
      - test:
          release: <<parameters.release>>
workflows:
  Project:
    jobs:
      - test:
          name: cargo test (stable)
          toolchain: stable
      - test:
          name: cargo test (beta)
          toolchain: beta
      - test:
          name: cargo test (nightly)
          toolchain: nightly

sized-chunks-0.3.1/.gitignore

/target
**/*.rs.bk
Cargo.lock

sized-chunks-0.3.1/.travis.yml

language: rust
rust:
  - stable
  - beta
  - nightly
cache:
  directories:
    - /home/travis/.rustup
    - /home/travis/.cargo
    - /home/travis/target
install:
  - rustup update
  - mkdir -p .cargo && echo '[build]' > .cargo/config && echo 'target-dir = "/home/travis/target"' >> .cargo/config
matrix:
  include:
    - env: CLIPPY=1
      rust: stable
      install:
        - rustup component add clippy; true
      script: cargo clippy -- -D warnings

sized-chunks-0.3.1/CHANGELOG.md

# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/) and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.html).

## [0.3.1] - 2019-08-03

### ADDED

- Chunk sizes up to 256 are now supported.

## [0.3.0] - 2019-05-18

### ADDED

- A new data structure, `InlineArray`, which is a stack allocated array matching the size of a given type, intended for optimising for the case of very small vectors.
- `Chunk` has an implementation of `From` which is considerably faster than going via iterators.

## [0.2.2] - 2019-05-10

### ADDED

- `Slice::get` methods now return references with the lifetime of the underlying `RingBuffer` rather than the lifetime of the slice.

## [0.2.1] - 2019-04-15

### ADDED

- A lot of documentation.
- `std::io::Read` implementations for `Chunk` and `RingBuffer` to match their `Write` implementations.

## [0.2.0] - 2019-04-14

### CHANGED

- The `capacity()` method has been replaced with a `CAPACITY` const on each type.

### ADDED

- There is now a `RingBuffer` implementation, which should be nearly a drop-in replacement for `SizedChunk` but is always O(1) on push and cannot be dereferenced to slices (but it has a set of custom slice-like implementations to make that less of a drawback).
- The `Drain` iterator for `SizedChunk` now implements `DoubleEndedIterator`.

### FIXED

- `SizedChunk::drain_from_front/back` will now always panic if the iterator underflows, instead of only doing it in debug mode.

## [0.1.3] - 2019-04-12

### ADDED

- `SparseChunk` now has a default length of `U64`.
- `Chunk` now has `PartialEq` defined for anything that can be borrowed as a slice.
- `SparseChunk` likewise has `PartialEq` defined for `BTreeMap` and `HashMap`. These are intended for debugging and aren't optimally efficient.
- `Chunk` and `SparseChunk` now have a new method `capacity()` which returns its maximum capacity (the number in the type) as a usize.
- Added an `entries()` method to `SparseChunk`.
- `SparseChunk` now has a `Debug` implementation.

### FIXED

- Extensive integration tests were added for `Chunk` and `SparseChunk`.
- `Chunk::clear` is now very slightly faster.

## [0.1.2] - 2019-03-11

### FIXED

- Fixed an alignment issue in `Chunk::drain_from_back`. (#1)

## [0.1.1] - 2019-02-19

### FIXED

- Some 2018 edition issues.

## [0.1.0] - 2019-02-19

Initial release.
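
As a minimal sketch of the 0.2.0 change described above (the `U64` size parameter and the usage here are illustrative assumptions, not part of the original changelog):

```rust
use sized_chunks::Chunk;
use typenum::U64;

fn main() {
    // Since 0.2.0, the maximum size is an associated const, not a method.
    let chunk: Chunk<i32, U64> = Chunk::new();
    assert_eq!(64, Chunk::<i32, U64>::CAPACITY);
    assert!(chunk.is_empty());
}
```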
sized-chunks-0.3.1/CODE_OF_CONDUCT.md

# Contributor Covenant Code of Conduct

## Our Pledge

In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, education, socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.

## Our Standards

Examples of behavior that contributes to creating a positive environment include:

* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members

Examples of unacceptable behavior by participants include:

* The use of sexualized language or imagery and unwelcome sexual attention or advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a professional setting

## Our Responsibilities

Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.

## Scope

This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at admin@immutable.rs. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html

[homepage]: https://www.contributor-covenant.org

sized-chunks-0.3.1/Cargo.toml.orig

[package]
name = "sized-chunks"
version = "0.3.1"
authors = ["Bodil Stokke "]
edition = "2018"
license = "MPL-2.0+"
description = "Efficient sized chunk datatypes"
repository = "https://github.com/bodil/sized-chunks"
documentation = "http://docs.rs/sized-chunks"
readme = "./README.md"
categories = ["data-structures"]
keywords = ["sparse-array"]
exclude = ["release.toml", "proptest-regressions/**"]

[badges]
travis-ci = { repository = "bodil/sized-chunks" }

[dependencies]
typenum = "1.10.0"

[dev-dependencies]
proptest = "0.9.1"
proptest-derive = "0.1.0"

sized-chunks-0.3.1/Cargo.toml

# THIS FILE IS AUTOMATICALLY GENERATED BY CARGO
#
# When uploading crates to the registry Cargo will automatically
# "normalize" Cargo.toml files for maximal compatibility
# with all versions of Cargo and also rewrite `path` dependencies
# to registry (e.g., crates.io) dependencies
#
# If you believe there's an error in this file please file an
# issue against the rust-lang/cargo repository. If you're
# editing this file be aware that the upstream Cargo.toml
# will likely look very different (and much more reasonable)

[package]
edition = "2018"
name = "sized-chunks"
version = "0.3.1"
authors = ["Bodil Stokke "]
exclude = ["release.toml", "proptest-regressions/**"]
description = "Efficient sized chunk datatypes"
documentation = "http://docs.rs/sized-chunks"
readme = "./README.md"
keywords = ["sparse-array"]
categories = ["data-structures"]
license = "MPL-2.0+"
repository = "https://github.com/bodil/sized-chunks"

[dependencies.typenum]
version = "1.10.0"

[dev-dependencies.proptest]
version = "0.9.1"

[dev-dependencies.proptest-derive]
version = "0.1.0"

[badges.travis-ci]
repository = "bodil/sized-chunks"

sized-chunks-0.3.1/LICENCE.md

Mozilla Public License Version 2.0
==================================

### 1. Definitions

**1.1. “Contributor”** means each individual or legal entity that creates, contributes to the creation of, or owns Covered Software.

**1.2. “Contributor Version”** means the combination of the Contributions of others (if any) used by a Contributor and that particular Contributor's Contribution.

**1.3. “Contribution”** means Covered Software of a particular Contributor.

**1.4. “Covered Software”** means Source Code Form to which the initial Contributor has attached the notice in Exhibit A, the Executable Form of such Source Code Form, and Modifications of such Source Code Form, in each case including portions thereof.

**1.5. “Incompatible With Secondary Licenses”** means

* **(a)** that the initial Contributor has attached the notice described in Exhibit B to the Covered Software; or
* **(b)** that the Covered Software was made available under the terms of version 1.1 or earlier of the License, but not also under the terms of a Secondary License.

**1.6. “Executable Form”** means any form of the work other than Source Code Form.

**1.7. “Larger Work”** means a work that combines Covered Software with other material, in a separate file or files, that is not Covered Software.

**1.8. “License”** means this document.
**1.9. “Licensable”** means having the right to grant, to the maximum extent possible, whether at the time of the initial grant or subsequently, any and all of the rights conveyed by this License.

**1.10. “Modifications”** means any of the following:

* **(a)** any file in Source Code Form that results from an addition to, deletion from, or modification of the contents of Covered Software; or
* **(b)** any new file in Source Code Form that contains any Covered Software.

**1.11. “Patent Claims” of a Contributor** means any patent claim(s), including without limitation, method, process, and apparatus claims, in any patent Licensable by such Contributor that would be infringed, but for the grant of the License, by the making, using, selling, offering for sale, having made, import, or transfer of either its Contributions or its Contributor Version.

**1.12. “Secondary License”** means either the GNU General Public License, Version 2.0, the GNU Lesser General Public License, Version 2.1, the GNU Affero General Public License, Version 3.0, or any later versions of those licenses.

**1.13. “Source Code Form”** means the form of the work preferred for making modifications.

**1.14. “You” (or “Your”)** means an individual or a legal entity exercising rights under this License. For legal entities, “You” includes any entity that controls, is controlled by, or is under common control with You. For purposes of this definition, “control” means **(a)** the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or **(b)** ownership of more than fifty percent (50%) of the outstanding shares or beneficial ownership of such entity.

### 2. License Grants and Conditions

#### 2.1. Grants

Each Contributor hereby grants You a world-wide, royalty-free, non-exclusive license:

* **(a)** under intellectual property rights (other than patent or trademark) Licensable by such Contributor to use, reproduce, make available, modify, display, perform, distribute, and otherwise exploit its Contributions, either on an unmodified basis, with Modifications, or as part of a Larger Work; and
* **(b)** under Patent Claims of such Contributor to make, use, sell, offer for sale, have made, import, and otherwise transfer either its Contributions or its Contributor Version.

#### 2.2. Effective Date

The licenses granted in Section 2.1 with respect to any Contribution become effective for each Contribution on the date the Contributor first distributes such Contribution.

#### 2.3. Limitations on Grant Scope

The licenses granted in this Section 2 are the only rights granted under this License. No additional rights or licenses will be implied from the distribution or licensing of Covered Software under this License. Notwithstanding Section 2.1(b) above, no patent license is granted by a Contributor:

* **(a)** for any code that a Contributor has removed from Covered Software; or
* **(b)** for infringements caused by: **(i)** Your and any other third party's modifications of Covered Software, or **(ii)** the combination of its Contributions with other software (except as part of its Contributor Version); or
* **(c)** under Patent Claims infringed by Covered Software in the absence of its Contributions.

This License does not grant any rights in the trademarks, service marks, or logos of any Contributor (except as may be necessary to comply with the notice requirements in Section 3.4).

#### 2.4. Subsequent Licenses

No Contributor makes additional grants as a result of Your choice to distribute the Covered Software under a subsequent version of this License (see Section 10.2) or under the terms of a Secondary License (if permitted under the terms of Section 3.3).

#### 2.5. Representation

Each Contributor represents that the Contributor believes its Contributions are its original creation(s) or it has sufficient rights to grant the rights to its Contributions conveyed by this License.

#### 2.6. Fair Use

This License is not intended to limit any rights You have under applicable copyright doctrines of fair use, fair dealing, or other equivalents.

#### 2.7. Conditions

Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in Section 2.1.

### 3. Responsibilities

#### 3.1. Distribution of Source Form

All distribution of Covered Software in Source Code Form, including any Modifications that You create or to which You contribute, must be under the terms of this License. You must inform recipients that the Source Code Form of the Covered Software is governed by the terms of this License, and how they can obtain a copy of this License. You may not attempt to alter or restrict the recipients' rights in the Source Code Form.

#### 3.2. Distribution of Executable Form

If You distribute Covered Software in Executable Form then:

* **(a)** such Covered Software must also be made available in Source Code Form, as described in Section 3.1, and You must inform recipients of the Executable Form how they can obtain a copy of such Source Code Form by reasonable means in a timely manner, at a charge no more than the cost of distribution to the recipient; and
* **(b)** You may distribute such Executable Form under the terms of this License, or sublicense it under different terms, provided that the license for the Executable Form does not attempt to limit or alter the recipients' rights in the Source Code Form under this License.

#### 3.3. Distribution of a Larger Work

You may create and distribute a Larger Work under terms of Your choice, provided that You also comply with the requirements of this License for the Covered Software. If the Larger Work is a combination of Covered Software with a work governed by one or more Secondary Licenses, and the Covered Software is not Incompatible With Secondary Licenses, this License permits You to additionally distribute such Covered Software under the terms of such Secondary License(s), so that the recipient of the Larger Work may, at their option, further distribute the Covered Software under the terms of either this License or such Secondary License(s).

#### 3.4. Notices

You may not remove or alter the substance of any license notices (including copyright notices, patent notices, disclaimers of warranty, or limitations of liability) contained within the Source Code Form of the Covered Software, except that You may alter any license notices to the extent required to remedy known factual inaccuracies.

#### 3.5. Application of Additional Terms

You may choose to offer, and to charge a fee for, warranty, support, indemnity or liability obligations to one or more recipients of Covered Software. However, You may do so only on Your own behalf, and not on behalf of any Contributor. You must make it absolutely clear that any such warranty, support, indemnity, or liability obligation is offered by You alone, and You hereby agree to indemnify every Contributor for any liability incurred by such Contributor as a result of warranty, support, indemnity or liability terms You offer. You may include additional disclaimers of warranty and limitations of liability specific to any jurisdiction.

### 4. Inability to Comply Due to Statute or Regulation

If it is impossible for You to comply with any of the terms of this License with respect to some or all of the Covered Software due to statute, judicial order, or regulation then You must: **(a)** comply with the terms of this License to the maximum extent possible; and **(b)** describe the limitations and the code they affect. Such description must be placed in a text file included with all distributions of the Covered Software under this License. Except to the extent prohibited by statute or regulation, such description must be sufficiently detailed for a recipient of ordinary skill to be able to understand it.

### 5. Termination

**5.1.** The rights granted under this License will terminate automatically if You fail to comply with any of its terms. However, if You become compliant, then the rights granted under this License from a particular Contributor are reinstated **(a)** provisionally, unless and until such Contributor explicitly and finally terminates Your grants, and **(b)** on an ongoing basis, if such Contributor fails to notify You of the non-compliance by some reasonable means prior to 60 days after You have come back into compliance. Moreover, Your grants from a particular Contributor are reinstated on an ongoing basis if such Contributor notifies You of the non-compliance by some reasonable means, this is the first time You have received notice of non-compliance with this License from such Contributor, and You become compliant prior to 30 days after Your receipt of the notice.

**5.2.** If You initiate litigation against any entity by asserting a patent infringement claim (excluding declaratory judgment actions, counter-claims, and cross-claims) alleging that a Contributor Version directly or indirectly infringes any patent, then the rights granted to You by any and all Contributors for the Covered Software under Section 2.1 of this License shall terminate.

**5.3.** In the event of termination under Sections 5.1 or 5.2 above, all end user license agreements (excluding distributors and resellers) which have been validly granted by You or Your distributors under this License prior to termination shall survive termination.

### 6. Disclaimer of Warranty

> Covered Software is provided under this License on an “as is” basis, without warranty of any kind, either expressed, implied, or statutory, including, without limitation, warranties that the Covered Software is free of defects, merchantable, fit for a particular purpose or non-infringing. The entire risk as to the quality and performance of the Covered Software is with You. Should any Covered Software prove defective in any respect, You (not any Contributor) assume the cost of any necessary servicing, repair, or correction. This disclaimer of warranty constitutes an essential part of this License. No use of any Covered Software is authorized under this License except under this disclaimer.

### 7. Limitation of Liability

> Under no circumstances and under no legal theory, whether tort (including negligence), contract, or otherwise, shall any Contributor, or anyone who distributes Covered Software as permitted above, be liable to You for any direct, indirect, special, incidental, or consequential damages of any character including, without limitation, damages for lost profits, loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses, even if such party shall have been informed of the possibility of such damages. This limitation of liability shall not apply to liability for death or personal injury resulting from such party's negligence to the extent applicable law prohibits such limitation. Some jurisdictions do not allow the exclusion or limitation of incidental or consequential damages, so this exclusion and limitation may not apply to You.

### 8. Litigation

Any litigation relating to this License may be brought only in the courts of a jurisdiction where the defendant maintains its principal place of business and such litigation shall be governed by laws of that jurisdiction, without reference to its conflict-of-law provisions. Nothing in this Section shall prevent a party's ability to bring cross-claims or counter-claims.

### 9. Miscellaneous

This License represents the complete agreement concerning the subject matter hereof. If any provision of this License is held to be unenforceable, such provision shall be reformed only to the extent necessary to make it enforceable. Any law or regulation which provides that the language of a contract shall be construed against the drafter shall not be used to construe this License against a Contributor.

### 10. Versions of the License

#### 10.1. New Versions

Mozilla Foundation is the license steward. Except as provided in Section 10.3, no one other than the license steward has the right to modify or publish new versions of this License. Each version will be given a distinguishing version number.

#### 10.2. Effect of New Versions

You may distribute the Covered Software under the terms of the version of the License under which You originally received the Covered Software, or under the terms of any subsequent version published by the license steward.

#### 10.3. Modified Versions

If you create software not governed by this License, and you want to create a new license for such software, you may create and use a modified version of this License if you rename the license and remove any references to the name of the license steward (except to note that such modified license differs from this License).

#### 10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses

If You choose to distribute Source Code Form that is Incompatible With Secondary Licenses under the terms of this version of the License, the notice described in Exhibit B of this License must be attached.

## Exhibit A - Source Code Form License Notice

This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/.

If it is not possible or desirable to put the notice in a particular file, then You may include the notice in a location (such as a LICENSE file in a relevant directory) where a recipient would be likely to look for such a notice.

You may add additional accurate notices of copyright ownership.
## Exhibit B - “Incompatible With Secondary Licenses” Notice

This Source Code Form is "Incompatible With Secondary Licenses", as defined by the Mozilla Public License, v. 2.0.

sized-chunks-0.3.1/README.md

# sized-chunks

Various fixed length array data types, designed for [immutable.rs].

## Overview

This crate provides the core building blocks for the immutable data structures in [immutable.rs]: a sized array with O(1) amortised double ended push/pop and smarter insert/remove performance (used by `im::Vector` and `im::OrdMap`), and a fixed size sparse array (used by `im::HashMap`).

In a nutshell, this crate contains the unsafe bits from [immutable.rs], which may or may not be useful to anyone else, and have been split out for ease of auditing.

## Documentation

* [API docs](https://docs.rs/sized-chunks)

## Licence

Copyright 2019 Bodil Stokke

This software is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/.

## Code of Conduct

Please note that this project is released with a [Contributor Code of Conduct][coc]. By participating in this project you agree to abide by its terms.

[immutable.rs]: https://immutable.rs/
[coc]: https://github.com/bodil/sized-chunks/blob/master/CODE_OF_CONDUCT.md

sized-chunks-0.3.1/src/bitmap.rs

// This Source Code Form is subject to the terms of the Mozilla Public // License, v. 2.0. If a copy of the MPL was not distributed with this // file, You can obtain one at http://mozilla.org/MPL/2.0/. //! A fixed capacity sparse array. //! //! See [`Bitmap`](struct.Bitmap.html) use std::fmt::{Debug, Error, Formatter}; use crate::types::Bits; /// A compact array of bits. /// /// The bitmap is stored as a primitive type, so the maximum value of `Size` is /// currently 128, corresponding to a type of `u128`. The type used to store the /// bitmap will be the minimum unsigned integer type required to fit the number /// of bits required, from `u8` to `u128`. /// /// # Examples /// /// ```rust /// # #[macro_use] extern crate sized_chunks; /// # extern crate typenum; /// # use sized_chunks::bitmap::Bitmap; /// # use typenum::U10; /// # fn main() { /// let mut bitmap = Bitmap::<U10>::new(); /// assert_eq!(bitmap.set(5, true), false); /// assert_eq!(bitmap.set(5, true), true); /// assert_eq!(bitmap.get(5), true); /// assert_eq!(bitmap.get(6), false); /// assert_eq!(bitmap.len(), 1); /// assert_eq!(bitmap.set(3, true), false); /// assert_eq!(bitmap.len(), 2); /// assert_eq!(bitmap.first_index(), Some(3)); /// # } /// ``` pub struct Bitmap { data: Size::Store, } impl Clone for Bitmap { fn clone(&self) -> Self { Bitmap { data: self.data } } } impl Copy for Bitmap {} impl Default for Bitmap { fn default() -> Self { Bitmap { data: Size::Store::default(), } } } impl PartialEq for Bitmap { fn eq(&self, other: &Self) -> bool { self.data == other.data } } impl Debug for Bitmap { fn fmt(&self, f: &mut Formatter) -> Result<(), Error> { self.data.fmt(f) } } impl Bitmap { /// Construct an empty bitmap. #[inline] pub fn new() -> Self { Self::default() } /// Count the number of `true` bits in the bitmap. #[inline] pub fn len(self) -> usize { Size::len(&self.data) } /// Test if the bitmap contains only `false` bits. #[inline] pub fn is_empty(self) -> bool { self.first_index().is_none() } /// Get the value of the bit at a given index.
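// A minimal usage sketch for the `get`/`set` pair below, mirroring the
// module-level example above (the `U10` size here is an assumption):
//
//     let mut bitmap = Bitmap::<U10>::new();
//     assert_eq!(bitmap.set(2, true), false); // `set` returns the previous value
//     assert_eq!(bitmap.get(2), true);
//     assert_eq!(bitmap.set(2, false), true);
//     assert_eq!(bitmap.get(2), false);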
#[inline] pub fn get(self, index: usize) -> bool { Size::get(&self.data, index) } /// Set the value of the bit at a given index. /// /// Returns the previous value of the bit. #[inline] pub fn set(&mut self, index: usize, value: bool) -> bool { Size::set(&mut self.data, index, value) } /// Find the index of the first `true` bit in the bitmap. #[inline] pub fn first_index(self) -> Option { Size::first_index(&self.data) } } impl IntoIterator for Bitmap { type Item = usize; type IntoIter = Iter; fn into_iter(self) -> Self::IntoIter { Iter { index: 0, data: self.data, } } } /// An iterator over the indices in a bitmap which are `true`. pub struct Iter { index: usize, data: Size::Store, } impl Iterator for Iter { type Item = usize; fn next(&mut self) -> Option { if self.index >= Size::USIZE { return None; } if Size::get(&self.data, self.index) { self.index += 1; Some(self.index - 1) } else { self.index += 1; self.next() } } } #[cfg(test)] mod test { use super::*; use proptest::collection::btree_set; use proptest::proptest; use typenum::U64; proptest! { #[test] fn get_set_and_iter(bits in btree_set(0..64usize, 0..64)) { let mut bitmap = Bitmap::::new(); for i in &bits { bitmap.set(*i, true); } for i in 0..64 { assert_eq!(bitmap.get(i), bits.contains(&i)); } assert!(bitmap.into_iter().eq(bits.into_iter())); } } } sized-chunks-0.3.1/src/inline_array.rs010064400017500001750000000330751346763536200162030ustar0000000000000000// This Source Code Form is subject to the terms of the Mozilla Public // License, v. 2.0. If a copy of the MPL was not distributed with this // file, You can obtain one at http://mozilla.org/MPL/2.0/. //! A fixed capacity array sized to match some other type `T`. //! //! See [`InlineArray`](struct.InlineArray.html) use std::borrow::{Borrow, BorrowMut}; use std::cmp::Ordering; use std::fmt::{Debug, Error, Formatter}; use std::hash::{Hash, Hasher}; use std::iter::{FromIterator, FusedIterator}; use std::marker::PhantomData; use std::mem::{self, ManuallyDrop}; use std::ops::{Deref, DerefMut}; use std::ptr; use std::slice::{from_raw_parts, from_raw_parts_mut, Iter as SliceIter, IterMut as SliceIterMut}; /// A fixed capacity array sized to match some other type `T`. /// /// This works like a vector, but allocated on the stack (and thus marginally /// faster than `Vec`), with the allocated space exactly matching the size of /// the given type `T`. The vector consists of a `usize` tracking its current /// length, followed by zero or more elements of type `A`. The capacity is thus /// `( size_of::() - size_of::() ) / size_of::()`. This could lead /// to situations where the capacity is zero, if `size_of::()` is greater /// than `size_of::() - size_of::()`, which is not an error and /// handled properly by the data structure. /// /// If `size_of::()` is less than `size_of::()`, meaning the vector /// has no space to store its length, `InlineArray::new()` will panic. /// /// This is meant to facilitate optimisations where a list data structure /// allocates a fairly large struct for itself, allowing you to replace it with /// an `InlineArray` until it grows beyond its capacity. This not only gives you /// a performance boost at very small sizes, it also saves you from having to /// allocate anything on the heap until absolutely necessary. 
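// A hypothetical worked example of the capacity formula above (the host type
// is an assumption, not from the original docs): on a 64-bit target, with
// `T = [u64; 8]` (64 bytes) and `A = u32`, the capacity works out to
// (64 - 8) / 4 = 14 elements:
//
//     assert_eq!(14, InlineArray::<u32, [u64; 8]>::CAPACITY);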
/// /// For instance, `im::Vector` in its final form currently looks like this /// (approximately): /// /// ```rust, ignore /// struct RRB { /// length: usize, /// tree_height: usize, /// outer_head: Rc>, /// inner_head: Rc>, /// tree: Rc>, /// inner_tail: Rc>, /// outer_tail: Rc>, /// } /// ``` /// /// That's two `usize`s and five `Rc`s, which comes in at 56 bytes on x86_64 /// architectures. With `InlineArray`, that leaves us with 56 - /// `size_of::<usize>()` = 48 bytes we can use before having to expand into the /// full data structure. If `A` is `u8`, that's 48 elements, and even if `A` is a /// pointer we can still keep 6 of them inline before we run out of capacity. /// /// We can declare an enum like this: /// /// ```rust, ignore /// enum VectorWrapper { /// Inline(InlineArray>), /// Full(RRB), /// } /// ``` /// /// Both of these will have the same size, and we can swap the `Inline` case out /// with the `Full` case once the `InlineArray` runs out of capacity. pub struct InlineArray { data: ManuallyDrop, phantom: PhantomData, } impl InlineArray { const HOST_SIZE: usize = mem::size_of::(); const ELEMENT_SIZE: usize = mem::size_of::(); const HEADER_SIZE: usize = mem::size_of::(); pub const CAPACITY: usize = (Self::HOST_SIZE - Self::HEADER_SIZE) / Self::ELEMENT_SIZE; #[inline] #[must_use] unsafe fn len_const(&self) -> *const usize { (&self.data) as *const _ as *const usize } #[inline] #[must_use] pub(crate) unsafe fn len_mut(&mut self) -> *mut usize { (&mut self.data) as *mut _ as *mut usize } #[inline] #[must_use] pub(crate) unsafe fn data(&self) -> *const A { self.len_const().add(1) as *const _ as *const A } #[inline] #[must_use] unsafe fn data_mut(&mut self) -> *mut A { self.len_mut().add(1) as *mut _ as *mut A } #[inline] #[must_use] unsafe fn ptr_at(&self, index: usize) -> *const A { self.data().add(index) } #[inline] #[must_use] unsafe fn ptr_at_mut(&mut self, index: usize) -> *mut A { self.data_mut().add(index) } #[inline] unsafe fn read_at(&self, index: usize) -> A { ptr::read(self.ptr_at(index)) } #[inline] unsafe fn write_at(&mut self, index: usize, value: A) { ptr::write(self.ptr_at_mut(index), value); } /// Get the length of the array. #[inline] #[must_use] pub fn len(&self) -> usize { unsafe { *self.len_const() } } /// Test if the array is empty. #[inline] #[must_use] pub fn is_empty(&self) -> bool { self.len() == 0 } /// Test if the array is at capacity. #[inline] #[must_use] pub fn is_full(&self) -> bool { self.len() >= Self::CAPACITY } /// Construct a new empty array. #[inline] #[must_use] pub fn new() -> Self { debug_assert!(Self::HOST_SIZE > Self::HEADER_SIZE); unsafe { mem::zeroed() } } #[inline] #[must_use] fn get_unchecked(&self, index: usize) -> &A { unsafe { &*self.data().add(index) } } /// Push an item to the back of the array. /// /// Panics if the capacity of the array is exceeded. /// /// Time: O(1) pub fn push(&mut self, value: A) { if self.is_full() { panic!("InlineArray::push: chunk size overflow"); } unsafe { self.write_at(self.len(), value); *self.len_mut() += 1; } } /// Pop an item from the back of the array. /// /// Returns `None` if the array is empty. /// /// Time: O(1) pub fn pop(&mut self) -> Option { if self.is_empty() { None } else { unsafe { *self.len_mut() -= 1; } Some(unsafe { self.read_at(self.len()) }) } } /// Insert a new value at index `index`, shifting all the following values /// to the right. /// /// Panics if the index is out of bounds or the array is at capacity.
/// /// Time: O(n) for the number of items shifted pub fn insert(&mut self, index: usize, value: A) { if self.is_full() { panic!("InlineArray::insert: chunk size overflow"); } if index > self.len() { panic!("InlineArray::insert: index out of bounds"); } unsafe { let src = self.ptr_at_mut(index); ptr::copy(src, src.add(1), self.len() - index); ptr::write(src, value); *self.len_mut() += 1; } } /// Remove the value at index `index`, shifting all the following values to /// the left. /// /// Returns the removed value, or `None` if the array is empty or the index /// is out of bounds. /// /// Time: O(n) for the number of items shifted pub fn remove(&mut self, index: usize) -> Option { if index >= self.len() { None } else { unsafe { let src = self.ptr_at_mut(index); let value = ptr::read(src); *self.len_mut() -= 1; ptr::copy(src.add(1), src, self.len() - index); Some(value) } } } /// Split an array into two, the original array containing /// everything up to `index` and the returned array containing /// everything from `index` onwards. /// /// Panics if `index` is out of bounds. /// /// Time: O(n) for the number of items in the new chunk pub fn split_off(&mut self, index: usize) -> Self { if index > self.len() { panic!("InlineArray::split_off: index out of bounds"); } let mut out = Self::new(); if index < self.len() { unsafe { ptr::copy(self.ptr_at(index), out.data_mut(), self.len() - index); *out.len_mut() = self.len() - index; *self.len_mut() = index; } } out } #[inline] fn drop_contents(&mut self) { unsafe { let data = self.data_mut(); for i in 0..self.len() { ptr::drop_in_place(data.add(i)); } } } /// Discard the contents of the array. /// /// Time: O(n) pub fn clear(&mut self) { self.drop_contents(); unsafe { *self.len_mut() = 0; } } /// Construct an iterator that drains values from the front of the array.
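// A hedged usage sketch for `drain` below (the `[u64; 8]` host type is an
// assumption for illustration): draining yields elements from the front and
// leaves the array empty.
//
//     let mut array = InlineArray::<u8, [u64; 8]>::new();
//     array.extend(0..3u8);
//     assert_eq!(vec![0, 1, 2], array.drain().collect::<Vec<u8>>());
//     assert!(array.is_empty());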
pub fn drain(&mut self) -> Drain { Drain { array: self } } } impl Drop for InlineArray { fn drop(&mut self) { self.drop_contents() } } impl Default for InlineArray { fn default() -> Self { Self::new() } } // WANT: // impl Copy for InlineArray where A: Copy {} impl Clone for InlineArray where A: Clone, { fn clone(&self) -> Self { let mut copy = Self::new(); for i in 0..self.len() { unsafe { copy.write_at(i, self.get_unchecked(i).clone()); } } unsafe { *copy.len_mut() = self.len(); } copy } } impl Deref for InlineArray { type Target = [A]; fn deref(&self) -> &Self::Target { unsafe { from_raw_parts(self.data(), self.len()) } } } impl DerefMut for InlineArray { fn deref_mut(&mut self) -> &mut Self::Target { unsafe { from_raw_parts_mut(self.data_mut(), self.len()) } } } impl Borrow<[A]> for InlineArray { fn borrow(&self) -> &[A] { self.deref() } } impl BorrowMut<[A]> for InlineArray { fn borrow_mut(&mut self) -> &mut [A] { self.deref_mut() } } impl AsRef<[A]> for InlineArray { fn as_ref(&self) -> &[A] { self.deref() } } impl AsMut<[A]> for InlineArray { fn as_mut(&mut self) -> &mut [A] { self.deref_mut() } } impl PartialEq for InlineArray where Slice: Borrow<[A]>, A: PartialEq, { fn eq(&self, other: &Slice) -> bool { self.deref() == other.borrow() } } impl Eq for InlineArray where A: Eq {} impl PartialOrd for InlineArray where A: PartialOrd, { fn partial_cmp(&self, other: &Self) -> Option { self.iter().partial_cmp(other.iter()) } } impl Ord for InlineArray where A: Ord, { fn cmp(&self, other: &Self) -> Ordering { self.iter().cmp(other.iter()) } } impl Debug for InlineArray where A: Debug, { fn fmt(&self, f: &mut Formatter) -> Result<(), Error> { f.write_str("Chunk")?; f.debug_list().entries(self.iter()).finish() } } impl Hash for InlineArray where A: Hash, { fn hash(&self, hasher: &mut H) where H: Hasher, { for item in self { item.hash(hasher) } } } impl IntoIterator for InlineArray { type Item = A; type IntoIter = Iter; fn into_iter(self) -> Self::IntoIter { Iter { array: self } } } impl FromIterator for InlineArray { fn from_iter(it: I) -> Self where I: IntoIterator, { let mut chunk = Self::new(); for item in it { chunk.push(item); } chunk } } impl<'a, A, T> IntoIterator for &'a InlineArray { type Item = &'a A; type IntoIter = SliceIter<'a, A>; fn into_iter(self) -> Self::IntoIter { self.iter() } } impl<'a, A, T> IntoIterator for &'a mut InlineArray { type Item = &'a mut A; type IntoIter = SliceIterMut<'a, A>; fn into_iter(self) -> Self::IntoIter { self.iter_mut() } } impl Extend for InlineArray { /// Append the contents of the iterator to the back of the array. /// /// Panics if the array exceeds its capacity. /// /// Time: O(n) for the length of the iterator fn extend(&mut self, it: I) where I: IntoIterator, { for item in it { self.push(item); } } } impl<'a, A, T> Extend<&'a A> for InlineArray where A: 'a + Copy, { /// Append the contents of the iterator to the back of the array. /// /// Panics if the array exceeds its capacity. 
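// A minimal sketch of the two `Extend` implementations here (the host type is
// an assumption): both by-value and by-reference iterators append at the
// back, panicking if the capacity is exceeded.
//
//     let mut array = InlineArray::<u8, [u64; 8]>::new();
//     array.extend(vec![1u8, 2]);      // Extend<A>, by value
//     array.extend([3u8, 4].iter());   // Extend<&A> where A: Copy
//     assert_eq!(&[1, 2, 3, 4][..], &array[..]);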
/// /// Time: O(n) for the length of the iterator fn extend(&mut self, it: I) where I: IntoIterator, { for item in it { self.push(*item); } } } pub struct Iter { array: InlineArray, } impl Iterator for Iter { type Item = A; fn next(&mut self) -> Option { self.array.remove(0) } fn size_hint(&self) -> (usize, Option) { (self.array.len(), Some(self.array.len())) } } impl DoubleEndedIterator for Iter { fn next_back(&mut self) -> Option { self.array.pop() } } impl ExactSizeIterator for Iter {} impl FusedIterator for Iter {} pub struct Drain<'a, A, T> { array: &'a mut InlineArray, } impl<'a, A, T> Iterator for Drain<'a, A, T> { type Item = A; fn next(&mut self) -> Option { self.array.remove(0) } fn size_hint(&self) -> (usize, Option) { (self.array.len(), Some(self.array.len())) } } impl<'a, A, T> DoubleEndedIterator for Drain<'a, A, T> { fn next_back(&mut self) -> Option { self.array.pop() } } impl<'a, A, T> ExactSizeIterator for Drain<'a, A, T> {} impl<'a, A, T> FusedIterator for Drain<'a, A, T> {} sized-chunks-0.3.1/src/lib.rs010064400017500001750000000077601352006076700142660ustar0000000000000000// This Source Code Form is subject to the terms of the Mozilla Public // License, v. 2.0. If a copy of the MPL was not distributed with this // file, You can obtain one at http://mozilla.org/MPL/2.0/. //! # Sized Chunks //! //! This crate contains three fixed size low level array like data structures, //! primarily intended for use in [immutable.rs], but fully supported as a //! standalone crate. //! //! Their sizing information is encoded in the type using the //! [`typenum`][typenum] crate, which you may want to take a look at before //! reading on, but usually all you need to know about it is that it provides //! types `U1` to `U128` to represent numbers, which the data types take as type //! parameters, eg. `SparseChunk` would give you a sparse array with //! room for 32 elements of type `A`. You can also omit the size, as they all //! default to a size of 64, so `SparseChunk` would be a sparse array with a //! capacity of 64. //! //! All data structures always allocate the same amount of space, as determined //! by their capacity, regardless of how many elements they contain, and when //! they run out of space, they will panic. //! //! ## Data Structures //! //! | Type | Description | Push | Pop | Deref to `&[A]` | //! | --- | --- | --- | --- | --- | //! | [`Chunk`][Chunk] | Contiguous array | O(1)/O(n) | O(1) | Yes | //! | [`RingBuffer`][RingBuffer] | Non-contiguous array | O(1) | O(1) | No | //! | [`SparseChunk`][SparseChunk] | Sparse array | N/A | N/A | No | //! //! The [`Chunk`][Chunk] and [`RingBuffer`][RingBuffer] are very similar in //! practice, in that they both work like a plain array, except that you can //! push to either end with some expectation of performance. The difference is //! that [`RingBuffer`][RingBuffer] always allows you to do this in constant //! time, but in order to give that guarantee, it doesn't lay out its elements //! contiguously in memory, which means that you can't dereference it to a slice //! `&[A]`. //! //! [`Chunk`][Chunk], on the other hand, will shift its contents around when //! necessary to accommodate a push to a full side, but is able to guarantee a //! contiguous memory layout in this way, so it can always be dereferenced into //! a slice. Performance wise, repeated pushes to the same side will always run //! in constant time, but a push to one side followed by a push to the other //! 
side will cause the latter to run in linear time if there's no room (which //! there would only be if you've popped from that side). //! //!To choose between them, you can use the following rules: //! - I only ever want to push to the back: you don't need this crate, try //! [`ArrayVec`][ArrayVec]. //! - I need to push to either side but probably not both on the same array: use //! [`Chunk`][Chunk]. //! - I need to push to both sides and I don't need slices: use //! [`RingBuffer`][RingBuffer]. //! - I need to push to both sides but I do need slices: use [`Chunk`][Chunk]. //! //! Finally, [`SparseChunk`][SparseChunk] is a more efficient version of //! `Vec>`: each index is either inhabited or not, but instead of //! using the `Option` discriminant to decide which is which, it uses a compact //! bitmap. You can also think of `SparseChunk` as a `BTreeMap` //! where the `usize` must be less than `N`, but without the performance //! overhead. Its API is also more consistent with a map than an array - there's //! no push, pop, append, etc, just insert, remove and lookup. //! //! [immutable.rs]: https://immutable.rs/ //! [typenum]: https://docs.rs/typenum/ //! [Chunk]: struct.Chunk.html //! [RingBuffer]: struct.RingBuffer.html //! [SparseChunk]: struct.SparseChunk.html //! [ArrayVec]: https://docs.rs/arrayvec/ pub mod bitmap; pub mod inline_array; pub mod ring_buffer; pub mod sized_chunk; pub mod sparse_chunk; pub mod types; #[cfg(test)] mod tests; pub use crate::bitmap::Bitmap; pub use crate::inline_array::InlineArray; pub use crate::ring_buffer::RingBuffer; pub use crate::sized_chunk::Chunk; pub use crate::sparse_chunk::SparseChunk; sized-chunks-0.3.1/src/ring_buffer/index.rs010064400017500001750000000102551345445512400171110ustar0000000000000000// This Source Code Form is subject to the terms of the Mozilla Public // License, v. 2.0. If a copy of the MPL was not distributed with this // file, You can obtain one at http://mozilla.org/MPL/2.0/. use std::iter::FusedIterator; use std::marker::PhantomData; use std::ops::{Add, AddAssign, Sub, SubAssign}; use crate::types::ChunkLength; pub struct RawIndex>(usize, PhantomData<(A, N)>); impl> Clone for RawIndex { #[inline] #[must_use] fn clone(&self) -> Self { self.0.into() } } impl Copy for RawIndex where N: ChunkLength {} impl> RawIndex { #[inline] #[must_use] pub fn to_usize(self) -> usize { self.0 } /// Increments the index and returns a copy of the index /before/ incrementing. #[inline] #[must_use] pub fn inc(&mut self) -> Self { let old = *self; self.0 = if self.0 == N::USIZE - 1 { 0 } else { self.0 + 1 }; old } /// Decrements the index and returns a copy of the new value. 
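// An illustrative note on the wrapping semantics of `inc` above and `dec`
// below (N = U4 assumed): with capacity 4, calling `inc` on index 3 returns 3
// and leaves the index at 0, while calling `dec` on index 0 wraps the index
// to 3 and returns 3. In other words, `inc` is a post-increment (it returns
// the old value) and `dec` is a pre-decrement (it returns the new value),
// which is why their doc comments differ.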
#[inline] #[must_use] pub fn dec(&mut self) -> Self { self.0 = if self.0 == 0 { N::USIZE - 1 } else { self.0 - 1 }; *self } } impl> From for RawIndex { #[inline] #[must_use] fn from(index: usize) -> Self { debug_assert!(index < N::USIZE); RawIndex(index, PhantomData) } } impl> PartialEq for RawIndex { #[inline] #[must_use] fn eq(&self, other: &Self) -> bool { self.0 == other.0 } } impl> Eq for RawIndex {} impl> Add for RawIndex { type Output = RawIndex; #[inline] #[must_use] fn add(self, other: Self) -> Self::Output { self + other.0 } } impl> Add for RawIndex { type Output = RawIndex; #[inline] #[must_use] fn add(self, other: usize) -> Self::Output { let mut result = self.0 + other; while result >= N::USIZE { result -= N::USIZE; } result.into() } } impl> AddAssign for RawIndex { #[inline] fn add_assign(&mut self, other: usize) { self.0 += other; while self.0 >= N::USIZE { self.0 -= N::USIZE; } } } impl> Sub for RawIndex { type Output = RawIndex; #[inline] #[must_use] fn sub(self, other: Self) -> Self::Output { self - other.0 } } impl> Sub for RawIndex { type Output = RawIndex; #[inline] #[must_use] fn sub(self, other: usize) -> Self::Output { let mut start = self.0; while other > start { start += N::USIZE; } (start - other).into() } } impl> SubAssign for RawIndex { #[inline] fn sub_assign(&mut self, other: usize) { while other > self.0 { self.0 += N::USIZE; } self.0 -= other; } } pub struct IndexIter> { pub remaining: usize, pub left_index: RawIndex, pub right_index: RawIndex, } impl> Iterator for IndexIter { type Item = RawIndex; #[inline] fn next(&mut self) -> Option { if self.remaining > 0 { self.remaining -= 1; Some(self.left_index.inc()) } else { None } } #[inline] #[must_use] fn size_hint(&self) -> (usize, Option) { (self.remaining, Some(self.remaining)) } } impl> DoubleEndedIterator for IndexIter { #[inline] fn next_back(&mut self) -> Option { if self.remaining > 0 { self.remaining -= 1; Some(self.right_index.dec()) } else { None } } } impl> ExactSizeIterator for IndexIter {} impl> FusedIterator for IndexIter {} sized-chunks-0.3.1/src/ring_buffer/iter.rs010064400017500001750000000104631345512040400167350ustar0000000000000000// This Source Code Form is subject to the terms of the Mozilla Public // License, v. 2.0. If a copy of the MPL was not distributed with this // file, You can obtain one at http://mozilla.org/MPL/2.0/. use std::iter::FusedIterator; use crate::types::ChunkLength; use super::{index::RawIndex, RingBuffer}; /// A reference iterator over a `RingBuffer`. pub struct Iter<'a, A, N> where N: ChunkLength, { pub(crate) buffer: &'a RingBuffer, pub(crate) left_index: RawIndex, pub(crate) right_index: RawIndex, pub(crate) remaining: usize, } impl<'a, A, N> Iterator for Iter<'a, A, N> where N: ChunkLength, { type Item = &'a A; fn next(&mut self) -> Option { if self.remaining == 0 { None } else { self.remaining -= 1; Some(unsafe { &*self.buffer.ptr(self.left_index.inc()) }) } } #[inline] #[must_use] fn size_hint(&self) -> (usize, Option) { (self.remaining, Some(self.remaining)) } } impl<'a, A, N> DoubleEndedIterator for Iter<'a, A, N> where N: ChunkLength, { fn next_back(&mut self) -> Option { if self.remaining == 0 { None } else { self.remaining -= 1; Some(unsafe { &*self.buffer.ptr(self.right_index.dec()) }) } } } impl<'a, A, N> ExactSizeIterator for Iter<'a, A, N> where N: ChunkLength {} impl<'a, A, N> FusedIterator for Iter<'a, A, N> where N: ChunkLength {} /// A mutable reference iterator over a `RingBuffer`. 
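// A hedged usage sketch for `IterMut` below (the default `U64` size is
// assumed): it is normally obtained through `RingBuffer::iter_mut` and yields
// `&mut A` in logical order.
//
//     let mut buffer: RingBuffer<i32> = RingBuffer::new();
//     buffer.push_back(1);
//     buffer.push_back(2);
//     for value in buffer.iter_mut() {
//         *value *= 10;
//     }
//     assert_eq!(Some(&10), buffer.get(0));
//     assert_eq!(Some(&20), buffer.get(1));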
pub struct IterMut<'a, A, N> where N: ChunkLength, { pub(crate) buffer: &'a mut RingBuffer, pub(crate) left_index: RawIndex, pub(crate) right_index: RawIndex, pub(crate) remaining: usize, } impl<'a, A, N> Iterator for IterMut<'a, A, N> where N: ChunkLength, { type Item = &'a mut A; fn next(&mut self) -> Option { if self.remaining == 0 { None } else { self.remaining -= 1; Some(unsafe { &mut *self.buffer.mut_ptr(self.left_index.inc()) }) } } #[inline] #[must_use] fn size_hint(&self) -> (usize, Option) { (self.remaining, Some(self.remaining)) } } impl<'a, A, N> DoubleEndedIterator for IterMut<'a, A, N> where N: ChunkLength, { fn next_back(&mut self) -> Option { if self.remaining == 0 { None } else { self.remaining -= 1; Some(unsafe { &mut *self.buffer.mut_ptr(self.right_index.dec()) }) } } } impl<'a, A, N> ExactSizeIterator for IterMut<'a, A, N> where N: ChunkLength {} impl<'a, A, N> FusedIterator for IterMut<'a, A, N> where N: ChunkLength {} /// A draining iterator over a `RingBuffer`. pub struct Drain<'a, A: 'a, N: ChunkLength + 'a> { pub(crate) buffer: &'a mut RingBuffer, } impl<'a, A: 'a, N: ChunkLength + 'a> Iterator for Drain<'a, A, N> { type Item = A; #[inline] fn next(&mut self) -> Option { self.buffer.pop_front() } #[inline] #[must_use] fn size_hint(&self) -> (usize, Option) { (self.buffer.len(), Some(self.buffer.len())) } } impl<'a, A: 'a, N: ChunkLength + 'a> DoubleEndedIterator for Drain<'a, A, N> { #[inline] fn next_back(&mut self) -> Option { self.buffer.pop_back() } } impl<'a, A: 'a, N: ChunkLength + 'a> ExactSizeIterator for Drain<'a, A, N> {} impl<'a, A: 'a, N: ChunkLength + 'a> FusedIterator for Drain<'a, A, N> {} /// A consuming iterator over a `RingBuffer`. pub struct OwnedIter> { pub(crate) buffer: RingBuffer, } impl> Iterator for OwnedIter { type Item = A; #[inline] fn next(&mut self) -> Option { self.buffer.pop_front() } #[inline] #[must_use] fn size_hint(&self) -> (usize, Option) { (self.buffer.len(), Some(self.buffer.len())) } } impl> DoubleEndedIterator for OwnedIter { #[inline] fn next_back(&mut self) -> Option { self.buffer.pop_back() } } impl> ExactSizeIterator for OwnedIter {} impl> FusedIterator for OwnedIter {} sized-chunks-0.3.1/src/ring_buffer/mod.rs010064400017500001750000000744251350737433400165740ustar0000000000000000// This Source Code Form is subject to the terms of the Mozilla Public // License, v. 2.0. If a copy of the MPL was not distributed with this // file, You can obtain one at http://mozilla.org/MPL/2.0/. //! A fixed capacity ring buffer. //! //! See [`RingBuffer`](struct.RingBuffer.html) use std::borrow::Borrow; use std::cmp::Ordering; use std::fmt::{Debug, Error, Formatter}; use std::hash::{Hash, Hasher}; use std::iter::FromIterator; use std::mem::ManuallyDrop; use std::ops::{Bound, Range, RangeBounds}; use std::ops::{Index, IndexMut}; use typenum::U64; use crate::types::ChunkLength; mod index; use index::{IndexIter, RawIndex}; mod iter; pub use iter::{Drain, Iter, IterMut, OwnedIter}; mod slice; pub use slice::{Slice, SliceMut}; /// A fixed capacity ring buffer. /// /// A ring buffer is an array where the first logical index is at some arbitrary /// location inside the array, and the indices wrap around to the start of the /// array once they overflow its bounds. /// /// This gives us the ability to push to either the front or the end of the /// array in constant time, at the cost of losing the ability to get a single /// contiguous slice reference to the contents. 
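// A worked illustration of the wrap-around layout described above (capacity 8
// assumed): if the origin is at raw slot 6 and the buffer holds 4 items,
// logical indices 0, 1, 2, 3 map to raw slots 6, 7, 0, 1, with the addition
// wrapping modulo the capacity.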
/// /// It differs from the [`Chunk`][Chunk] in that the latter will have mostly /// constant time pushes, but may occasionally need to shift its contents around /// to make room. They both have constant time pop, and they both have linear /// time insert and remove. /// /// The `RingBuffer` offers its own [`Slice`][Slice] and [`SliceMut`][SliceMut] /// types to compensate for the loss of being able to take a slice, but they're /// somewhat less efficient, so the general rule should be that you shouldn't /// choose a `RingBuffer` if you really need to take slices - but if you don't, /// it's probably a marginally better choice overall than [`Chunk`][Chunk]. /// /// [Chunk]: ../sized_chunk/struct.Chunk.html /// [Slice]: struct.Slice.html /// [SliceMut]: struct.SliceMut.html pub struct RingBuffer where N: ChunkLength, { origin: RawIndex, length: usize, data: ManuallyDrop, } impl> Drop for RingBuffer { #[inline] fn drop(&mut self) { if std::mem::needs_drop::() { for i in self.range() { unsafe { self.force_drop(i) } } } } } impl RingBuffer where N: ChunkLength, { /// The capacity of this ring buffer, as a `usize`. pub const CAPACITY: usize = N::USIZE; /// Get the raw index for a logical index. #[inline] fn raw(&self, index: usize) -> RawIndex { self.origin + index } #[inline] unsafe fn ptr(&self, index: RawIndex) -> *const A { debug_assert!(index.to_usize() < Self::CAPACITY); (&self.data as *const _ as *const A).add(index.to_usize()) } #[inline] unsafe fn mut_ptr(&mut self, index: RawIndex) -> *mut A { debug_assert!(index.to_usize() < Self::CAPACITY); (&mut self.data as *mut _ as *mut A).add(index.to_usize()) } /// Drop the value at a raw index. #[inline] unsafe fn force_drop(&mut self, index: RawIndex) { std::ptr::drop_in_place(self.mut_ptr(index)) } /// Copy the value at a raw index, discarding ownership of the copied value #[inline] unsafe fn force_read(&self, index: RawIndex) -> A { std::ptr::read(self.ptr(index)) } /// Write a value at a raw index without trying to drop what's already there #[inline] unsafe fn force_write(&mut self, index: RawIndex, value: A) { std::ptr::write(self.mut_ptr(index), value) } /// Copy a range of raw indices from another buffer. unsafe fn copy_from( &mut self, source: &mut Self, from: RawIndex, to: RawIndex, count: usize, ) { #[inline] unsafe fn force_copy_to>( source: &mut RingBuffer, from: RawIndex, target: &mut RingBuffer, to: RawIndex, count: usize, ) { if count > 0 { debug_assert!(from.to_usize() + count <= RingBuffer::::CAPACITY); debug_assert!(to.to_usize() + count <= RingBuffer::::CAPACITY); std::ptr::copy_nonoverlapping(source.mut_ptr(from), target.mut_ptr(to), count) } } if from.to_usize() + count > Self::CAPACITY { let first_length = Self::CAPACITY - from.to_usize(); let last_length = count - first_length; self.copy_from(source, from, to, first_length); self.copy_from(source, 0.into(), to + first_length, last_length); } else if to.to_usize() + count > Self::CAPACITY { let first_length = Self::CAPACITY - to.to_usize(); let last_length = count - first_length; force_copy_to(source, from, self, to, first_length); force_copy_to(source, from + first_length, self, 0.into(), last_length); } else { force_copy_to(source, from, self, to, count); } } /// Copy values from a slice. 
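// A worked example of the wrapping copy in `copy_from_slice` below (capacity
// 8 assumed): writing a 4-element slice starting at raw index 6 splits into
// two non-overlapping copies, elements 0..2 into slots 6..8 and elements 2..4
// into slots 0..2, mirroring the two-branch logic of `copy_from` above.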
unsafe fn copy_from_slice(&mut self, source: &[A], to: RawIndex) { let count = source.len(); debug_assert!(to.to_usize() + count <= Self::CAPACITY); if to.to_usize() + count > Self::CAPACITY { let first_length = Self::CAPACITY - to.to_usize(); let first_slice = &source[..first_length]; let last_slice = &source[first_length..]; std::ptr::copy_nonoverlapping( first_slice.as_ptr(), self.mut_ptr(to), first_slice.len(), ); std::ptr::copy_nonoverlapping( last_slice.as_ptr(), self.mut_ptr(0.into()), last_slice.len(), ); } else { std::ptr::copy_nonoverlapping(source.as_ptr(), self.mut_ptr(to), count) } } /// Get an iterator over the raw indices of the buffer from left to right. #[inline] fn range(&self) -> IndexIter { IndexIter { remaining: self.len(), left_index: self.origin, right_index: self.origin + self.len(), } } /// Construct an empty ring buffer. #[inline] #[must_use] pub fn new() -> Self { let mut buffer: Self; unsafe { buffer = std::mem::zeroed(); std::ptr::write(&mut buffer.origin, 0.into()); std::ptr::write(&mut buffer.length, 0); } buffer } /// Construct a ring buffer with a single item. #[inline] #[must_use] pub fn unit(value: A) -> Self { let mut buffer: Self; unsafe { buffer = std::mem::zeroed(); std::ptr::write(&mut buffer.origin, 0.into()); std::ptr::write(&mut buffer.length, 1); buffer.force_write(0.into(), value); } buffer } /// Construct a ring buffer with two items. #[inline] #[must_use] pub fn pair(value1: A, value2: A) -> Self { let mut buffer: Self; unsafe { buffer = std::mem::zeroed(); std::ptr::write(&mut buffer.origin, 0.into()); std::ptr::write(&mut buffer.length, 2); buffer.force_write(0.into(), value1); buffer.force_write(1.into(), value2); } buffer } /// Construct a new ring buffer and move every item from `other` into the /// new buffer. /// /// Time: O(n) #[inline] #[must_use] pub fn drain_from(other: &mut Self) -> Self { Self::from_front(other, other.len()) } /// Construct a new ring buffer and populate it by taking `count` items from /// the iterator `iter`. /// /// Panics if the iterator contains less than `count` items. /// /// Time: O(n) #[must_use] pub fn collect_from(iter: &mut I, count: usize) -> Self where I: Iterator, { let buffer = Self::from_iter(iter.take(count)); if buffer.len() < count { panic!("RingBuffer::collect_from: underfull iterator"); } buffer } /// Construct a new ring buffer and populate it by taking `count` items from /// the front of `other`. /// /// Time: O(n) for the number of items moved #[must_use] pub fn from_front(other: &mut Self, count: usize) -> Self { let mut buffer = Self::new(); buffer.drain_from_front(other, count); buffer } /// Construct a new ring buffer and populate it by taking `count` items from /// the back of `other`. /// /// Time: O(n) for the number of items moved #[must_use] pub fn from_back(other: &mut Self, count: usize) -> Self { let mut buffer = Self::new(); buffer.drain_from_back(other, count); buffer } /// Get the length of the ring buffer. #[inline] #[must_use] pub fn len(&self) -> usize { self.length } /// Test if the ring buffer is empty. #[inline] #[must_use] pub fn is_empty(&self) -> bool { self.len() == 0 } /// Test if the ring buffer is full. #[inline] #[must_use] pub fn is_full(&self) -> bool { self.len() == Self::CAPACITY } /// Get an iterator over references to the items in the ring buffer in /// order. 
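    ///
    /// For example (a sketch; iteration runs from front to back):
    ///
    /// ```rust
    /// # use sized_chunks::ring_buffer::RingBuffer;
    /// # use typenum::U64;
    /// let buffer: RingBuffer<i32, U64> = (1..4).collect();
    /// let values: Vec<i32> = buffer.iter().cloned().collect();
    /// assert_eq!(vec![1, 2, 3], values);
    /// ```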
#[inline] #[must_use] pub fn iter(&self) -> Iter<'_, A, N> { Iter { buffer: self, left_index: self.origin, right_index: self.origin + self.len(), remaining: self.len(), } } /// Get an iterator over mutable references to the items in the ring buffer /// in order. #[inline] #[must_use] pub fn iter_mut(&mut self) -> IterMut<'_, A, N> { IterMut { left_index: self.origin, right_index: self.origin + self.len(), remaining: self.len(), buffer: self, } } #[must_use] fn parse_range>(&self, range: R) -> Range { let new_range = Range { start: match range.start_bound() { Bound::Unbounded => 0, Bound::Included(index) => *index, Bound::Excluded(_) => unimplemented!(), }, end: match range.end_bound() { Bound::Unbounded => self.len(), Bound::Included(index) => *index + 1, Bound::Excluded(index) => *index, }, }; if new_range.end > self.len() || new_range.start > new_range.end { panic!("Slice::parse_range: index out of bounds"); } new_range } /// Get a `Slice` for a subset of the ring buffer. #[must_use] pub fn slice>(&self, range: R) -> Slice { Slice { buffer: self, range: self.parse_range(range), } } /// Get a `SliceMut` for a subset of the ring buffer. #[must_use] pub fn slice_mut>(&mut self, range: R) -> SliceMut { SliceMut { range: self.parse_range(range), buffer: self, } } /// Get a reference to the value at a given index. #[must_use] pub fn get(&self, index: usize) -> Option<&A> { if index >= self.len() { None } else { Some(unsafe { &*self.ptr(self.raw(index)) }) } } /// Get a mutable reference to the value at a given index. #[must_use] pub fn get_mut(&mut self, index: usize) -> Option<&mut A> { if index >= self.len() { None } else { Some(unsafe { &mut *self.mut_ptr(self.raw(index)) }) } } /// Get a reference to the first value in the buffer. #[inline] #[must_use] pub fn first(&self) -> Option<&A> { self.get(0) } /// Get a mutable reference to the first value in the buffer. #[inline] #[must_use] pub fn first_mut(&mut self) -> Option<&mut A> { self.get_mut(0) } /// Get a reference to the last value in the buffer. #[inline] #[must_use] pub fn last(&self) -> Option<&A> { if self.is_empty() { None } else { self.get(self.len() - 1) } } /// Get a mutable reference to the last value in the buffer. #[inline] #[must_use] pub fn last_mut(&mut self) -> Option<&mut A> { if self.is_empty() { None } else { self.get_mut(self.len() - 1) } } /// Push a value to the back of the buffer. /// /// Panics if the capacity of the buffer is exceeded. /// /// Time: O(1) pub fn push_back(&mut self, value: A) { if self.is_full() { panic!("RingBuffer::push_back: can't push to a full buffer") } else { unsafe { self.force_write(self.raw(self.length), value) } self.length += 1; } } /// Push a value to the front of the buffer. /// /// Panics if the capacity of the buffer is exceeded. /// /// Time: O(1) pub fn push_front(&mut self, value: A) { if self.is_full() { panic!("RingBuffer::push_front: can't push to a full buffer") } else { let origin = self.origin.dec(); self.length += 1; unsafe { self.force_write(origin, value) } } } /// Pop a value from the back of the buffer. /// /// Returns `None` if the buffer is empty. /// /// Time: O(1) pub fn pop_back(&mut self) -> Option { if self.is_empty() { None } else { self.length -= 1; Some(unsafe { self.force_read(self.raw(self.length)) }) } } /// Pop a value from the front of the buffer. /// /// Returns `None` if the buffer is empty. 
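    ///
    /// For example (a sketch):
    ///
    /// ```rust
    /// # use sized_chunks::ring_buffer::RingBuffer;
    /// # use typenum::U64;
    /// let mut buffer: RingBuffer<i32, U64> = (1..3).collect();
    /// assert_eq!(Some(1), buffer.pop_front());
    /// assert_eq!(Some(2), buffer.pop_front());
    /// assert_eq!(None, buffer.pop_front());
    /// ```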
/// /// Time: O(1) pub fn pop_front(&mut self) -> Option { if self.is_empty() { None } else { self.length -= 1; let index = self.origin.inc(); Some(unsafe { self.force_read(index) }) } } /// Discard all items up to but not including `index`. /// /// Panics if `index` is out of bounds. /// /// Time: O(n) for the number of items dropped pub fn drop_left(&mut self, index: usize) { if index > 0 { if index > self.len() { panic!("RingBuffer::drop_left: index out of bounds"); } for i in self.range().take(index) { unsafe { self.force_drop(i) } } self.origin += index; self.length -= index; } } /// Discard all items from `index` onward. /// /// Panics if `index` is out of bounds. /// /// Time: O(n) for the number of items dropped pub fn drop_right(&mut self, index: usize) { if index > self.len() { panic!("RingBuffer::drop_right: index out of bounds"); } if index == self.len() { return; } for i in self.range().skip(index) { unsafe { self.force_drop(i) } } self.length = index; } /// Split a buffer into two, the original buffer containing /// everything up to `index` and the returned buffer containing /// everything from `index` onwards. /// /// Panics if `index` is out of bounds. /// /// Time: O(n) for the number of items in the new buffer #[must_use] pub fn split_off(&mut self, index: usize) -> Self { if index > self.len() { panic!("RingBuffer::split: index out of bounds"); } if index == self.len() { return Self::new(); } let mut right = Self::new(); let length = self.length - index; unsafe { right.copy_from(self, self.raw(index), 0.into(), length) }; self.length = index; right.length = length; right } /// Remove all items from `other` and append them to the back of `self`. /// /// Panics if the capacity of `self` is exceeded. /// /// `other` will be an empty buffer after this operation. /// /// Time: O(n) for the number of items moved #[inline] pub fn append(&mut self, other: &mut Self) { self.drain_from_front(other, other.len()); } /// Remove `count` items from the front of `other` and append them to the /// back of `self`. /// /// Panics if `self` doesn't have `count` items left, or if `other` has /// fewer than `count` items. /// /// Time: O(n) for the number of items moved pub fn drain_from_front(&mut self, other: &mut Self, count: usize) { let self_len = self.len(); let other_len = other.len(); if self_len + count > Self::CAPACITY { panic!("RingBuffer::drain_from_front: chunk size overflow"); } if other_len < count { panic!("RingBuffer::drain_from_front: index out of bounds"); } unsafe { self.copy_from(other, other.origin, self.raw(self.len()), count) }; other.origin += count; other.length -= count; self.length += count; } /// Remove `count` items from the back of `other` and append them to the /// front of `self`. /// /// Panics if `self` doesn't have `count` items left, or if `other` has /// fewer than `count` items. /// /// Time: O(n) for the number of items moved pub fn drain_from_back(&mut self, other: &mut Self, count: usize) { let self_len = self.len(); let other_len = other.len(); if self_len + count > Self::CAPACITY { panic!("RingBuffer::drain_from_back: chunk size overflow"); } if other_len < count { panic!("RingBuffer::drain_from_back: index out of bounds"); } self.origin -= count; let source_index = other.origin + (other.len() - count); unsafe { self.copy_from(other, source_index, self.origin, count) }; other.length -= count; self.length += count; } /// Update the value at index `index`, returning the old value. /// /// Panics if `index` is out of bounds. 
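    ///
    /// For example (a sketch):
    ///
    /// ```rust
    /// # use sized_chunks::ring_buffer::RingBuffer;
    /// # use typenum::U64;
    /// let mut buffer: RingBuffer<i32, U64> = (0..3).collect();
    /// assert_eq!(1, buffer.set(1, 10));
    /// assert_eq!(Some(&10), buffer.get(1));
    /// ```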
/// /// Time: O(1) pub fn set(&mut self, index: usize, value: A) -> A { std::mem::replace(&mut self[index], value) } /// Insert a new value at index `index`, shifting all the following values /// to the right. /// /// Panics if the index is out of bounds. /// /// Time: O(n) for the number of items shifted pub fn insert(&mut self, index: usize, value: A) { if self.is_full() { panic!("RingBuffer::insert: chunk size overflow"); } if index > self.len() { panic!("RingBuffer::insert: index out of bounds"); } if index == 0 { return self.push_front(value); } if index == self.len() { return self.push_back(value); } let right_count = self.len() - index; // Check which side has fewer elements to shift. if right_count < index { // Shift to the right. let mut i = self.raw(self.len() - 1); let target = self.raw(index); while i != target { unsafe { self.force_write(i + 1, self.force_read(i)) }; i -= 1; } unsafe { self.force_write(target + 1, self.force_read(target)) }; self.length += 1; } else { // Shift to the left. self.origin -= 1; self.length += 1; for i in self.range().take(index) { unsafe { self.force_write(i, self.force_read(i + 1)) }; } } unsafe { self.force_write(self.raw(index), value) }; } /// Remove the value at index `index`, shifting all the following values to /// the left. /// /// Returns the removed value. /// /// Panics if the index is out of bounds. /// /// Time: O(n) for the number of items shifted pub fn remove(&mut self, index: usize) -> A { if index >= self.len() { panic!("RingBuffer::remove: index out of bounds"); } let value = unsafe { self.force_read(self.raw(index)) }; let right_count = self.len() - index; // Check which side has fewer elements to shift. if right_count < index { // Shift from the right. self.length -= 1; let mut i = self.raw(index); let target = self.raw(self.len()); while i != target { unsafe { self.force_write(i, self.force_read(i + 1)) }; i += 1; } } else { // Shift from the left. let mut i = self.raw(index); while i != self.origin { unsafe { self.force_write(i, self.force_read(i - 1)) }; i -= 1; } self.origin += 1; self.length -= 1; } value } /// Construct an iterator that drains values from the front of the buffer. pub fn drain(&mut self) -> Drain<'_, A, N> { Drain { buffer: self } } /// Discard the contents of the buffer. 
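    ///
    /// For example (a sketch; every remaining item is dropped):
    ///
    /// ```rust
    /// # use sized_chunks::ring_buffer::RingBuffer;
    /// # use typenum::U64;
    /// let mut buffer: RingBuffer<i32, U64> = (0..8).collect();
    /// buffer.clear();
    /// assert!(buffer.is_empty());
    /// ```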
/// /// Time: O(n) pub fn clear(&mut self) { for i in self.range() { unsafe { self.force_drop(i) }; } self.origin = 0.into(); self.length = 0; } } impl> Default for RingBuffer { #[inline] #[must_use] fn default() -> Self { Self::new() } } impl> Clone for RingBuffer { fn clone(&self) -> Self { let mut out = Self::new(); out.origin = self.origin; out.length = self.length; for index in out.range() { unsafe { out.force_write(index, (&*self.ptr(index)).clone()) }; } out } } impl Index for RingBuffer where N: ChunkLength, { type Output = A; #[must_use] fn index(&self, index: usize) -> &Self::Output { if index >= self.len() { panic!( "RingBuffer::index: index out of bounds {} >= {}", index, self.len() ); } unsafe { &*self.ptr(self.raw(index)) } } } impl IndexMut for RingBuffer where N: ChunkLength, { #[must_use] fn index_mut(&mut self, index: usize) -> &mut Self::Output { if index >= self.len() { panic!( "RingBuffer::index_mut: index out of bounds {} >= {}", index, self.len() ); } unsafe { &mut *self.mut_ptr(self.raw(index)) } } } impl> PartialEq for RingBuffer { #[inline] #[must_use] fn eq(&self, other: &Self) -> bool { self.len() == other.len() && self.iter().eq(other.iter()) } } impl PartialEq for RingBuffer where Slice: Borrow<[A]>, A: PartialEq, N: ChunkLength, { #[inline] #[must_use] fn eq(&self, other: &Slice) -> bool { let other = other.borrow(); self.len() == other.len() && self.iter().eq(other.iter()) } } impl> Eq for RingBuffer {} impl> PartialOrd for RingBuffer { #[inline] #[must_use] fn partial_cmp(&self, other: &Self) -> Option { self.iter().partial_cmp(other.iter()) } } impl> Ord for RingBuffer { #[inline] #[must_use] fn cmp(&self, other: &Self) -> Ordering { self.iter().cmp(other.iter()) } } impl> Extend for RingBuffer { #[inline] fn extend>(&mut self, iter: I) { for item in iter { self.push_back(item); } } } impl<'a, A: Clone + 'a, N: ChunkLength> Extend<&'a A> for RingBuffer { #[inline] fn extend>(&mut self, iter: I) { for item in iter { self.push_back(item.clone()); } } } impl> Debug for RingBuffer { fn fmt(&self, f: &mut Formatter) -> Result<(), Error> { f.write_str("RingBuffer")?; f.debug_list().entries(self.iter()).finish() } } impl> Hash for RingBuffer { #[inline] fn hash(&self, hasher: &mut H) { for item in self { item.hash(hasher) } } } impl> std::io::Write for RingBuffer { fn write(&mut self, mut buf: &[u8]) -> std::io::Result { let max_new = Self::CAPACITY - self.len(); if buf.len() > max_new { buf = &buf[..max_new]; } unsafe { self.copy_from_slice(buf, self.origin + self.len()) }; self.length += buf.len(); Ok(buf.len()) } #[inline] fn flush(&mut self) -> std::io::Result<()> { Ok(()) } } impl> std::io::Read for RingBuffer { fn read(&mut self, buf: &mut [u8]) -> std::io::Result { let read_size = buf.len().min(self.len()); if read_size == 0 { Ok(0) } else { for p in buf.iter_mut().take(read_size) { *p = self.pop_front().unwrap(); } Ok(read_size) } } } impl> FromIterator for RingBuffer { #[must_use] fn from_iter>(iter: I) -> Self { let mut buffer = RingBuffer::new(); buffer.extend(iter); buffer } } impl> IntoIterator for RingBuffer { type Item = A; type IntoIter = OwnedIter; #[inline] #[must_use] fn into_iter(self) -> Self::IntoIter { OwnedIter { buffer: self } } } impl<'a, A, N: ChunkLength> IntoIterator for &'a RingBuffer { type Item = &'a A; type IntoIter = Iter<'a, A, N>; #[inline] #[must_use] fn into_iter(self) -> Self::IntoIter { self.iter() } } impl<'a, A, N: ChunkLength> IntoIterator for &'a mut RingBuffer { type Item = &'a mut A; type IntoIter = IterMut<'a, A, N>; 
#[inline] #[must_use] fn into_iter(self) -> Self::IntoIter { self.iter_mut() } } // Tests #[cfg(test)] mod test { use super::*; #[test] fn is_full() { let mut chunk = RingBuffer::<_, U64>::new(); for i in 0..64 { assert_eq!(false, chunk.is_full()); chunk.push_back(i); } assert_eq!(true, chunk.is_full()); } #[test] fn ref_iter() { let chunk: RingBuffer = (0..64).collect(); let out_vec: Vec<&i32> = chunk.iter().collect(); let should_vec_p: Vec = (0..64).collect(); let should_vec: Vec<&i32> = should_vec_p.iter().collect(); assert_eq!(should_vec, out_vec); } #[test] fn mut_ref_iter() { let mut chunk: RingBuffer = (0..64).collect(); let out_vec: Vec<&mut i32> = chunk.iter_mut().collect(); let mut should_vec_p: Vec = (0..64).collect(); let should_vec: Vec<&mut i32> = should_vec_p.iter_mut().collect(); assert_eq!(should_vec, out_vec); } #[test] fn consuming_iter() { let chunk: RingBuffer = (0..64).collect(); let out_vec: Vec = chunk.into_iter().collect(); let should_vec: Vec = (0..64).collect(); assert_eq!(should_vec, out_vec); } #[test] fn draining_iter() { let mut chunk: RingBuffer = (0..64).collect(); let mut half: RingBuffer = chunk.drain().take(16).collect(); half.extend(chunk.drain().rev().take(16)); let should: Vec = (16..48).collect(); assert_eq!(chunk, should); let should: Vec = (0..16).chain((48..64).rev()).collect(); assert_eq!(half, should); } #[test] fn io_write() { use std::io::Write; let mut buffer: RingBuffer = (0..32).collect(); let to_write: Vec = (32..128).collect(); assert_eq!(32, buffer.write(&to_write).unwrap()); assert_eq!(buffer, (0..64).collect::>()); } #[test] fn io_read() { use std::io::Read; let mut buffer: RingBuffer = (16..48).collect(); let mut read_buf: Vec = (0..16).collect(); assert_eq!(16, buffer.read(&mut read_buf).unwrap()); assert_eq!(read_buf, (16..32).collect::>()); assert_eq!(buffer, (32..48).collect::>()); assert_eq!(16, buffer.read(&mut read_buf).unwrap()); assert_eq!(read_buf, (32..48).collect::>()); assert_eq!(buffer, vec![]); assert_eq!(0, buffer.read(&mut read_buf).unwrap()); } #[test] fn clone() { let buffer: RingBuffer = (0..50).collect(); assert_eq!(buffer, buffer.clone()); } #[test] fn failing() { let mut buffer: RingBuffer = RingBuffer::new(); buffer.push_front(0); let mut add: RingBuffer = vec![1, 0, 0, 0, 0, 0].into_iter().collect(); buffer.append(&mut add); assert_eq!(1, buffer.remove(1)); let expected = vec![0, 0, 0, 0, 0, 0]; assert_eq!(buffer, expected); } use std::sync::atomic::{AtomicUsize, Ordering}; struct DropTest<'a> { counter: &'a AtomicUsize, } impl<'a> DropTest<'a> { fn new(counter: &'a AtomicUsize) -> Self { counter.fetch_add(1, Ordering::Relaxed); DropTest { counter } } } impl<'a> Drop for DropTest<'a> { fn drop(&mut self) { self.counter.fetch_sub(1, Ordering::Relaxed); } } #[test] fn dropping() { let counter = AtomicUsize::new(0); { let mut chunk: RingBuffer = RingBuffer::new(); for _i in 0..20 { chunk.push_back(DropTest::new(&counter)) } for _i in 0..20 { chunk.push_front(DropTest::new(&counter)) } assert_eq!(40, counter.load(Ordering::Relaxed)); for _i in 0..10 { chunk.pop_back(); } assert_eq!(30, counter.load(Ordering::Relaxed)); } assert_eq!(0, counter.load(Ordering::Relaxed)); } } sized-chunks-0.3.1/src/ring_buffer/slice.rs010064400017500001750000000363251346530063300171030ustar0000000000000000// This Source Code Form is subject to the terms of the Mozilla Public // License, v. 2.0. If a copy of the MPL was not distributed with this // file, You can obtain one at http://mozilla.org/MPL/2.0/. 
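
//! Slice-like views over a `RingBuffer`.
//!
//! See [`Slice`](struct.Slice.html) and [`SliceMut`](struct.SliceMut.html).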
use std::borrow::Borrow; use std::cmp::Ordering; use std::fmt::Debug; use std::fmt::Error; use std::fmt::Formatter; use std::hash::Hash; use std::hash::Hasher; use std::ops::IndexMut; use std::ops::{Bound, Index, Range, RangeBounds}; use crate::types::ChunkLength; use super::{Iter, IterMut, RingBuffer}; /// An indexable representation of a subset of a `RingBuffer`. pub struct Slice<'a, A: 'a, N: ChunkLength + 'a> { pub(crate) buffer: &'a RingBuffer, pub(crate) range: Range, } impl<'a, A: 'a, N: ChunkLength + 'a> Slice<'a, A, N> { /// Get the length of the slice. #[inline] #[must_use] pub fn len(&self) -> usize { self.range.end - self.range.start } /// Test if the slice is empty. #[inline] #[must_use] pub fn is_empty(&self) -> bool { self.len() == 0 } /// Get a reference to the value at a given index. #[inline] #[must_use] pub fn get(&self, index: usize) -> Option<&'a A> { if index >= self.len() { None } else { self.buffer.get(self.range.start + index) } } /// Get a reference to the first value in the slice. #[inline] #[must_use] pub fn first(&self) -> Option<&A> { self.get(0) } /// Get a reference to the last value in the slice. #[inline] #[must_use] pub fn last(&self) -> Option<&A> { if self.is_empty() { None } else { self.get(self.len() - 1) } } /// Get an iterator over references to the items in the slice in order. #[inline] #[must_use] pub fn iter(&self) -> Iter { Iter { buffer: self.buffer, left_index: self.buffer.origin + self.range.start, right_index: self.buffer.origin + self.range.start + self.len(), remaining: self.len(), } } /// Create a subslice of this slice. /// /// This consumes the slice. To create a subslice without consuming it, /// clone it first: `my_slice.clone().slice(1..2)`. #[must_use] pub fn slice>(self, range: R) -> Slice<'a, A, N> { let new_range = Range { start: match range.start_bound() { Bound::Unbounded => self.range.start, Bound::Included(index) => self.range.start + index, Bound::Excluded(_) => unimplemented!(), }, end: match range.end_bound() { Bound::Unbounded => self.range.end, Bound::Included(index) => self.range.start + index + 1, Bound::Excluded(index) => self.range.start + index, }, }; if new_range.start < self.range.start || new_range.end > self.range.end || new_range.start > new_range.end { panic!("Slice::slice: index out of bounds"); } Slice { buffer: self.buffer, range: new_range, } } /// Split the slice into two subslices at the given index. #[must_use] pub fn split_at(self, index: usize) -> (Slice<'a, A, N>, Slice<'a, A, N>) { if index > self.len() { panic!("Slice::split_at: index out of bounds"); } let index = self.range.start + index; ( Slice { buffer: self.buffer, range: Range { start: self.range.start, end: index, }, }, Slice { buffer: self.buffer, range: Range { start: index, end: self.range.end, }, }, ) } /// Construct a new `RingBuffer` by copying the elements in this slice. 
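    ///
    /// A sketch of the intended usage:
    ///
    /// ```rust
    /// # use sized_chunks::ring_buffer::RingBuffer;
    /// # use typenum::U64;
    /// let buffer: RingBuffer<i32, U64> = (0..8).collect();
    /// let copied: RingBuffer<i32, U64> = buffer.slice(2..6).to_owned();
    /// assert_eq!(copied, (2..6).collect::<RingBuffer<i32, U64>>());
    /// ```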
#[inline] #[must_use] pub fn to_owned(&self) -> RingBuffer where A: Clone, { self.iter().cloned().collect() } } impl<'a, A: 'a, N: ChunkLength + 'a> From<&'a RingBuffer> for Slice<'a, A, N> { #[inline] #[must_use] fn from(buffer: &'a RingBuffer) -> Self { Slice { range: Range { start: 0, end: buffer.len(), }, buffer, } } } impl<'a, A: 'a, N: ChunkLength + 'a> Clone for Slice<'a, A, N> { #[inline] #[must_use] fn clone(&self) -> Self { Slice { buffer: self.buffer, range: self.range.clone(), } } } impl<'a, A: 'a, N: ChunkLength + 'a> Index for Slice<'a, A, N> { type Output = A; #[inline] #[must_use] fn index(&self, index: usize) -> &Self::Output { self.buffer.index(self.range.start + index) } } impl<'a, A: PartialEq + 'a, N: ChunkLength + 'a> PartialEq for Slice<'a, A, N> { #[inline] #[must_use] fn eq(&self, other: &Self) -> bool { self.len() == other.len() && self.iter().eq(other.iter()) } } impl<'a, A: PartialEq + 'a, N: ChunkLength + 'a, S> PartialEq for Slice<'a, A, N> where S: Borrow<[A]>, { #[inline] #[must_use] fn eq(&self, other: &S) -> bool { let other = other.borrow(); self.len() == other.len() && self.iter().eq(other.iter()) } } impl<'a, A: Eq + 'a, N: ChunkLength + 'a> Eq for Slice<'a, A, N> {} impl<'a, A: PartialOrd + 'a, N: ChunkLength + 'a> PartialOrd for Slice<'a, A, N> { #[inline] #[must_use] fn partial_cmp(&self, other: &Self) -> Option { self.iter().partial_cmp(other.iter()) } } impl<'a, A: Ord + 'a, N: ChunkLength + 'a> Ord for Slice<'a, A, N> { #[inline] #[must_use] fn cmp(&self, other: &Self) -> Ordering { self.iter().cmp(other.iter()) } } impl<'a, A: Debug + 'a, N: ChunkLength + 'a> Debug for Slice<'a, A, N> { fn fmt(&self, f: &mut Formatter) -> Result<(), Error> { f.write_str("RingBuffer")?; f.debug_list().entries(self.iter()).finish() } } impl<'a, A: Hash + 'a, N: ChunkLength + 'a> Hash for Slice<'a, A, N> { #[inline] fn hash(&self, hasher: &mut H) { for item in self { item.hash(hasher) } } } impl<'a, A: 'a, N: ChunkLength + 'a> IntoIterator for &'a Slice<'a, A, N> { type Item = &'a A; type IntoIter = Iter<'a, A, N>; #[inline] #[must_use] fn into_iter(self) -> Self::IntoIter { self.iter() } } // Mutable slice /// An indexable representation of a mutable subset of a `RingBuffer`. pub struct SliceMut<'a, A: 'a, N: ChunkLength + 'a> { pub(crate) buffer: &'a mut RingBuffer, pub(crate) range: Range, } impl<'a, A: 'a, N: ChunkLength + 'a> SliceMut<'a, A, N> { /// Downgrade this slice into a non-mutable slice. #[inline] #[must_use] pub fn unmut(self) -> Slice<'a, A, N> { Slice { buffer: self.buffer, range: self.range, } } /// Get the length of the slice. #[inline] #[must_use] pub fn len(&self) -> usize { self.range.end - self.range.start } /// Test if the slice is empty. #[inline] #[must_use] pub fn is_empty(&self) -> bool { self.len() == 0 } /// Get a reference to the value at a given index. #[inline] #[must_use] pub fn get(&self, index: usize) -> Option<&'a A> { if index >= self.len() { None } else { self.buffer .get(self.range.start + index) .map(|r| unsafe { &*(r as *const _) }) } } /// Get a mutable reference to the value at a given index. #[inline] #[must_use] pub fn get_mut(&mut self, index: usize) -> Option<&'a mut A> { if index >= self.len() { None } else { self.buffer .get_mut(self.range.start + index) .map(|r| unsafe { &mut *(r as *mut _) }) } } /// Get a reference to the first value in the slice. #[inline] #[must_use] pub fn first(&self) -> Option<&A> { self.get(0) } /// Get a mutable reference to the first value in the slice. 
#[inline] #[must_use] pub fn first_mut(&mut self) -> Option<&mut A> { self.get_mut(0) } /// Get a reference to the last value in the slice. #[inline] #[must_use] pub fn last(&self) -> Option<&A> { if self.is_empty() { None } else { self.get(self.len() - 1) } } /// Get a mutable reference to the last value in the slice. #[inline] #[must_use] pub fn last_mut(&mut self) -> Option<&mut A> { if self.is_empty() { None } else { self.get_mut(self.len() - 1) } } /// Get an iterator over references to the items in the slice in order. #[inline] #[must_use] pub fn iter(&self) -> Iter { Iter { buffer: self.buffer, left_index: self.buffer.origin + self.range.start, right_index: self.buffer.origin + self.range.start + self.len(), remaining: self.len(), } } /// Get an iterator over mutable references to the items in the slice in /// order. #[inline] #[must_use] pub fn iter_mut(&mut self) -> IterMut { let origin = self.buffer.origin; let len = self.len(); IterMut { buffer: self.buffer, left_index: origin + self.range.start, right_index: origin + self.range.start + len, remaining: len, } } /// Create a subslice of this slice. /// /// This consumes the slice. Because the slice works like a mutable /// reference, you can only have one slice over a given subset of a /// `RingBuffer` at any one time, so that's just how it's got to be. #[must_use] pub fn slice>(self, range: R) -> SliceMut<'a, A, N> { let new_range = Range { start: match range.start_bound() { Bound::Unbounded => self.range.start, Bound::Included(index) => self.range.start + index, Bound::Excluded(_) => unimplemented!(), }, end: match range.end_bound() { Bound::Unbounded => self.range.end, Bound::Included(index) => self.range.start + index + 1, Bound::Excluded(index) => self.range.start + index, }, }; if new_range.start < self.range.start || new_range.end > self.range.end || new_range.start > new_range.end { panic!("Slice::slice: index out of bounds"); } SliceMut { buffer: self.buffer, range: new_range, } } /// Split the slice into two subslices at the given index. #[must_use] pub fn split_at(self, index: usize) -> (SliceMut<'a, A, N>, SliceMut<'a, A, N>) { if index > self.len() { panic!("SliceMut::split_at: index out of bounds"); } let index = self.range.start + index; let ptr: *mut RingBuffer = self.buffer; ( SliceMut { buffer: unsafe { &mut *ptr }, range: Range { start: self.range.start, end: index, }, }, SliceMut { buffer: unsafe { &mut *ptr }, range: Range { start: index, end: self.range.end, }, }, ) } /// Update the value at index `index`, returning the old value. /// /// Panics if `index` is out of bounds. #[inline] #[must_use] pub fn set(&mut self, index: usize, value: A) -> A { if index >= self.len() { panic!("SliceMut::set: index out of bounds"); } else { self.buffer.set(self.range.start + index, value) } } /// Construct a new `RingBuffer` by copying the elements in this slice. 
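    ///
    /// A sketch (the copy is unaffected by later writes through the slice):
    ///
    /// ```rust
    /// # use sized_chunks::ring_buffer::RingBuffer;
    /// # use typenum::U64;
    /// let mut buffer: RingBuffer<i32, U64> = (0..4).collect();
    /// let mut slice = buffer.slice_mut(1..3);
    /// let copy = slice.to_owned();
    /// let _old = slice.set(0, 99);
    /// assert_eq!(copy, (1..3).collect::<RingBuffer<i32, U64>>());
    /// assert_eq!(Some(&99), buffer.get(1));
    /// ```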
#[inline] #[must_use] pub fn to_owned(&self) -> RingBuffer where A: Clone, { self.iter().cloned().collect() } } impl<'a, A: 'a, N: ChunkLength + 'a> From<&'a mut RingBuffer> for SliceMut<'a, A, N> { #[must_use] fn from(buffer: &'a mut RingBuffer) -> Self { SliceMut { range: Range { start: 0, end: buffer.len(), }, buffer, } } } impl<'a, A: 'a, N: ChunkLength + 'a> Into> for SliceMut<'a, A, N> { #[inline] #[must_use] fn into(self) -> Slice<'a, A, N> { self.unmut() } } impl<'a, A: 'a, N: ChunkLength + 'a> Index for SliceMut<'a, A, N> { type Output = A; #[inline] #[must_use] fn index(&self, index: usize) -> &Self::Output { self.buffer.index(self.range.start + index) } } impl<'a, A: 'a, N: ChunkLength + 'a> IndexMut for SliceMut<'a, A, N> { #[inline] #[must_use] fn index_mut(&mut self, index: usize) -> &mut Self::Output { self.buffer.index_mut(self.range.start + index) } } impl<'a, A: PartialEq + 'a, N: ChunkLength + 'a> PartialEq for SliceMut<'a, A, N> { #[inline] #[must_use] fn eq(&self, other: &Self) -> bool { self.len() == other.len() && self.iter().eq(other.iter()) } } impl<'a, A: PartialEq + 'a, N: ChunkLength + 'a, S> PartialEq for SliceMut<'a, A, N> where S: Borrow<[A]>, { #[inline] #[must_use] fn eq(&self, other: &S) -> bool { let other = other.borrow(); self.len() == other.len() && self.iter().eq(other.iter()) } } impl<'a, A: Eq + 'a, N: ChunkLength + 'a> Eq for SliceMut<'a, A, N> {} impl<'a, A: PartialOrd + 'a, N: ChunkLength + 'a> PartialOrd for SliceMut<'a, A, N> { #[inline] #[must_use] fn partial_cmp(&self, other: &Self) -> Option { self.iter().partial_cmp(other.iter()) } } impl<'a, A: Ord + 'a, N: ChunkLength + 'a> Ord for SliceMut<'a, A, N> { #[inline] #[must_use] fn cmp(&self, other: &Self) -> Ordering { self.iter().cmp(other.iter()) } } impl<'a, A: Debug + 'a, N: ChunkLength + 'a> Debug for SliceMut<'a, A, N> { fn fmt(&self, f: &mut Formatter) -> Result<(), Error> { f.write_str("RingBuffer")?; f.debug_list().entries(self.iter()).finish() } } impl<'a, A: Hash + 'a, N: ChunkLength + 'a> Hash for SliceMut<'a, A, N> { #[inline] fn hash(&self, hasher: &mut H) { for item in self { item.hash(hasher) } } } impl<'a, 'b, A: 'a, N: ChunkLength + 'a> IntoIterator for &'a SliceMut<'a, A, N> { type Item = &'a A; type IntoIter = Iter<'a, A, N>; #[inline] #[must_use] fn into_iter(self) -> Self::IntoIter { self.iter() } } impl<'a, 'b, A: 'a, N: ChunkLength + 'a> IntoIterator for &'a mut SliceMut<'a, A, N> { type Item = &'a mut A; type IntoIter = IterMut<'a, A, N>; #[inline] #[must_use] fn into_iter(self) -> Self::IntoIter { self.iter_mut() } } sized-chunks-0.3.1/src/sized_chunk.rs010064400017500001750000001020451350737462000160170ustar0000000000000000// This Source Code Form is subject to the terms of the Mozilla Public // License, v. 2.0. If a copy of the MPL was not distributed with this // file, You can obtain one at http://mozilla.org/MPL/2.0/. //! A fixed capacity smart array. //! //! See [`Chunk`](struct.Chunk.html) use crate::inline_array::InlineArray; use std::borrow::{Borrow, BorrowMut}; use std::cmp::Ordering; use std::fmt::{Debug, Error, Formatter}; use std::hash::{Hash, Hasher}; use std::io; use std::iter::{FromIterator, FusedIterator}; use std::mem::{self, replace, ManuallyDrop}; use std::ops::{Deref, DerefMut, Index, IndexMut}; use std::ptr; use std::slice::{ from_raw_parts, from_raw_parts_mut, Iter as SliceIter, IterMut as SliceIterMut, SliceIndex, }; use typenum::U64; use crate::types::ChunkLength; /// A fixed capacity smart array. 
///
/// An inline array of items with a variable length but a fixed, preallocated
/// capacity given by the `N` type, which must be an [`Unsigned`][Unsigned]
/// type-level numeral.
///
/// It's 'smart' because it's able to reorganise its contents based on expected
/// behaviour. If you construct one using `push_back`, it will be laid out like
/// a `Vec` with space at the end. If you `push_front` it will start filling in
/// values from the back instead of the front, so that you still get linear
/// time push as long as you don't reverse direction. If you do, and there's no
/// room at the end you're pushing to, it'll shift its contents over to the
/// other side, creating more space to push into. This technique is tuned for
/// `Chunk`'s expected use case in [im::Vector]: in practice, a chunk sees
/// either `push_front` or `push_back` but not both, unless it moves around
/// inside the tree, in which case it's able to reorganise itself with
/// reasonable efficiency to suit its new usage patterns.
///
/// It maintains a `left` index and a `right` index instead of a simple length
/// counter in order to accomplish this, much like a ring buffer would, except
/// that the `Chunk` keeps all its items sequentially in memory so that you can
/// always get a `&[A]` slice for them, at the price of the occasional
/// reordering operation. The allocated size of a `Chunk` is thus `usize` * 2 +
/// `A` * `N`.
///
/// This technique also lets us choose to shift the shortest side to account
/// for the inserted or removed element when performing insert and remove
/// operations, unlike `Vec` where you always need to shift the right hand
/// side.
///
/// Unlike a `Vec`, the `Chunk` has a fixed capacity and cannot grow beyond it.
/// Being intended for low level use, it expects you to know or test whether
/// you're pushing to a full array, and has an API more geared towards panics
/// than returning `Option`s, on the assumption that you know what you're
/// doing. Of course, if you don't, you can expect it to panic immediately
/// rather than do something undefined and usually bad.
///
/// ## Isn't this just a less efficient ring buffer?
///
/// You might be wondering why you would want to use this data structure rather
/// than a [`RingBuffer`][RingBuffer], which is similar but doesn't need to
/// shift its content around when it hits the sides of the allocated buffer.
/// The answer is that `Chunk` can be dereferenced into a slice, while a ring
/// buffer cannot. If you don't need to be able to do that, a ring buffer will
/// generally be the marginally more efficient choice.
/// /// # Examples /// /// ```rust /// # #[macro_use] extern crate sized_chunks; /// # extern crate typenum; /// # use sized_chunks::Chunk; /// # use typenum::U64; /// # fn main() { /// // Construct a chunk with a 64 item capacity /// let mut chunk = Chunk::::new(); /// // Fill it with descending numbers /// chunk.extend((0..64).rev()); /// // It derefs to a slice so we can use standard slice methods /// chunk.sort(); /// // It's got all the amenities like `FromIterator` and `Eq` /// let expected: Chunk = (0..64).collect(); /// assert_eq!(expected, chunk); /// # } /// ``` /// /// [Unsigned]: https://docs.rs/typenum/1.10.0/typenum/marker_traits/trait.Unsigned.html /// [im::Vector]: https://docs.rs/im/latest/im/vector/enum.Vector.html /// [RingBuffer]: ../ring_buffer/struct.RingBuffer.html pub struct Chunk where N: ChunkLength, { left: usize, right: usize, data: ManuallyDrop, } impl Drop for Chunk where N: ChunkLength, { fn drop(&mut self) { if mem::needs_drop::() { for i in self.left..self.right { unsafe { Chunk::force_drop(i, self) } } } } } impl Clone for Chunk where A: Clone, N: ChunkLength, { fn clone(&self) -> Self { let mut out = Self::new(); out.left = self.left; out.right = self.right; for index in self.left..self.right { unsafe { Chunk::force_write(index, self.values()[index].clone(), &mut out) } } out } } impl Chunk where N: ChunkLength, { pub const CAPACITY: usize = N::USIZE; /// Construct a new empty chunk. pub fn new() -> Self { let mut chunk: Self; unsafe { chunk = mem::zeroed(); ptr::write(&mut chunk.left, 0); ptr::write(&mut chunk.right, 0); } chunk } /// Construct a new chunk with one item. pub fn unit(value: A) -> Self { let mut chunk: Self; unsafe { chunk = mem::zeroed(); ptr::write(&mut chunk.left, 0); ptr::write(&mut chunk.right, 1); Chunk::force_write(0, value, &mut chunk); } chunk } /// Construct a new chunk with two items. pub fn pair(left: A, right: A) -> Self { let mut chunk: Self; unsafe { chunk = mem::zeroed(); ptr::write(&mut chunk.left, 0); ptr::write(&mut chunk.right, 2); Chunk::force_write(0, left, &mut chunk); Chunk::force_write(1, right, &mut chunk); } chunk } /// Construct a new chunk and move every item from `other` into the new /// chunk. /// /// Time: O(n) pub fn drain_from(other: &mut Self) -> Self { let other_len = other.len(); Self::from_front(other, other_len) } /// Construct a new chunk and populate it by taking `count` items from the /// iterator `iter`. /// /// Panics if the iterator contains less than `count` items. /// /// Time: O(n) pub fn collect_from(iter: &mut I, mut count: usize) -> Self where I: Iterator, { let mut chunk = Self::new(); while count > 0 { count -= 1; chunk.push_back( iter.next() .expect("Chunk::collect_from: underfull iterator"), ); } chunk } /// Construct a new chunk and populate it by taking `count` items from the /// front of `other`. /// /// Time: O(n) for the number of items moved pub fn from_front(other: &mut Self, count: usize) -> Self { let other_len = other.len(); debug_assert!(count <= other_len); let mut chunk = Self::new(); unsafe { Chunk::force_copy_to(other.left, 0, count, other, &mut chunk) }; chunk.right = count; other.left += count; chunk } /// Construct a new chunk and populate it by taking `count` items from the /// back of `other`. 
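    ///
    /// For example (a sketch):
    ///
    /// ```rust
    /// # use sized_chunks::Chunk;
    /// # use typenum::U64;
    /// let mut source: Chunk<i32, U64> = (0..6).collect();
    /// let back = Chunk::from_back(&mut source, 2);
    /// assert_eq!(&[0, 1, 2, 3][..], source.as_slice());
    /// assert_eq!(&[4, 5][..], back.as_slice());
    /// ```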
/// /// Time: O(n) for the number of items moved pub fn from_back(other: &mut Self, count: usize) -> Self { let other_len = other.len(); debug_assert!(count <= other_len); let mut chunk = Self::new(); unsafe { Chunk::force_copy_to(other.right - count, 0, count, other, &mut chunk) }; chunk.right = count; other.right -= count; chunk } /// Get the length of the chunk. #[inline] pub fn len(&self) -> usize { self.right - self.left } /// Test if the chunk is empty. #[inline] pub fn is_empty(&self) -> bool { self.left == self.right } /// Test if the chunk is at capacity. #[inline] pub fn is_full(&self) -> bool { self.left == 0 && self.right == N::USIZE } #[inline] fn values(&self) -> &[A] { unsafe { from_raw_parts(&self.data as *const _ as *const A, N::USIZE) } } #[inline] fn values_mut(&mut self) -> &mut [A] { unsafe { from_raw_parts_mut(&mut self.data as *mut _ as *mut A, N::USIZE) } } /// Copy the value at an index, discarding ownership of the copied value #[inline] unsafe fn force_read(index: usize, chunk: &mut Self) -> A { ptr::read(&chunk.values()[index]) } /// Write a value at an index without trying to drop what's already there #[inline] unsafe fn force_write(index: usize, value: A, chunk: &mut Self) { ptr::write(&mut chunk.values_mut()[index], value) } /// Drop the value at an index #[inline] unsafe fn force_drop(index: usize, chunk: &mut Self) { ptr::drop_in_place(&mut chunk.values_mut()[index]) } /// Copy a range within a chunk #[inline] unsafe fn force_copy(from: usize, to: usize, count: usize, chunk: &mut Self) { if count > 0 { ptr::copy(&chunk.values()[from], &mut chunk.values_mut()[to], count) } } /// Copy a range between chunks #[inline] unsafe fn force_copy_to( from: usize, to: usize, count: usize, chunk: &mut Self, other: &mut Self, ) { if count > 0 { ptr::copy_nonoverlapping(&chunk.values()[from], &mut other.values_mut()[to], count) } } /// Push an item to the front of the chunk. /// /// Panics if the capacity of the chunk is exceeded. /// /// Time: O(1) if there's room at the front, O(n) otherwise pub fn push_front(&mut self, value: A) { if self.is_full() { panic!("Chunk::push_front: can't push to full chunk"); } if self.is_empty() { self.left = N::USIZE; self.right = N::USIZE; } else if self.left == 0 { self.left = N::USIZE - self.right; unsafe { Chunk::force_copy(0, self.left, self.right, self) }; self.right = N::USIZE; } self.left -= 1; unsafe { Chunk::force_write(self.left, value, self) } } /// Push an item to the back of the chunk. /// /// Panics if the capacity of the chunk is exceeded. /// /// Time: O(1) if there's room at the back, O(n) otherwise pub fn push_back(&mut self, value: A) { if self.is_full() { panic!("Chunk::push_back: can't push to full chunk"); } if self.is_empty() { self.left = 0; self.right = 0; } else if self.right == N::USIZE { unsafe { Chunk::force_copy(self.left, 0, self.len(), self) }; self.right = N::USIZE - self.left; self.left = 0; } unsafe { Chunk::force_write(self.right, value, self) } self.right += 1; } /// Pop an item off the front of the chunk. /// /// Panics if the chunk is empty. /// /// Time: O(1) pub fn pop_front(&mut self) -> A { if self.is_empty() { panic!("Chunk::pop_front: can't pop from empty chunk"); } else { let value = unsafe { Chunk::force_read(self.left, self) }; self.left += 1; value } } /// Pop an item off the back of the chunk. /// /// Panics if the chunk is empty. 
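    ///
    /// For example (a sketch; note that unlike `RingBuffer::pop_back`, this
    /// returns the value directly rather than an `Option`):
    ///
    /// ```rust
    /// # use sized_chunks::Chunk;
    /// # use typenum::U64;
    /// let mut chunk: Chunk<i32, U64> = (1..3).collect();
    /// assert_eq!(2, chunk.pop_back());
    /// assert_eq!(1, chunk.pop_back());
    /// ```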
/// /// Time: O(1) pub fn pop_back(&mut self) -> A { if self.is_empty() { panic!("Chunk::pop_back: can't pop from empty chunk"); } else { self.right -= 1; unsafe { Chunk::force_read(self.right, self) } } } /// Discard all items up to but not including `index`. /// /// Panics if `index` is out of bounds. /// /// Time: O(n) for the number of items dropped pub fn drop_left(&mut self, index: usize) { if index > 0 { if index > self.len() { panic!("Chunk::drop_left: index out of bounds"); } let start = self.left; for i in start..(start + index) { unsafe { Chunk::force_drop(i, self) } } self.left += index; } } /// Discard all items from `index` onward. /// /// Panics if `index` is out of bounds. /// /// Time: O(n) for the number of items dropped pub fn drop_right(&mut self, index: usize) { if index > self.len() { panic!("Chunk::drop_right: index out of bounds"); } if index == self.len() { return; } let start = self.left + index; for i in start..self.right { unsafe { Chunk::force_drop(i, self) } } self.right = start; } /// Split a chunk into two, the original chunk containing /// everything up to `index` and the returned chunk containing /// everything from `index` onwards. /// /// Panics if `index` is out of bounds. /// /// Time: O(n) for the number of items in the new chunk pub fn split_off(&mut self, index: usize) -> Self { if index > self.len() { panic!("Chunk::split_off: index out of bounds"); } if index == self.len() { return Self::new(); } let mut right_chunk = Self::new(); let start = self.left + index; let len = self.right - start; unsafe { Chunk::force_copy_to(start, 0, len, self, &mut right_chunk) }; right_chunk.right = len; self.right = start; right_chunk } /// Remove all items from `other` and append them to the back of `self`. /// /// Panics if the capacity of the chunk is exceeded. /// /// Time: O(n) for the number of items moved pub fn append(&mut self, other: &mut Self) { let self_len = self.len(); let other_len = other.len(); if self_len + other_len > N::USIZE { panic!("Chunk::append: chunk size overflow"); } if self.right + other_len > N::USIZE { unsafe { Chunk::force_copy(self.left, 0, self_len, self) }; self.right -= self.left; self.left = 0; } unsafe { Chunk::force_copy_to(other.left, self.right, other_len, other, self) }; self.right += other_len; other.left = 0; other.right = 0; } /// Remove `count` items from the front of `other` and append them to the /// back of `self`. /// /// Panics if `self` doesn't have `count` items left, or if `other` has /// fewer than `count` items. /// /// Time: O(n) for the number of items moved pub fn drain_from_front(&mut self, other: &mut Self, count: usize) { let self_len = self.len(); let other_len = other.len(); assert!(self_len + count <= N::USIZE); assert!(other_len >= count); if self.right + count > N::USIZE { unsafe { Chunk::force_copy(self.left, 0, self_len, self) }; self.right -= self.left; self.left = 0; } unsafe { Chunk::force_copy_to(other.left, self.right, count, other, self) }; self.right += count; other.left += count; } /// Remove `count` items from the back of `other` and append them to the /// front of `self`. /// /// Panics if `self` doesn't have `count` items left, or if `other` has /// fewer than `count` items. 
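    ///
    /// For example (a sketch):
    ///
    /// ```rust
    /// # use sized_chunks::Chunk;
    /// # use typenum::U64;
    /// let mut left: Chunk<i32, U64> = (4..6).collect();
    /// let mut right: Chunk<i32, U64> = (0..4).collect();
    /// left.drain_from_back(&mut right, 2);
    /// assert_eq!(&[2, 3, 4, 5][..], left.as_slice());
    /// assert_eq!(&[0, 1][..], right.as_slice());
    /// ```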
/// /// Time: O(n) for the number of items moved pub fn drain_from_back(&mut self, other: &mut Self, count: usize) { let self_len = self.len(); let other_len = other.len(); assert!(self_len + count <= N::USIZE); assert!(other_len >= count); if self.left < count { unsafe { Chunk::force_copy(self.left, N::USIZE - self_len, self_len, self) }; self.left = N::USIZE - self_len; self.right = N::USIZE; } unsafe { Chunk::force_copy_to(other.right - count, self.left - count, count, other, self) }; self.left -= count; other.right -= count; } /// Update the value at index `index`, returning the old value. /// /// Panics if `index` is out of bounds. /// /// Time: O(1) pub fn set(&mut self, index: usize, value: A) -> A { replace(&mut self[index], value) } /// Insert a new value at index `index`, shifting all the following values /// to the right. /// /// Panics if the index is out of bounds. /// /// Time: O(n) for the number of items shifted pub fn insert(&mut self, index: usize, value: A) { if self.is_full() { panic!("Chunk::insert: chunk is full"); } if index > self.len() { panic!("Chunk::insert: index out of bounds"); } let real_index = index + self.left; let left_size = index; let right_size = self.right - real_index; if self.right == N::USIZE || (self.left > 0 && left_size < right_size) { unsafe { Chunk::force_copy(self.left, self.left - 1, left_size, self); Chunk::force_write(real_index - 1, value, self); } self.left -= 1; } else { unsafe { Chunk::force_copy(real_index, real_index + 1, right_size, self); Chunk::force_write(real_index, value, self); } self.right += 1; } } /// Remove the value at index `index`, shifting all the following values to /// the left. /// /// Returns the removed value. /// /// Panics if the index is out of bounds. /// /// Time: O(n) for the number of items shifted pub fn remove(&mut self, index: usize) -> A { if index >= self.len() { panic!("Chunk::remove: index out of bounds"); } let real_index = index + self.left; let value = unsafe { Chunk::force_read(real_index, self) }; let left_size = index; let right_size = self.right - real_index - 1; if left_size < right_size { unsafe { Chunk::force_copy(self.left, self.left + 1, left_size, self) }; self.left += 1; } else { unsafe { Chunk::force_copy(real_index + 1, real_index, right_size, self) }; self.right -= 1; } value } /// Construct an iterator that drains values from the front of the chunk. pub fn drain(&mut self) -> Drain<'_, A, N> { Drain { chunk: self } } /// Discard the contents of the chunk. /// /// Time: O(n) pub fn clear(&mut self) { for i in self.left..self.right { unsafe { Chunk::force_drop(i, self) } } self.left = 0; self.right = 0; } /// Get a reference to the contents of the chunk as a slice. pub fn as_slice(&self) -> &[A] { unsafe { from_raw_parts( (&self.data as *const ManuallyDrop as *const A).add(self.left), self.len(), ) } } /// Get a reference to the contents of the chunk as a mutable slice. 
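    ///
    /// For example (a sketch):
    ///
    /// ```rust
    /// # use sized_chunks::Chunk;
    /// # use typenum::U64;
    /// let mut chunk: Chunk<i32, U64> = (1..4).collect();
    /// chunk.as_mut_slice()[0] = 9;
    /// assert_eq!(&[9, 2, 3][..], chunk.as_slice());
    /// ```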
pub fn as_mut_slice(&mut self) -> &mut [A] { unsafe { from_raw_parts_mut( (&mut self.data as *mut ManuallyDrop as *mut A).add(self.left), self.len(), ) } } } impl Default for Chunk where N: ChunkLength, { fn default() -> Self { Self::new() } } impl Index for Chunk where I: SliceIndex<[A]>, N: ChunkLength, { type Output = I::Output; fn index(&self, index: I) -> &Self::Output { self.as_slice().index(index) } } impl IndexMut for Chunk where I: SliceIndex<[A]>, N: ChunkLength, { fn index_mut(&mut self, index: I) -> &mut Self::Output { self.as_mut_slice().index_mut(index) } } impl Debug for Chunk where A: Debug, N: ChunkLength, { fn fmt(&self, f: &mut Formatter) -> Result<(), Error> { f.write_str("Chunk")?; f.debug_list().entries(self.iter()).finish() } } impl Hash for Chunk where A: Hash, N: ChunkLength, { fn hash(&self, hasher: &mut H) where H: Hasher, { for item in self { item.hash(hasher) } } } impl PartialEq for Chunk where Slice: Borrow<[A]>, A: PartialEq, N: ChunkLength, { fn eq(&self, other: &Slice) -> bool { self.as_slice() == other.borrow() } } impl Eq for Chunk where A: Eq, N: ChunkLength, { } impl PartialOrd for Chunk where A: PartialOrd, N: ChunkLength, { fn partial_cmp(&self, other: &Self) -> Option { self.iter().partial_cmp(other.iter()) } } impl Ord for Chunk where A: Ord, N: ChunkLength, { fn cmp(&self, other: &Self) -> Ordering { self.iter().cmp(other.iter()) } } impl io::Write for Chunk where N: ChunkLength, { fn write(&mut self, buf: &[u8]) -> io::Result { let old_len = self.len(); self.extend(buf.iter().cloned().take(N::USIZE - old_len)); Ok(self.len() - old_len) } fn flush(&mut self) -> io::Result<()> { Ok(()) } } impl> std::io::Read for Chunk { fn read(&mut self, buf: &mut [u8]) -> std::io::Result { let read_size = buf.len().min(self.len()); if read_size == 0 { Ok(0) } else { for p in buf.iter_mut().take(read_size) { *p = self.pop_front(); } Ok(read_size) } } } impl From> for Chunk where N: ChunkLength, { fn from(mut array: InlineArray) -> Self { let mut out = Self::new(); out.left = 0; out.right = array.len(); unsafe { ptr::copy_nonoverlapping(array.data(), &mut out.values_mut()[0], out.right); *array.len_mut() = 0; } out } } impl<'a, A, N, T> From<&'a mut InlineArray> for Chunk where N: ChunkLength, { fn from(array: &mut InlineArray) -> Self { let mut out = Self::new(); out.left = 0; out.right = array.len(); unsafe { ptr::copy_nonoverlapping(array.data(), &mut out.values_mut()[0], out.right); *array.len_mut() = 0; } out } } impl Borrow<[A]> for Chunk where N: ChunkLength, { fn borrow(&self) -> &[A] { self.as_slice() } } impl BorrowMut<[A]> for Chunk where N: ChunkLength, { fn borrow_mut(&mut self) -> &mut [A] { self.as_mut_slice() } } impl AsRef<[A]> for Chunk where N: ChunkLength, { fn as_ref(&self) -> &[A] { self.as_slice() } } impl AsMut<[A]> for Chunk where N: ChunkLength, { fn as_mut(&mut self) -> &mut [A] { self.as_mut_slice() } } impl Deref for Chunk where N: ChunkLength, { type Target = [A]; fn deref(&self) -> &Self::Target { self.as_slice() } } impl DerefMut for Chunk where N: ChunkLength, { fn deref_mut(&mut self) -> &mut Self::Target { self.as_mut_slice() } } impl FromIterator for Chunk where N: ChunkLength, { fn from_iter(it: I) -> Self where I: IntoIterator, { let mut chunk = Self::new(); for item in it { chunk.push_back(item); } chunk } } impl<'a, A, N> IntoIterator for &'a Chunk where N: ChunkLength, { type Item = &'a A; type IntoIter = SliceIter<'a, A>; fn into_iter(self) -> Self::IntoIter { self.iter() } } impl<'a, A, N> IntoIterator for &'a mut Chunk 
where N: ChunkLength, { type Item = &'a mut A; type IntoIter = SliceIterMut<'a, A>; fn into_iter(self) -> Self::IntoIter { self.iter_mut() } } impl Extend for Chunk where N: ChunkLength, { /// Append the contents of the iterator to the back of the chunk. /// /// Panics if the chunk exceeds its capacity. /// /// Time: O(n) for the length of the iterator fn extend(&mut self, it: I) where I: IntoIterator, { for item in it { self.push_back(item); } } } impl<'a, A, N> Extend<&'a A> for Chunk where A: 'a + Copy, N: ChunkLength, { /// Append the contents of the iterator to the back of the chunk. /// /// Panics if the chunk exceeds its capacity. /// /// Time: O(n) for the length of the iterator fn extend(&mut self, it: I) where I: IntoIterator, { for item in it { self.push_back(*item); } } } pub struct Iter where N: ChunkLength, { chunk: Chunk, } impl Iterator for Iter where N: ChunkLength, { type Item = A; fn next(&mut self) -> Option { if self.chunk.is_empty() { None } else { Some(self.chunk.pop_front()) } } fn size_hint(&self) -> (usize, Option) { (self.chunk.len(), Some(self.chunk.len())) } } impl DoubleEndedIterator for Iter where N: ChunkLength, { fn next_back(&mut self) -> Option { if self.chunk.is_empty() { None } else { Some(self.chunk.pop_back()) } } } impl ExactSizeIterator for Iter where N: ChunkLength {} impl FusedIterator for Iter where N: ChunkLength {} impl IntoIterator for Chunk where N: ChunkLength, { type Item = A; type IntoIter = Iter; fn into_iter(self) -> Self::IntoIter { Iter { chunk: self } } } pub struct Drain<'a, A, N> where A: 'a, N: ChunkLength + 'a, { chunk: &'a mut Chunk, } impl<'a, A, N> Iterator for Drain<'a, A, N> where A: 'a, N: ChunkLength + 'a, { type Item = A; fn next(&mut self) -> Option { if self.chunk.is_empty() { None } else { Some(self.chunk.pop_front()) } } fn size_hint(&self) -> (usize, Option) { (self.chunk.len(), Some(self.chunk.len())) } } impl<'a, A, N> DoubleEndedIterator for Drain<'a, A, N> where A: 'a, N: ChunkLength + 'a, { fn next_back(&mut self) -> Option { if self.chunk.is_empty() { None } else { Some(self.chunk.pop_back()) } } } impl<'a, A, N> ExactSizeIterator for Drain<'a, A, N> where A: 'a, N: ChunkLength + 'a, { } impl<'a, A, N> FusedIterator for Drain<'a, A, N> where A: 'a, N: ChunkLength + 'a, { } #[cfg(test)] mod test { use super::*; #[test] fn is_full() { let mut chunk = Chunk::<_, U64>::new(); for i in 0..64 { assert_eq!(false, chunk.is_full()); chunk.push_back(i); } assert_eq!(true, chunk.is_full()); } #[test] fn push_back_front() { let mut chunk = Chunk::<_, U64>::new(); for i in 12..20 { chunk.push_back(i); } assert_eq!(8, chunk.len()); for i in (0..12).rev() { chunk.push_front(i); } assert_eq!(20, chunk.len()); for i in 20..32 { chunk.push_back(i); } assert_eq!(32, chunk.len()); let right: Vec = chunk.into_iter().collect(); let left: Vec = (0..32).collect(); assert_eq!(left, right); } #[test] fn push_and_pop() { let mut chunk = Chunk::<_, U64>::new(); for i in 0..64 { chunk.push_back(i); } for i in 0..64 { assert_eq!(i, chunk.pop_front()); } for i in 0..64 { chunk.push_front(i); } for i in 0..64 { assert_eq!(i, chunk.pop_back()); } } #[test] fn drop_left() { let mut chunk = Chunk::<_, U64>::new(); for i in 0..6 { chunk.push_back(i); } chunk.drop_left(3); let vec: Vec = chunk.into_iter().collect(); assert_eq!(vec![3, 4, 5], vec); } #[test] fn drop_right() { let mut chunk = Chunk::<_, U64>::new(); for i in 0..6 { chunk.push_back(i); } chunk.drop_right(3); let vec: Vec = chunk.into_iter().collect(); assert_eq!(vec![0, 1, 2], vec); } 
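
    // A sketch of an additional sanity test for `set`: it should swap in the
    // new value and hand back the old one.
    #[test]
    fn set_swaps_value() {
        let mut chunk = Chunk::<_, U64>::new();
        for i in 0..6 {
            chunk.push_back(i);
        }
        assert_eq!(3, chunk.set(3, 30));
        let vec: Vec<i32> = chunk.into_iter().collect();
        assert_eq!(vec![0, 1, 2, 30, 4, 5], vec);
    }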
#[test] fn split_off() { let mut left = Chunk::<_, U64>::new(); for i in 0..6 { left.push_back(i); } let right = left.split_off(3); let left_vec: Vec = left.into_iter().collect(); let right_vec: Vec = right.into_iter().collect(); assert_eq!(vec![0, 1, 2], left_vec); assert_eq!(vec![3, 4, 5], right_vec); } #[test] fn append() { let mut left = Chunk::<_, U64>::new(); for i in 0..32 { left.push_back(i); } let mut right = Chunk::<_, U64>::new(); for i in (32..64).rev() { right.push_front(i); } left.append(&mut right); let out_vec: Vec = left.into_iter().collect(); let should_vec: Vec = (0..64).collect(); assert_eq!(should_vec, out_vec); } #[test] fn ref_iter() { let mut chunk = Chunk::<_, U64>::new(); for i in 0..64 { chunk.push_back(i); } let out_vec: Vec<&i32> = chunk.iter().collect(); let should_vec_p: Vec = (0..64).collect(); let should_vec: Vec<&i32> = should_vec_p.iter().collect(); assert_eq!(should_vec, out_vec); } #[test] fn mut_ref_iter() { let mut chunk = Chunk::<_, U64>::new(); for i in 0..64 { chunk.push_back(i); } let out_vec: Vec<&mut i32> = chunk.iter_mut().collect(); let mut should_vec_p: Vec = (0..64).collect(); let should_vec: Vec<&mut i32> = should_vec_p.iter_mut().collect(); assert_eq!(should_vec, out_vec); } #[test] fn consuming_iter() { let mut chunk = Chunk::<_, U64>::new(); for i in 0..64 { chunk.push_back(i); } let out_vec: Vec = chunk.into_iter().collect(); let should_vec: Vec = (0..64).collect(); assert_eq!(should_vec, out_vec); } #[test] fn insert_middle() { let mut chunk = Chunk::<_, U64>::new(); for i in 0..32 { chunk.push_back(i); } for i in 33..64 { chunk.push_back(i); } chunk.insert(32, 32); let out_vec: Vec = chunk.into_iter().collect(); let should_vec: Vec = (0..64).collect(); assert_eq!(should_vec, out_vec); } #[test] fn insert_back() { let mut chunk = Chunk::<_, U64>::new(); for i in 0..63 { chunk.push_back(i); } chunk.insert(63, 63); let out_vec: Vec = chunk.into_iter().collect(); let should_vec: Vec = (0..64).collect(); assert_eq!(should_vec, out_vec); } #[test] fn insert_front() { let mut chunk = Chunk::<_, U64>::new(); for i in 1..64 { chunk.push_front(64 - i); } chunk.insert(0, 0); let out_vec: Vec = chunk.into_iter().collect(); let should_vec: Vec = (0..64).collect(); assert_eq!(should_vec, out_vec); } #[test] fn remove_value() { let mut chunk = Chunk::<_, U64>::new(); for i in 0..64 { chunk.push_back(i); } chunk.remove(32); let out_vec: Vec = chunk.into_iter().collect(); let should_vec: Vec = (0..32).chain(33..64).collect(); assert_eq!(should_vec, out_vec); } use std::sync::atomic::{AtomicUsize, Ordering}; struct DropTest<'a> { counter: &'a AtomicUsize, } impl<'a> DropTest<'a> { fn new(counter: &'a AtomicUsize) -> Self { counter.fetch_add(1, Ordering::Relaxed); DropTest { counter } } } impl<'a> Drop for DropTest<'a> { fn drop(&mut self) { self.counter.fetch_sub(1, Ordering::Relaxed); } } #[test] fn dropping() { let counter = AtomicUsize::new(0); { let mut chunk: Chunk = Chunk::new(); for _i in 0..20 { chunk.push_back(DropTest::new(&counter)) } for _i in 0..20 { chunk.push_front(DropTest::new(&counter)) } assert_eq!(40, counter.load(Ordering::Relaxed)); for _i in 0..10 { chunk.pop_back(); } assert_eq!(30, counter.load(Ordering::Relaxed)); } assert_eq!(0, counter.load(Ordering::Relaxed)); } } sized-chunks-0.3.1/src/sparse_chunk.rs010064400017500001750000000301451352112072500161660ustar0000000000000000// This Source Code Form is subject to the terms of the Mozilla Public // License, v. 2.0. 
}

sized-chunks-0.3.1/src/sparse_chunk.rs

// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this
// file, You can obtain one at http://mozilla.org/MPL/2.0/.

//! A fixed capacity sparse array.
//!
//! See [`SparseChunk`](struct.SparseChunk.html)

use std::collections::{BTreeMap, HashMap};
use std::fmt::{Debug, Error, Formatter};
use std::mem::{self, ManuallyDrop};
use std::ops::Index;
use std::ops::IndexMut;
use std::ptr;
use std::slice::{from_raw_parts, from_raw_parts_mut};

use typenum::U64;

use crate::bitmap::{Bitmap, Iter as BitmapIter};
use crate::types::{Bits, ChunkLength};

/// A fixed capacity sparse array.
///
/// An inline sparse array of up to `N` items of type `A`, where `N` is an
/// [`Unsigned`][Unsigned] type level numeral. You can think of it as an array
/// of `Option<A>`, where the discriminant (whether the value is `Some` or
/// `None`) is kept in a bitmap instead of adjacent to the value.
///
/// Because the bitmap is kept in a primitive type, the maximum value of `N` is
/// currently 128, corresponding to a type of `u128`. The type of the bitmap
/// will be the minimum unsigned integer type required to fit the number of bits
/// required. Thus, disregarding memory alignment rules, the allocated size of a
/// `SparseChunk<A, N>` will be `uX` + `A` * `N` where `uX` is the type of the
/// discriminant bitmap, either `u8`, `u16`, `u32`, `u64` or `u128`.
///
/// # Examples
///
/// ```rust
/// # #[macro_use] extern crate sized_chunks;
/// # extern crate typenum;
/// # use sized_chunks::SparseChunk;
/// # use typenum::U20;
/// # fn main() {
/// // Construct a chunk with a 20 item capacity
/// let mut chunk = SparseChunk::<i32, U20>::new();
/// // Set the 18th index to the value 5.
/// chunk.insert(18, 5);
/// // Set the 5th index to the value 23.
/// chunk.insert(5, 23);
///
/// assert_eq!(chunk.len(), 2);
/// assert_eq!(chunk.get(5), Some(&23));
/// assert_eq!(chunk.get(6), None);
/// assert_eq!(chunk.get(18), Some(&5));
/// # }
/// ```
///
/// [Unsigned]: https://docs.rs/typenum/1.10.0/typenum/marker_traits/trait.Unsigned.html
pub struct SparseChunk<A, N: Bits + ChunkLength<A> = U64> {
    map: Bitmap<N>,
    data: ManuallyDrop<N::SizedType>,
}

impl<A, N: Bits + ChunkLength<A>> Drop for SparseChunk<A, N> {
    fn drop(&mut self) {
        if mem::needs_drop::<A>() {
            for index in self.map {
                unsafe { SparseChunk::force_drop(index, self) }
            }
        }
    }
}

impl<A: Clone, N: Bits + ChunkLength<A>> Clone for SparseChunk<A, N> {
    fn clone(&self) -> Self {
        let mut out = Self::new();
        for index in self.map {
            out.insert(index, self[index].clone());
        }
        out
    }
}

impl<A, N> SparseChunk<A, N>
where
    N: Bits + ChunkLength<A>,
{
    pub const CAPACITY: usize = N::USIZE;

    #[inline]
    fn values(&self) -> &[A] {
        unsafe { from_raw_parts(&self.data as *const _ as *const A, N::USIZE) }
    }

    #[inline]
    fn values_mut(&mut self) -> &mut [A] {
        unsafe { from_raw_parts_mut(&mut self.data as *mut _ as *mut A, N::USIZE) }
    }

    /// Copy the value at an index, discarding ownership of the copied value
    #[inline]
    unsafe fn force_read(index: usize, chunk: &Self) -> A {
        ptr::read(&chunk.values()[index as usize])
    }

    /// Write a value at an index without trying to drop what's already there
    #[inline]
    unsafe fn force_write(index: usize, value: A, chunk: &mut Self) {
        ptr::write(&mut chunk.values_mut()[index as usize], value)
    }

    /// Drop the value at an index
    #[inline]
    unsafe fn force_drop(index: usize, chunk: &mut Self) {
        ptr::drop_in_place(&mut chunk.values_mut()[index])
    }

    /// Construct a new empty chunk.
    pub fn new() -> Self {
        // A zeroed bitmap marks every slot as empty, so the value slots can
        // safely be left uninitialised: they are never read until written.
        unsafe { mem::zeroed() }
    }

    /// Construct a new chunk with one item.
    pub fn unit(index: usize, value: A) -> Self {
        let mut chunk = Self::new();
        chunk.insert(index, value);
        chunk
    }

    /// Construct a new chunk with two items.
    pub fn pair(index1: usize, value1: A, index2: usize, value2: A) -> Self {
        let mut chunk = Self::new();
        chunk.insert(index1, value1);
        chunk.insert(index2, value2);
        chunk
    }

    /// Get the length of the chunk.
    #[inline]
    pub fn len(&self) -> usize {
        self.map.len()
    }

    /// Test if the chunk is empty.
    #[inline]
    pub fn is_empty(&self) -> bool {
        self.map.len() == 0
    }

    /// Test if the chunk is at capacity.
    #[inline]
    pub fn is_full(&self) -> bool {
        self.len() == N::USIZE
    }

    /// Insert a new value at a given index.
    ///
    /// Returns the previous value at that index, if any.
    pub fn insert(&mut self, index: usize, value: A) -> Option<A> {
        if index >= N::USIZE {
            panic!("SparseChunk::insert: index out of bounds");
        }
        if self.map.set(index, true) {
            Some(mem::replace(&mut self.values_mut()[index], value))
        } else {
            unsafe { SparseChunk::force_write(index, value, self) };
            None
        }
    }

    /// Remove the value at a given index.
    ///
    /// Returns the value, or `None` if the index had no value.
    pub fn remove(&mut self, index: usize) -> Option<A> {
        if index >= N::USIZE {
            panic!("SparseChunk::remove: index out of bounds");
        }
        if self.map.set(index, false) {
            Some(unsafe { SparseChunk::force_read(index, self) })
        } else {
            None
        }
    }

    /// Remove the first value present in the array.
    ///
    /// Returns the value that was removed, or `None` if the array was empty.
    pub fn pop(&mut self) -> Option<A> {
        self.first_index().and_then(|index| self.remove(index))
    }

    /// Get the value at a given index.
    pub fn get(&self, index: usize) -> Option<&A> {
        if index >= N::USIZE {
            return None;
        }
        if self.map.get(index) {
            Some(&self.values()[index])
        } else {
            None
        }
    }

    /// Get a mutable reference to the value at a given index.
    pub fn get_mut(&mut self, index: usize) -> Option<&mut A> {
        if index >= N::USIZE {
            return None;
        }
        if self.map.get(index) {
            Some(&mut self.values_mut()[index])
        } else {
            None
        }
    }

    /// Make an iterator over the indices which contain values.
    pub fn indices(&self) -> BitmapIter<N> {
        self.map.into_iter()
    }

    /// Find the first index which contains a value.
    pub fn first_index(&self) -> Option<usize> {
        self.map.first_index()
    }

    /// Make an iterator of references to the values contained in the array.
    pub fn iter(&self) -> Iter<'_, A, N> {
        Iter {
            indices: self.indices(),
            chunk: self,
        }
    }

    /// Make an iterator of mutable references to the values contained in the
    /// array.
    pub fn iter_mut(&mut self) -> IterMut<'_, A, N> {
        IterMut {
            indices: self.indices(),
            chunk: self,
        }
    }

    /// Turn the chunk into an iterator over the values contained within it.
    pub fn drain(self) -> Drain<A, N> {
        Drain { chunk: self }
    }

    /// Make an iterator of pairs of indices and references to the values
    /// contained in the array.
    pub fn entries(&self) -> impl Iterator<Item = (usize, &A)> {
        self.indices().zip(self.iter())
    }
}
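// Added illustration (not in the original source): per the documentation
// above, a SparseChunk stores a primitive bitmap plus room for N values, so
// its size should be at least the bitmap and the value slots combined.
#[cfg(test)]
#[test]
fn size_sketch() {
    // u64 bitmap + 64 u64 value slots.
    assert!(
        mem::size_of::<SparseChunk<u64, U64>>()
            >= mem::size_of::<u64>() + 64 * mem::size_of::<u64>()
    );
}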
impl<A, N: Bits + ChunkLength<A>> Index<usize> for SparseChunk<A, N> {
    type Output = A;

    #[inline]
    fn index(&self, index: usize) -> &Self::Output {
        self.get(index).unwrap()
    }
}

impl<A, N: Bits + ChunkLength<A>> IndexMut<usize> for SparseChunk<A, N> {
    #[inline]
    fn index_mut(&mut self, index: usize) -> &mut Self::Output {
        self.get_mut(index).unwrap()
    }
}

impl<A, N: Bits + ChunkLength<A>> IntoIterator for SparseChunk<A, N> {
    type Item = A;
    type IntoIter = Drain<A, N>;

    #[inline]
    fn into_iter(self) -> Self::IntoIter {
        self.drain()
    }
}

impl<A, N> PartialEq for SparseChunk<A, N>
where
    A: PartialEq,
    N: Bits + ChunkLength<A>,
{
    fn eq(&self, other: &Self) -> bool {
        if self.map != other.map {
            return false;
        }
        for index in self.indices() {
            if self.get(index) != other.get(index) {
                return false;
            }
        }
        true
    }
}

impl<A, N> PartialEq<BTreeMap<usize, A>> for SparseChunk<A, N>
where
    A: PartialEq,
    N: Bits + ChunkLength<A>,
{
    fn eq(&self, other: &BTreeMap<usize, A>) -> bool {
        if self.len() != other.len() {
            return false;
        }
        for index in 0..N::USIZE {
            if self.get(index) != other.get(&index) {
                return false;
            }
        }
        true
    }
}

impl<A, N> PartialEq<HashMap<usize, A>> for SparseChunk<A, N>
where
    A: PartialEq,
    N: Bits + ChunkLength<A>,
{
    fn eq(&self, other: &HashMap<usize, A>) -> bool {
        if self.len() != other.len() {
            return false;
        }
        for index in 0..N::USIZE {
            if self.get(index) != other.get(&index) {
                return false;
            }
        }
        true
    }
}

impl<A, N> Eq for SparseChunk<A, N>
where
    A: Eq,
    N: Bits + ChunkLength<A>,
{
}

impl<A, N> Debug for SparseChunk<A, N>
where
    A: Debug,
    N: Bits + ChunkLength<A>,
{
    fn fmt(&self, f: &mut Formatter) -> Result<(), Error> {
        f.write_str("SparseChunk")?;
        f.debug_map().entries(self.entries()).finish()
    }
}

pub struct Iter<'a, A: 'a, N: 'a + Bits + ChunkLength<A>> {
    indices: BitmapIter<N>,
    chunk: &'a SparseChunk<A, N>,
}

impl<'a, A, N: Bits + ChunkLength<A>> Iterator for Iter<'a, A, N> {
    type Item = &'a A;

    fn next(&mut self) -> Option<Self::Item> {
        self.indices.next().map(|index| &self.chunk.values()[index])
    }
}

pub struct IterMut<'a, A: 'a, N: 'a + Bits + ChunkLength<A>> {
    indices: BitmapIter<N>,
    chunk: &'a mut SparseChunk<A, N>,
}

impl<'a, A, N: Bits + ChunkLength<A>> Iterator for IterMut<'a, A, N> {
    type Item = &'a mut A;

    fn next(&mut self) -> Option<Self::Item> {
        if let Some(index) = self.indices.next() {
            unsafe {
                let p: *mut A = &mut self.chunk.values_mut()[index];
                Some(&mut *p)
            }
        } else {
            None
        }
    }
}

pub struct Drain<A, N: Bits + ChunkLength<A>> {
    chunk: SparseChunk<A, N>,
}

impl<'a, A, N: Bits + ChunkLength<A>> Iterator for Drain<A, N> {
    type Item = A;

    fn next(&mut self) -> Option<Self::Item> {
        self.chunk.pop()
    }
}

#[cfg(test)]
mod test {
    use super::*;
    use typenum::U32;

    #[test]
    fn insert_remove_iterate() {
        let mut chunk: SparseChunk<_, U32> = SparseChunk::new();
        assert_eq!(None, chunk.insert(5, 5));
        assert_eq!(None, chunk.insert(1, 1));
        assert_eq!(None, chunk.insert(24, 42));
        assert_eq!(None, chunk.insert(22, 22));
        assert_eq!(Some(42), chunk.insert(24, 24));
        assert_eq!(None, chunk.insert(31, 31));
        assert_eq!(Some(24), chunk.remove(24));
        assert_eq!(4, chunk.len());
        let indices: Vec<_> = chunk.indices().collect();
        assert_eq!(vec![1, 5, 22, 31], indices);
        let values: Vec<_> = chunk.into_iter().collect();
        assert_eq!(vec![1, 5, 22, 31], values);
    }
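    // Added illustration (not part of the original test suite): the
    // PartialEq<BTreeMap<usize, A>> impl above lets a chunk be compared
    // directly against a map.
    #[test]
    fn compare_with_btree_map() {
        use std::collections::BTreeMap;
        let mut chunk: SparseChunk<_, U32> = SparseChunk::new();
        let mut map = BTreeMap::new();
        for (index, value) in &[(3, 30), (7, 70), (20, 200)] {
            chunk.insert(*index, *value);
            map.insert(*index, *value);
        }
        assert_eq!(chunk, map);
    }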
    #[test]
    fn clone_chunk() {
        let mut chunk: SparseChunk<_, U32> = SparseChunk::new();
        assert_eq!(None, chunk.insert(5, 5));
        assert_eq!(None, chunk.insert(1, 1));
        assert_eq!(None, chunk.insert(24, 42));
        assert_eq!(None, chunk.insert(22, 22));
        let cloned = chunk.clone();
        let right_indices: Vec<_> = chunk.indices().collect();
        let left_indices: Vec<_> = cloned.indices().collect();
        let right: Vec<_> = chunk.into_iter().collect();
        let left: Vec<_> = cloned.into_iter().collect();
        assert_eq!(left, right);
        assert_eq!(left_indices, right_indices);
        assert_eq!(vec![1, 5, 22, 24], left_indices);
        assert_eq!(vec![1, 5, 22, 24], right_indices);
    }
}

sized-chunks-0.3.1/src/tests/inline_array.rs

#![allow(clippy::unit_arg)]

use std::panic::{catch_unwind, AssertUnwindSafe};

use proptest::{arbitrary::any, collection::vec, prelude::*, proptest};
use proptest_derive::Arbitrary;

use crate::inline_array::InlineArray;

type TestType = [usize; 16];

#[derive(Arbitrary, Debug)]
enum Action<A>
where
    A: Arbitrary,
    <A as Arbitrary>::Strategy: 'static,
{
    Push(A),
    Pop,
    Set((usize, A)),
    Insert(usize, A),
    Remove(usize),
    SplitOff(usize),
    Drain,
    Clear,
}

proptest! {
    #[test]
    fn test_actions(actions in vec(any::<Action<usize>>(), 0..super::action_count())) {
        let capacity = InlineArray::<usize, TestType>::CAPACITY;
        let mut chunk = InlineArray::<usize, TestType>::new();
        let mut guide: Vec<_> = chunk.iter().cloned().collect();
        for action in actions {
            match action {
                Action::Push(value) => {
                    if chunk.is_full() {
                        assert!(catch_unwind(AssertUnwindSafe(|| chunk.push(value))).is_err());
                    } else {
                        chunk.push(value);
                        guide.push(value);
                    }
                }
                Action::Pop => {
                    assert_eq!(chunk.pop(), guide.pop());
                }
                Action::Set((index, value)) => {
                    if index >= chunk.len() {
                        assert!(catch_unwind(AssertUnwindSafe(|| chunk[index] = value)).is_err());
                    } else {
                        chunk[index] = value;
                        guide[index] = value;
                    }
                }
                Action::Insert(index, value) => {
                    if index >= chunk.len() || chunk.is_full() {
                        assert!(catch_unwind(AssertUnwindSafe(|| chunk.insert(index, value))).is_err());
                    } else {
                        chunk.insert(index, value);
                        guide.insert(index, value);
                    }
                }
                Action::Remove(index) => {
                    if index >= chunk.len() {
                        assert_eq!(None, chunk.remove(index));
                    } else {
                        assert_eq!(chunk.remove(index), Some(guide.remove(index)));
                    }
                }
                Action::SplitOff(index) => {
                    if index >= chunk.len() {
                        assert!(catch_unwind(AssertUnwindSafe(|| chunk.split_off(index))).is_err());
                    } else {
                        let chunk_off = chunk.split_off(index);
                        let guide_off = guide.split_off(index);
                        assert_eq!(chunk_off, guide_off);
                    }
                }
                Action::Drain => {
                    let drained: Vec<_> = chunk.drain().collect();
                    let drained_guide: Vec<_> = guide.drain(..).collect();
                    assert_eq!(drained, drained_guide);
                }
                Action::Clear => {
                    chunk.clear();
                    guide.clear();
                }
            }
            assert_eq!(chunk, guide);
            assert!(guide.len() <= capacity);
        }
    }
}

sized-chunks-0.3.1/src/tests/mod.rs

mod inline_array;
mod ring_buffer;
mod sized_chunk;
mod sparse_chunk;

pub fn action_count() -> usize {
    std::env::var("ACTION_COUNT")
        .ok()
        .and_then(|s| s.parse().ok())
        .unwrap_or(100)
}
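// Illustrative note (added commentary): the default of 100 actions per
// generated test case can be raised for more thorough runs via the
// environment variable read above, e.g. `ACTION_COUNT=1000 cargo test`.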
sized-chunks-0.3.1/src/tests/ring_buffer.rs

#![allow(clippy::unit_arg)]

use std::fmt::Debug;
use std::iter::FromIterator;
use std::panic::{catch_unwind, AssertUnwindSafe};

use proptest::{arbitrary::any, collection::vec, prelude::*, proptest};
use proptest_derive::Arbitrary;

use crate::ring_buffer::RingBuffer;

#[derive(Debug)]
struct InputVec<A>(Vec<A>);

impl<A> InputVec<A> {
    fn unwrap(self) -> Vec<A> {
        self.0
    }
}

impl<A> Arbitrary for InputVec<A>
where
    A: Arbitrary + Debug,
    <A as Arbitrary>::Strategy: 'static,
{
    type Parameters = usize;
    type Strategy = BoxedStrategy<InputVec<A>>;

    fn arbitrary_with(_: Self::Parameters) -> Self::Strategy {
        #[allow(clippy::redundant_closure)]
        proptest::collection::vec(any::<A>(), 0..RingBuffer::<A>::CAPACITY)
            .prop_map(|v| InputVec(v))
            .boxed()
    }
}

#[derive(Arbitrary, Debug)]
enum Construct<A>
where
    A: Arbitrary,
    <A as Arbitrary>::Strategy: 'static,
{
    Empty,
    Single(A),
    Pair((A, A)),
    DrainFrom(InputVec<A>),
    CollectFrom(InputVec<A>, usize),
    FromFront(InputVec<A>, usize),
    FromBack(InputVec<A>, usize),
    FromIter(InputVec<A>),
}

#[derive(Arbitrary, Debug)]
enum Action<A>
where
    A: Arbitrary,
    <A as Arbitrary>::Strategy: 'static,
{
    PushFront(A),
    PushBack(A),
    PopFront,
    PopBack,
    DropLeft(usize),
    DropRight(usize),
    SplitOff(usize),
    Append(Construct<A>),
    DrainFromFront(Construct<A>, usize),
    DrainFromBack(Construct<A>, usize),
    Set(usize, A),
    Insert(usize, A),
    Remove(usize),
    Drain,
    Clear,
}

impl<A> Construct<A>
where
    A: Arbitrary + Clone + Debug + Eq,
    <A as Arbitrary>::Strategy: 'static,
{
    fn make(self) -> RingBuffer<A> {
        match self {
            Construct::Empty => {
                let out = RingBuffer::new();
                assert!(out.is_empty());
                out
            }
            Construct::Single(value) => {
                let out = RingBuffer::unit(value.clone());
                assert_eq!(out, vec![value]);
                out
            }
            Construct::Pair((left, right)) => {
                let out = RingBuffer::pair(left.clone(), right.clone());
                assert_eq!(out, vec![left, right]);
                out
            }
            Construct::DrainFrom(vec) => {
                let vec = vec.unwrap();
                let mut source = RingBuffer::from_iter(vec.iter().cloned());
                let out = RingBuffer::drain_from(&mut source);
                assert!(source.is_empty());
                assert_eq!(out, vec);
                out
            }
            Construct::CollectFrom(vec, len) => {
                let mut vec = vec.unwrap();
                if vec.is_empty() {
                    return RingBuffer::new();
                }
                let len = len % vec.len();
                let mut source = vec.clone().into_iter();
                let out = RingBuffer::collect_from(&mut source, len);
                let expected_remainder = vec.split_off(len);
                let remainder: Vec<_> = source.collect();
                assert_eq!(expected_remainder, remainder);
                assert_eq!(out, vec);
                out
            }
            Construct::FromFront(vec, len) => {
                let mut vec = vec.unwrap();
                if vec.is_empty() {
                    return RingBuffer::new();
                }
                let len = len % vec.len();
                let mut source = RingBuffer::from_iter(vec.iter().cloned());
                let out = RingBuffer::from_front(&mut source, len);
                let remainder = vec.split_off(len);
                assert_eq!(source, remainder);
                assert_eq!(out, vec);
                out
            }
            Construct::FromBack(vec, len) => {
                let mut vec = vec.unwrap();
                if vec.is_empty() {
                    return RingBuffer::new();
                }
                let len = len % vec.len();
                let mut source = RingBuffer::from_iter(vec.iter().cloned());
                let out = RingBuffer::from_back(&mut source, len);
                let remainder = vec.split_off(vec.len() - len);
                assert_eq!(out, remainder);
                assert_eq!(source, vec);
                out
            }
            Construct::FromIter(vec) => {
                let vec = vec.unwrap();
                let out = vec.clone().into_iter().collect();
                assert_eq!(out, vec);
                out
            }
        }
    }
}
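// Note (added commentary): the property tests below drive a RingBuffer and a
// plain Vec "guide" through the same randomly generated action sequence, then
// assert that the two containers agree after every step. Operations that are
// expected to fail are exercised with catch_unwind, since the chunk types
// panic on overflow and out-of-bounds access.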
proptest! {
    #[test]
    fn test_constructors(cons: Construct<i32>) {
        cons.make();
    }

    #[test]
    fn test_actions(cons: Construct<i32>, actions in vec(any::<Action<i32>>(), 0..super::action_count())) {
        let capacity = RingBuffer::<i32>::CAPACITY;
        let mut chunk = cons.make();
        let mut guide: Vec<_> = chunk.iter().cloned().collect();
        println!("{:?}", actions);
        for action in actions {
            println!("Executing {:?} on {:?}", action, chunk);
            match action {
                Action::PushFront(value) => {
                    if chunk.is_full() {
                        assert!(catch_unwind(AssertUnwindSafe(|| chunk.push_front(value))).is_err());
                    } else {
                        chunk.push_front(value);
                        guide.insert(0, value);
                    }
                }
                Action::PushBack(value) => {
                    if chunk.is_full() {
                        assert!(catch_unwind(AssertUnwindSafe(|| chunk.push_back(value))).is_err());
                    } else {
                        chunk.push_back(value);
                        guide.push(value);
                    }
                }
                Action::PopFront => {
                    assert_eq!(chunk.pop_front(), if guide.is_empty() { None } else { Some(guide.remove(0)) });
                }
                Action::PopBack => {
                    assert_eq!(chunk.pop_back(), guide.pop());
                }
                Action::DropLeft(index) => {
                    if index >= chunk.len() {
                        assert!(catch_unwind(AssertUnwindSafe(|| chunk.drop_left(index))).is_err());
                    } else {
                        chunk.drop_left(index);
                        guide.drain(..index);
                    }
                }
                Action::DropRight(index) => {
                    if index >= chunk.len() {
                        assert!(catch_unwind(AssertUnwindSafe(|| chunk.drop_right(index))).is_err());
                    } else {
                        chunk.drop_right(index);
                        guide.drain(index..);
                    }
                }
                Action::SplitOff(index) => {
                    if index >= chunk.len() {
                        assert!(catch_unwind(AssertUnwindSafe(|| chunk.split_off(index))).is_err());
                    } else {
                        let chunk_off = chunk.split_off(index);
                        let guide_off = guide.split_off(index);
                        assert_eq!(chunk_off, guide_off);
                    }
                }
                Action::Append(other) => {
                    let mut other = other.make();
                    let mut other_guide: Vec<_> = other.iter().cloned().collect();
                    if other.len() + chunk.len() > capacity {
                        assert!(catch_unwind(AssertUnwindSafe(|| chunk.append(&mut other))).is_err());
                    } else {
                        chunk.append(&mut other);
                        guide.append(&mut other_guide);
                    }
                }
                Action::DrainFromFront(other, count) => {
                    let mut other = other.make();
                    let mut other_guide: Vec<_> = other.iter().cloned().collect();
                    if count >= other.len() || chunk.len() + count > capacity {
                        assert!(catch_unwind(AssertUnwindSafe(|| chunk.drain_from_front(&mut other, count))).is_err());
                    } else {
                        chunk.drain_from_front(&mut other, count);
                        guide.extend(other_guide.drain(..count));
                        assert_eq!(other, other_guide);
                    }
                }
                Action::DrainFromBack(other, count) => {
                    let mut other = other.make();
                    let mut other_guide: Vec<_> = other.iter().cloned().collect();
                    if count >= other.len() || chunk.len() + count > capacity {
                        assert!(catch_unwind(AssertUnwindSafe(|| chunk.drain_from_back(&mut other, count))).is_err());
                    } else {
                        chunk.drain_from_back(&mut other, count);
                        let other_index = other.len() - count;
                        guide = other_guide.drain(other_index..).chain(guide.into_iter()).collect();
                        assert_eq!(other, other_guide);
                    }
                }
                Action::Set(index, value) => {
                    if index >= chunk.len() {
                        assert!(catch_unwind(AssertUnwindSafe(|| chunk.set(index, value))).is_err());
                    } else {
                        chunk.set(index, value);
                        guide[index] = value;
                    }
                }
                Action::Insert(index, value) => {
                    if index >= chunk.len() || chunk.is_full() {
                        assert!(catch_unwind(AssertUnwindSafe(|| chunk.insert(index, value))).is_err());
                    } else {
                        chunk.insert(index, value);
                        guide.insert(index, value);
                    }
                }
                Action::Remove(index) => {
                    if index >= chunk.len() {
                        assert!(catch_unwind(AssertUnwindSafe(|| chunk.remove(index))).is_err());
                    } else {
                        assert_eq!(chunk.remove(index), guide.remove(index));
                    }
                }
                Action::Drain => {
                    let drained: Vec<_> = chunk.drain().collect();
                    let drained_guide: Vec<_> = guide.drain(..).collect();
                    assert_eq!(drained, drained_guide);
                }
                Action::Clear => {
                    chunk.clear();
                    guide.clear();
                }
            }
            assert_eq!(chunk, guide);
            assert!(guide.len() <= capacity);
        }
    }
}

sized-chunks-0.3.1/src/tests/sized_chunk.rs

#![allow(clippy::unit_arg)]

use std::fmt::Debug;
use std::iter::FromIterator;
use std::panic::{catch_unwind, AssertUnwindSafe};

use proptest::{arbitrary::any, collection::vec, prelude::*, proptest};
use proptest_derive::Arbitrary;

use crate::sized_chunk::Chunk;

#[derive(Debug)]
struct InputVec<A>(Vec<A>);

impl<A> InputVec<A> {
    fn unwrap(self) -> Vec<A> {
        self.0
    }
}

impl<A> Arbitrary for InputVec<A>
where
    A: Arbitrary + Debug,
    <A as Arbitrary>::Strategy: 'static,
{
    type Parameters = usize;
    type Strategy = BoxedStrategy<InputVec<A>>;

    fn arbitrary_with(_: Self::Parameters) -> Self::Strategy {
        #[allow(clippy::redundant_closure)]
        proptest::collection::vec(any::<A>(), 0..Chunk::<A>::CAPACITY)
            .prop_map(|v| InputVec(v))
            .boxed()
    }
}

#[derive(Arbitrary, Debug)]
enum Construct<A>
where
    A: Arbitrary,
    <A as Arbitrary>::Strategy: 'static,
{
    Empty,
    Single(A),
    Pair((A, A)),
    DrainFrom(InputVec<A>),
    CollectFrom(InputVec<A>, usize),
    FromFront(InputVec<A>, usize),
    FromBack(InputVec<A>, usize),
}

#[derive(Arbitrary, Debug)]
enum Action<A>
where
    A: Arbitrary,
    <A as Arbitrary>::Strategy: 'static,
{
    PushFront(A),
    PushBack(A),
    PopFront,
    PopBack,
    DropLeft(usize),
    DropRight(usize),
    SplitOff(usize),
    Append(Construct<A>),
    DrainFromFront(Construct<A>, usize),
    DrainFromBack(Construct<A>, usize),
    Set(usize, A),
    Insert(usize, A),
    Remove(usize),
    Drain,
    Clear,
}

impl<A> Construct<A>
where
    A: Arbitrary + Clone + Debug + Eq,
    <A as Arbitrary>::Strategy: 'static,
{
    fn make(self) -> Chunk<A> {
        match self {
            Construct::Empty => {
                let out = Chunk::new();
                assert!(out.is_empty());
                out
            }
            Construct::Single(value) => {
                let out = Chunk::unit(value.clone());
                assert_eq!(out, vec![value]);
                out
            }
            Construct::Pair((left, right)) => {
                let out = Chunk::pair(left.clone(), right.clone());
                assert_eq!(out, vec![left, right]);
                out
            }
            Construct::DrainFrom(vec) => {
                let vec = vec.unwrap();
                let mut source = Chunk::from_iter(vec.iter().cloned());
                let out = Chunk::drain_from(&mut source);
                assert!(source.is_empty());
                assert_eq!(out, vec);
                out
            }
            Construct::CollectFrom(vec, len) => {
                let mut vec = vec.unwrap();
                if vec.is_empty() {
                    return Chunk::new();
                }
                let len = len % vec.len();
                let mut source = vec.clone().into_iter();
                let out = Chunk::collect_from(&mut source, len);
                let expected_remainder = vec.split_off(len);
                let remainder: Vec<_> = source.collect();
                assert_eq!(expected_remainder, remainder);
                assert_eq!(out, vec);
                out
            }
            Construct::FromFront(vec, len) => {
                let mut vec = vec.unwrap();
                if vec.is_empty() {
                    return Chunk::new();
                }
                let len = len % vec.len();
                let mut source = Chunk::from_iter(vec.iter().cloned());
                let out = Chunk::from_front(&mut source, len);
                let remainder = vec.split_off(len);
                assert_eq!(source, remainder);
                assert_eq!(out, vec);
                out
            }
            Construct::FromBack(vec, len) => {
                let mut vec = vec.unwrap();
                if vec.is_empty() {
                    return Chunk::new();
                }
                let len = len % vec.len();
                let mut source = Chunk::from_iter(vec.iter().cloned());
                let out = Chunk::from_back(&mut source, len);
                let remainder = vec.split_off(vec.len() - len);
                assert_eq!(out, remainder);
                assert_eq!(source, vec);
                out
            }
        }
    }
}
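// Note (added commentary): these tests mirror the RingBuffer suite above, with
// one behavioural difference: Chunk::pop_front and Chunk::pop_back panic on an
// empty chunk rather than returning an Option, so the PopFront and PopBack
// actions below assert the panic with catch_unwind instead of comparing
// against None.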
proptest! {
    #[test]
    fn test_constructors(cons: Construct<i32>) {
        cons.make();
    }

    #[test]
    fn test_actions(cons: Construct<i32>, actions in vec(any::<Action<i32>>(), 0..super::action_count())) {
        let capacity = Chunk::<i32>::CAPACITY;
        let mut chunk = cons.make();
        let mut guide: Vec<_> = chunk.iter().cloned().collect();
        for action in actions {
            match action {
                Action::PushFront(value) => {
                    if chunk.is_full() {
                        assert!(catch_unwind(AssertUnwindSafe(|| chunk.push_front(value))).is_err());
                    } else {
                        chunk.push_front(value);
                        guide.insert(0, value);
                    }
                }
                Action::PushBack(value) => {
                    if chunk.is_full() {
                        assert!(catch_unwind(AssertUnwindSafe(|| chunk.push_back(value))).is_err());
                    } else {
                        chunk.push_back(value);
                        guide.push(value);
                    }
                }
                Action::PopFront => {
                    if chunk.is_empty() {
                        assert!(catch_unwind(AssertUnwindSafe(|| chunk.pop_front())).is_err());
                    } else {
                        assert_eq!(chunk.pop_front(), guide.remove(0));
                    }
                }
                Action::PopBack => {
                    if chunk.is_empty() {
                        assert!(catch_unwind(AssertUnwindSafe(|| chunk.pop_back())).is_err());
                    } else {
                        assert_eq!(chunk.pop_back(), guide.pop().unwrap());
                    }
                }
                Action::DropLeft(index) => {
                    if index >= chunk.len() {
                        assert!(catch_unwind(AssertUnwindSafe(|| chunk.drop_left(index))).is_err());
                    } else {
                        chunk.drop_left(index);
                        guide.drain(..index);
                    }
                }
                Action::DropRight(index) => {
                    if index >= chunk.len() {
                        assert!(catch_unwind(AssertUnwindSafe(|| chunk.drop_right(index))).is_err());
                    } else {
                        chunk.drop_right(index);
                        guide.drain(index..);
                    }
                }
                Action::SplitOff(index) => {
                    if index >= chunk.len() {
                        assert!(catch_unwind(AssertUnwindSafe(|| chunk.split_off(index))).is_err());
                    } else {
                        let chunk_off = chunk.split_off(index);
                        let guide_off = guide.split_off(index);
                        assert_eq!(chunk_off, guide_off);
                    }
                }
                Action::Append(other) => {
                    let mut other = other.make();
                    let mut other_guide: Vec<_> = other.iter().cloned().collect();
                    if other.len() + chunk.len() > capacity {
                        assert!(catch_unwind(AssertUnwindSafe(|| chunk.append(&mut other))).is_err());
                    } else {
                        chunk.append(&mut other);
                        guide.append(&mut other_guide);
                    }
                }
                Action::DrainFromFront(other, count) => {
                    let mut other = other.make();
                    let mut other_guide: Vec<_> = other.iter().cloned().collect();
                    if count >= other.len() || chunk.len() + count > capacity {
                        assert!(catch_unwind(AssertUnwindSafe(|| chunk.drain_from_front(&mut other, count))).is_err());
                    } else {
                        chunk.drain_from_front(&mut other, count);
                        guide.extend(other_guide.drain(..count));
                        assert_eq!(other, other_guide);
                    }
                }
                Action::DrainFromBack(other, count) => {
                    let mut other = other.make();
                    let mut other_guide: Vec<_> = other.iter().cloned().collect();
                    if count >= other.len() || chunk.len() + count > capacity {
                        assert!(catch_unwind(AssertUnwindSafe(|| chunk.drain_from_back(&mut other, count))).is_err());
                    } else {
                        chunk.drain_from_back(&mut other, count);
                        let other_index = other.len() - count;
                        guide = other_guide.drain(other_index..).chain(guide.into_iter()).collect();
                        assert_eq!(other, other_guide);
                    }
                }
                Action::Set(index, value) => {
                    if index >= chunk.len() {
                        assert!(catch_unwind(AssertUnwindSafe(|| chunk.set(index, value))).is_err());
                    } else {
                        chunk.set(index, value);
                        guide[index] = value;
                    }
                }
                Action::Insert(index, value) => {
                    if index >= chunk.len() || chunk.is_full() {
                        assert!(catch_unwind(AssertUnwindSafe(|| chunk.insert(index, value))).is_err());
                    } else {
                        chunk.insert(index, value);
                        guide.insert(index, value);
                    }
                }
                Action::Remove(index) => {
                    if index >= chunk.len() {
                        assert!(catch_unwind(AssertUnwindSafe(|| chunk.remove(index))).is_err());
                    } else {
                        assert_eq!(chunk.remove(index), guide.remove(index));
                    }
                }
                Action::Drain => {
                    let drained: Vec<_> = chunk.drain().collect();
                    let drained_guide: Vec<_> = guide.drain(..).collect();
                    assert_eq!(drained, drained_guide);
                }
                Action::Clear => {
                    chunk.clear();
                    guide.clear();
                }
            }
            assert_eq!(chunk, guide);
            assert!(guide.len() <= capacity);
        }
    }
}

sized-chunks-0.3.1/src/tests/sparse_chunk.rs

#![allow(clippy::unit_arg)]

use std::collections::BTreeMap;
use std::fmt::Debug;
use std::panic::{catch_unwind, AssertUnwindSafe};

use proptest::{arbitrary::any, collection::vec, prelude::*, proptest};
use proptest_derive::Arbitrary;

use crate::sparse_chunk::SparseChunk;

#[derive(Arbitrary, Debug)]
enum Construct<A> {
    Empty,
    Single((usize, A)),
    Pair((usize, A, usize, A)),
}

#[derive(Arbitrary, Debug)]
enum Action<A> {
    Insert(usize, A),
    Remove(usize),
    Pop,
}

impl<A> Construct<A>
where
    A: Arbitrary + Clone + Debug + Eq,
    <A as Arbitrary>::Strategy: 'static,
{
    fn make(self) -> SparseChunk<A> {
        match self {
            Construct::Empty => {
                let out = SparseChunk::new();
                assert!(out.is_empty());
                out
            }
            Construct::Single((index, value)) => {
                let index = index % SparseChunk::<A>::CAPACITY;
                let out = SparseChunk::unit(index, value.clone());
                let mut guide = BTreeMap::new();
                guide.insert(index, value);
                assert_eq!(out, guide);
                out
            }
            Construct::Pair((left_index, left, right_index, right)) => {
                let left_index = left_index % SparseChunk::<A>::CAPACITY;
                let right_index = right_index % SparseChunk::<A>::CAPACITY;
                let out = SparseChunk::pair(left_index, left.clone(), right_index, right.clone());
                let mut guide = BTreeMap::new();
                guide.insert(left_index, left);
                guide.insert(right_index, right);
                assert_eq!(out, guide);
                out
            }
        }
    }
}

proptest! {
    #[test]
    fn test_constructors(cons: Construct<i32>) {
        cons.make();
    }

    #[test]
    fn test_actions(cons: Construct<i32>, actions in vec(any::<Action<i32>>(), 0..super::action_count())) {
        let capacity = SparseChunk::<i32>::CAPACITY;
        let mut chunk = cons.make();
        let mut guide: BTreeMap<_, _> = chunk.entries().map(|(i, v)| (i, *v)).collect();
        for action in actions {
            match action {
                Action::Insert(index, value) => {
                    if index >= capacity {
                        assert!(catch_unwind(AssertUnwindSafe(|| chunk.insert(index, value))).is_err());
                    } else {
                        assert_eq!(chunk.insert(index, value), guide.insert(index, value));
                    }
                }
                Action::Remove(index) => {
                    if index >= capacity {
                        assert!(catch_unwind(AssertUnwindSafe(|| chunk.remove(index))).is_err());
                    } else {
                        assert_eq!(chunk.remove(index), guide.remove(&index));
                    }
                }
                Action::Pop => {
                    if let Some(index) = chunk.first_index() {
                        assert_eq!(chunk.pop(), guide.remove(&index));
                    } else {
                        assert_eq!(chunk.pop(), None);
                    }
                }
            }
            assert_eq!(chunk, guide);
            assert!(guide.len() <= SparseChunk::<i32>::CAPACITY);
        }
    }
}

sized-chunks-0.3.1/src/types.rs

// This Source Code Form is subject to the terms of the Mozilla Public
// License, v. 2.0. If a copy of the MPL was not distributed with this
// file, You can obtain one at http://mozilla.org/MPL/2.0/.

//! Helper types for chunks.

use std::fmt::Debug;
use std::marker::PhantomData;

use typenum::*;

// Chunk sizes

/// A trait used to decide the size of an array.
///
/// `<N as ChunkLength<A>>::SizedType` for a type level integer N will have
/// the same size as `[A; N]`.
pub trait ChunkLength<A>: Unsigned {
    type SizedType;
}
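// A quick sanity check (added sketch, not in the original source): the
// associated SizedType for a type level N should occupy exactly as much space
// as the corresponding array type.
#[cfg(test)]
#[test]
fn sized_type_matches_array_size() {
    use std::mem::size_of;
    assert_eq!(
        size_of::<[u32; 5]>(),
        size_of::<<U5 as ChunkLength<u32>>::SizedType>()
    );
}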
impl<A> ChunkLength<A> for UTerm {
    type SizedType = ();
}

#[doc(hidden)]
#[allow(dead_code)]
pub struct SizeEven<A, B> {
    parent1: B,
    parent2: B,
    _marker: PhantomData<A>,
}

#[doc(hidden)]
#[allow(dead_code)]
pub struct SizeOdd<A, B> {
    parent1: B,
    parent2: B,
    data: A,
}

impl<A, N> ChunkLength<A> for UInt<N, B0>
where
    N: ChunkLength<A>,
{
    type SizedType = SizeEven<A, N::SizedType>;
}

impl<A, N> ChunkLength<A> for UInt<N, B1>
where
    N: ChunkLength<A>,
{
    type SizedType = SizeOdd<A, N::SizedType>;
}

// Bit field sizes

/// A type level number signifying the number of bits in a bitmap.
///
/// This trait is implemented for type level numbers from `U1` to `U256`.
///
/// # Examples
///
/// ```rust
/// # #[macro_use] extern crate sized_chunks;
/// # extern crate typenum;
/// # use sized_chunks::types::Bits;
/// # use typenum::U10;
/// # fn main() {
/// assert_eq!(
///     std::mem::size_of::<<U10 as Bits>::Store>(),
///     std::mem::size_of::<u16>()
/// );
/// # }
/// ```
pub trait Bits: Unsigned {
    /// A primitive integer type suitable for storing this many bits.
    type Store: Default + Copy + PartialEq + Debug;

    fn get(bits: &Self::Store, index: usize) -> bool;
    fn set(bits: &mut Self::Store, index: usize, value: bool) -> bool;
    fn len(bits: &Self::Store) -> usize;
    fn first_index(bits: &Self::Store) -> Option<usize>;
}

macro_rules! bits_for {
    ($num:ty, $result:ty) => {
        impl Bits for $num {
            type Store = $result;

            fn get(bits: &$result, index: usize) -> bool {
                debug_assert!(index < Self::USIZE);
                bits & (1 << index) != 0
            }

            fn set(bits: &mut $result, index: usize, value: bool) -> bool {
                debug_assert!(index < Self::USIZE);
                let mask = 1 << index;
                let prev = *bits & mask;
                if value {
                    *bits |= mask;
                } else {
                    *bits &= !mask;
                }
                prev != 0
            }

            fn len(bits: &$result) -> usize {
                bits.count_ones() as usize
            }

            fn first_index(bits: &$result) -> Option<usize> {
                if *bits == 0 {
                    None
                } else {
                    Some(bits.trailing_zeros() as usize)
                }
            }
        }
    };
}
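// Added illustration (not in the original source): the semantics the macro
// above implements, shown for U10, whose Store is u16. Note that `set`
// returns the previous state of the addressed bit.
#[cfg(test)]
#[test]
fn bits_semantics_sketch() {
    let mut store = <U10 as Bits>::Store::default();
    assert_eq!(false, <U10 as Bits>::set(&mut store, 3, true));
    assert_eq!(true, <U10 as Bits>::set(&mut store, 3, true));
    assert!(<U10 as Bits>::get(&store, 3));
    assert_eq!(1, <U10 as Bits>::len(&store));
    assert_eq!(Some(3), <U10 as Bits>::first_index(&store));
}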
macro_rules! bits_for_256 {
    ($num:ty) => {
        impl Bits for $num {
            type Store = [u128; 2];

            fn get(bits: &Self::Store, index: usize) -> bool {
                debug_assert!(index < Self::USIZE);
                if index < 128 {
                    bits[0] & (1 << index) != 0
                } else {
                    bits[1] & (1 << (index - 128)) != 0
                }
            }

            fn set(bits: &mut Self::Store, index: usize, value: bool) -> bool {
                debug_assert!(index < Self::USIZE);
                let mask = 1 << (index & 127);
                let bits = if index < 128 { &mut bits[0] } else { &mut bits[1] };
                let prev = *bits & mask;
                if value {
                    *bits |= mask;
                } else {
                    *bits &= !mask;
                }
                prev != 0
            }

            fn len(bits: &Self::Store) -> usize {
                (bits[0].count_ones() + bits[1].count_ones()) as usize
            }

            fn first_index(bits: &Self::Store) -> Option<usize> {
                if bits[0] == 0 {
                    if bits[1] == 0 {
                        None
                    } else {
                        Some(bits[1].trailing_zeros() as usize + 128)
                    }
                } else {
                    Some(bits[0].trailing_zeros() as usize)
                }
            }
        }
    };
}

bits_for!(U1, u8); bits_for!(U2, u8); bits_for!(U3, u8); bits_for!(U4, u8);
bits_for!(U5, u8); bits_for!(U6, u8); bits_for!(U7, u8); bits_for!(U8, u8);
bits_for!(U9, u16); bits_for!(U10, u16); bits_for!(U11, u16); bits_for!(U12, u16);
bits_for!(U13, u16); bits_for!(U14, u16); bits_for!(U15, u16); bits_for!(U16, u16);
bits_for!(U17, u32); bits_for!(U18, u32); bits_for!(U19, u32); bits_for!(U20, u32);
bits_for!(U21, u32); bits_for!(U22, u32); bits_for!(U23, u32); bits_for!(U24, u32);
bits_for!(U25, u32); bits_for!(U26, u32); bits_for!(U27, u32); bits_for!(U28, u32);
bits_for!(U29, u32); bits_for!(U30, u32); bits_for!(U31, u32); bits_for!(U32, u32);
bits_for!(U33, u64); bits_for!(U34, u64); bits_for!(U35, u64); bits_for!(U36, u64);
bits_for!(U37, u64); bits_for!(U38, u64); bits_for!(U39, u64); bits_for!(U40, u64);
bits_for!(U41, u64); bits_for!(U42, u64); bits_for!(U43, u64); bits_for!(U44, u64);
bits_for!(U45, u64); bits_for!(U46, u64); bits_for!(U47, u64); bits_for!(U48, u64);
bits_for!(U49, u64); bits_for!(U50, u64); bits_for!(U51, u64); bits_for!(U52, u64);
bits_for!(U53, u64); bits_for!(U54, u64); bits_for!(U55, u64); bits_for!(U56, u64);
bits_for!(U57, u64); bits_for!(U58, u64); bits_for!(U59, u64); bits_for!(U60, u64);
bits_for!(U61, u64); bits_for!(U62, u64); bits_for!(U63, u64); bits_for!(U64, u64);
bits_for!(U65, u128); bits_for!(U66, u128); bits_for!(U67, u128); bits_for!(U68, u128);
bits_for!(U69, u128); bits_for!(U70, u128); bits_for!(U71, u128); bits_for!(U72, u128);
bits_for!(U73, u128); bits_for!(U74, u128); bits_for!(U75, u128); bits_for!(U76, u128);
bits_for!(U77, u128); bits_for!(U78, u128); bits_for!(U79, u128); bits_for!(U80, u128);
bits_for!(U81, u128); bits_for!(U82, u128); bits_for!(U83, u128); bits_for!(U84, u128);
bits_for!(U85, u128); bits_for!(U86, u128); bits_for!(U87, u128); bits_for!(U88, u128);
bits_for!(U89, u128); bits_for!(U90, u128); bits_for!(U91, u128); bits_for!(U92, u128);
bits_for!(U93, u128); bits_for!(U94, u128); bits_for!(U95, u128); bits_for!(U96, u128);
bits_for!(U97, u128); bits_for!(U98, u128); bits_for!(U99, u128); bits_for!(U100, u128);
bits_for!(U101, u128); bits_for!(U102, u128); bits_for!(U103, u128); bits_for!(U104, u128);
bits_for!(U105, u128); bits_for!(U106, u128); bits_for!(U107, u128); bits_for!(U108, u128);
bits_for!(U109, u128); bits_for!(U110, u128); bits_for!(U111, u128); bits_for!(U112, u128);
bits_for!(U113, u128); bits_for!(U114, u128); bits_for!(U115, u128); bits_for!(U116, u128);
bits_for!(U117, u128); bits_for!(U118, u128); bits_for!(U119, u128); bits_for!(U120, u128);
bits_for!(U121, u128); bits_for!(U122, u128); bits_for!(U123, u128); bits_for!(U124, u128);
bits_for!(U125, u128);
bits_for!(U126, u128); bits_for!(U127, u128); bits_for!(U128, u128);

bits_for_256!(U129); bits_for_256!(U130); bits_for_256!(U131); bits_for_256!(U132);
bits_for_256!(U133); bits_for_256!(U134); bits_for_256!(U135); bits_for_256!(U136);
bits_for_256!(U137); bits_for_256!(U138); bits_for_256!(U139); bits_for_256!(U140);
bits_for_256!(U141); bits_for_256!(U142); bits_for_256!(U143); bits_for_256!(U144);
bits_for_256!(U145); bits_for_256!(U146); bits_for_256!(U147); bits_for_256!(U148);
bits_for_256!(U149); bits_for_256!(U150); bits_for_256!(U151); bits_for_256!(U152);
bits_for_256!(U153); bits_for_256!(U154); bits_for_256!(U155); bits_for_256!(U156);
bits_for_256!(U157); bits_for_256!(U158); bits_for_256!(U159); bits_for_256!(U160);
bits_for_256!(U161); bits_for_256!(U162); bits_for_256!(U163); bits_for_256!(U164);
bits_for_256!(U165); bits_for_256!(U166); bits_for_256!(U167); bits_for_256!(U168);
bits_for_256!(U169); bits_for_256!(U170); bits_for_256!(U171); bits_for_256!(U172);
bits_for_256!(U173); bits_for_256!(U174); bits_for_256!(U175); bits_for_256!(U176);
bits_for_256!(U177); bits_for_256!(U178); bits_for_256!(U179); bits_for_256!(U180);
bits_for_256!(U181); bits_for_256!(U182); bits_for_256!(U183); bits_for_256!(U184);
bits_for_256!(U185); bits_for_256!(U186); bits_for_256!(U187); bits_for_256!(U188);
bits_for_256!(U189); bits_for_256!(U190); bits_for_256!(U191); bits_for_256!(U192);
bits_for_256!(U193); bits_for_256!(U194); bits_for_256!(U195); bits_for_256!(U196);
bits_for_256!(U197); bits_for_256!(U198); bits_for_256!(U199); bits_for_256!(U200);
bits_for_256!(U201); bits_for_256!(U202); bits_for_256!(U203); bits_for_256!(U204);
bits_for_256!(U205); bits_for_256!(U206); bits_for_256!(U207); bits_for_256!(U208);
bits_for_256!(U209); bits_for_256!(U210); bits_for_256!(U211); bits_for_256!(U212);
bits_for_256!(U213); bits_for_256!(U214); bits_for_256!(U215); bits_for_256!(U216);
bits_for_256!(U217); bits_for_256!(U218); bits_for_256!(U219); bits_for_256!(U220);
bits_for_256!(U221); bits_for_256!(U222); bits_for_256!(U223); bits_for_256!(U224);
bits_for_256!(U225); bits_for_256!(U226); bits_for_256!(U227); bits_for_256!(U228);
bits_for_256!(U229); bits_for_256!(U230); bits_for_256!(U231); bits_for_256!(U232);
bits_for_256!(U233); bits_for_256!(U234); bits_for_256!(U235); bits_for_256!(U236);
bits_for_256!(U237); bits_for_256!(U238); bits_for_256!(U239); bits_for_256!(U240);
bits_for_256!(U241); bits_for_256!(U242); bits_for_256!(U243); bits_for_256!(U244);
bits_for_256!(U245); bits_for_256!(U246); bits_for_256!(U247); bits_for_256!(U248);
bits_for_256!(U249); bits_for_256!(U250); bits_for_256!(U251); bits_for_256!(U252);
bits_for_256!(U253); bits_for_256!(U254); bits_for_256!(U255); bits_for_256!(U256);

sized-chunks-0.3.1/.cargo_vcs_info.json

{
  "git": {
    "sha1": "a1a3c1312b91c37fc96a987939752e77d5efc4d4"
  }
}